scipy.optimize.newton_krylov#

scipy.optimize.newton_krylov(F, xin, iter=None, rdiff=None, method='lgmres', inner_maxiter=20, inner_M=None, outer_k=10, verbose=False, maxiter=None, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, tol_norm=None, line_search='armijo', callback=None, **kw)#

Find a root of a function, using Krylov approximation for inverse Jacobian.

This method is suitable for solving large-scale problems.

Parameters:
F : function(x) -> f

Function whose root to find; should take and return an array-like object.

xin : array_like

Initial guess for the solution.

rdiff : float, optional

Relative step size to use in numerical differentiation.

method : str or callable, optional

Krylov method to use to approximate the Jacobian. Can be a string, or a function implementing the same interface as the iterative solvers in scipy.sparse.linalg. If a string, needs to be one of: 'lgmres', 'gmres', 'bicgstab', 'cgs', 'minres', 'tfqmr'.

The default is scipy.sparse.linalg.lgmres.
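
For illustration, the following sketch (the two-equation system is hypothetical and stands in for a realistically large problem) switches the inner solver from the default LGMRES to restarted GMRES; a callable implementing the scipy.sparse.linalg solver interface could be passed in the same way.

>>> from scipy.optimize import newton_krylov
>>> def F(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> sol = newton_krylov(F, [0, 0], method='gmres')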

inner_maxiter : int, optional

Parameter to pass to the “inner” Krylov solver: maximum number of iterations. The inner iteration stops after inner_maxiter steps even if the specified tolerance has not been achieved.

inner_M : LinearOperator or InverseJacobian

Preconditioner for the inner Krylov iteration. Note that you can also use inverse Jacobians as (adaptive) preconditioners. For example,

>>> from scipy.optimize import BroydenFirst, KrylovJacobian
>>> from scipy.optimize import InverseJacobian
>>> jac = BroydenFirst()
>>> kjac = KrylovJacobian(inner_M=InverseJacobian(jac))

If the preconditioner has a method named ‘update’, it will be called as update(x, f) after each nonlinear step, with x giving the current point, and f the current function value.
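
Putting the pieces together, a minimal sketch of passing such a preconditioner to newton_krylov (the small system F below is purely illustrative):

>>> import numpy as np
>>> from scipy.optimize import newton_krylov, BroydenFirst, InverseJacobian
>>> def F(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> jac = BroydenFirst()
>>> sol = newton_krylov(F, [0, 0], inner_M=InverseJacobian(jac))
>>> np.allclose(F(sol), 0, atol=1e-5)
True

Here the Broyden approximation is updated after every nonlinear step and reused as the preconditioner for the next inner solve.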

outer_k : int, optional

Size of the subspace kept across LGMRES nonlinear iterations. See scipy.sparse.linalg.lgmres for details.

inner_kwargs : kwargs

Keyword parameters for the “inner” Krylov solver (defined with method). Parameter names must start with the inner_ prefix, which is stripped before they are passed on to the inner method. See, e.g., scipy.sparse.linalg.gmres for details.
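
For example (a hedged sketch; the exact keyword names accepted depend on the chosen inner solver and the SciPy version), extra arguments can be forwarded to the default lgmres solver like this:

>>> from scipy.optimize import newton_krylov
>>> def F(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> # inner_atol is stripped to atol, and inner_inner_m to inner_m, before
>>> # being handed to scipy.sparse.linalg.lgmres
>>> sol = newton_krylov(F, [0, 0], inner_atol=1e-7, inner_inner_m=10)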

iter : int, optional

Number of iterations to make. If omitted (default), make as many as required to meet tolerances.

verbose : bool, optional

Print status to stdout on every iteration.

maxiter : int, optional

Maximum number of iterations to make. If more are needed to meet convergence, NoConvergence is raised.

f_tol : float, optional

Absolute tolerance (in max-norm) for the residual. If omitted, default is 6e-6.

f_rtol : float, optional

Relative tolerance for the residual. If omitted, not used.

x_tol : float, optional

Absolute minimum step size, as determined from the Jacobian approximation. If the step size is smaller than this, optimization is terminated as successful. If omitted, not used.

x_rtol : float, optional

Relative minimum step size. If omitted, not used.

tol_norm : function(vector) -> scalar, optional

Norm to use in convergence check. Default is the maximum norm.

line_search : {None, 'armijo' (default), 'wolfe'}, optional

Which type of line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to 'armijo'.

callback : function, optional

Optional callback function. It is called on every iteration as callback(x, f) where x is the current solution and f the corresponding residual.
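
A sketch combining several of the options above (the small system F is illustrative only): a callback that records the residual norm at each nonlinear step, a custom convergence norm, and explicit tolerance and iteration limits.

>>> import numpy as np
>>> from scipy.optimize import newton_krylov
>>> def F(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> residuals = []
>>> def log_residual(x, f):
...     # called after every nonlinear step with the current iterate and residual
...     residuals.append(np.linalg.norm(f))
>>> sol = newton_krylov(F, [0, 0], f_tol=1e-9, maxiter=50,
...                     tol_norm=np.linalg.norm, callback=log_residual)

After the call, residuals holds the 2-norm of the residual at each nonlinear iteration; convergence is declared once it drops below f_tol in the chosen norm.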

Returns:
sol : ndarray

An array (of similar array type as xin) containing the final solution.

Raises:
NoConvergence

When a solution was not found.

See also

root

Interface to root finding algorithms for multivariate functions. See method='krylov' in particular.

scipy.sparse.linalg.gmres
scipy.sparse.linalg.lgmres

Notes

This function implements a Newton-Krylov solver. The basic idea is to compute the inverse of the Jacobian with an iterative Krylov method. These methods require only evaluating the Jacobian-vector products, which are conveniently approximated by a finite difference:

\[J v \approx (f(x + \omega \, v / |v|) - f(x)) / \omega\]
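
As a rough sketch of this matrix-free idea (approx_jv below is a hypothetical helper, not part of SciPy's API; it rescales the normalized directional difference by |v|):

>>> import numpy as np
>>> def approx_jv(f, x, v, omega=1e-7):
...     # finite-difference Jacobian-vector product: step a distance omega
...     # along the normalized direction v, then undo the normalization
...     x, v = np.asarray(x, dtype=float), np.asarray(v, dtype=float)
...     nv = np.linalg.norm(v)
...     if nv == 0:
...         return np.zeros_like(np.asarray(f(x), dtype=float))
...     sc = omega / nv
...     return (np.asarray(f(x + sc * v), dtype=float)
...             - np.asarray(f(x), dtype=float)) / sc
>>> f = lambda x: [x[0]**2, x[0] * x[1]]   # exact J v at x=(1, 2), v=(1, 0) is [2, 2]
>>> np.allclose(approx_jv(f, [1.0, 2.0], [1.0, 0.0]), [2.0, 2.0], atol=1e-6)
True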

Due to the use of iterative matrix inverses, these methods can deal with large nonlinear problems.

SciPy’s scipy.sparse.linalg module offers a selection of Krylov solvers to choose from. The default here is lgmres, which is a variant of restarted GMRES iteration that reuses some of the information obtained in the previous Newton steps to invert Jacobians in subsequent steps.

For a review on Newton-Krylov methods, see for example [1] and [2], and for the LGMRES sparse inverse method, see [3].

References

[1] C. T. Kelley, Solving Nonlinear Equations with Newton’s Method, SIAM, pp. 57-83, 2003. DOI:10.1137/1.9780898718898.ch3

[2] D.A. Knoll and D.E. Keyes, J. Comp. Phys. 193, 357 (2004). DOI:10.1016/j.jcp.2003.08.010

[3] A.H. Baker, E.R. Jessup and T. Manteuffel, SIAM J. Matrix Anal. Appl. 26, 962 (2005). DOI:10.1137/S0895479803422014

Examples

The following function defines a system of nonlinear equations.

>>> def fun(x):
...     return [x[0] + 0.5 * x[1] - 1.0,
...             0.5 * (x[1] - x[0]) ** 2]

A solution can be obtained as follows.

>>> from scipy import optimize
>>> sol = optimize.newton_krylov(fun, [0, 0])
>>> sol
array([0.66731771, 0.66536458])
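
As a further, larger sketch of the kind of problem this solver targets (a discretized nonlinear boundary-value problem; the grid size, boundary values, and source term below are arbitrary illustrative choices), note that the residual function works directly on a 2-D array and no Jacobian matrix is ever formed.

>>> import numpy as np
>>> from scipy.optimize import newton_krylov
>>> nx, ny = 75, 75
>>> hx, hy = 1. / (nx - 1), 1. / (ny - 1)
>>> P_left, P_right, P_top, P_bottom = 0, 0, 1, 0
>>> def residual(P):
...     # second differences in x and y with Dirichlet boundary values,
...     # plus a nonlinear source term
...     d2x = np.zeros_like(P)
...     d2y = np.zeros_like(P)
...     d2x[1:-1] = (P[2:] - 2 * P[1:-1] + P[:-2]) / hx / hx
...     d2x[0] = (P[1] - 2 * P[0] + P_left) / hx / hx
...     d2x[-1] = (P_right - 2 * P[-1] + P[-2]) / hx / hx
...     d2y[:, 1:-1] = (P[:, 2:] - 2 * P[:, 1:-1] + P[:, :-2]) / hy / hy
...     d2y[:, 0] = (P[:, 1] - 2 * P[:, 0] + P_bottom) / hy / hy
...     d2y[:, -1] = (P_top - 2 * P[:, -1] + P[:, -2]) / hy / hy
...     return d2x + d2y - 10 * np.cosh(P).mean()**2
>>> guess = np.zeros((nx, ny), float)
>>> sol = newton_krylov(residual, guess, method='lgmres', verbose=0)
>>> bool(np.abs(residual(sol)).max() < 1e-5)
True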