scipy.sparse.linalg.lsqr

scipy.sparse.linalg.lsqr(A, b, damp=0.0, atol=1e-08, btol=1e-08, conlim=100000000.0, iter_lim=None, show=False, calc_var=False, x0=None)

Find the least-squares solution to a large, sparse, linear system of equations.

The function solves Ax = b or min ||b - Ax||^2 or min ||Ax - b||^2 + d^2 ||x||^2.

The matrix A may be square or rectangular (overdetermined or underdetermined), and may have any rank.

1. Unsymmetric equations: solve A*x = b
2. Linear least squares: solve A*x = b in the least-squares sense
3. Damped least squares: solve [[A]; [damp*I]] * x = [[b]; [0]] in the least-squares sense (a sketch follows the list)
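For instance, problem 3 corresponds to passing a nonzero damp to lsqr. The following minimal sketch contrasts the two calls; the variable names x_plain and x_damped are illustrative, and the small matrix anticipates the Examples section below:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]])
>>> b = np.array([1., 0.01, -1.])
>>> x_plain = lsqr(A, b)[0]             # problem 2: min ||Ax - b||^2
>>> x_damped = lsqr(A, b, damp=1.0)[0]  # problem 3: adds damp^2 * ||x||^2 to the objective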
Parameters:

A : {sparse matrix, ndarray, LinearOperator}
    Representation of an m-by-n matrix. It is required that the linear operator can produce Ax and A^T x.
b : array_like, shape (m,)
    Right-hand side vector b.
damp : float
    Damping coefficient.
atol, btol : float, optional
    Stopping tolerances. If both are 1.0e-9 (say), the final residual norm should be accurate to about 9 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of damp.)
conlim : float, optional
    Another stopping tolerance. lsqr terminates if an estimate of cond(A) exceeds conlim. For compatible systems Ax = b, conlim could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. Maximum precision can be obtained by setting atol = btol = conlim = zero, but the number of iterations may then be excessive.
iter_lim : int, optional
    Explicit limitation on number of iterations (for safety).
show : bool, optional
    Display an iteration log.
calc_var : bool, optional
    Whether to estimate diagonals of (A'A + damp^2*I)^{-1}.
x0 : array_like, shape (n,), optional
    Initial guess of x; if None, zeros are used.
    New in version 1.0.0.
Returns:

x : ndarray of float
    The final solution.
istop : int
    Gives the reason for termination. 1 means x is an approximate solution to Ax = b. 2 means x approximately solves the least-squares problem.
itn : int
    Iteration number upon termination.
r1norm : float
    norm(r), where r = b - Ax.
r2norm : float
    sqrt(norm(r)^2 + damp^2 * norm(x)^2). Equal to r1norm if damp == 0.
anorm : float
    Estimate of Frobenius norm of Abar = [[A]; [damp*I]].
acond : float
    Estimate of cond(Abar).
arnorm : float
    Estimate of norm(A'*r - damp^2*x).
xnorm : float
    norm(x).
var : ndarray of float
    If calc_var is True, estimates all diagonals of (A'A)^{-1} (if damp == 0) or, more generally, (A'A + damp^2*I)^{-1}. This is well defined if A has full column rank or damp > 0. (Not sure what var means if rank(A) < n and damp = 0.)

Notes
LSQR uses an iterative method to approximate the solution. The number of iterations required to reach a certain accuracy depends strongly on the scaling of the problem. Poor scaling of the rows or columns of A should therefore be avoided where possible.
For example, in problem 1 the solution is unaltered by row-scaling. If a row of A is very small or large compared to the other rows of A, the corresponding row of (A b) should be scaled up or down.
In problems 1 and 2, the solution x is easily recovered following column-scaling. Unless better information is known, the nonzero columns of A should be scaled so that they all have the same Euclidean norm (e.g., 1.0), as in the sketch below.
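As an illustration of this advice, one could scale each nonzero column of A to unit Euclidean norm, solve the scaled problem, and then undo the scaling. This is only a sketch, assuming A has no zero columns; the helper names are made up here:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix, diags
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]])
>>> b = np.array([1., 0.01, -1.])
>>> col_norms = np.sqrt(A.multiply(A).sum(axis=0)).A1  # Euclidean norm of each column
>>> D = diags(1.0 / col_norms)   # assumes no zero columns
>>> y = lsqr(A @ D, b)[0]        # solve the column-scaled problem (A*D)*y = b
>>> x = D @ y                    # recover x for the original problem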
In problem 3, there is no freedom to rescale if damp is nonzero. However, the value of damp should be assigned only after attention has been paid to the scaling of A.
The parameter damp is intended to help regularize ill-conditioned systems, by preventing the true solution from being very large. Another aid to regularization is provided by the parameter acond, which may be used to terminate iterations before the computed solution becomes very large.
If some initial estimate x0 is known and if damp == 0, one could proceed as follows (a direct transcription appears after the list):

1. Compute a residual vector r0 = b - A*x0.
2. Use LSQR to solve the system A*dx = r0.
3. Add the correction dx to obtain a final solution x = x0 + dx.
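A sketch of these three steps; the initial estimate x0 below is arbitrary:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]])
>>> b = np.array([1., 0.01, -1.])
>>> x0 = np.array([1., -1.])   # some known initial estimate
>>> r0 = b - A @ x0            # 1. residual vector
>>> dx = lsqr(A, r0)[0]        # 2. solve A*dx = r0
>>> x = x0 + dx                # 3. corrected solution

Note that since version 1.0.0 a similar effect can be had by passing x0 directly, as documented above.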
This requires that x0 be available before and after the call to LSQR. To judge the benefits, suppose LSQR takes k1 iterations to solve A*x = b and k2 iterations to solve A*dx = r0. If x0 is "good", norm(r0) will be smaller than norm(b). If the same stopping tolerances atol and btol are used for each system, k1 and k2 will be similar, but the final solution x0 + dx should be more accurate. The only way to reduce the total work is to use a larger stopping tolerance for the second system. If some value btol is suitable for A*x = b, the larger value btol*norm(b)/norm(r0) should be suitable for A*dx = r0.

Preconditioning is another way to reduce the number of iterations. If it is possible to solve a related system M*x = b efficiently, where M approximates A in some helpful way (e.g., M - A has low rank or its elements are small relative to those of A), LSQR may converge more rapidly on the system A*M(inverse)*z = b, after which x can be recovered by solving M*x = z. A sketch of this idea follows these notes.

If A is symmetric, LSQR should not be used! Alternatives are the symmetric conjugate-gradient method (cg) and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that applies to any symmetric A and will converge more rapidly than LSQR. If A is positive definite, there are other implementations of symmetric cg that require slightly less work per iteration than SYMMLQ (but will take the same number of iterations).
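A minimal sketch of a (square, diagonal) right-preconditioner variant of the preconditioning idea above. The diagonal M built from the column norms of A is a crude stand-in chosen purely for illustration; a useful M should approximate A more closely:

>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr, LinearOperator
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]])
>>> b = np.array([1., 0.01, -1.])
>>> d = np.sqrt(A.multiply(A).sum(axis=0)).A1  # M = diag(d)
>>> AMinv = LinearOperator(A.shape, matvec=lambda v: A @ (v / d),
...                        rmatvec=lambda v: (A.T @ v) / d)
>>> z = lsqr(AMinv, b)[0]  # solve A*M(inverse)*z = b
>>> x = z / d              # recover x from M*x = z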
References

[R35] C. C. Paige and M. A. Saunders (1982a). "LSQR: An algorithm for sparse linear equations and sparse least squares", ACM TOMS 8(1), 43-71.

[R36] C. C. Paige and M. A. Saunders (1982b). "Algorithm 583. LSQR: Sparse linear equations and least squares problems", ACM TOMS 8(2), 195-209.

[R37] M. A. Saunders (1995). "Solution of sparse rectangular systems using LSQR and CRAIG", BIT 35, 588-604.

Examples
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsqr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]], dtype=float)
The first example has the trivial solution [0, 0]:

>>> b = np.array([0., 0., 0.], dtype=float)
>>> x, istop, itn, normr = lsqr(A, b)[:4]
>>> istop
0
>>> x
array([ 0.,  0.])

The stopping code istop=0 indicates that a vector of zeros was found as a solution, and the returned x indeed contains [0., 0.]. The next example has a nontrivial solution:
>>> b = np.array([1., 0., -1.], dtype=float)
>>> x, istop, itn, r1norm = lsqr(A, b)[:4]
>>> istop
1
>>> x
array([ 1., -1.])
>>> itn
1
>>> r1norm
4.440892098500627e-16
As indicated by istop=1, lsqr found a solution obeying the tolerance limits. The given solution [1., -1.] obviously solves the equation. The remaining return values include information about the number of iterations (itn=1) and the remaining difference between the left- and right-hand sides of the solved equation (r1norm). The final example demonstrates the behavior in the case where there is no exact solution for the equation:

>>> b = np.array([1., 0.01, -1.], dtype=float)
>>> x, istop, itn, r1norm = lsqr(A, b)[:4]
>>> istop
2
>>> x
array([ 1.00333333, -0.99666667])
>>> A.dot(x) - b
array([ 0.00333333, -0.00333333,  0.00333333])
>>> r1norm
0.005773502691896255
Here istop=2 indicates that the system is inconsistent; x is therefore an approximate solution to the corresponding least-squares problem, and r1norm contains the norm of the minimal residual that was found.
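As one further illustration, not part of the original example set, adding damping to this inconsistent system shrinks the solution, as discussed in the Notes on regularization:

>>> x_damped = lsqr(A, b, damp=1.0)[0]
>>> np.linalg.norm(x_damped) < np.linalg.norm(x)
True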