lobpcg

scipy.sparse.linalg.lobpcg(A, X, B=None, M=None, Y=None, tol=None, maxiter=None, largest=True, verbosityLevel=0, retLambdaHistory=False, retResidualNormsHistory=False, restartControl=20)
Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG).
LOBPCG is a preconditioned eigensolver for large real symmetric and complex Hermitian definite generalized eigenproblems.
Parameters:

- A : {sparse matrix, ndarray, LinearOperator, callable object}
  The Hermitian linear operator of the problem, usually given by a sparse matrix. Often called the “stiffness matrix”.
- X : ndarray, float32 or float64
  Initial approximation to the k eigenvectors (non-sparse). If A has shape=(n, n), then X must have shape=(n, k).
- B : {sparse matrix, ndarray, LinearOperator, callable object}
  Optional. By default B = None, which is equivalent to identity. The right-hand-side operator in a generalized eigenproblem if present. Often called the “mass matrix”. Must be Hermitian positive definite. (A brief generalized-eigenproblem sketch follows this parameter list.)
- M : {sparse matrix, ndarray, LinearOperator, callable object}
  Optional. By default M = None, which is equivalent to identity. Preconditioner aiming to accelerate convergence.
- Y : ndarray, float32 or float64, default: None
  An n-by-sizeY ndarray of constraints with sizeY < n. The iterations will be performed in the B-orthogonal complement of the column space of Y. Y must be full rank if present.
- tol : scalar, optional
  Solver tolerance for the stopping criterion. The default is tol=n*sqrt(eps).
- maxiter : int, default: 20
  Maximum number of iterations.
- largest : bool, default: True
  When True, solve for the largest eigenvalues, otherwise the smallest.
- verbosityLevel : int, optional
  Controls the solver's standard/screen output. By default verbosityLevel=0: no output.
- retLambdaHistory : bool, default: False
  Whether to return the iterative eigenvalue history.
- retResidualNormsHistory : bool, default: False
  Whether to return the iterative history of residual norms.
- restartControl : int, optional
  Iterations restart if the residuals jump 2**restartControl times compared to the smallest recorded in retResidualNormsHistory. The default is restartControl=20, making restarts rare for backward compatibility.
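For orientation before the Examples section, here is a minimal hedged sketch of a generalized eigenproblem A x = lambda B x; the matrices, sizes, and seed are illustrative assumptions, with B = 2*I so the pencil's eigenvalues are half those of A.

>>> import numpy as np
>>> from scipy.sparse import spdiags
>>> from scipy.sparse.linalg import lobpcg
>>> n = 100
>>> A = spdiags(np.arange(1, n + 1, dtype=np.float64), 0, n, n)  # "stiffness"
>>> B = spdiags(np.full(n, 2.0), 0, n, n)  # "mass": Hermitian positive definite
>>> X = np.random.default_rng(0).normal(size=(n, 2))
>>> eigenvalues, _ = lobpcg(A, X, B=B, largest=True, maxiter=80)
>>> eigenvalues.shape  # two approximate eigenvalues, close to [50., 49.5] here
(2,)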
Returns:

- lambda : ndarray of shape (k,)
  Array of k approximate eigenvalues.
- v : ndarray of the same shape as X.shape
  An array of k approximate eigenvectors.
- lambdaHistory : ndarray, optional
  The eigenvalue history, if retLambdaHistory is True.
- ResidualNormsHistory : ndarray, optional
  The history of residual norms, if retResidualNormsHistory is True.
Notes
The iterative loop runs maxit=maxiter (20 if maxit=None) iterations at most and finishes earlier if the tolerance is met. Breaking backward compatibility with the previous version, LOBPCG now returns the block of iterative vectors with the best accuracy rather than the last one iterated, as a cure for possible divergence.

If X.dtype == np.float32 and the user-provided operations/multiplications by A, B, and M all preserve the np.float32 data type, all the calculations and the output are in np.float32.

The size of the iteration history output equals the number of the best (limited by maxit) iterations plus 3: initial, final, and postprocessing.
If both retLambdaHistory and retResidualNormsHistory are True, the return tuple has the following format: (lambda, V, lambda history, residual norms history).
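As a hedged illustration of unpacking this tuple (the diagonal test matrix and the seed below are assumptions made for the sketch):

>>> import numpy as np
>>> from scipy.sparse.linalg import lobpcg
>>> A = np.diag(np.arange(1.0, 101.0))  # simple diagonal test matrix
>>> X = np.random.default_rng(0).normal(size=(100, 3))
>>> lam, V, lam_hist, res_hist = lobpcg(A, X, maxiter=40,
...                                     retLambdaHistory=True,
...                                     retResidualNormsHistory=True)
>>> lam.shape, V.shape  # k approximate eigenvalues and matching eigenvectors
((3,), (100, 3))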
In the following, n denotes the matrix size and k the number of required eigenvalues (smallest or largest).

The LOBPCG code internally solves eigenproblems of size 3k on every iteration by calling the dense eigensolver eigh, so if k is not small enough compared to n, it makes no sense to call the LOBPCG code. Moreover, if one calls the LOBPCG algorithm for 5k > n, it would likely break internally, so the code calls the standard function eigh instead. It is not that n should be large for LOBPCG to work, but rather that the ratio n / k should be large. If you call LOBPCG with k=1 and n=10, it works even though n is small. The method is intended for extremely large n / k.
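A small hedged illustration of this fallback, with sizes assumed so that 5k > n (SciPy may warn about the small problem size):

>>> import numpy as np
>>> from scipy.sparse.linalg import lobpcg
>>> A = np.diag(np.arange(1.0, 11.0))  # n = 10
>>> X = np.random.default_rng(0).normal(size=(10, 4))  # k = 4, so 5*k > n
>>> eigenvalues, _ = lobpcg(A, X, largest=True)  # handled by the dense path
>>> eigenvalues.shape
(4,)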
The convergence speed depends basically on three factors:

- Quality of the initial approximations X to the sought eigenvectors. Vectors randomly distributed around the origin work well if no better choice is known.
- Relative separation of the desired eigenvalues from the rest of the eigenvalues. One can vary k to improve the separation.
- Proper preconditioning to shrink the spectral spread. For example, a rod vibration test problem (under the tests directory) is ill-conditioned for large n, so convergence will be slow unless efficient preconditioning is used. For this specific problem, a good simple preconditioner function would be a linear solve for A, which is easy to code since A is tridiagonal; a sketch follows this list.
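Below is a hedged sketch of such a preconditioner, with a 1-D Laplacian standing in for the rod matrix; the matrix, sizes, and the choice of splu for the tridiagonal solve are illustrative assumptions.

>>> import numpy as np
>>> from scipy.sparse import diags
>>> from scipy.sparse.linalg import splu, lobpcg
>>> n = 100
>>> A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
>>> lu = splu(A)               # one-time sparse LU of the tridiagonal matrix
>>> M = lambda X: lu.solve(X)  # preconditioner: applies the inverse of A
>>> X = np.random.default_rng(0).normal(size=(n, 3))
>>> eigenvalues, _ = lobpcg(A, X, M=M, largest=False, maxiter=40)
>>> eigenvalues.shape  # three approximate smallest eigenvalues
(3,)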
References
[1] A. V. Knyazev (2001), Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method. SIAM Journal on Scientific Computing 23, no. 2, pp. 517-541. DOI:10.1137/S1064827500366124
[2] A. V. Knyazev, I. Lashuk, M. E. Argentati, and E. Ovchinnikov (2007), Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc. arXiv:0705.2626
[3] A. V. Knyazev’s C and MATLAB implementations: lobpcg/blopex
Examples
Our first example is minimalistic: find the largest eigenvalue of a diagonal matrix by solving the non-generalized eigenvalue problem A x = lambda x without constraints or preconditioning.

>>> import numpy as np
>>> from scipy.sparse import spdiags
>>> from scipy.sparse.linalg import LinearOperator, aslinearoperator
>>> from scipy.sparse.linalg import lobpcg
The square matrix size is
>>> n = 100
and its diagonal entries are 1, …, 100 defined by
>>> vals = np.arange(1, n + 1).astype(np.int16)
The first mandatory input parameter in this test is the sparse diagonal matrix A of the eigenvalue problem A x = lambda x to solve.

>>> A = spdiags(vals, 0, n, n)
>>> A = A.astype(np.int16)
>>> A.toarray()
array([[  1,   0,   0, ...,   0,   0,   0],
       [  0,   2,   0, ...,   0,   0,   0],
       [  0,   0,   3, ...,   0,   0,   0],
       ...,
       [  0,   0,   0, ...,  98,   0,   0],
       [  0,   0,   0, ...,   0,  99,   0],
       [  0,   0,   0, ...,   0,   0, 100]], shape=(100, 100), dtype=int16)
The second mandatory input parameter X is a 2D array with the row dimension determining the number of requested eigenvalues. X is an initial guess for the targeted eigenvectors. X must have linearly independent columns. If no initial approximations are available, randomly oriented vectors commonly work best, e.g., with components normally distributed around zero or uniformly distributed on the interval [-1, 1]. Setting the initial approximations to dtype np.float32 forces all iterative values to dtype np.float32, speeding up the run while still allowing accurate eigenvalue computations.

>>> k = 1
>>> rng = np.random.default_rng()
>>> X = rng.normal(size=(n, k))
>>> X = X.astype(np.float32)
>>> eigenvalues, _ = lobpcg(A, X, maxiter=60)
>>> eigenvalues
array([100.], dtype=float32)
lobpcg needs only access to the matrix product with A rather than the matrix itself. Since the matrix A is diagonal in this example, one can write a function of the matrix product A @ X using only the diagonal values vals, e.g., by element-wise multiplication with broadcasting in the lambda function

>>> A_lambda = lambda X: vals[:, np.newaxis] * X

or the regular function

>>> def A_matmat(X):
...     return vals[:, np.newaxis] * X

and use the handle to one of these callables as an input

>>> eigenvalues, _ = lobpcg(A_lambda, X, maxiter=60)
>>> eigenvalues
array([100.], dtype=float32)
>>> eigenvalues, _ = lobpcg(A_matmat, X, maxiter=60)
>>> eigenvalues
array([100.], dtype=float32)
The traditional callable LinearOperator is no longer necessary but still supported as the input to lobpcg. Specifying matmat=A_matmat explicitly improves performance.

>>> A_lo = LinearOperator((n, n), matvec=A_matmat, matmat=A_matmat, dtype=np.int16)
>>> eigenvalues, _ = lobpcg(A_lo, X, maxiter=80)
>>> eigenvalues
array([100.], dtype=float32)
The least efficient callable option is aslinearoperator:

>>> eigenvalues, _ = lobpcg(aslinearoperator(A), X, maxiter=80)
>>> eigenvalues
array([100.], dtype=float32)
We now switch to computing the three smallest eigenvalues, specifying

>>> k = 3
>>> X = np.random.default_rng().normal(size=(n, k))

and the largest=False parameter:

>>> eigenvalues, _ = lobpcg(A, X, largest=False, maxiter=90)
>>> print(eigenvalues)
[1. 2. 3.]
The next example illustrates computing the 3 smallest eigenvalues of the same matrix A, given by the function handle A_matmat, but with constraints and preconditioning.

Constraints, an optional input parameter, are a 2D array comprising column vectors to which the eigenvectors must be orthogonal:
>>> Y = np.eye(n, 3)
The preconditioner acts as the inverse of A in this example, but in the reduced precision np.float32, even though the initial X and thus all iterates and the output are in full np.float64.

>>> inv_vals = 1./vals
>>> inv_vals = inv_vals.astype(np.float32)
>>> M = lambda X: inv_vals[:, np.newaxis] * X
Let us now solve the eigenvalue problem for the matrix A, first without preconditioning, requesting 80 iterations:

>>> eigenvalues, _ = lobpcg(A_matmat, X, Y=Y, largest=False, maxiter=80)
>>> eigenvalues
array([4., 5., 6.])
>>> eigenvalues.dtype
dtype('float64')
With preconditioning we need only 20 iterations from the same X:

>>> eigenvalues, _ = lobpcg(A_matmat, X, Y=Y, M=M, largest=False, maxiter=20)
>>> eigenvalues
array([4., 5., 6.])
Note that the vectors passed in Y are the eigenvectors of the 3 smallest eigenvalues. The results returned above are orthogonal to those.
The primary matrix A may be indefinite, e.g., after shifting vals by 50 from 1, …, 100 to -49, …, 50, we still can compute the 3 smallest or largest eigenvalues.

>>> vals = vals - 50
>>> X = rng.normal(size=(n, k))
>>> eigenvalues, _ = lobpcg(A_matmat, X, largest=False, maxiter=99)
>>> eigenvalues
array([-49., -48., -47.])
>>> eigenvalues, _ = lobpcg(A_matmat, X, largest=True, maxiter=99)
>>> eigenvalues
array([50., 49., 48.])