Finding eigenvalues and eigenvectors
Finding a selection of eigenvalues and corresponding (right) eigenvectors of a linear map can be accomplished with the eigsolve routine:
KrylovKit.eigsolve — Function

```julia
eigsolve(A::AbstractMatrix, [howmany = 1, which = :LM, T = eltype(A)]; kwargs...)
eigsolve(f, n::Int, [howmany = 1, which = :LM, T = Float64]; kwargs...)
eigsolve(f, x₀, [howmany = 1, which = :LM]; kwargs...)
eigsolve(f, x₀, howmany, which, algorithm)
```

Compute howmany eigenvalues from the linear map encoded in the matrix A or by the function f. Return eigenvalues, eigenvectors and a ConvergenceInfo structure.
Arguments:
The linear map can be an AbstractMatrix (dense or sparse) or a general function or callable object. If an AbstractMatrix is used, a starting vector x₀ does not need to be provided; it is then chosen as rand(T, size(A,1)). If the linear map is encoded more generally as a callable function or method, the best approach is to provide an explicit starting guess x₀. Note that x₀ does not need to be of type AbstractVector; any type that behaves as a vector and supports the required methods (see KrylovKit docs) is accepted. If instead of x₀ an integer n is specified, it is assumed that x₀ is a regular vector and it is initialized to rand(T, n), where the default value of T is Float64, unless specified differently.
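The two call styles above can be sketched as follows; the small symmetric matrix is just a hypothetical test problem:

```julia
using LinearAlgebra, KrylovKit

# A small symmetric test matrix (illustrative example problem).
A = [4.0 1.0 0.0;
     1.0 3.0 1.0;
     0.0 1.0 2.0]

# Matrix form: no starting vector needed; one is generated as rand(T, size(A, 1)).
vals, vecs, info = eigsolve(A, 1, :LM)

# Function form: pass the map as a callable together with an explicit guess x₀.
x₀ = rand(3)
vals2, vecs2, info2 = eigsolve(x -> A * x, x₀, 1, :LM)
```

Both calls target the same dominant eigenvalue; note that the matrix form detects that A is hermitian and uses Lanczos (real output), while the function form defaults to Arnoldi (complex output) unless given a hint via the keyword arguments discussed below.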
The next arguments are optional, but should typically be specified. howmany specifies how many eigenvalues should be computed; which specifies which eigenvalues should be targeted. Valid specifications of which are given by:
- `LM`: eigenvalues of largest magnitude
- `LR`: eigenvalues with largest (most positive) real part
- `SR`: eigenvalues with smallest (most negative) real part
- `LI`: eigenvalues with largest (most positive) imaginary part, only if `T <: Complex`
- `SI`: eigenvalues with smallest (most negative) imaginary part, only if `T <: Complex`
- `ClosestTo(λ)`: eigenvalues closest to some number `λ`
Krylov methods work well for extremal eigenvalues, i.e. eigenvalues on the periphery of the spectrum of the linear map. Even with ClosestTo, no shift and invert is performed. This is useful if, e.g., you know the spectrum to be within the unit circle in the complex plane, and want to target the eigenvalues closest to the value λ = 1.
The argument T acts as a hint as to the Number type in which the computation should be performed, but it is not restrictive. If the linear map automatically produces complex values, complex arithmetic will be used even though T<:Real was specified.
Return values:
The return value is always of the form vals, vecs, info = eigsolve(...) with
- `vals`: a `Vector` containing the eigenvalues, of length at least `howmany`, but it could be longer if more eigenvalues converged at the same cost. Eigenvalues will be real if `Lanczos` was used and complex if `Arnoldi` was used (see below).
- `vecs`: a `Vector` of corresponding eigenvectors, of the same length as `vals`. Note that eigenvectors are not returned as a matrix, as the linear map could act on any custom Julia type with vector-like behavior, i.e. the elements of the list `vecs` are objects that are typically similar to the starting guess `x₀`, up to a possibly different `eltype`. In particular, for a general matrix (i.e. with `Arnoldi`) the eigenvectors are generally complex and are therefore always returned in a complex number format. When the linear map is a simple `AbstractMatrix`, `vecs` will be a `Vector{Vector{<:Number}}`.
- `info`: an object of type `ConvergenceInfo`, which has the following fields:
  - `info.converged::Int`: indicates how many eigenvalues and eigenvectors were actually converged to the specified tolerance `tol` (see below under keyword arguments)
  - `info.residual::Vector`: a list of the same length as `vals` containing the residuals `info.residual[i] = f(vecs[i]) - vals[i] * vecs[i]`
  - `info.normres::Vector{<:Real}`: list of the same length as `vals` containing the norm of the residual `info.normres[i] = norm(info.residual[i])`
  - `info.numops::Int`: number of times the linear map was applied, i.e. number of times `f` was called, or a vector was multiplied with `A`
  - `info.numiter::Int`: number of times the Krylov subspace was restarted (see below)
No warning is printed if not all requested eigenvalues were converged, so always check if info.converged >= howmany.
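A defensive pattern following this advice might look as below; the matrix and sizes are purely illustrative:

```julia
using LinearAlgebra, KrylovKit

A = randn(100, 100)
howmany = 3
vals, vecs, info = eigsolve(A, howmany, :LM)

# eigsolve stays silent on partial convergence, so check explicitly.
if info.converged < howmany
    @warn "only $(info.converged) of $howmany eigenpairs converged" info.normres
end
```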
Keyword arguments:
Keyword arguments and their default values are given by:
- `tol::Real`: the requested accuracy (corresponding to the 2-norm of the residual for Schur vectors, not the eigenvectors). If you work in e.g. single precision (`Float32`), you should definitely change the default value.
- `krylovdim::Integer`: the maximum dimension of the Krylov subspace that will be constructed. Note that the dimension of the vector space is not known or checked, e.g. `x₀` should not necessarily support the `Base.length` function. If you know the actual problem dimension is smaller than the default value, it is useful to reduce the value of `krylovdim`, though in principle this should be detected.
- `maxiter::Integer`: the number of times the Krylov subspace can be rebuilt; see below for further details on the algorithms.
- `orth::Orthogonalizer`: the orthogonalization method to be used; see `Orthogonalizer`.
- `issymmetric::Bool`: if the linear map is symmetric, only meaningful if `T <: Real`
- `ishermitian::Bool`: if the linear map is hermitian
The default values are given by tol = KrylovDefaults.tol, krylovdim = KrylovDefaults.krylovdim, maxiter = KrylovDefaults.maxiter, orth = KrylovDefaults.orth; see KrylovDefaults for details.
The default values for the last two parameters depend on the method. If an AbstractMatrix is used, issymmetric and ishermitian are checked for that matrix; otherwise the default values are issymmetric = false and ishermitian = T <: Real && issymmetric.
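As a sketch of how these hints matter for function-based maps (the symmetric matrix here is an arbitrary example):

```julia
using LinearAlgebra, KrylovKit

B = randn(20, 20)
A = (B + B') / 2          # real symmetric, hence hermitian
x₀ = randn(20)

# Without a hint, a function-based map defaults to Arnoldi, yielding complex results.
vals_arnoldi, _, _ = eigsolve(x -> A * x, x₀, 2, :LR)

# Declaring the map hermitian selects Lanczos instead, keeping eigenvalues real.
vals_lanczos, _, _ = eigsolve(x -> A * x, x₀, 2, :LR; ishermitian = true)
```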
Algorithm
The last method, without default values and keyword arguments, is the one that is finally called, and can also be used directly. Here, one specifies the algorithm explicitly as either Lanczos, for real symmetric or complex hermitian problems, or Arnoldi, for general problems. Note that these names refer to the process for building the Krylov subspace, but the actual algorithm is an implementation of the Krylov-Schur algorithm, which can dynamically shrink and grow the Krylov subspace, i.e. the restarts are so-called thick restarts where a part of the current Krylov subspace is kept.
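A sketch of this explicit-algorithm form, with arbitrarily chosen algorithm parameters and a random symmetric test matrix:

```julia
using LinearAlgebra, KrylovKit

B = randn(50, 50)
A = (B + B') / 2          # real symmetric, so Lanczos applies
x₀ = randn(50)

# Construct the algorithm explicitly instead of using keyword arguments.
alg = Lanczos(; krylovdim = 20, maxiter = 100, tol = 1e-12)
vals, vecs, info = eigsolve(x -> A * x, x₀, 2, :SR, alg)
```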
In case of a general problem, where the Arnoldi method is used, convergence of an eigenvalue is not based on the norm of the residual norm(f(vecs[i]) - vals[i]*vecs[i]) for the eigenvectors but rather on the norm of the residual for the corresponding Schur vectors.
See also schursolve if you want to use the partial Schur decomposition directly, or if you are not interested in computing the eigenvectors and want to work in real arithmetic all the way through (if the linear map and starting guess are real).
For a general matrix, eigenvalues and eigenvectors will always be returned with complex values for reasons of type stability. However, if the linear map and initial guess are real, most of the computation is actually performed using real arithmetic, as in fact the first step is to compute an approximate partial Schur factorization. If one is not interested in the eigenvectors, one can also just compute this partial Schur factorization using schursolve.
KrylovKit.schursolve — Function

```julia
schursolve(A, x₀, howmany, which, algorithm)
```

Compute a partial Schur decomposition containing howmany eigenvalues from the linear map encoded in the matrix or function A. Return the reduced Schur matrix, the basis of Schur vectors, the extracted eigenvalues and a ConvergenceInfo structure.
See also eigsolve to obtain the eigenvectors instead. For real symmetric or complex hermitian problems, the (partial) Schur decomposition is identical to the (partial) eigenvalue decomposition, and eigsolve should always be used.
Arguments:
The linear map can be an AbstractMatrix (dense or sparse) or a general function or callable object that acts on vector-like objects similar to x₀, which is the starting guess from which a Krylov subspace will be built. howmany specifies how many Schur vectors should be converged before the algorithm terminates; which specifies which eigenvalues should be targeted. Valid specifications of which are:
- `LM`: eigenvalues of largest magnitude
- `LR`: eigenvalues with largest (most positive) real part
- `SR`: eigenvalues with smallest (most negative) real part
- `LI`: eigenvalues with largest (most positive) imaginary part, only if `T <: Complex`
- `SI`: eigenvalues with smallest (most negative) imaginary part, only if `T <: Complex`
- `ClosestTo(λ)`: eigenvalues closest to some number `λ`
Krylov methods work well for extremal eigenvalues, i.e. eigenvalues on the periphery of the spectrum of the linear map. Even with ClosestTo, no shift and invert is performed. This is useful if, e.g., you know the spectrum to be within the unit circle in the complex plane, and want to target the eigenvalues closest to the value λ = 1.
The final argument algorithm can currently only be an instance of Arnoldi, but should nevertheless be specified. Since schursolve is less commonly used than eigsolve, no convenient keyword syntax is currently available.
Return values:
The return value is always of the form T, vecs, vals, info = schursolve(...) with
- `T`: a `Matrix` containing the partial Schur decomposition of the linear map, i.e. its elements are given by `T[i,j] = dot(vecs[i], f(vecs[j]))`. It is of Schur form, i.e. upper triangular in case of complex arithmetic, and block upper triangular (with at most 2x2 blocks) in case of real arithmetic.
- `vecs`: a `Vector` of corresponding Schur vectors, of the same length as `vals`. Note that Schur vectors are not returned as a matrix, as the linear map could act on any custom Julia type with vector-like behavior, i.e. the elements of the list `vecs` are objects that are typically similar to the starting guess `x₀`, up to a possibly different `eltype`. When the linear map is a simple `AbstractMatrix`, `vecs` will be a `Vector{Vector{<:Number}}`. Schur vectors are by definition orthogonal, i.e. `dot(vecs[i], vecs[j]) = I[i,j]`. Note that Schur vectors are real if the problem (i.e. the linear map and the initial guess) is real.
- `vals`: a `Vector` of eigenvalues, i.e. the diagonal elements of `T` in case of complex arithmetic, or extracted from the diagonal blocks in case of real arithmetic. Note that `vals` will always be complex, independent of the underlying arithmetic.
- `info`: an object of type `ConvergenceInfo`, which has the following fields:
  - `info.converged::Int`: indicates how many eigenvalues and Schur vectors were actually converged to the specified tolerance (see below under keyword arguments)
  - `info.residual::Vector`: a list of the same length as `vals` containing the actual residuals `info.residual[i] = f(vecs[i]) - sum(vecs[j] * T[j,i] for j = 1:i+1)`, where `T[i+1,i]` is definitely zero in case of complex arithmetic and possibly zero in case of real arithmetic
  - `info.normres::Vector{<:Real}`: list of the same length as `vals` containing the norm of the residual for every Schur vector, i.e. `info.normres[i] = norm(info.residual[i])`
  - `info.numops::Int`: number of times the linear map was applied, i.e. number of times `f` was called, or a vector was multiplied with `A`
  - `info.numiter::Int`: number of times the Krylov subspace was restarted (see below)
No warning is printed if not all requested eigenvalues were converged, so always check if info.converged >= howmany.
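A minimal schursolve call could look as follows; the random matrix and the algorithm parameters are purely illustrative:

```julia
using LinearAlgebra, KrylovKit

A = randn(40, 40)          # general (nonsymmetric) real matrix
x₀ = randn(40)

T, vecs, vals, info = schursolve(A, x₀, 2, :LM, Arnoldi(; krylovdim = 30))

# T is quasi upper triangular (at most 2x2 diagonal blocks for real arithmetic),
# and vecs holds the corresponding orthonormal real Schur vectors.
```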
Algorithm
The actual algorithm is an implementation of the Krylov-Schur algorithm, where the Arnoldi algorithm is used to generate the Krylov subspace. During the algorithm, the Krylov subspace is dynamically grown and shrunk, i.e. the restarts are so-called thick restarts where a part of the current Krylov subspace is kept.
Note that, for symmetric or hermitian linear maps, the eigenvalue and Schur factorization are equivalent, and one can simply use eigsolve.
Another example of a possible use case of schursolve is when the linear map is known to have a unique eigenvalue of, say, largest magnitude. Then, if the linear map is real valued, that largest magnitude eigenvalue and its corresponding eigenvector are also real valued. eigsolve will automatically return complex valued eigenvectors for reasons of type stability. However, as the first Schur vector will coincide with the first eigenvector, one can instead use
```julia
T, vecs, vals, info = schursolve(A, x₀, 1, :LM, Arnoldi(...))
```

and use vecs[1] as the real-valued eigenvector (after checking info.converged) corresponding to the largest magnitude eigenvalue of A.
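For instance, with a hypothetical row-stochastic matrix, whose dominant eigenvalue 1 is real and simple:

```julia
using LinearAlgebra, KrylovKit

A = [0.9 0.1 0.0;
     0.2 0.7 0.1;
     0.0 0.3 0.7]
x₀ = rand(3)

T, vecs, vals, info = schursolve(A, x₀, 1, :LM, Arnoldi())
v = vecs[1]                 # real-valued dominant eigenvector

# eigsolve on the same problem returns a complex eigenvector instead.
λs, ws, _ = eigsolve(A, x₀, 1, :LM)
```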