Singular value problems
Computing a few singular values and corresponding left and right singular vectors is done using the function svdsolve:
KrylovKit.svdsolve — Function
svdsolve(A::AbstractMatrix, [howmany = 1, which = :LR, T = eltype(A)]; kwargs...)
svdsolve(f, m::Int, n::Int, [howmany = 1, which = :LR, T = Float64]; kwargs...)
svdsolve(f, x₀, y₀, [howmany = 1, which = :LR]; kwargs...)
svdsolve(f, x₀, y₀, howmany, which, algorithm)

Compute howmany singular values from the linear map encoded in the matrix A or by the function f. Return singular values, left and right singular vectors, and a ConvergenceInfo structure.
Arguments:
The linear map can be an AbstractMatrix (dense or sparse) or a general function or callable object. Since both the action of the linear map and its adjoint are required in order to compute singular values, f can either be a tuple of two callable objects (each accepting a single argument), representing the linear map and its adjoint respectively, or, f can be a single callable object that accepts two input arguments, where the second argument is a flag that indicates whether the adjoint or the normal action of the linear map needs to be computed. The latter form still combines well with the do block syntax of Julia, as in
vals, lvecs, rvecs, info = svdsolve(x₀, y₀, howmany, which; kwargs...) do x, flag
    if flag
        # y = compute action of adjoint map on x
    else
        # y = compute action of linear map on x
    end
    return y
end

For a general linear map encoded using either the tuple or the two-argument form, the best approach is to provide a start vector x₀ (in the domain of the linear map). Alternatively, one can specify the number n of columns of the linear map, in which case x₀ = rand(T, n) is used, where the default value of T is Float64, unless specified differently. If an AbstractMatrix is used, a starting vector x₀ does not need to be provided; it is chosen as rand(T, size(A,1)).
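The two ways of encoding a general linear map can be illustrated without KrylovKit itself: both must reproduce the action of a matrix and of its adjoint. A minimal sketch (the matrix and vectors below are arbitrary examples):

```julia
using LinearAlgebra

A = [2.0 1.0; 0.0 3.0; 1.0 0.0]   # a 3×2 map, for illustration only

# Two-argument form: the second argument selects the adjoint action.
f(x, flag) = flag ? A' * x : A * x

# Equivalent tuple form: (linear map, adjoint map), each taking one argument.
fpair = (x -> A * x, y -> A' * y)

x₀ = rand(2)      # start vector in the domain of the map
y₀ = A * x₀       # an element of the codomain

f(x₀, false) == A * x₀       # normal action
f(y₀, true)  == A' * y₀      # adjoint action
fpair[2](y₀) == A' * y₀      # tuple form agrees
```

Either `f` or `fpair` could then be passed as the first argument of `svdsolve` in place of the matrix `A`.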
The next arguments are optional, but should typically be specified. howmany specifies how many singular values and vectors should be computed; which specifies which singular values should be targeted. Valid specifications of which are:
LR: largest singular values
SR: smallest singular values
However, the largest singular values tend to converge more rapidly.
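What the two settings target can be seen with the dense svdvals from the LinearAlgebra standard library (the matrix here is just a toy example):

```julia
using LinearAlgebra

A = [4.0 0.0; 0.0 1.0; 0.0 0.0]
s = svdvals(A)     # singular values in decreasing order

# which = :LR targets the largest values, which = :SR the smallest:
maximum(s)   # what :LR would converge to first (4.0 here)
minimum(s)   # what :SR would converge to first (1.0 here)
```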
Return values:
The return value is always of the form vals, lvecs, rvecs, info = svdsolve(...) with
vals: a Vector{<:Real} containing the singular values, of length at least howmany, but could be longer if more singular values were converged at the same cost.
lvecs: a Vector of corresponding left singular vectors, of the same length as vals.
rvecs: a Vector of corresponding right singular vectors, of the same length as vals. Note that singular vectors are not returned as a matrix, as the linear map could act on any custom Julia type with vector-like behavior, i.e. the elements of the lists lvecs (rvecs) are objects that are typically similar to the starting guess y₀ (x₀), up to a possibly different eltype. When the linear map is a simple AbstractMatrix, lvecs and rvecs will be Vector{Vector{<:Number}}.
info: an object of type ConvergenceInfo, which has the following fields
info.converged::Int: indicates how many singular values and vectors were actually converged to the specified tolerance tol (see below under keyword arguments)
info.residual::Vector: a list of the same length as vals containing the residuals info.residual[i] = A * rvecs[i] - vals[i] * lvecs[i].
info.normres::Vector{<:Real}: list of the same length as vals containing the norm of the residual, info.normres[i] = norm(info.residual[i])
info.numops::Int: number of times the linear map was applied, i.e. number of times f was called, or a vector was multiplied with A or A'.
info.numiter::Int: number of times the Krylov subspace was restarted (see below)
No warning is printed if not all requested singular values were converged, so always check if info.converged >= howmany.
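The residual convention A * rvecs[i] - vals[i] * lvecs[i] can be checked against an exact singular triple from the dense svd in LinearAlgebra (the dense factorization is used here only for illustration, not KrylovKit itself):

```julia
using LinearAlgebra

A = [3.0 1.0; 1.0 2.0; 0.0 1.0]
F = svd(A)

# For an exact triple (u, σ, v), the residual A*v - σ*u vanishes
# up to roundoff; svdsolve reports the analogous quantity per triple.
u, σ, v = F.U[:, 1], F.S[1], F.V[:, 1]
residual = A * v - σ * u
norm(residual) < 1e-12   # true: exact triples have (near-)zero residual
```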
Keyword arguments:
Keyword arguments and their default values are given by:
verbosity::Int = 0: verbosity level, i.e. 0 (no messages), 1 (single message at the end), 2 (information after every iteration), 3 (information per Krylov step)
krylovdim: the maximum dimension of the Krylov subspace that will be constructed. Note that the dimension of the vector space is not known or checked, e.g. x₀ should not necessarily support the Base.length function. If you know the actual problem dimension is smaller than the default value, it is useful to reduce the value of krylovdim, though in principle this should be detected.
tol: the requested accuracy according to normres as defined above. If you work in e.g. single precision (Float32), you should definitely change the default value.
maxiter: the number of times the Krylov subspace can be rebuilt; see below for further details on the algorithms.
orth: the orthogonalization method to be used, see Orthogonalizer
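To see why tol must be adjusted in single precision: a double-precision-style tolerance lies far below what Float32 arithmetic can resolve. The relaxed value sketched in the comment is an arbitrary illustration, not a library recommendation:

```julia
# Float32 resolves relative differences only down to about 1.2e-7,
# so a tolerance such as 1e-12 can never be met in single precision.
eps32 = eps(Float32)       # machine epsilon for Float32
unreachable = eps32 > 1e-12   # true: 1e-12 is below Float32 resolution

# A hypothetical relaxed setting for a Float32 problem might look like:
# svdsolve(A, 1, :LR; tol = 100 * eps(Float32))
```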
Algorithm
The last method, without default values and keyword arguments, is the one that is finally called, and can also be used directly. Here the algorithm is specified explicitly, though currently only GKL is available. GKL refers to the partial Golub-Kahan-Lanczos bidiagonalization, which forms the basis for computing the approximation to the singular values. This factorization is dynamically shrunk and expanded (i.e. thick restart), similar to the Krylov-Schur factorization for eigenvalue problems.
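The bidiagonalization underlying GKL can be sketched in a few lines. This is a simplified version with full reorthogonalization and no thick-restart machinery; `gkl` is a hypothetical helper name for illustration, not part of KrylovKit:

```julia
using LinearAlgebra

# k steps of Golub-Kahan-Lanczos bidiagonalization: builds orthonormal
# U (m×k) and V (n×k) with A * V = U * B, where B is upper bidiagonal.
# The singular values of the small B approximate those of A.
function gkl(A, v₁, k)
    m, n = size(A)
    U = zeros(m, k); V = zeros(n, k)
    α = zeros(k); β = zeros(k - 1)
    V[:, 1] = v₁ / norm(v₁)
    for j in 1:k
        u = A * V[:, j]
        j > 1 && (u -= β[j-1] * U[:, j-1])
        u -= U[:, 1:j-1] * (U[:, 1:j-1]' * u)   # full reorthogonalization
        α[j] = norm(u); U[:, j] = u / α[j]
        if j < k
            v = A' * U[:, j] - α[j] * V[:, j]
            v -= V[:, 1:j] * (V[:, 1:j]' * v)   # full reorthogonalization
            β[j] = norm(v); V[:, j+1] = v / β[j]
        end
    end
    return U, Bidiagonal(α, β, :U), V
end

A = randn(8, 6)
U, B, V = gkl(A, randn(6), 4)
# A * V ≈ U * B holds, and U, V have orthonormal columns.
```

The thick-restart strategy mentioned above keeps the best-converged part of such a factorization when the Krylov subspace reaches krylovdim, and expands it again, instead of restarting from scratch.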