Install KrylovKit.jl via the package manager:

    using Pkg
    Pkg.add("KrylovKit")

    KrylovKit.jl is a pure Julia package; no dependencies (aside from the Julia standard library) are required.

    Getting started

    After installation, start by loading KrylovKit

    using KrylovKit

    The help entry of the KrylovKit module states


    A Julia package collecting a number of Krylov-based algorithms for linear problems, singular value and eigenvalue problems and the application of functions of linear maps or operators to vectors.

    KrylovKit accepts general functions or callable objects as linear maps, and general Julia objects with vector-like behavior as vectors.

    The high level interface of KrylovKit is provided by the following functions:

    • linsolve: solve linear systems
    • eigsolve: find a few eigenvalues and corresponding eigenvectors
    • geneigsolve: find a few generalized eigenvalues and corresponding vectors
    • svdsolve: find a few singular values and corresponding left and right singular vectors
    • exponentiate: apply the exponential of a linear map to a vector
    • expintegrator: exponential integrator for a linear non-homogeneous ODE, computes a linear combination of the ϕⱼ functions which generalize ϕ₀(z) = exp(z).
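    As a first sketch of this interface (the matrix A and right-hand side b below are made up purely for illustration):

    ```julia
    using KrylovKit, LinearAlgebra

    A = [2.0 1.0; 1.0 3.0]   # a small symmetric matrix, for illustration only
    b = [1.0, 0.0]

    x, info = linsolve(A, b)                 # iteratively solve A*x = b
    vals, vecs, info = eigsolve(A, 1, :LM)   # one eigenvalue of largest magnitude
    ```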

    Common interface

    The high-level functions linsolve, eigsolve, geneigsolve, svdsolve, exponentiate and expintegrator follow a common interface

    results..., info = problemsolver(A, args...; kwargs...)

    where problemsolver is one of the functions above. Here, A is the linear map in the problem, which could be an instance of AbstractMatrix, or any function or callable object that encodes the action of the linear map on a vector. In particular, one can write the linear map using Julia's do block syntax as

    results..., info = problemsolver(args...; kwargs...) do x
        y = # implement linear map on x
        return y
    end

    Read the documentation for problems that require both the linear map and its adjoint to be implemented, e.g. svdsolve, or that require two different linear maps, e.g. geneigsolve.
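    For example, a trivial diagonal linear map passed via the do block syntax (the map x ↦ 3x below is a hypothetical example):

    ```julia
    using KrylovKit

    b = ones(10)
    x, info = linsolve(b) do v
        3.0 .* v          # the linear map x ↦ 3x, for illustration
    end
    # x should satisfy 3x = b, i.e. x ≈ b ./ 3
    ```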

    Furthermore, args is a set of additional arguments to specify the problem. The keyword arguments kwargs contain information about the linear map (issymmetric, ishermitian, isposdef) and about the solution strategy (tol, krylovdim, maxiter). Finally, there is a keyword argument verbosity that determines how much information is printed to STDOUT. The default value verbosity = 0 means that no information will be printed. With verbosity = 1, a single message at the end of the algorithm will be displayed, which is a warning if the algorithm did not succeed in finding the solution, or some information if it did. For verbosity = 2, information about the current state is displayed after every iteration of the algorithm. Finally, for verbosity > 2, information about the individual Krylov expansion steps is displayed.
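    For instance, the map hints and strategy parameters described above could be passed as follows (all values are illustrative, not recommendations):

    ```julia
    using KrylovKit

    A = [4.0 1.0; 1.0 3.0]   # illustrative symmetric positive definite matrix
    b = [1.0, 2.0]
    x, info = linsolve(A, b; issymmetric = true, isposdef = true,
                       tol = 1e-12, krylovdim = 10, maxiter = 100, verbosity = 1)
    ```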

    The return value contains one or more entries that define the solution, and a final entry info of type ConvergenceInfo that encodes information about the solution, i.e. whether it has converged, the residual(s) and the norm thereof, and the number of operations used:

    struct ConvergenceInfo{S,T}

    Used to return information about the solution found by the iterative method.

    • converged: the number of solutions that have converged according to an appropriate error measure and requested tolerance for the problem. Its value can be zero or one for linsolve, exponentiate and expintegrator, or any integer >= 0 for eigsolve, schursolve or svdsolve.
    • residual: the (list of) residual(s) for the problem, or nothing for problems without the concept of a residual (i.e. exponentiate, expintegrator). This is a single vector (of the same type as the type of vectors used in the problem) for linsolve, or a Vector of such vectors for eigsolve, schursolve or svdsolve.
    • normres: the norm of the residual(s) (in the previous field) or the value of any other error measure that is appropriate for the problem. This is a Real for linsolve and exponentiate, and a Vector{<:Real} for eigsolve, schursolve and svdsolve. The number of values in normres that are smaller than a predefined tolerance corresponds to the number converged of solutions that have converged.
    • numiter: the number of iterations (sometimes called restarts) used by the algorithm.
    • numops: the number of times the linear map or operator was applied.

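    The fields above can be inspected directly on the returned info object; a minimal sketch (the matrix is made up for illustration):

    ```julia
    using KrylovKit

    A = [2.0 0.0; 0.0 -1.0]
    vals, vecs, info = eigsolve(A, 2, :LM; ishermitian = true)

    info.converged   # how many of the requested eigenpairs converged
    info.normres     # Vector{<:Real}: one residual norm per eigenpair
    info.numops      # number of applications of the linear map
    ```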
    There is also an expert interface where the user specifies the algorithm that should be used explicitly, i.e.

    results..., info = problemsolver(A, args..., algorithm(; kwargs...))

    Most algorithm constructions take the same keyword arguments (tol, krylovdim, maxiter and verbosity) discussed above.
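    For example, selecting GMRES explicitly for a non-symmetric linear problem (matrix and parameter values chosen for illustration only):

    ```julia
    using KrylovKit

    A = [4.0 1.0; 0.0 3.0]
    b = [1.0, 1.0]
    x, info = linsolve(A, b, GMRES(; tol = 1e-10, krylovdim = 20, maxiter = 50))
    ```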

    As mentioned before, there are two auxiliary structs that can be used to define new vectors, namely

    v = RecursiveVec(vecs)

    Create a new vector v from an existing (homogeneous or heterogeneous) list of vectors vecs with one or more elements, represented as a Tuple or AbstractVector. The elements of vecs can be any type of vectors that are supported by KrylovKit. For a heterogeneous list, it is best to use a tuple for reasons of type stability, while for a homogeneous list, either a Tuple or a Vector can be used. From a mathematical perspective, v represents the direct sum of the vectors in vecs. Scalar multiplication and addition of vectors v act simultaneously on all elements of v.vecs. The inner product corresponds to the sum of the inner products of the individual vectors in the list v.vecs.

    The vector v also supports the iteration syntax, producing the individual vectors in v.vecs. Hence, length(v) = length(v.vecs). It can also be indexed, so that v[i] = v.vecs[i], which can be useful in writing a linear map that acts on v.
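    A short sketch of both uses (the block-diagonal map below, which scales the two components by 2 and 3, is a hypothetical example):

    ```julia
    using KrylovKit

    v = RecursiveVec((rand(3), rand(5)))   # direct sum of a length-3 and a length-5 vector
    length(v)                              # 2, the number of component vectors
    v[1]                                   # the first component, i.e. v.vecs[1]

    # a linear map acting componentwise on the direct sum:
    vals, vecs, info = eigsolve(v, 1, :LM) do w
        RecursiveVec((2 .* w[1], 3 .* w[2]))
    end
    # the dominant eigenvalue of this map is 3
    ```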

    v = InnerProductVec(vec, dotf)

    Create a new vector v from an existing vector vec with a modified inner product given by dotf. The vector vec, which can be any type (not necessarily Vector) that supports the basic vector interface required by KrylovKit, is wrapped in a custom struct v::InnerProductVec. All vector space functionality such as addition and multiplication with scalars (both out of place and in place using mul!, rmul!, axpy! and axpby!) applied to v is simply forwarded to v.vec. The inner product between vectors v1 = InnerProductVec(vec1, dotf) and v2 = InnerProductVec(vec2, dotf) is computed as dot(v1, v2) = dotf(v1.vec, v2.vec) = dotf(vec1, vec2). The inner product between vectors with different dotf functions is not defined. Similarly, the norm of v::InnerProductVec is defined as norm(v) = sqrt(real(dot(v, v))) = sqrt(real(dotf(vec, vec))).

    In a (linear) map applied to v, the original vector can be obtained as v.vec or simply as v[].
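    A minimal sketch, using a made-up weighted inner product as dotf:

    ```julia
    using KrylovKit, LinearAlgebra

    weights = [1.0, 2.0, 3.0, 4.0]
    wdot(x, y) = dot(x, weights .* y)      # a weighted inner product, for illustration

    v1 = InnerProductVec([1.0, 0.0, 0.0, 0.0], wdot)
    v2 = InnerProductVec([0.0, 1.0, 0.0, 0.0], wdot)

    dot(v1, v2)    # wdot(v1.vec, v2.vec), here 0.0
    norm(v1)       # sqrt(wdot(v1.vec, v1.vec)), here 1.0
    v1[]           # recover the wrapped vector, same as v1.vec
    ```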