Tensors

Type hierarchy

The abstract supertype of all tensors in TensorKit is given by AbstractTensorMap:

TensorKit.AbstractTensorMap – Type
abstract type AbstractTensorMap{T<:Number, S<:IndexSpace, N₁, N₂} end

Abstract supertype of all tensor maps, i.e. linear maps between tensor products of vector spaces of type S<:IndexSpace, with element type T. An AbstractTensorMap maps from an input space of type ProductSpace{S, N₂} to an output space of type ProductSpace{S, N₁}.


The following concrete subtypes are provided within the TensorKit library:

TensorKit.TensorMap – Type
struct TensorMap{T, S<:IndexSpace, N₁, N₂, A<:DenseVector{T}} <: AbstractTensorMap{T, S, N₁, N₂}

Specific subtype of AbstractTensorMap for representing tensor maps (morphisms in a tensor category), where the data is stored in a dense vector.

TensorKit.BraidingTensor – Type
struct BraidingTensor{T,S<:IndexSpace} <: AbstractTensorMap{T, S, 2, 2}
BraidingTensor(V1::S, V2::S, adjoint::Bool=false) where {S<:IndexSpace}

Specific subtype of AbstractTensorMap for representing the braiding tensor that braids the first input over the second input; its inverse can be obtained as the adjoint.

It holds that domain(BraidingTensor(V1, V2)) == V1 ⊗ V2 and codomain(BraidingTensor(V1, V2)) == V2 ⊗ V1.


Of those, TensorMap provides the generic instantiation of our tensor concept. It supports various constructors, which are discussed in the next subsection.

Furthermore, some aliases are provided for convenience:

TensorKit.AbstractTensor – Type
AbstractTensor{T,S,N} = AbstractTensorMap{T,S,N,0}

Abstract supertype of all tensors, i.e. elements in the tensor product space of type ProductSpace{S, N}, with element type T.

An AbstractTensor{T, S, N} is a special case of AbstractTensorMap{T, S, N, 0}, i.e. a tensor map with only non-trivial output spaces.

TensorKit.Tensor – Type
Tensor{T, S, N, A<:DenseVector{T}} = TensorMap{T, S, N, 0, A}

Specific subtype of AbstractTensor for representing tensors whose data is stored in a dense vector.

A Tensor{T, S, N, A} is a special case of TensorMap{T, S, N, 0, A}, i.e. a tensor map with only a non-trivial output space.


TensorMap constructors

General constructors

A TensorMap with undefined data can be constructed by specifying its domain and codomain:

TensorKit.TensorMap – Method
TensorMap{T}(undef, codomain::ProductSpace{S,N₁}, domain::ProductSpace{S,N₂})
            where {T,S,N₁,N₂}
TensorMap{T}(undef, codomain ← domain)
TensorMap{T}(undef, domain → codomain)
# expert mode: select storage type `A`
TensorMap{T,S,N₁,N₂,A}(undef, codomain ← domain)
TensorMap{T,S,N₁,N₂,A}(undef, domain → codomain)

Construct a TensorMap with uninitialized data.


The resulting object can then be filled with data via the setindex! method discussed below, via functions such as VectorInterface.zerovector!, rand! or fill!, or it can be used as an output argument in one of the many methods that accept output arguments, or in an @tensor output[...] = ... expression.
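
For example, a minimal sketch (using the exported complex Euclidean spaces ℂ^n) that constructs an uninitialized tensor map and fills it in place:

using TensorKit, Random

V = ℂ^2
t = TensorMap{Float64}(undef, V ⊗ V ← V)  # two output spaces, one input space
fill!(t, 0.0)     # initialize all free parameters to zero
Random.rand!(t)   # or overwrite them with random entries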

Alternatively, a TensorMap can be constructed by specifying its data, codomain and domain in one of the following ways:

TensorKit.TensorMap – Method
TensorMap(data::AbstractDict{<:Sector,<:AbstractMatrix}, codomain::ProductSpace{S,N₁},
            domain::ProductSpace{S,N₂}) where {S<:ElementarySpace,N₁,N₂}
TensorMap(data, codomain ← domain)
TensorMap(data, domain → codomain)

Construct a TensorMap by explicitly specifying its block data.

Arguments

  • data::AbstractDict{<:Sector,<:AbstractMatrix}: dictionary containing the block data for each coupled sector c as a matrix of size (blockdim(codomain, c), blockdim(domain, c)).
  • codomain::ProductSpace{S,N₁}: the codomain as a ProductSpace of N₁ spaces of type S<:ElementarySpace.
  • domain::ProductSpace{S,N₂}: the domain as a ProductSpace of N₂ spaces of type S<:ElementarySpace.

Alternatively, the domain and codomain can be specified by passing a HomSpace using the syntax codomain ← domain or domain → codomain.

TensorKit.TensorMap – Method
TensorMap(data::AbstractArray, codomain::ProductSpace{S,N₁}, domain::ProductSpace{S,N₂};
                tol=sqrt(eps(real(float(eltype(data)))))) where {S<:ElementarySpace,N₁,N₂}
TensorMap(data, codomain ← domain; tol=sqrt(eps(real(float(eltype(data))))))
TensorMap(data, domain → codomain; tol=sqrt(eps(real(float(eltype(data))))))

Construct a TensorMap from a plain multidimensional array.

Arguments

  • data::DenseArray: tensor data as a plain array.
  • codomain::ProductSpace{S,N₁}: the codomain as a ProductSpace of N₁ spaces of type S<:ElementarySpace.
  • domain::ProductSpace{S,N₂}: the domain as a ProductSpace of N₂ spaces of type S<:ElementarySpace.
  • tol=sqrt(eps(real(float(eltype(data)))))::Float64: tolerance used to check whether the data respects the symmetry structure (see below).

Here, data can be specified in three ways:

  1. data can be a DenseVector of length dim(codomain ← domain); in that case it represents the actual independent entries of the tensor map. An instance will be created that directly references data.
  2. data can be an AbstractMatrix of size (dim(codomain), dim(domain))
  3. data can be an AbstractArray of rank N₁ + N₂ with a size matching that of the domain and codomain spaces, i.e. size(data) == (dims(codomain)..., dims(domain)...)

In cases 2 and 3, the TensorMap constructor will reconstruct the tensor data such that the resulting tensor t satisfies data == convert(Array, t), up to an error specified by tol. For the case where sectortype(S) == Trivial and data isa DenseArray, the data array is simply reshaped into a vector and used as in case 1 so that the memory will still be shared. In other cases, new memory will be allocated.

Note that when N₁ + N₂ == 1, case 3 also amounts to data being a vector, whereas when N₁ + N₂ == 2, cases 2 and 3 both require data to be a matrix. Such ambiguous cases are resolved by checking the size of data in an attempt to support all possible cases.

Note

This constructor for cases 2 and 3 only works for sectortype values for which conversion to a plain array is possible, and only in the case where the data actually respects the specified symmetry structure, up to a tolerance tol.

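As an illustration of case 3, a minimal sketch for trivial symmetry, where a rank-3 array is wrapped as a tensor map with two output spaces and one input space:

using TensorKit

data = randn(2, 2, 2)
t = TensorMap(data, ℂ^2 ⊗ ℂ^2 ← ℂ^2)  # size(data) == (dims(codomain)..., dims(domain)...)
convert(Array, t) ≈ data               # round-trips up to tol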

Finally, we also support the following Array-like constructors:

Base.zeros – Method
zeros([T=Float64,], codomain::ProductSpace{S,N₁}, domain::ProductSpace{S,N₂}) where {S,N₁,N₂,T}
zeros([T=Float64,], codomain ← domain)

Create a TensorMap of all zeros, with element type T and spaces specified by codomain and domain.

Base.ones – Method
ones([T=Float64,], codomain::ProductSpace{S,N₁}, domain::ProductSpace{S,N₂}) where {S,N₁,N₂,T}
ones([T=Float64,], codomain ← domain)

Create a TensorMap of all ones, with element type T and spaces specified by codomain and domain.

Base.rand – Method
rand([rng=default_rng()], [T=Float64], codomain::ProductSpace{S,N₁},
             domain::ProductSpace{S,N₂}) where {S,N₁,N₂,T} -> t
rand([rng=default_rng()], [T=Float64], codomain ← domain) -> t

Generate a tensor t with entries generated by rand.

See also rand!.

Base.randn – Method
randn([rng=default_rng()], [T=Float64], codomain::ProductSpace{S,N₁},
             domain::ProductSpace{S,N₂}) where {S,N₁,N₂,T} -> t
randn([rng=default_rng()], [T=Float64], codomain ← domain) -> t

Generate a tensor t with entries generated by randn.

See also randn!.

Random.randexp – Method
randexp([rng=default_rng()], [T=Float64], codomain::ProductSpace{S,N₁},
             domain::ProductSpace{S,N₂}) where {S,N₁,N₂,T} -> t
randexp([rng=default_rng()], [T=Float64], codomain ← domain) -> t

Generate a tensor t with entries generated by randexp.

See also randexp!.

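For instance, a short sketch of these constructors (all using spaces built from ℂ^n):

using TensorKit

t₀ = zeros(ℂ^2 ⊗ ℂ^2 ← ℂ^2)              # Float64 by default
t₁ = ones(ComplexF64, ℂ^2 ← ℂ^2)
t₂ = randn(ComplexF64, ℂ^2 ⊗ ℂ^3 ← ℂ^3)  # i.i.d. standard normal entries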

as well as a similar constructor

Base.similar – Method
similar(t::AbstractTensorMap, [AorT=storagetype(t)], [V=space(t)])
similar(t::AbstractTensorMap, [AorT=storagetype(t)], codomain, domain)

Creates an uninitialized mutable tensor with the given scalar type or storage type AorT and structure V or codomain ← domain, based on the source tensor map. The second and third arguments are both optional, defaulting to the given tensor's storagetype and space. The structure may be specified either as a single HomSpace argument or as codomain and domain.

By default, this will result in TensorMap{T}(undef, V) when custom objects do not specialize this method.

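For example, a sketch of similar being used to allocate compatible tensors:

using TensorKit

s = randn(ℂ^2 ⊗ ℂ^2 ← ℂ^2)
t1 = similar(s, ComplexF64)   # same spaces, different scalar type
t2 = similar(s, ℂ^2 ← ℂ^2)    # same storage type, different spaces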

Specific constructors

Additionally, the following methods can be used to construct specific TensorMap instances.

TensorKit.id – Function
id([T::Type=Float64,] V::TensorSpace) -> TensorMap

Construct the identity endomorphism on space V, i.e. return a t::TensorMap with domain(t) == codomain(t) == V, where either scalartype(t) = T if T is a Number type or storagetype(t) = T if T is a DenseVector type.

TensorKit.isomorphism – Function
isomorphism([T::Type=Float64,] codomain::TensorSpace, domain::TensorSpace) -> TensorMap
isomorphism([T::Type=Float64,] codomain ← domain) -> TensorMap
isomorphism([T::Type=Float64,] domain → codomain) -> TensorMap

Construct a specific isomorphism between the codomain and the domain, i.e. return a t::TensorMap where either scalartype(t) = T if T is a Number type or storagetype(t) = T if T is a DenseVector type. If the spaces are not isomorphic, an error will be thrown.

Note

There is no canonical choice for a specific isomorphism, but the current choice is such that isomorphism(cod, dom) == inv(isomorphism(dom, cod)).

See also unitary when InnerProductStyle(cod) === EuclideanInnerProduct().

TensorKit.unitary – Function
unitary([T::Type=Float64,] codomain::TensorSpace, domain::TensorSpace) -> TensorMap
unitary([T::Type=Float64,] codomain ← domain) -> TensorMap
unitary([T::Type=Float64,] domain → codomain) -> TensorMap

Construct a specific unitary morphism between the codomain and the domain, i.e. return a t::TensorMap where either scalartype(t) = T if T is a Number type or storagetype(t) = T if T is a DenseVector type. If the spaces are not isomorphic, or the spacetype does not have a Euclidean inner product, an error will be thrown.

Note

There is no canonical choice for a specific unitary, but the current choice is such that unitary(cod, dom) == inv(unitary(dom, cod)) == adjoint(unitary(dom, cod)).

See also isomorphism and isometry.

TensorKit.isometry – Function
isometry([T::Type=Float64,] codomain::TensorSpace, domain::TensorSpace) -> TensorMap
isometry([T::Type=Float64,] codomain ← domain) -> TensorMap
isometry([T::Type=Float64,] domain → codomain) -> TensorMap

Construct a specific isometry between the codomain and the domain, i.e. return a t::TensorMap where either scalartype(t) = T if T is a Number type or storagetype(t) = T if T is a DenseVector type. The isometry t then satisfies t' * t = id(domain) and (t * t')^2 = t * t'. If the spaces do not allow for such an isometric inclusion, an error will be thrown.

See also isomorphism and unitary.

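A short sketch of these constructors and the relations they satisfy:

using TensorKit

V, W = ℂ^3, ℂ^2
t = id(V)                   # domain(t) == codomain(t) == V
u = unitary(V ⊗ W, W ⊗ V)   # V ⊗ W and W ⊗ V are isomorphic
w = isometry(V, W)          # isometric inclusion of W into V
u' * u ≈ id(W ⊗ V)          # unitarity
w' * w ≈ id(W)              # isometry condition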

AbstractTensorMap properties and data access

The following methods exist to obtain type information:

Base.eltype – Method
eltype(::AbstractTensorMap) -> Type{T}
eltype(::Type{<:AbstractTensorMap}) -> Type{T}

Return the scalar or element type T of a tensor.

TensorKit.spacetype – Method
spacetype(::AbstractTensorMap) -> Type{S<:IndexSpace}
spacetype(::Type{<:AbstractTensorMap}) -> Type{S<:IndexSpace}

Return the type of the elementary space S of a tensor.

TensorKit.sectortype – Method
sectortype(::AbstractTensorMap) -> Type{I<:Sector}
sectortype(::Type{<:AbstractTensorMap}) -> Type{I<:Sector}

Return the type of sector I of a tensor.

TensorKit.field – Method
field(::AbstractTensorMap) -> Type{𝔽<:Field}
field(::Type{<:AbstractTensorMap}) -> Type{𝔽<:Field}

Return the type of field 𝔽 of a tensor.

TensorKit.storagetype – Function
storagetype(t::AbstractTensorMap) -> Type{A<:AbstractVector}
storagetype(T::Type{<:AbstractTensorMap}) -> Type{A<:AbstractVector}

Return the type of vector that stores the data of a tensor.

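For example, for a sketch tensor with trivial symmetry, these methods return:

using TensorKit

t = randn(ComplexF64, ℂ^2 ⊗ ℂ^2 ← ℂ^2)
eltype(t)       # ComplexF64
spacetype(t)    # ComplexSpace
sectortype(t)   # Trivial
field(t)        # ℂ
storagetype(t)  # Vector{ComplexF64}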

To obtain information about the indices, you can use:

TensorKit.space – Method
space(t::AbstractTensorMap{T,S,N₁,N₂}) -> HomSpace{S,N₁,N₂}
space(t::AbstractTensorMap{T,S,N₁,N₂}, i::Int) -> S

The index information of a tensor, i.e. the HomSpace of its domain and codomain. If i is specified, return the i-th index space.

TensorKit.domain – Function
domain(t::AbstractTensorMap{T,S,N₁,N₂}) -> ProductSpace{S,N₂}
domain(t::AbstractTensorMap{T,S,N₁,N₂}, i::Int) -> S

Return the domain of a tensor, i.e. the product space of the input spaces. If i is specified, return the i-th input space. Implementations should provide domain(t).

See also codomain and space.

TensorKit.codomain – Function
codomain(t::AbstractTensorMap{T,S,N₁,N₂}) -> ProductSpace{S,N₁}
codomain(t::AbstractTensorMap{T,S,N₁,N₂}, i::Int) -> S

Return the codomain of a tensor, i.e. the product space of the output spaces. If i is specified, return the i-th output space. Implementations should provide codomain(t).

See also domain and space.

TensorKit.numin – Function
numin(::Union{TT,Type{TT}}) where {TT<:AbstractTensorMap} -> Int

Return the number of input spaces of a tensor. This is equivalent to the number of spaces in the domain of that tensor.

See also numout and numind.

TensorKit.numout – Function
numout(::Union{TT,Type{TT}}) where {TT<:AbstractTensorMap} -> Int

Return the number of output spaces of a tensor. This is equivalent to the number of spaces in the codomain of that tensor.

See also numin and numind.

TensorKit.numind – Function
numind(::Union{T,Type{T}}) where {T<:AbstractTensorMap} -> Int

Return the total number of input and output spaces of a tensor. This is equivalent to the total number of spaces in the domain and codomain of that tensor.

See also numout and numin.

TensorKit.allind – Function
allind(::Union{TT,Type{TT}}) where {TT<:AbstractTensorMap} -> Tuple{Int}

Return all indices of a tensor, i.e. the indices of its domain and codomain.

See also codomainind and domainind.

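As an illustration, a sketch of the index queries:

using TensorKit

t = randn(ℂ^2 ⊗ ℂ^3 ← ℂ^4)
codomain(t)                     # ℂ^2 ⊗ ℂ^3
domain(t, 1)                    # ℂ^4
space(t, 3)                     # (ℂ^4)': input spaces are returned as duals
numout(t), numin(t), numind(t)  # (2, 1, 3)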

In TensorMap instances, all data is gathered in a single AbstractVector, which is internally organised into blocks associated with the total coupled charge; within each block live the subblocks associated with the different possible splitting-fusion tree pairs.

To obtain information about the structure of the data, you can use:

TensorKit.fusionblockstructure – Method
fusionblockstructure(t::AbstractTensorMap) -> TensorStructure

Return the necessary structure information to decompose a tensor in blocks labeled by coupled sectors and in subblocks labeled by a splitting-fusion tree couple.

TensorKitSectors.dim – Method
dim(t::AbstractTensorMap) -> Int

The total number of free parameters of a tensor, discounting the entries that are fixed by symmetry. This is also the dimension of the HomSpace on which the TensorMap is defined.

TensorKit.hasblock – Method
hasblock(t::AbstractTensorMap, c::Sector) -> Bool

Verify whether a tensor has a block corresponding to a coupled sector c.

TensorKit.fusiontrees – Method
fusiontrees(t::AbstractTensorMap)

Return an iterator over all splitting-fusion tree pairs of a tensor.

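For a symmetric sketch, consider a ℤ₂-graded space (using the exported alias Z2Space):

using TensorKit

V = Z2Space(0 => 2, 1 => 2)   # two-dimensional even and odd sectors
t = randn(V ⊗ V ← V)
dim(t)                        # number of symmetry-allowed free parameters
hasblock(t, Z2Irrep(0))       # true: coupled charge 0 occurs
collect(fusiontrees(t))       # all splitting-fusion tree pairs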

Data can be accessed (and modified) in a number of ways. To access the full matrix block associated with a coupled charge c, you can use block(t, c), while blocks(t) provides an iterator over all coupled charges and their matrix blocks.

To access the data associated with a specific fusion tree pair, you can use:

Base.getindex – Method
Base.getindex(t::TensorMap{T,S,N₁,N₂,I},
              f₁::FusionTree{I,N₁},
              f₂::FusionTree{I,N₂}) where {T,S,N₁,N₂,I<:Sector}
    -> StridedViews.StridedView
t[f₁, f₂]

Return a view into the data slice of t corresponding to the splitting-fusion tree pair (f₁, f₂). In particular, if f₁.coupled == f₂.coupled == c, then a StridedViews.StridedView of size (dims(codomain(t), f₁.uncoupled)..., dims(domain(t), f₂.uncoupled)) is returned which represents the slice of block(t, c) whose row indices correspond to f₁.uncoupled and column indices correspond to f₂.uncoupled.

Base.setindex! – Method
Base.setindex!(t::TensorMap{T,S,N₁,N₂,I},
               v,
               f₁::FusionTree{I,N₁},
               f₂::FusionTree{I,N₂}) where {T,S,N₁,N₂,I<:Sector}
t[f₁, f₂] = v

Copies v into the data slice of t corresponding to the splitting-fusion tree pair (f₁, f₂). Here, v can be any object that can be copied into a StridedViews.StridedView of size (dims(codomain(t), f₁.uncoupled)..., dims(domain(t), f₂.uncoupled)) using Base.copy!.

See also Base.getindex(::TensorMap{T,S,N₁,N₂,I<:Sector}, ::FusionTree{I<:Sector,N₁}, ::FusionTree{I<:Sector,N₂})

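A sketch of accessing and modifying these slices for the ℤ₂-symmetric tensor from above:

using TensorKit

V = Z2Space(0 => 2, 1 => 2)
t = randn(V ⊗ V ← V)
for (f₁, f₂) in fusiontrees(t)
    @show f₁.coupled, size(t[f₁, f₂])  # coupled charge and slice size
end
f₁, f₂ = first(fusiontrees(t))
t[f₁, f₂] = zeros(size(t[f₁, f₂]))     # setindex! copies v into the slice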

For a tensor t with FusionStyle(sectortype(t)) isa UniqueFusion, fusion trees are completely determined by the uncoupled sectors, and the data can be accessed in a more straightforward way:

Base.getindex – Method
Base.getindex(t::TensorMap,
              sectors::NTuple{N₁+N₂,I}) where {N₁,N₂,I<:Sector} 
    -> StridedViews.StridedView
t[sectors]

Return a view into the data slice of t corresponding to the splitting-fusion tree pair with combined uncoupled charges sectors. In particular, if sectors == (s₁..., s₂...) where s₁ and s₂ correspond to the uncoupled charges in the codomain and domain respectively, then a StridedViews.StridedView of size (dims(codomain(t), s₁)..., dims(domain(t), s₂)) is returned.

This method is only available for the case where FusionStyle(I) isa UniqueFusion, since it assumes a uniquely defined coupled charge.

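For instance, ℤ₂ is abelian (FusionStyle(Z2Irrep) isa UniqueFusion), so a sketch tensor like the one above can be addressed by uncoupled charges alone:

using TensorKit

V = Z2Space(0 => 2, 1 => 2)
t = randn(V ⊗ V ← V)
s = (Z2Irrep(0), Z2Irrep(1), Z2Irrep(1))  # charges must fuse consistently
size(t[s])                                 # (2, 2, 2)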

For a tensor t with sectortype(t) == Trivial, the data can be accessed and manipulated directly as a multidimensional array:

Base.getindex – Method
Base.getindex(t::AbstractTensorMap)
t[]

Return a view into the data of t as a StridedViews.StridedView of size (dims(codomain(t))..., dims(domain(t))...).

Base.getindex – Method
Base.getindex(t::AbstractTensorMap, indices::Vararg{Int})
t[indices]

Return a view into the data slice of t corresponding to indices, by slicing the StridedViews.StridedView into the full data array.

Base.setindex! – Method
Base.setindex!(t::AbstractTensorMap, v, indices::Vararg{Int})
t[indices] = v

Assigns v to the data slice of t corresponding to indices.

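For example, with trivial symmetry the tensor behaves like a plain array (a sketch):

using TensorKit

t = randn(ℂ^2 ⊗ ℂ^2 ← ℂ^2)
t[]               # StridedView of size (2, 2, 2)
t[1, 2, 1]        # a single entry
t[1, 2, 1] = 0.0  # direct assignment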

AbstractTensorMap operations

The operations that can be performed on an AbstractTensorMap can be organized into the following categories:

  • vector operations: these do not change the space or index structure of a tensor and can be straightforwardly implemented on the full data. All the methods described in VectorInterface.jl are supported. For compatibility reasons, we also provide implementations of equivalent methods from LinearAlgebra.jl, such as axpy! and axpby!.

  • index manipulations: these change (permute) the index structure of a tensor, which affects the data in a way that is fully determined by the categorical data of the sectortype of the tensor.

  • (planar) contractions and (planar) traces (i.e., contractions with identity tensors). Tensor contractions correspond to a combination of index manipulations followed by a composition or multiplication of the tensors in their role as linear maps. Tensor contractions are, however, so important and frequent that they require a dedicated implementation.

  • tensor factorisations, which rely on the identification of tensors with linear maps between tensor spaces. The factorisations are applied as ordinary matrix factorisations to the matrix blocks associated with the coupled charges.

Index manipulations

A general index manipulation of a TensorMap object can be built up from a transformation of the fusion trees combined with a permutation of the stored data. These manipulations come in three flavours: an out-of-place version (permute, braid, transpose, ...), a mutating version that writes into an existing destination (permute!, braid!, transpose!, ...), both of which are exported, and an expert-mode version (add_permute!, add_braid!, add_transpose!, ...) that additionally allows for addition and scaling, as well as the selection of a custom backend.

TensorKit.permute – Method
permute(tsrc::AbstractTensorMap, (p₁, p₂)::Index2Tuple;
        copy::Bool=false)
    -> tdst::TensorMap

Return tensor tdst obtained by permuting the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively.

If copy=false, tdst might share data with tsrc whenever possible. Otherwise, a copy is always made.

To permute into an existing destination, see permute! and add_permute!.

TensorKit.braid – Method
braid(tsrc::AbstractTensorMap, (p₁, p₂)::Index2Tuple, levels::IndexTuple;
      copy::Bool = false)
    -> tdst::TensorMap

Return tensor tdst obtained by braiding the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively. Here, levels is a tuple of length numind(tsrc) that assigns a level or height to the indices of tsrc, which determines whether they will braid over or under any other index with which they have to change places.

If copy=false, tdst might share data with tsrc whenever possible. Otherwise, a copy is always made.

To braid into an existing destination, see braid! and add_braid!.

Base.transpose – Method
transpose(tsrc::AbstractTensorMap, (p₁, p₂)::Index2Tuple;
          copy::Bool=false)
    -> tdst::TensorMap

Return tensor tdst obtained by transposing the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively. The new index positions should be attainable without any indices crossing each other, i.e., the permutation (p₁..., reverse(p₂)...) should constitute a cyclic permutation of (codomainind(tsrc)..., reverse(domainind(tsrc))...).

If copy=false, tdst might share data with tsrc whenever possible. Otherwise, a copy is always made.

To transpose into an existing destination, see transpose! and add_transpose!.

TensorKit.repartition – Method
repartition(tsrc::AbstractTensorMap{S}, N₁::Int, N₂::Int; copy::Bool=false) where {S}
    -> tdst::AbstractTensorMap{S,N₁,N₂}

Return tensor tdst obtained by repartitioning the indices of tsrc. The codomain and domain of tdst correspond to the first N₁ and last N₂ spaces of tsrc, respectively.

If copy=false, tdst might share data with tsrc whenever possible. Otherwise, a copy is always made.

To repartition into an existing destination, see repartition!.

TensorKitSectors.twist – Method
twist(tsrc::AbstractTensorMap, i::Int; inv::Bool=false) -> tdst
twist(tsrc::AbstractTensorMap, is; inv::Bool=false) -> tdst

Apply a twist to the ith index of tsrc and return the result as a new tensor. If inv=true, use the inverse twist.

See twist! for storing the result in place.

Base.permute! – Method
permute!(tdst::AbstractTensorMap, tsrc::AbstractTensorMap, (p₁, p₂)::Index2Tuple)
    -> tdst

Write into tdst the result of permuting the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively.

See permute for creating a new tensor and add_permute! for a more general version.

TensorKit.braid! – Function
braid!(tdst::AbstractTensorMap, tsrc::AbstractTensorMap,
       (p₁, p₂)::Index2Tuple, levels::Tuple)
    -> tdst

Write into tdst the result of braiding the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively. Here, levels is a tuple of length numind(tsrc) that assigns a level or height to the indices of tsrc, which determines whether they will braid over or under any other index with which they have to change places.

See braid for creating a new tensor and add_braid! for a more general version.

LinearAlgebra.transpose! – Function
transpose!(tdst::AbstractTensorMap, tsrc::AbstractTensorMap,
           (p₁, p₂)::Index2Tuple)
    -> tdst

Write into tdst the result of transposing the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively. The new index positions should be attainable without any indices crossing each other, i.e., the permutation (p₁..., reverse(p₂)...) should constitute a cyclic permutation of (codomainind(tsrc)..., reverse(domainind(tsrc))...).

See transpose for creating a new tensor and add_transpose! for a more general version.

TensorKit.repartition! – Function
repartition!(tdst::AbstractTensorMap{S}, tsrc::AbstractTensorMap{S}) where {S} -> tdst

Write into tdst the result of repartitioning the indices of tsrc. This is just a special case of a transposition that only changes the number of in- and outgoing indices.

See repartition for creating a new tensor.

TensorKit.twist! – Function
twist!(t::AbstractTensorMap, i::Int; inv::Bool=false) -> t
twist!(t::AbstractTensorMap, is; inv::Bool=false) -> t

Apply a twist to the ith index of t, or all indices in is, storing the result in t. If inv=true, use the inverse twist.

See twist for creating a new tensor.

TensorKit.add_permute! – Function
add_permute!(tdst::AbstractTensorMap, tsrc::AbstractTensorMap, (p₁, p₂)::Index2Tuple,
             α::Number, β::Number, backend::AbstractBackend...)

Return the updated tdst, which is the result of adding α * tsrc to tdst after permuting the indices of tsrc according to (p₁, p₂).

See also permute, permute!, add_braid!, add_transpose!.

TensorKit.add_braid! – Function
add_braid!(tdst::AbstractTensorMap, tsrc::AbstractTensorMap, (p₁, p₂)::Index2Tuple,
           levels::IndexTuple, α::Number, β::Number, backend::AbstractBackend...)

Return the updated tdst, which is the result of adding α * tsrc to tdst after braiding the indices of tsrc according to (p₁, p₂) and levels.

See also braid, braid!, add_permute!, add_transpose!.

TensorKit.add_transpose! – Function
add_transpose!(tdst::AbstractTensorMap, tsrc::AbstractTensorMap, (p₁, p₂)::Index2Tuple,
               α::Number, β::Number, backend::AbstractBackend...)

Return the updated tdst, which is the result of adding α * tsrc to tdst after transposing the indices of tsrc according to (p₁, p₂).

See also transpose, transpose!, add_permute!, add_braid!.

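To illustrate, a minimal sketch of permuting and repartitioning a tensor:

using TensorKit

t = randn(ℂ^2 ⊗ ℂ^3 ← ℂ^4)
t2 = permute(t, ((2, 3), (1,)))  # indices 2 and 3 become the codomain
t3 = repartition(t, 1, 2)        # one outgoing and two incoming indices
numout(t2), numin(t3)            # (2, 2)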

Tensor map composition, traces, contractions and tensor products

TensorKit.compose – Method
compose(t1::AbstractTensorMap, t2::AbstractTensorMap) -> AbstractTensorMap

Return the AbstractTensorMap that implements the composition of the two tensor maps t1 and t2.

TensorKit.trace_permute! – Function
trace_permute!(tdst::AbstractTensorMap, tsrc::AbstractTensorMap,
               (p₁, p₂)::Index2Tuple, (q₁, q₂)::Index2Tuple,
               α::Number, β::Number, backend=TO.DefaultBackend())

Return the updated tdst, which is the result of adding α * tsrc to tdst after permuting the indices of tsrc according to (p₁, p₂) and furthermore tracing the indices in q₁ and q₂.

TensorKit.contract! – Function
contract!(C::AbstractTensorMap,
          A::AbstractTensorMap, (oindA, cindA)::Index2Tuple,
          B::AbstractTensorMap, (cindB, oindB)::Index2Tuple,
          (p₁, p₂)::Index2Tuple,
          α::Number, β::Number,
          backend, allocator)

Return the updated C, which is the result of adding α * A * B to C after permuting the indices of A and B according to (oindA, cindA) and (cindB, oindB) respectively.

TensorKitSectors.:⊗ – Method
⊗(t1::AbstractTensorMap, t2::AbstractTensorMap, ...) -> TensorMap
otimes(t1::AbstractTensorMap, t2::AbstractTensorMap, ...) -> TensorMap

Compute the tensor product between two AbstractTensorMap instances, which results in a new TensorMap instance whose codomain is codomain(t1) ⊗ codomain(t2) and whose domain is domain(t1) ⊗ domain(t2).

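A sketch combining composition and the tensor product:

using TensorKit

A = randn(ℂ^2 ← ℂ^3)
B = randn(ℂ^3 ← ℂ^4)
C = compose(A, B)  # equivalently A * B: a map from ℂ^4 to ℂ^2
D = A ⊗ B          # codomain ℂ^2 ⊗ ℂ^3, domain ℂ^3 ⊗ ℂ^4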

TensorMap factorizations

TensorKit.leftorth – Function
leftorth(t::AbstractTensorMap, (leftind, rightind)::Index2Tuple;
            alg::OrthogonalFactorizationAlgorithm = QRpos()) -> Q, R

Create orthonormal basis Q for indices in leftind, and remainder R such that permute(t, (leftind, rightind)) = Q*R.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using leftorth!(t, alg = QRpos()).

Different algorithms are available, namely QR(), QRpos(), SVD() and Polar(). QR() and QRpos() use a standard QR decomposition, producing an upper triangular matrix R. Polar() produces a Hermitian and positive semidefinite R. QRpos() corrects the standard QR decomposition such that the diagonal elements of R are positive. Only QRpos() and Polar() are unique (no residual freedom) so that they always return the same result for the same input tensor t.

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and leftorth(!) is currently only implemented for InnerProductStyle(t) === EuclideanInnerProduct().

TensorKit.rightorth – Function
rightorth(t::AbstractTensorMap, (leftind, rightind)::Index2Tuple;
            alg::OrthogonalFactorizationAlgorithm = LQpos()) -> L, Q

Create orthonormal basis Q for indices in rightind, and remainder L such that permute(t, (leftind, rightind)) = L*Q.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using rightorth!(t, alg = LQpos()).

Different algorithms are available, namely LQ(), LQpos(), RQ(), RQpos(), SVD() and Polar(). LQ() and LQpos() produce a lower triangular matrix L and are computed using a QR decomposition of the transpose. RQ() and RQpos() produce an upper triangular remainder L and only work if the total left dimension is smaller than or equal to the total right dimension. LQpos() and RQpos() add an additional correction such that the diagonal elements of L are positive. Polar() produces a Hermitian and positive semidefinite L. Only LQpos(), RQpos() and Polar() are unique (no residual freedom) so that they always return the same result for the same input tensor t.

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and rightorth(!) is currently only implemented for InnerProductStyle(t) === EuclideanInnerProduct().

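For instance, a sketch of these orthogonal factorizations and the isometry condition they guarantee:

using TensorKit

t = randn(ℂ^4 ⊗ ℂ^4 ← ℂ^4)
Q, R = leftorth(t, ((1, 2), (3,)))    # QRpos by default
L, Q2 = rightorth(t, ((1,), (2, 3)))  # LQpos by default
Q' * Q ≈ id(domain(Q))                # Q has orthonormal columns
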
TensorKit.leftnull – Function
leftnull(t::AbstractTensor, (leftind, rightind)::Index2Tuple;
            alg::OrthogonalFactorizationAlgorithm = QRpos()) -> N

Create orthonormal basis for the orthogonal complement of the support of the indices in leftind, such that N' * permute(t, (leftind, rightind)) = 0.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using leftnull!(t, alg = QRpos()).

Different algorithms are available, namely QR() (or equivalently, QRpos()), SVD() and SDD(). The first assumes that the matrix is full rank and requires iszero(atol) and iszero(rtol). With SVD() and SDD(), leftnull will use the corresponding singular value decomposition, and one can specify an absolute or relative tolerance for which singular values are to be considered zero, where max(atol, norm(t)*rtol) is used as upper bound.

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and leftnull(!) is currently only implemented for InnerProductStyle(t) === EuclideanInnerProduct().

TensorKit.rightnull – Function
rightnull(t::AbstractTensor, (leftind, rightind)::Index2Tuple;
            alg::OrthogonalFactorizationAlgorithm = LQ(),
            atol::Real = 0.0,
            rtol::Real = eps(real(float(one(scalartype(t)))))*iszero(atol)) -> N

Create orthonormal basis for the orthogonal complement of the support of the indices in rightind, such that permute(t, (leftind, rightind))*N' = 0.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using rightnull!(t, alg = LQpos()).

Different algorithms are available, namely LQ() (or equivalently, LQpos()), SVD() and SDD(). The first assumes that the matrix is full rank and requires iszero(atol) and iszero(rtol). With SVD() and SDD(), rightnull will use the corresponding singular value decomposition, and one can specify an absolute or relative tolerance for which singular values are to be considered zero, where max(atol, norm(t)*rtol) is used as upper bound.

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and rightnull(!) is currently only implemented for InnerProductStyle(t) === EuclideanInnerProduct().

TensorKit.tsvd – Function
tsvd(t::AbstractTensorMap, (leftind, rightind)::Index2Tuple;
    trunc::TruncationScheme = notrunc(), p::Real = 2, alg::Union{SVD, SDD} = SDD())
    -> U, S, V, ϵ

Compute the (possibly truncated) singular value decomposition such that norm(permute(t, (leftind, rightind)) - U * S * V) ≈ ϵ, where ϵ thus represents the truncation error.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using tsvd!(t, trunc = notrunc(), p = 2).

A truncation parameter trunc can be specified for the new internal dimension, in which case a truncated singular value decomposition will be computed. Choices are:

  • notrunc(): no truncation (default);
  • truncerr(η::Real): truncates such that the p-norm of the truncated singular values is smaller than η;
  • truncdim(χ::Int): truncates such that the equivalent total dimension of the internal vector space is no larger than χ;
  • truncspace(V): truncates such that the dimension of the internal vector space is smaller than that of V in any sector;
  • truncbelow(η::Real): truncates such that every singular value is larger than η.

Truncation options can also be combined using &, i.e. truncbelow(η) & truncdim(χ) will choose the truncation space such that every singular value is larger than η, and the equivalent total dimension of the internal vector space is no larger than χ.

The method tsvd also returns the truncation error ϵ, computed as the p norm of the singular values that were truncated.

The keyword alg can be equal to SVD() or SDD(), corresponding to the underlying LAPACK algorithm that computes the decomposition (_gesvd or _gesdd).

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and tsvd(!) is currently only implemented for InnerProductStyle(t) === EuclideanInnerProduct().

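As an illustration, a sketch of a truncated singular value decomposition:

using TensorKit, LinearAlgebra

t = randn(ℂ^4 ⊗ ℂ^4 ← ℂ^4)
U, S, V, ϵ = tsvd(t, ((1, 2), (3,)); trunc = truncdim(2))
norm(permute(t, ((1, 2), (3,))) - U * S * V) ≈ ϵ  # ϵ is the truncation error
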
TensorKit.eigh – Function
eigh(t::AbstractTensorMap, (leftind, rightind)::Index2Tuple) -> D, V

Compute the eigenvalue factorization of tensor t as a linear map from rightind to leftind. The function eigh assumes that the linear map is hermitian and returns tensors D and V with the same scalartype as t. See eig and eigen for non-hermitian tensors. Hermiticity requires that the tensor acts on inner product spaces, and the current implementation requires InnerProductStyle(t) === EuclideanInnerProduct().

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using eigh!(t). Note that the permuted tensor on which eigh! is called should have equal domain and codomain, as otherwise the eigenvalue decomposition is meaningless and cannot satisfy

permute(t, (leftind, rightind)) * V = V * D

See also eigen and eig.

TensorKit.eig – Function
eig(t::AbstractTensor, (leftind, rightind)::Index2Tuple; kwargs...) -> D, V

Compute the eigenvalue factorization of tensor t as a linear map from rightind to leftind. The function eig assumes that the linear map is not hermitian and returns type-stable complex-valued D and V tensors for both real and complex valued t. See eigh for hermitian linear maps.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using eig!(t). Note that the permuted tensor on which eig! is called should have equal domain and codomain, as otherwise the eigenvalue decomposition is meaningless and cannot satisfy

permute(t, (leftind, rightind)) * V = V * D

Accepts the same keyword arguments scale and permute as eigen of dense matrices. See the corresponding documentation for more information.

See also eigen and eigh.

LinearAlgebra.isposdef – Function
isposdef(t::AbstractTensor, (leftind, rightind)::Index2Tuple) -> ::Bool

Test whether a tensor t is positive definite as linear map from rightind to leftind.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using isposdef!(t). Note that the permuted tensor on which isposdef! is called should have equal domain and codomain, as otherwise it is meaningless.

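Finally, a sketch of a hermitian eigenvalue decomposition:

using TensorKit

t = randn(ComplexF64, ℂ^3 ← ℂ^3)
h = (t + t') / 2  # construct a hermitian map
D, V = eigh(h)
h * V ≈ V * D     # the defining relation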

TODO: document svd truncation types