Implementation
The `@tensor` macro and its relatives work as parsers for indexed tensor expressions: they transform these expressions into a sequence of calls to the primitive tensor operations, which is what enables support for custom types that implement the interface. The actual implementation is built on `TensorParser`, which provides the general framework for parsing tensor expressions. The `@tensor` macro is then just a wrapper around this framework, which configures the default behavior and handles the keyword arguments of the parser.
The `TensorParser` breaks the parsing down into a number of phases. First, a basic check of the supplied expression is performed, to ensure that it is a valid tensor expression. Next, a number of preprocessing steps can be performed, which are used to standardize expressions and enable syntactic sugar, and which can also serve as a hook for writing custom parsers. Then, the different contractions within the tensor expression are analyzed and processed, which rewrites the expression into a set of binary rooted trees. After that, the main step is executed, namely transforming the whole expression into actual calls to the primitive tensor operations `tensoradd!`, `tensortrace!` and `tensorcontract!`, as well as calls to `tensoralloc_add` and `tensoralloc_contract` to allocate the temporary and final tensors; for the latter, the resulting scalar type also needs to be determined. Finally, a number of postprocessing steps can be added, which are mostly used to clean up the resulting expression by flattening it and removing line number nodes, but also to incorporate the custom backend and allocation system.
Verifiers
The basic checks are performed by `verifytensorexpr`, which calls the verifiers `isassignment`, `isdefinition`, `istensor`, `istensorexpr` and `isscalarexpr`.
TensorOperations.verifytensorexpr — Function: `verifytensorexpr(ex)`

Check that `ex` is a valid tensor expression and throw an `ArgumentError` if not. Valid tensor expressions satisfy one of the following (recursive) rules:
- The expression is a scalar expression or a tensor expression.
- The expression is an assignment or a definition, and the left hand side and right hand side are valid tensor expressions or scalars.
- The expression is a block, and all subexpressions are valid tensor expressions or scalars.

See also `istensorexpr` and `isscalarexpr`.
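Since these verifiers are ordinary functions acting on Julia expressions, they can be tried directly on quoted expressions. A small sketch (these are internal, unexported functions, so they are accessed through the module):

```julia
using TensorOperations: verifytensorexpr, istensorexpr, isscalarexpr

# A valid definition: both sides are valid tensor expressions
verifytensorexpr(:(C[i, j] := A[i, k] * B[k, j]))

# The individual verifiers return booleans instead of throwing:
istensorexpr(:(A[i, k] * B[k, j]))  # true
isscalarexpr(:(2 * α))              # true
```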
TensorOperations.isassignment — Function: `isassignment(ex)`

Test if `ex` is an assignment expression, i.e. `ex` is of one of the forms:

a = b
a += b
a -= b
TensorOperations.isdefinition — Function: `isdefinition(ex)`

Test if `ex` is a definition expression, i.e. `ex` is of one of the forms:

a := b
a ≔ b
TensorOperations.isindex — Function: `isindex(ex)`

Test for a valid index, namely a symbol or integer, or an expression of the form `i′` where `i` is itself a valid index.
TensorOperations.istensor — Function: `istensor(ex)`

Test for a simple tensor object indexed by valid indices. This means an expression of one of the forms:

A[i, j, k, ...]
A[i j k ...]
A[i j k ...; l m ...]
A[(i, j, k, ...); (l, m, ...)]

where `i`, `j`, `k`, ... are valid indices.
TensorOperations.istensorexpr — Function: `istensorexpr(ex)`

Test for a tensor expression, i.e. an expression which can be evaluated to a valid tensor. This includes expressions of the forms:

A[...] + B[...] - C[...] - ...
A[...] * B[...] * ...
λ * A[...] / μ
λ \ conj(A[...])
A[...]' + adjoint(B[...]) - ...
TensorOperations.isscalarexpr — Function: `isscalarexpr(ex)`

Test for a scalar expression, i.e. an expression that can be evaluated to a scalar.
Preprocessing
The following functions exist as preprocessors and are enabled in the default `TensorParser` objects.
TensorOperations.normalizeindices — Function: `normalizeindices(ex::Expr)`

Normalize the indices of an expression by replacing all indices written with a prime expression `i'` by indices with a unicode prime suffix, `i′`.
TensorOperations.expandconj — Function: `expandconj(ex)`

Expand all `conj` calls in an expression to conjugate the individual terms and factors.
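A sketch of the effect of this preprocessing step (internal function; the exact output form may differ between versions):

```julia
using TensorOperations: expandconj

# conj of a product is pushed down to the individual factors, so that
# conjugation can later be handled per tensor:
expandconj(:(conj(A[i, k] * B[k, j])))
# → roughly :(conj(A[i, k]) * conj(B[k, j]))
```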
TensorOperations.nconindexcompletion — Function: `nconindexcompletion(ex)`

Complete the indices of the left hand side of an ncon expression. For example, the following expressions are equivalent after index completion:

@tensor A[:] := B[-1, 1, 2] * C[1, 2, -3]
@tensor A[-1, -3] := B[-1, 1, 2] * C[1, 2, -3]
TensorOperations.extracttensorobjects — Function: `extracttensorobjects(ex)`

Replace all tensor objects which are not simple symbols by newly generated symbols, and add the corresponding assignments before and after the expression as necessary, in order to avoid multiple evaluations of the expression constituting the tensor object.
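For example, a tensor object may itself be a function call; hoisting it into a generated variable guarantees a single evaluation. A sketch (with a hypothetical helper `make_A` for illustration):

```julia
using TensorOperations

A = rand(2, 2);
B = rand(2, 2);
make_A() = (println("evaluated"); A)  # hypothetical helper

# The tensor object make_A() is not a simple symbol; the preprocessing
# step replaces it by a generated symbol that is assigned once before
# the expression, so "evaluated" is printed only once:
@tensor C[i, j] := make_A()[i, k] * B[k, j]
```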
TensorOperations.insertcontractionchecks — Function: `insertcontractionchecks(ex)`

Insert runtime checks before each contraction, which provide clearer debug information.
Processing
The following functions implement the main steps in parsing the tensor expression, and are always performed by any `TensorParser` object.
TensorOperations.processcontractions — Function: `processcontractions(ex, treebuilder, treesorter, costcheck)`

Process the contractions in `ex` using the given `treebuilder` and `treesorter` functions. This is done by first extracting a network representation from the expression, then building and sorting the contraction trees with the given `treebuilder` and `treesorter` functions, and finally inserting the contraction trees back into the expression. When the `costcheck` argument equals `:warn` or `:cache` (as opposed to `:nothing`), the optimal contraction order is computed at runtime using the actual values of `tensorcost`, and this optimal order is compared to the contraction order that was determined at compile time. If the compile-time order deviates from the optimal order, a warning is printed (in case of `costcheck == :warn`) or this particular contraction is recorded in `TensorOperations.costcache` (in case of `costcheck == :cache`). Both the warning and the recorded cache entry contain an `order` suggestion that can be passed to the `@tensor` macro in order to encode the optimal contraction order at compile time.
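On the user side, this machinery is exposed through the keyword arguments of `@tensor`. A sketch (the exact keyword syntax may differ between versions; see the `@tensor` docstring):

```julia
using TensorOperations

χ = 8
A = rand(χ, χ, χ); B = rand(χ, χ, χ); C = rand(χ, χ)

# Compare the compile-time contraction order against the optimal order
# for the actual tensor sizes; on a mismatch, a warning with an `order`
# suggestion is printed:
@tensor costcheck = warn D[a, d] := A[a, b, c] * B[b, c, e] * C[e, d]
```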
TensorOperations.tensorify — Function: `tensorify(ex)`

Main parsing step to transform a tensor expression `ex` into a series of function calls associated with the primitive building blocks (tensor operations and allocations).
Postprocessing
The following functions exist as postprocessors and are enabled in the default `TensorParser` objects.
TensorOperations._flatten — Function: `_flatten(ex)`

Flatten the nested structure of an expression, returning an unnested `Expr(:block, ...)`.
TensorOperations.removelinenumbernode — Function: `removelinenumbernode(ex)`

Remove all `LineNumberNode`s from an expression.
TensorOperations.addtensoroperations — Function: `addtensoroperations(ex)`

Fix references to TensorOperations functions in namespaces where `@tensor` is present but the functions themselves are not.
TensorOperations.insertargument — Function: `insertargument(ex, args, methods)`

Insert an extra argument into the tensor operations, i.e. for any `op ∈ methods`, transform `TensorOperations.op(args...)` into `TensorOperations.op(args..., arg)`.
TensorOperations.insertbackend — Function: `insertbackend(ex, backend)`

Insert the backend argument into the tensor operation methods `tensoradd!`, `tensortrace!` and `tensorcontract!`.
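The user-facing counterpart of this postprocessor is the `backend` keyword of `@tensor`. A sketch (assuming the `StridedBLAS` backend shipped with recent versions of the package; the available backends and the exact keyword syntax may vary):

```julia
using TensorOperations

A = rand(4, 4);
B = rand(4, 4);

# Request a specific backend for all primitive tensor operations
# generated from this expression:
@tensor backend = StridedBLAS() C[i, j] := A[i, k] * B[k, j]
```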
TensorOperations.insertallocator — Function: `insertallocator(ex, allocator)`

Insert the allocator argument into the tensor operation and allocation methods `tensoradd!`, `tensortrace!`, `tensorcontract!`, `tensoralloc`, `tensoralloc_add`, `tensoralloc_contract` and `tensorfree!`.
Analysis of contraction graphs and optimizing contraction order
The macro `@tensoropt`, or the combination of `@tensor` with the keyword `opt`, can be used to optimize the contraction order of the expression at compile time. This is done by analyzing the contraction graph, in which the nodes are the tensors and the edges are the contractions, in combination with the data provided in `optdata`, a dictionary associating a cost (either a number or a polynomial in some abstract scaling parameter) to every index. This information is then used to determine the (asymptotically) optimal contraction tree, in terms of the number of floating point operations. The algorithm that is used is described in arXiv:1304.6112.