Implementation

The @tensor macro and its relatives act as parsers for indexed tensor expressions: they transform these expressions into a sequence of calls to the primitive tensor operations, which is what enables support for custom types that implement the interface. The actual implementation is provided by TensorParser, which supplies the general framework for parsing tensor expressions; the @tensor macro itself is a thin wrapper around it, configuring the default behavior and handling the keyword arguments of the parser.
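
For example, a minimal usage sketch (the exact generated code depends on the package version and the tensor types involved, and can be inspected with @macroexpand):

using TensorOperations

A = randn(3, 4); B = randn(4, 5)
# the macro rewrites this definition into calls to the primitive operations,
# here an allocation (tensoralloc_contract) followed by a tensorcontract! call
@tensor C[i, k] := A[i, j] * B[j, k]
# @macroexpand @tensor C[i, k] := A[i, j] * B[j, k]   # shows the generated code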

The TensorParser breaks the parsing down into the following phases:

  • First, a basic check of the supplied expression is performed, to ensure that it is a valid tensor expression.
  • Next, a number of preprocessing steps can be applied. These standardize expressions, enable various forms of syntactic sugar, and serve as a hook for writing custom parsers.
  • The different contractions within the tensor expression are then analyzed and processed, which rewrites the expression into a set of binary rooted trees.
  • The main step transforms the whole expression into actual calls to the primitive tensor operations tensoradd!, tensortrace! and tensorcontract!, as well as calls to tensoralloc_add and tensoralloc_contract to allocate the temporary and final tensors. For these allocations, the resulting scalar type also needs to be determined.
  • Finally, a number of postprocessing steps can be added. These are mostly used to clean up the resulting expression by flattening it and removing line number nodes, but also serve to incorporate the custom backend and allocation system.
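
A custom parser can reuse the same machinery. The sketch below assumes the TensorParser fields and call syntax as found in the package source at the time of writing (a preprocessors vector of expression-transforming functions, and a parser object that is callable on an expression); it is an illustration, not a stable public API.

using TensorOperations
using TensorOperations: TensorParser

# extra preprocessing step: log every expression before it is parsed
function logexpr(ex)
    @info "parsing tensor expression" ex
    return ex
end

macro mytensor(ex)
    parser = TensorParser()              # default verifiers, pre- and postprocessors
    push!(parser.preprocessors, logexpr) # hook in the custom step (assumed field name)
    return esc(parser(ex))               # run the full pipeline and return the generated code
end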

Verifiers

The basic checks are performed by verifytensorexpr, which calls the verifiers isassignment, isdefinition, istensor, istensorexpr and isscalarexpr.

TensorOperations.verifytensorexpr — Function
verifytensorexpr(ex)

Check that ex is a valid tensor expression and throw an ArgumentError if not. Valid tensor expressions satisfy one of the following (recursive) rules:

  • The expression is a scalar expression or a tensor expression.
  • The expression is an assignment or a definition, and the left hand side and right hand side are valid tensor expressions or scalars.
  • The expression is a block, and all subexpressions are valid tensor expressions or scalars.

See also istensorexpr and isscalarexpr.

source
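
A small illustration, assuming the behavior described above (these verifiers are internal, non-exported functions):

using TensorOperations: verifytensorexpr

verifytensorexpr(:(C[i, k] := A[i, j] * B[j, k]))  # a valid definition: no error is thrown
verifytensorexpr(:(C[i, k] += A[i, j] * B[j, k]))  # a valid assignment: no error is thrown
# structurally invalid expressions throw an ArgumentError
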
TensorOperations.isindex — Function
isindex(ex)

Test for a valid index, namely a symbol or integer, or an expression of the form i′ where i is itself a valid index.

source
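
For example, assuming the behavior described in the docstring:

using TensorOperations: isindex

isindex(:i)        # true: a symbol
isindex(3)         # true: an integer, as used for NCON-style index labels
isindex(:(i + j))  # false: a general expression is not a valid index
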
TensorOperations.istensor — Function
istensor(ex)

Test for a simple tensor object indexed by valid indices. This means an expression of the form:

A[i, j, k, ...]
A[i j k ...]
A[i j k ...; l m ...]
A[(i, j, k, ...); (l, m, ...)]

where i, j, k, ... are valid indices.

source
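
For example, assuming the forms listed above:

using TensorOperations: istensor

istensor(:(A[i, j, k]))         # true: a simple indexed object
istensor(:(A[i j; k]))          # true: codomain and domain indices separated by a semicolon
istensor(:(A[i, j] * B[j, k]))  # false: a contraction is not a single tensor
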
TensorOperations.istensorexpr — Function
istensorexpr(ex)

Test for a tensor expression. This means an expression which can be evaluated to a valid tensor. This includes:

A[...] + B[...] - C[...] - ...
A[...] * B[...] * ...
λ * A[...] / μ
λ \ conj(A[...])
A[...]' + adjoint(B[...]) - ...
source
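
For example, assuming the forms listed above:

using TensorOperations: istensorexpr

istensorexpr(:(A[i, j] * B[j, k]))             # true: a contraction
istensorexpr(:(2 * A[i, j] + conj(B[i, j])))   # true: scaling, addition and conjugation
istensorexpr(:(C[i, k] := A[i, j] * B[j, k]))  # false: a definition is not itself a tensor expression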

Preprocessing

The following functions exist as preprocessors and are enabled in the default TensorParser objects.

TensorOperations.normalizeindices — Function
normalizeindices(ex::Expr)

Normalize the indices of an expression by replacing all indices written with a prime expression, i.e. i', by indices carrying a unicode prime character, i.e. i′.

source
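
For example, a sketch of the replacement described above:

using TensorOperations: normalizeindices

normalizeindices(:(C[i', j] := A[i', k] * B[k, j]))
# :(C[i′, j] := A[i′, k] * B[k, j])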

TensorOperations.nconindexcompletion — Function
nconindexcompletion(ex)

Complete the indices of the left hand side of an ncon expression. For example, the following expressions are equivalent after index completion.

@tensor A[:] := B[-1, 1, 2] * C[1, 2, -3]
@tensor A[-1, -2] := B[-1, 1, 2] * C[1, 2, -3]
source
TensorOperations.extracttensorobjects — Function
extracttensorobjects(ex)

Replace all tensor objects that are not simple symbols with newly generated symbols, and insert the corresponding assignments before and after the expression as necessary, so that the expression constituting the tensor object is evaluated only once.

source
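
A sketch of the effect (the generated variable names are gensyms, so the output shown in the comment is only indicative):

using TensorOperations: extracttensorobjects

extracttensorobjects(:(C[i, k] := f(A)[i, j] * f(A)[j, k]))
# returns a block roughly equivalent to
#   tmp = f(A)
#   C[i, k] := tmp[i, j] * tmp[j, k]
# so that f(A) is evaluated only once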

Processing

The following functions implement the main steps in parsing the tensor expression, and are always performed by any TensorParser object.

TensorOperations.processcontractions — Function
processcontractions(ex, treebuilder, treesorter, costcheck)

Process the contractions in ex using the given treebuilder and treesorter functions. This is done by first extracting a network representation from the expression, then building and sorting the contraction trees with the given treebuilder and treesorter functions, and finally inserting the contraction trees back into the expression. When the costcheck argument equals :warn or :cache (as opposed to the default value of nothing), the optimal contraction order is computed at runtime using the actual values of tensorcost, and this optimal order is compared to the contraction order that was determined at compile time. If the compile-time order deviates from the optimal order, a warning is printed (for costcheck == :warn) or the particular contraction is recorded in TensorOperations.costcache (for costcheck == :cache). Both the warning and the recorded cache entry contain an order suggestion that can be passed to the @tensor macro in order to encode the optimal contraction order at compile time.

source
TensorOperations.tensorify — Function
tensorify(ex)

Main parsing step to transform a tensor expression ex into a series of function calls associated with the primitive building blocks (tensor operations and allocations).

source

Postprocessing

The following functions exist as postprocessors and are enabled in the default TensorParser objects.

TensorOperations.insertbackend — Function
insertbackend(ex, backend, operations)

Insert a backend into a tensor operation, i.e. for every op in operations, transform TensorOperations.op(args...) into TensorOperations.op(args..., Backend{:backend}()).

source

Analysis of contraction graphs and optimizing contraction order

The macro @tensoropt or the combination of @tensor with the keyword opt can be used to optimize the contraction order of the expression at compile time. This is done by analyzing the contraction graph, where the nodes are the tensors and the edges are the contractions, in combination with the data provided in optdata, which is a dictionary associating a cost (either a number or a polynomial in some abstract scaling parameter) to every index. This information is then used to determine the (asymptotically) optimal contraction tree (in terms of number of floating point operations). The algorithm that is used is described in arXiv:1304.6112.
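
For example, following the cost-specification syntax documented for the macro (χ is an arbitrary symbol representing the abstract scaling parameter and does not need to be defined at runtime):

using TensorOperations

A = randn(2, 2, 2, 2); B = randn(2, 2, 2); C = randn(2, 2, 2)
# all indices are assumed to have the same cost (the default)
@tensoropt D[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
# explicit costs per index; unlisted indices get cost 1
@tensoropt (a => χ, b => χ^2, c => 2 * χ, e => 5) D[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]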