SpECTRE  v2025.08.19
Writing Tensor Equations with TensorExpressions

SpECTRE's TensorExpressions interface allows you to write tensor equations in SpECTRE in C++ with syntax that resembles tensor index notation. To use it, simply add this include to the top of your file:

#include "DataStructures/Tensor/Tensor.hpp"

The following guide assumes a basic understanding of the Tensor class and tnsr type aliases. RHS refers to the right hand side expression that we wish to compute and LHS refers to the resulting left hand side tensor that stores the result of computing the RHS expression.

Syntax

TensorExpressions are arithmetic expressions of Tensors that can be evaluated using tenex::evaluate. Terms used in the expression may be Tensors or numbers (see supported types).

As a simple example of how TensorExpressions are used, if you would like to raise the index of some Tensor R with some inverse spacetime metric Tensor g, i.e. \(R^c{}_b = R_{ab} g^{ac}\), you can compute this with TensorExpressions by doing:

auto R_up =
tenex::evaluate<ti::C, ti::b>(R(ti::a, ti::b) * g(ti::A, ti::C));

where R_up, R, and g are rank 2 spacetime Tensors and the ti::* variables are TensorIndexs representing generic tensor indices. Here is a breakdown of the different parts of this line:

  • the RHS expression to compute is the argument to evaluate: R(ti::a, ti::b) * g(ti::A, ti::C)
  • the result LHS Tensor is R_up
  • the LHS Tensor's indices are the template arguments to evaluate: ti::C, ti::b

The LHS Symmetry will be deduced from the RHS tensors' symmetries and order of operations.
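Conceptually, evaluate sums over each repeated lower/upper index pair. For the index-raising example above, the componentwise computation is equivalent to the following minimal plain-C++ sketch, where 4x4 arrays stand in for SpECTRE's Tensor type (an illustration of the math, not the library's implementation; 4 spacetime dimensions assumed):

```cpp
#include <array>
#include <cstddef>

// Componentwise meaning of
//   tenex::evaluate<ti::C, ti::b>(R(ti::a, ti::b) * g(ti::A, ti::C)):
// for each free index pair (c, b), sum the product over the repeated index a.
using Mat4 = std::array<std::array<double, 4>, 4>;

Mat4 raise_first_index(const Mat4& R, const Mat4& g_inv) {
  Mat4 R_up{};
  for (std::size_t c = 0; c < 4; ++c) {
    for (std::size_t b = 0; b < 4; ++b) {
      for (std::size_t a = 0; a < 4; ++a) {
        // R^c{}_b = R_{ab} g^{ac}
        R_up[c][b] += R[a][b] * g_inv[a][c];
      }
    }
  }
  return R_up;
}
```

The TensorExpressions interface performs this same summation, but with compile-time checks on the index structure and with loop order optimized internally.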

Alternatively, if you already have a LHS Tensor variable, you can pass it into the following evaluate overload, where the LHS Tensor provided will be assigned to the result of the RHS expression:

tenex::evaluate<ti::C, ti::b>(make_not_null(&R_up),
R(ti::a, ti::b) * g(ti::A, ti::C));

Note that to use this evaluate overload, the LHS Tensor does not need to be previously sized unless the data type is a Blaze vector type (e.g. DataVector) and the RHS expression contains no Tensor terms (see example where sizing is necessary). This overload is useful in a couple cases:

  • Specifying the LHS symmetry: One advantage of this overload is that it uses the Symmetry of the provided LHS tensor instead of deducing it from the RHS expression. This enables you to specify the LHS symmetry in cases where the previous evaluate overload does not deduce the one you want. While the LHS index order in this example (ti::C, ti::b) could theoretically be deduced from the index types of R_up, we still require specifying them because this isn't the case for all equations and we would like to have a unified interface. See this example, which demonstrates a case where you might want to specify the LHS symmetry and where the LHS index order would not be deducible.
  • Using spatial and time indices on LHS spacetime indices

Tensor indices

TensorIndexs represent generic tensor indices and are supplied as comma-separated lists in two places:

  • in parentheses for each tensor in the RHS expression
  • in the template parameters of evaluate to specify the order of the LHS result tensor's indices.

Each TensorIndex takes the form ti::* where * is a letter that encodes index properties:

  • Uppercase letters denote upper indices and lowercase letters denote lower indices
  • Letters A/a - H/h indicate spacetime indices, I/i - N/n indicate spatial indices, and T/t indicates the concrete time index. These are the currently defined letters, but more spatial and spacetime indices can easily be added if needed. Note that there is no precedence or other difference between indices of a given type, e.g. ti::a, ti::b, ..., ti::h are interchangeable

The properties of each TensorIndex and the Tensor's indices (typelist of TensorIndexTypes) must be compatible:

To demonstrate correct and incorrect usage, let's say we have tensors \(R_{ab}\) (two spacetime indices, e.g. type tnsr::ab) and \(S_{ij}\) (two spatial indices, e.g. type tnsr::ij):

R(ti::c, ti::d) // OK
R(ti::c, ti::k) // OK, can use spatial TensorIndex on spacetime index
R(ti::c, ti::t) // OK, can use time TensorIndex on spacetime index
R(ti::c, ti::D) // ERROR, ti::D is upper but the 2nd index is lower
S(ti::j, ti::k) // OK
S(ti::a, ti::k) // ERROR, can't use spacetime TensorIndex on a spatial index
S(ti::j, ti::t) // ERROR, can't use time TensorIndex on a spatial index

Examples

Basic operations

In the following examples, \(R\), \(S\), and \(U\) are rank 2 Tensors, \(V\) is a rank 3 Tensor, \(G\) and \(H\) are rank 1 Tensors, and \(T\) is a scalar (rank 0 Tensor), with the valences shown in each equation.

Addition and subtraction

\(L_{ab} = R_{ab} + S_{ba}\)

auto L =
tenex::evaluate<ti::a, ti::b>(R(ti::a, ti::b) + S(ti::b, ti::a));

\(L = 1 - T\)

auto L =
tenex::evaluate(1.0 - T());

Contraction of a single tensor

\(L = U^{a}{}_{a}\)

auto L =
tenex::evaluate(U(ti::A, ti::a));

\(L^b = V_{a}{}^{ba}\)

auto L =
tenex::evaluate<ti::B>(V(ti::a, ti::B, ti::A));

Inner and outer products

\(L = G_a H^{a}\)

auto L =
tenex::evaluate(G(ti::a) * H(ti::A));

\(L_{cb} = T G_a G_c U^{a}{}_{b}\)

auto L = tenex::evaluate<ti::c, ti::b>(T() * G(ti::a) * G(ti::c) *
U(ti::A, ti::b));
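As with any contraction, the inner product \(L = G_a H^{a}\) above is just a sum over the repeated index. A minimal plain-C++ sketch of that sum (plain arrays in place of SpECTRE Tensors, 4 spacetime dimensions assumed):

```cpp
#include <array>
#include <cstddef>

// Componentwise meaning of tenex::evaluate(G(ti::a) * H(ti::A)):
// contracting the repeated lower/upper pair a produces a scalar.
double inner_product(const std::array<double, 4>& G,
                     const std::array<double, 4>& H) {
  double L = 0.0;
  for (std::size_t a = 0; a < 4; ++a) {
    L += G[a] * H[a];  // L = G_a H^a
  }
  return L;
}
```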

Division

\(L_a = \frac{G_a}{2}\)

auto L =
tenex::evaluate<ti::a>(G(ti::a) / 2.0);

\(L_{ba} = \frac{R_{ab}}{T}\)

auto L =
tenex::evaluate<ti::b, ti::a>(R(ti::a, ti::b) / T());

\(L = \frac{5}{U^{a}{}_{a} + 1}\)

auto L =
tenex::evaluate(5.0 / (U(ti::A, ti::a) + 1.0));

Square root

\(L = \sqrt{T}\)

auto L =
tenex::evaluate(sqrt(T()));

\(L = \sqrt{G_a H^a}\)

auto L =
tenex::evaluate(sqrt(G(ti::a) * H(ti::A)));

More features

Specifying the LHS symmetry

When using the evaluate overload that returns the LHS Tensor, the Symmetry of the LHS Tensor will be deduced from the RHS expression. However, in some cases the deduced LHS symmetry may not be what you want. To specify it yourself, you can pass your LHS Tensor (that has the desired Symmetry) to the evaluate overload that takes the LHS Tensor as the first argument.

For example, if we have \(L_{ab} = R_a R_b\), the indices of \(L\) are symmetric. However, when we do:

auto L = tenex::evaluate<ti::a, ti::b>(R(ti::a) * R(ti::b));
static_assert(std::is_same_v<decltype(L), tnsr::ab<DataType, Dim>>);

the type of L will be tnsr::ab because it is not known at compile time that the two vectors in the product are the same. To override the deduced symmetry and make it a symmetric result, we can create a tnsr::aa and pass it into the other overload:

tenex::evaluate<ti::a, ti::b>(make_not_null(&L), R(ti::a) * R(ti::b));

Assigning to a number

You can assign a number (e.g. double) to a Tensor of any rank to fill all components with that value, e.g. \(L_{ab} = -1\). How you do that is slightly different depending on the underlying data type of your Tensor:

// if the underlying data type of L is double:
tenex::evaluate<ti::a, ti::b>(make_not_null(&L), -1.0);
// if the underlying data type is a Blaze vector type, first construct the
// LHS tensor with the correct size (e.g. a size 5 DataVector), then:
tenex::evaluate<ti::a, ti::b>(make_not_null(&L), -1.0);

See supported number types for the data types that the RHS number can be.

Using spatial and time indices on RHS spacetime indices

If a Tensor has spacetime indices, you can use generic spatial indices and concrete time indices to refer to a subset of the components, as is common in the literature.

Lapse \(\alpha\) computed from the spacetime metric \(g_{ab}\) and shift \(\beta^i\):

\(\alpha = \sqrt{\beta^i g_{it} - g_{tt}}\)

auto lapse =
tenex::evaluate(sqrt(shift(ti::I) * spacetime_metric(ti::i, ti::t) -
spacetime_metric(ti::t, ti::t)));

Using spatial and time indices on LHS spacetime indices

Related to the previous example, you can also use generic spatial indices and concrete time indices for the spacetime indices of the LHS Tensor to assign subsets of the LHS Tensor's components.

Spacetime metric \(g_{ab}\) computed from the lapse \(\alpha\), shift \(\beta^i\), and spatial metric \(\gamma_{ij}\):

\(g_{tt} = -\alpha^2 + \beta^m \beta^n \gamma_{mn}\)

\(g_{ti} = \gamma_{mi} \beta^m\)

\(g_{ij} = \gamma_{ij}\)

tenex::evaluate<ti::t, ti::t>(
make_not_null(&spacetime_metric),
-square(lapse()) +
shift(ti::M) * shift(ti::N) * spatial_metric(ti::m, ti::n));
tenex::evaluate<ti::t, ti::i>(make_not_null(&spacetime_metric),
spatial_metric(ti::m, ti::i) * shift(ti::M));
tenex::evaluate<ti::i, ti::j>(make_not_null(&spacetime_metric),
spatial_metric(ti::i, ti::j));
Note
The above example is for demonstration purposes only. In practice, you would want to avoid recomputing the reused quantity \(\gamma_{mi} \beta^m\) by, e.g., storing it in another variable.
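That caching strategy can be sketched in plain C++, with arrays standing in for the SpECTRE Tensors (an illustration only; 3 spatial dimensions, with index 0 of the 4x4 spacetime array being the time component, and all names hypothetical):

```cpp
#include <array>
#include <cstddef>

// Fill the spacetime metric g_ab from lapse, shift, and spatial metric,
// computing the reused quantity beta_i = gamma_{mi} beta^m only once.
using Mat4 = std::array<std::array<double, 4>, 4>;
using Mat3 = std::array<std::array<double, 3>, 3>;

Mat4 assemble_spacetime_metric(const double lapse,
                               const std::array<double, 3>& shift,
                               const Mat3& spatial_metric) {
  // cache beta_i = gamma_{mi} beta^m
  std::array<double, 3> lowered_shift{};
  for (std::size_t i = 0; i < 3; ++i) {
    for (std::size_t m = 0; m < 3; ++m) {
      lowered_shift[i] += spatial_metric[m][i] * shift[m];
    }
  }
  Mat4 g{};
  // g_tt = -alpha^2 + beta^m beta^n gamma_{mn}
  g[0][0] = -lapse * lapse;
  for (std::size_t m = 0; m < 3; ++m) {
    g[0][0] += shift[m] * lowered_shift[m];
  }
  // g_ti = gamma_{mi} beta^m, reusing the cached quantity
  for (std::size_t i = 0; i < 3; ++i) {
    g[0][i + 1] = lowered_shift[i];
    g[i + 1][0] = lowered_shift[i];
  }
  // g_ij = gamma_ij
  for (std::size_t i = 0; i < 3; ++i) {
    for (std::size_t j = 0; j < 3; ++j) {
      g[i + 1][j + 1] = spatial_metric[i][j];
    }
  }
  return g;
}
```

With TensorExpressions, the analogous approach is to evaluate the lowered shift into its own Tensor first and use that Tensor in the subsequent RHS expressions.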

Using the LHS Tensor in the RHS expression

You can use the LHS Tensor in the RHS expression to emulate operations like +=, *=, etc. For example, say you would like to emulate the following:

// pseudocode
L_ab = R_ab
L_ab += 2.0 * S_ba

You can emulate the += operation by calling update instead of evaluate:

auto L = tenex::evaluate<ti::a, ti::b>(R(ti::a, ti::b));
// use the LHS tensor in the RHS
tenex::update<ti::a, ti::b>(
make_not_null(&L), L(ti::a, ti::b) + 2.0 * S(ti::b, ti::a));
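Componentwise, the evaluate and update calls above amount to an assignment followed by an in-place accumulation; a plain-C++ sketch (4x4 arrays in place of SpECTRE Tensors, 4 spacetime dimensions assumed):

```cpp
#include <array>
#include <cstddef>

using Mat4 = std::array<std::array<double, 4>, 4>;

// L_ab = R_ab, then L_ab += 2 * S_ba
Mat4 assign_then_accumulate(const Mat4& R, const Mat4& S) {
  Mat4 L{};
  for (std::size_t a = 0; a < 4; ++a) {
    for (std::size_t b = 0; b < 4; ++b) {
      // tenex::evaluate<ti::a, ti::b>(R(ti::a, ti::b))
      L[a][b] = R[a][b];
    }
  }
  for (std::size_t a = 0; a < 4; ++a) {
    for (std::size_t b = 0; b < 4; ++b) {
      // tenex::update: L(ti::a, ti::b) + 2.0 * S(ti::b, ti::a)
      L[a][b] += 2.0 * S[b][a];
    }
  }
  return L;
}
```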

One limitation is that when using the LHS tensor in the RHS expression, the LHS tensor's index order must be the same on the LHS and RHS. This means that the order of the TensorIndex template parameters of update must match the order of the TensorIndex arguments in the parentheses that come after the LHS Tensor in the RHS expression. For example, the following is not allowed and will yield a runtime error:

// ERROR: index order for L on LHS and RHS is not the same
tenex::update<ti::a, ti::b>(
make_not_null(&L), L(ti::b, ti::a) + 2.0 * S(ti::b, ti::a));
Note
It is not advised to use very large RHS expressions with update because runtime performance does not scale well as the number of operations gets very large. This is because evaluate breaks up large expressions into smaller ones, but update cannot. One way around this is to break up the expression and use more than one call to update.

Compile time math checks

For all operations, mathematical legality is checked at compile time. The compiler will catch anything that is not sound to write on paper: repeated indices where they are not allowed, division by a tensor with rank > 0, and mismatches in spatial dimension, frame, valence, index type (spatial or spacetime), or rank wherever tensors must agree.

Here are some examples of illegal math that the compiler will catch:

tnsr::ab<double, 3, Frame::Inertial> R{};
tnsr::ab<double, 3, Frame::Inertial> S{};
tnsr::ab<double, 3, Frame::Grid> T{};
tnsr::AB<double, 2, Frame::Inertial> G{};
// ERROR: LHS and RHS indices don't match
auto result1 = tenex::evaluate<ti::a, ti::c>(R(ti::a, ti::b) + S(ti::a, ti::b));
// ERROR: Can't add Tensors with different indices
auto result2 = tenex::evaluate<ti::a, ti::b>(R(ti::a, ti::b) + S(ti::a, ti::c));
// ERROR: Repeated index in the RHS
auto result3 =
tenex::evaluate<ti::a, ti::b, ti::c>(R(ti::a, ti::b) * S(ti::a, ti::c));
// ERROR: Can't add Tensors with different Frame types
auto result4 = tenex::evaluate<ti::a, ti::b>(R(ti::a, ti::b) + T(ti::a, ti::b));
// ERROR: Can't contract indices with different number of spatial dimensions
auto result5 = tenex::evaluate(R(ti::a, ti::b) * G(ti::A, ti::B));
// ERROR: Can't divide by a rank > 0 Tensor
auto result6 = tenex::evaluate<ti::a, ti::b>(R(ti::a, ti::b) / S(ti::a, ti::b));

Support for data types and operations

Data types

The RHS expression may contain a mixture of number terms and Tensor terms, e.g. 0.5 * T(ti::a).

Currently supported data types for number terms:

  • double
  • std::complex<double>

Currently supported underlying data types for Tensor terms:

  • double
  • std::complex<double>
  • DataVector
  • ComplexDataVector

Support for more types can be added.

Operations

It's possible for terms in the RHS expression to have different data types. The mixture of doubles and Tensor<double>s is shown in the subtraction example and the division examples.

It's also possible for terms to be a mixture of both real-valued and complex-valued numbers or Tensors. For example, we can compute \(z^i = x^i + i y^i\), where \(x\) and \(y\) are real-valued Tensors, \(i\) is the imaginary unit stored as a std::complex<double>, and \(z\) is a complex-valued Tensor.

const auto x = make_with_random_values<tnsr::I<DataVector, Dim>>(
generator, distribution, used_for_size);
const auto y = make_with_random_values<tnsr::I<DataVector, Dim>>(
generator, distribution, used_for_size);
const std::complex<double> i{0.0, 1.0};
const auto z = tenex::evaluate<ti::I>(x(ti::I) + i * y(ti::I));

In the above example, a std::complex<double> is multiplied by a Tensor<DataVector>, which can be thought to have an "intermediate" type Tensor<ComplexDataVector>, and then a Tensor<DataVector> is added to that intermediate Tensor<ComplexDataVector> to yield the resulting type, Tensor<ComplexDataVector>.
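The same promotion happens in ordinary C++ arithmetic: multiplying a std::complex<double> by a real value yields a complex value, and adding a real value keeps it complex. A minimal sketch, with std::vector<double> standing in for DataVector (an illustration, not the library's implementation):

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// z^i = x^i + i * y^i: real vectors in, complex vector out.
std::vector<std::complex<double>> complexify(const std::vector<double>& x,
                                             const std::vector<double>& y) {
  const std::complex<double> i{0.0, 1.0};
  std::vector<std::complex<double>> z(x.size());
  for (std::size_t k = 0; k < x.size(); ++k) {
    // double + (complex<double> * double) -> complex<double>
    z[k] = x[k] + i * y[k];
  }
  return z;
}
```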

The following table shows the data type that results from performing a binary operation (+, -, *, /) between two terms of given data types:

Data types resulting from binary operations between supported TensorExpression operand types. Abbreviations: d = double, c = std::complex<double>, T<X> = Tensor<X>, DV = DataVector, CDV = ComplexDataVector. The result type is symmetric in the two operands, so only the lower triangle is shown ("n/s" = not supported):

           | d      | c       | T<d> | T<c> | T<DV>  | T<CDV>
    d      | d      |         |      |      |        |
    c      | c      | c       |      |      |        |
    T<d>   | T<d>   | T<c>    | T<d> |      |        |
    T<c>   | T<c>   | T<c>    | T<c> | T<c> |        |
    T<DV>  | T<DV>  | T<CDV>* | n/s  | n/s  | T<DV>  |
    T<CDV> | T<CDV> | T<CDV>  | n/s  | n/s  | T<CDV> | T<CDV>

* multiplication only; see the Note below.
Note
The only binary operation that is supported between std::complex<double> and Tensor<DataVector> is multiplication. This is because Blaze does not support addition, subtraction, or division between std::complex<double> and DataVector.