SpECTRE  v2024.04.12
Numerical Algorithms

Generic numerical algorithms. More...

Namespaces

namespace  integration
 Numerical integration algorithms.
 
namespace  OdeIntegration
 For ODE integration, we suggest using the Boost libraries whenever possible.
 
namespace  RootFinder::StoppingConditions
 The different options for the convergence criterion of gsl_multiroot.
 
namespace  intrp
 Contains classes and functions for interpolation.
 
namespace  intrp::callbacks
 Contains callback functions called by InterpolationTargets.
 

Classes

class  intrp::BarycentricRational
 A barycentric rational interpolation class. More...
 
class  intrp::CubicSpline
 A natural cubic spline interpolation class. More...
 
class  intrp::Irregular< Dim >
 Interpolates a Variables onto an arbitrary set of points. More...
 
class  intrp::LinearLeastSquares< Order >
 A linear least squares solver class. More...
 
class  intrp::RegularGrid< Dim >
 Interpolate data from a Mesh onto a regular grid of points. More...
 
class  intrp::ZeroCrossingPredictor
 A class that predicts when a function crosses zero. More...
 

Enumerations

enum class  RootFinder::Method { Method::Hybrids , Method::Hybrid , Method::Newton }
 The different options for the rootfinding method of gsl_multiroot. More...
 

Functions

template<typename T >
void raw_transpose (const gsl::not_null< T * > result, const T *const data, const size_t chunk_size, const size_t number_of_chunks)
 Function to compute transposed data. More...
 
template<typename T >
LinearRegressionResult intrp::linear_regression (const T &x_values, const T &y_values)
 A linear regression function. More...
 
void find_generalized_eigenvalues (gsl::not_null< DataVector * > eigenvalues_real_part, gsl::not_null< DataVector * > eigenvalues_imaginary_part, gsl::not_null< Matrix * > eigenvectors, Matrix matrix_a, Matrix matrix_b)
 Solve the generalized eigenvalue problem for two matrices. More...
 
template<size_t Dim>
double definite_integral (const DataVector &integrand, const Mesh< Dim > &mesh)
 Compute the definite integral of a function over a manifold. More...
 
template<size_t Dim>
double mean_value (const DataVector &f, const Mesh< Dim > &mesh)
 Compute the mean value of a function over a manifold. More...
 
template<typename... ResultTags, typename... FluxTags, size_t Dim>
void weak_divergence (const gsl::not_null< Variables< tmpl::list< ResultTags... > > * > divergence_of_fluxes, const Variables< tmpl::list< FluxTags... > > &fluxes, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, Frame::Inertial > &det_jac_times_inverse_jacobian)
 Compute the weak form divergence of fluxes. More...
 
void RootFinder::newton_raphson ()=delete
 
double positive_root (double a, double b, double c)
 Returns the positive root of a quadratic equation \(ax^2 + bx + c = 0\). More...
 
template<typename T >
T smallest_root_greater_than_value_within_roundoff (const T &a, const T &b, const T &c, double value)
 Returns the smallest root of a quadratic equation \(ax^2 + bx + c = 0\) that is greater than the given value, within roundoff. More...
 
template<typename T >
T largest_root_between_values_within_roundoff (const T &a, const T &b, const T &c, double min_value, double max_value)
 Returns the largest root of a quadratic equation \(ax^2 + bx + c = 0\) that is between min_value and max_value, within roundoff. More...
 
std::optional< std::array< double, 2 > > real_roots (double a, double b, double c)
 Returns the two real roots of a quadratic equation \(ax^2 + bx + c = 0\) with the root closer to \(-\infty\) first, or an empty optional if there are no real roots.
 
template<typename Functor >
void RootFinder::bracket_possibly_undefined_function_in_interval (const gsl::not_null< double * > lower_bound, const gsl::not_null< double * > upper_bound, const gsl::not_null< double * > f_at_lower_bound, const gsl::not_null< double * > f_at_upper_bound, const Functor &f, const double guess)
 Brackets the root of the function f, assuming a single root in a given interval \([x_\mathrm{lo},x_\mathrm{hi}]\) and assuming that f is defined only in an unknown smaller interval \([x_a,x_b]\) where \(x_\mathrm{lo} \leq x_a \leq x_b \leq x_\mathrm{hi}\). More...
 
template<typename Functor >
void RootFinder::bracket_possibly_undefined_function_in_interval (const gsl::not_null< DataVector * > lower_bound, const gsl::not_null< DataVector * > upper_bound, const gsl::not_null< DataVector * > f_at_lower_bound, const gsl::not_null< DataVector * > f_at_upper_bound, const Functor &f, const DataVector &guess)
 Brackets the single root of the function f for each element in a DataVector, assuming the root lies in the given interval and that f may be undefined at some points in the interval. More...
 
template<bool AssumeFinite = false, typename Function , typename T >
T RootFinder::toms748 (const Function &f, const T lower_bound, const T upper_bound, const T f_at_lower_bound, const T f_at_upper_bound, const simd::scalar_type_t< T > absolute_tolerance, const simd::scalar_type_t< T > relative_tolerance, const size_t max_iterations=100, const simd::mask_type_t< T > ignore_filter=static_cast< simd::mask_type_t< T > >(0))
 Finds the root of the function f with the TOMS_748 method. More...
 
template<bool AssumeFinite = false, typename Function , typename T >
T RootFinder::toms748 (const Function &f, const T lower_bound, const T upper_bound, const simd::scalar_type_t< T > absolute_tolerance, const simd::scalar_type_t< T > relative_tolerance, const size_t max_iterations=100, const simd::mask_type_t< T > ignore_filter=static_cast< simd::mask_type_t< T > >(0))
 Finds the root of the function f with the TOMS_748 method, where function values are not supplied at the lower and upper bounds. More...
 
template<bool UseSimd = true, bool AssumeFinite = false, typename Function >
DataVector RootFinder::toms748 (const Function &f, const DataVector &lower_bound, const DataVector &upper_bound, const double absolute_tolerance, const double relative_tolerance, const size_t max_iterations=100)
 Finds the root of the function f with the TOMS_748 method on each element in a DataVector. More...
 
template<bool UseSimd = true, bool AssumeFinite = false, typename Function >
DataVector RootFinder::toms748 (const Function &f, const DataVector &lower_bound, const DataVector &upper_bound, const DataVector &f_at_lower_bound, const DataVector &f_at_upper_bound, const double absolute_tolerance, const double relative_tolerance, const size_t max_iterations=100)
 Finds the root of the function f with the TOMS_748 method on each element in a DataVector, where function values are supplied at the lower and upper bounds. More...
 
template<typename VariableTags , typename MatrixType , size_t Dim>
void apply_matrices (const gsl::not_null< Variables< VariableTags > * > result, const std::array< MatrixType, Dim > &matrices, const Variables< VariableTags > &u, const Index< Dim > &extents)
 Multiply by matrices in each dimension. More...
 
template<typename VariableTags , typename MatrixType , size_t Dim>
Variables< VariableTags > apply_matrices (const std::array< MatrixType, Dim > &matrices, const Variables< VariableTags > &u, const Index< Dim > &extents)
 Multiply by matrices in each dimension. More...
 
template<typename ResultType , typename MatrixType , typename VectorType , size_t Dim>
void apply_matrices (const gsl::not_null< ResultType * > result, const std::array< MatrixType, Dim > &matrices, const VectorType &u, const Index< Dim > &extents)
 Multiply by matrices in each dimension. More...
 
template<typename MatrixType , typename VectorType , size_t Dim>
VectorType apply_matrices (const std::array< MatrixType, Dim > &matrices, const VectorType &u, const Index< Dim > &extents)
 Multiply by matrices in each dimension. More...
 
template<typename ResultType , typename MatrixType , typename VectorType , size_t Dim>
ResultType apply_matrices (const std::array< MatrixType, Dim > &matrices, const VectorType &u, const Index< Dim > &extents)
 Multiply by matrices in each dimension. More...
 
template<typename U , typename T >
void transpose (const gsl::not_null< T * > result, const U &u, const size_t chunk_size, const size_t number_of_chunks)
 Function to compute transposed data. More...
 
template<typename U , typename T = U>
T transpose (const U &u, const size_t chunk_size, const size_t number_of_chunks)
 Function to compute transposed data. More...
 
template<typename FluxTags , size_t Dim, typename DerivativeFrame >
auto divergence (const Variables< FluxTags > &F, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian) -> Variables< db::wrap_tags_in< Tags::div, FluxTags > >
 Compute the (Euclidean) divergence of fluxes.
 
template<typename... DivTags, typename... FluxTags, size_t Dim, typename DerivativeFrame >
void divergence (gsl::not_null< Variables< tmpl::list< DivTags... > > * > divergence_of_F, const Variables< tmpl::list< FluxTags... > > &F, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian)
 Compute the (Euclidean) divergence of fluxes.
 
template<size_t Dim, typename DerivativeFrame >
Scalar< DataVector > divergence (const tnsr::I< DataVector, Dim, DerivativeFrame > &input, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian)
 Compute the divergence of the vector input
 
template<size_t Dim, typename DerivativeFrame >
void divergence (gsl::not_null< Scalar< DataVector > * > div_input, const tnsr::I< DataVector, Dim, DerivativeFrame > &input, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian)
 Compute the divergence of the vector input
 
template<size_t Dim, typename VectorType >
void indefinite_integral (gsl::not_null< VectorType * > integral, const VectorType &integrand, const Mesh< Dim > &mesh, size_t dim_to_integrate)
 Compute the indefinite integral of a function along the dimension dim_to_integrate, applying a zero boundary condition on each stripe. More...
 
template<size_t Dim, typename VectorType >
VectorType indefinite_integral (const VectorType &integrand, const Mesh< Dim > &mesh, size_t dim_to_integrate)
 Compute the indefinite integral of a function along the dimension dim_to_integrate, applying a zero boundary condition on each stripe. More...
 
template<size_t Dim>
void linearize (gsl::not_null< DataVector * > result, const DataVector &u, const Mesh< Dim > &mesh)
 Truncate u to a linear function in each dimension. More...
 
template<size_t Dim>
DataVector linearize (const DataVector &u, const Mesh< Dim > &mesh)
 Truncate u to a linear function in each dimension. More...
 
template<size_t Dim>
void linearize (gsl::not_null< DataVector * > result, const DataVector &u, const Mesh< Dim > &mesh, size_t d)
 Truncate u to a linear function in the given dimension. More...
 
template<size_t Dim>
DataVector linearize (const DataVector &u, const Mesh< Dim > &mesh, size_t d)
 Truncate u to a linear function in the given dimension. More...
 
template<size_t Dim>
double mean_value_on_boundary (const DataVector &f, const Mesh< Dim > &mesh, size_t d, Side side)
 Compute the mean value of a function over a boundary of a manifold. More...
 
template<size_t Dim>
double mean_value_on_boundary (gsl::not_null< DataVector * > boundary_buffer, const DataVector &f, const Mesh< Dim > &mesh, size_t d, Side side)
 Compute the mean value of a function over a boundary of a manifold. More...
 
double mean_value_on_boundary (gsl::not_null< DataVector * >, const DataVector &f, const Mesh< 1 > &mesh, size_t d, Side side)
 Compute the mean value of a function over a boundary of a manifold. More...
 
template<size_t Dim>
double mean_value_on_boundary (gsl::not_null< DataVector * > boundary_buffer, gsl::span< std::pair< size_t, size_t > > volume_and_slice_indices, const DataVector &f, const Mesh< Dim > &mesh, size_t d, Side)
 Compute the mean value of a function over a boundary of a manifold. More...
 
double mean_value_on_boundary (gsl::not_null< DataVector * >, gsl::span< std::pair< size_t, size_t > >, const DataVector &f, const Mesh< 1 > &mesh, size_t d, Side side)
 Compute the mean value of a function over a boundary of a manifold. More...
 
template<typename DerivativeTags , typename VariableTags , size_t Dim>
void logical_partial_derivatives (gsl::not_null< std::array< Variables< DerivativeTags >, Dim > * > logical_partial_derivatives_of_u, const Variables< VariableTags > &u, const Mesh< Dim > &mesh)
 Compute the partial derivatives of each variable with respect to the element logical coordinate. More...
 
template<typename DerivativeTags , typename VariableTags , size_t Dim>
auto logical_partial_derivatives (const Variables< VariableTags > &u, const Mesh< Dim > &mesh) -> std::array< Variables< DerivativeTags >, Dim >
 Compute the partial derivatives of each variable with respect to the element logical coordinate. More...
 
template<typename SymmList , typename IndexList , size_t Dim>
void logical_partial_derivative (gsl::not_null< TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, Frame::ElementLogical > * > logical_derivative_of_u, gsl::not_null< gsl::span< double > * > buffer, const Tensor< DataVector, SymmList, IndexList > &u, const Mesh< Dim > &mesh)
 Computes the logical partial derivative of a tensor, prepending the spatial derivative index, e.g. for \(\partial_i T_{a}{}^{b}\) the C++ call is get(i, a, b). More...
 
template<typename SymmList , typename IndexList , size_t Dim>
void logical_partial_derivative (gsl::not_null< TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, Frame::ElementLogical > * > logical_derivative_of_u, const Tensor< DataVector, SymmList, IndexList > &u, const Mesh< Dim > &mesh)
 Computes the logical partial derivative of a tensor, prepending the spatial derivative index, e.g. for \(\partial_i T_{a}{}^{b}\) the C++ call is get(i, a, b). More...
 
template<typename SymmList , typename IndexList , size_t Dim>
auto logical_partial_derivative (const Tensor< DataVector, SymmList, IndexList > &u, const Mesh< Dim > &mesh) -> TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, Frame::ElementLogical >
 Computes the logical partial derivative of a tensor, prepending the spatial derivative index, e.g. for \(\partial_i T_{a}{}^{b}\) the C++ call is get(i, a, b). More...
 
template<typename ResultTags , typename DerivativeTags , size_t Dim, typename DerivativeFrame >
void partial_derivatives (gsl::not_null< Variables< ResultTags > * > du, const std::array< Variables< DerivativeTags >, Dim > &logical_partial_derivatives_of_u, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian)
 Compute the partial derivatives of each variable with respect to the coordinates of DerivativeFrame. More...
 
template<typename ResultTags , typename VariableTags , size_t Dim, typename DerivativeFrame >
void partial_derivatives (gsl::not_null< Variables< ResultTags > * > du, const Variables< VariableTags > &u, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian)
 Compute the partial derivatives of each variable with respect to the coordinates of DerivativeFrame. More...
 
template<typename DerivativeTags , typename VariableTags , size_t Dim, typename DerivativeFrame >
auto partial_derivatives (const Variables< VariableTags > &u, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian) -> Variables< db::wrap_tags_in< Tags::deriv, DerivativeTags, tmpl::size_t< Dim >, DerivativeFrame > >
 Compute the partial derivatives of each variable with respect to the coordinates of DerivativeFrame. More...
 
template<typename SymmList , typename IndexList , size_t Dim, typename DerivativeFrame >
void partial_derivative (const gsl::not_null< TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, DerivativeFrame > * > du, const TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, Frame::ElementLogical > &logical_partial_derivative_of_u, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian)
 Compute the partial derivative of a Tensor with respect to the coordinates of DerivativeFrame. More...
 
template<typename SymmList , typename IndexList , size_t Dim, typename DerivativeFrame >
void partial_derivative (const gsl::not_null< TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, DerivativeFrame > * > du, const Tensor< DataVector, SymmList, IndexList > &u, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian)
 Compute the partial derivative of a Tensor with respect to the coordinates of DerivativeFrame. More...
 
template<typename SymmList , typename IndexList , size_t Dim, typename DerivativeFrame >
auto partial_derivative (const Tensor< DataVector, SymmList, IndexList > &u, const Mesh< Dim > &mesh, const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &inverse_jacobian) -> TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, DerivativeFrame >
 Compute the partial derivative of a Tensor with respect to the coordinates of DerivativeFrame. More...
 
template<size_t Dim, typename Function , Requires< gsl_multiroot_detail::is_jacobian_callable_v< Function, std::array< double, Dim > > > = nullptr>
std::array< double, Dim > RootFinder::gsl_multiroot (const Function &func, const std::array< double, Dim > &initial_guess, const StoppingCondition &condition, const size_t maximum_iterations, const Verbosity verbosity=Verbosity::Silent, const double maximum_absolute_tolerance=0.0, const Method method=Method::Newton)
 A multidimensional root finder supporting Newton and Hybrid methods, as well as modified methods based on these. More...
 

Detailed Description

Generic numerical algorithms.

Enumeration Type Documentation

◆ Method

enum class RootFinder::Method

The different options for the rootfinding method of gsl_multiroot.

This enum is for setting the method used by the rootfinder. The precise method used by the GSL rootfinder depends on whether or not the function passed to it has a callable jacobian member function; if it doesn't, the jacobian is approximated with a finite difference. For example, if the Method specified is Hybrid, gsl will use the gsl_multiroot_fdfsolver_hybridj method in the case where a jacobian is provided, and gsl_multiroot_fsolver_hybrid in the case where one isn't. See GSL's documentation for multidimensional rootfinding for information on the different methods.

Note
gsl does not provide a finite difference version of the modified Newton method (gsl_multiroot_fdfsolver_gnewton). In the case where a jacobian is not provided, the method used will be an unmodified Newton method.
Enumerator
Hybrids 

Hybrid of Newton's method along with following the gradient direction.

Hybrid 

"Unscaled version of Hybrids that uses a spherical trust region," see GSL documentation for more details.

Newton 

If an analytic jacobian is provided, gsl uses a modification of Newton's method to improve global convergence. Uses vanilla Newton's method if no jacobian is provided.

Function Documentation

◆ apply_matrices() [1/5]

template<typename ResultType , typename MatrixType , typename VectorType , size_t Dim>
void apply_matrices ( const gsl::not_null< ResultType * >  result,
const std::array< MatrixType, Dim > &  matrices,
const VectorType &  u,
const Index< Dim > &  extents 
)

Multiply by matrices in each dimension.

Multiplies each stripe in the first dimension of u by matrices[0], each stripe in the second dimension of u by matrices[1], and so on. If any of the matrices are empty they will be treated as the identity, but the matrix multiplications will be skipped for increased efficiency.

Note
The element type stored in the vectors to be transformed may be either double or std::complex<double>. The matrix, however, must be real. In the case of acting on a vector of complex values, the matrix is treated as having zero imaginary part. This is chosen for efficiency in all use-cases for spectral matrix arithmetic so far encountered.
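
A minimal sketch of the gsl::not_null overload (assuming the Matrix type supports nested initializer-list construction and that make_not_null is available; the matrix entries here are arbitrary):

// Apply m0 along dimension 0 and m1 (the identity) along dimension 1 of
// data stored on a 2x2 grid.
const Matrix m0{{1.0, 2.0}, {3.0, 4.0}};
const Matrix m1{{1.0, 0.0}, {0.0, 1.0}};
const std::array<Matrix, 2> matrices{{m0, m1}};
const Index<2> extents{2, 2};
const DataVector u{1.0, 2.0, 3.0, 4.0};
DataVector result(4);
apply_matrices(make_not_null(&result), matrices, u, extents);
// Each stripe of u along dimension 0 is multiplied by m0, while m1 leaves
// dimension 1 unchanged. Passing an empty Matrix{} instead of m1 would skip
// that dimension entirely, treating it as the identity.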

◆ apply_matrices() [2/5]

template<typename VariableTags , typename MatrixType , size_t Dim>
void apply_matrices ( const gsl::not_null< Variables< VariableTags > * >  result,
const std::array< MatrixType, Dim > &  matrices,
const Variables< VariableTags > &  u,
const Index< Dim > &  extents 
)

Multiply by matrices in each dimension.

Multiplies each stripe in the first dimension of u by matrices[0], each stripe in the second dimension of u by matrices[1], and so on. If any of the matrices are empty they will be treated as the identity, but the matrix multiplications will be skipped for increased efficiency.

Note
The element type stored in the vectors to be transformed may be either double or std::complex<double>. The matrix, however, must be real. In the case of acting on a vector of complex values, the matrix is treated as having zero imaginary part. This is chosen for efficiency in all use-cases for spectral matrix arithmetic so far encountered.

◆ apply_matrices() [3/5]

template<typename VariableTags , typename MatrixType , size_t Dim>
Variables< VariableTags > apply_matrices ( const std::array< MatrixType, Dim > &  matrices,
const Variables< VariableTags > &  u,
const Index< Dim > &  extents 
)

Multiply by matrices in each dimension.

Multiplies each stripe in the first dimension of u by matrices[0], each stripe in the second dimension of u by matrices[1], and so on. If any of the matrices are empty they will be treated as the identity, but the matrix multiplications will be skipped for increased efficiency.

Note
The element type stored in the vectors to be transformed may be either double or std::complex<double>. The matrix, however, must be real. In the case of acting on a vector of complex values, the matrix is treated as having zero imaginary part. This is chosen for efficiency in all use-cases for spectral matrix arithmetic so far encountered.

◆ apply_matrices() [4/5]

template<typename MatrixType , typename VectorType , size_t Dim>
VectorType apply_matrices ( const std::array< MatrixType, Dim > &  matrices,
const VectorType &  u,
const Index< Dim > &  extents 
)

Multiply by matrices in each dimension.

Multiplies each stripe in the first dimension of u by matrices[0], each stripe in the second dimension of u by matrices[1], and so on. If any of the matrices are empty they will be treated as the identity, but the matrix multiplications will be skipped for increased efficiency.

Note
The element type stored in the vectors to be transformed may be either double or std::complex<double>. The matrix, however, must be real. In the case of acting on a vector of complex values, the matrix is treated as having zero imaginary part. This is chosen for efficiency in all use-cases for spectral matrix arithmetic so far encountered.

◆ apply_matrices() [5/5]

template<typename ResultType , typename MatrixType , typename VectorType , size_t Dim>
ResultType apply_matrices ( const std::array< MatrixType, Dim > &  matrices,
const VectorType &  u,
const Index< Dim > &  extents 
)

Multiply by matrices in each dimension.

Multiplies each stripe in the first dimension of u by matrices[0], each stripe in the second dimension of u by matrices[1], and so on. If any of the matrices are empty they will be treated as the identity, but the matrix multiplications will be skipped for increased efficiency.

Note
The element type stored in the vectors to be transformed may be either double or std::complex<double>. The matrix, however, must be real. In the case of acting on a vector of complex values, the matrix is treated as having zero imaginary part. This is chosen for efficiency in all use-cases for spectral matrix arithmetic so far encountered.

◆ bracket_possibly_undefined_function_in_interval() [1/2]

template<typename Functor >
void RootFinder::bracket_possibly_undefined_function_in_interval ( const gsl::not_null< DataVector * >  lower_bound,
const gsl::not_null< DataVector * >  upper_bound,
const gsl::not_null< DataVector * >  f_at_lower_bound,
const gsl::not_null< DataVector * >  f_at_upper_bound,
const Functor &  f,
const DataVector guess 
)

Brackets the single root of the function f for each element in a DataVector, assuming the root lies in the given interval and that f may be undefined at some points in the interval.

f is a binary invokable that takes a double and a size_t as arguments. The double is the current value at which to evaluate f, and the size_t is the index into the DataVectors. f returns a std::optional<double> which evaluates to false if the function is undefined at the supplied point.

Assumes that there is only one root in the interval.

Assumes that if \(f(x_1)\) and \(f(x_2)\) are both defined for some \((x_1,x_2)\), then \(f(x)\) is defined for all \(x\) between \(x_1\) and \(x_2\).

On input, assumes that the root lies in the interval [lower_bound,upper_bound]. Optionally takes a guess for the location of the root.

On return, lower_bound and upper_bound are replaced with values that bracket the root and for which the function is defined, and f_at_lower_bound and f_at_upper_bound are replaced with f evaluated at those bracketing points.

◆ bracket_possibly_undefined_function_in_interval() [2/2]

template<typename Functor >
void RootFinder::bracket_possibly_undefined_function_in_interval ( const gsl::not_null< double * >  lower_bound,
const gsl::not_null< double * >  upper_bound,
const gsl::not_null< double * >  f_at_lower_bound,
const gsl::not_null< double * >  f_at_upper_bound,
const Functor &  f,
const double  guess 
)

Brackets the root of the function f, assuming a single root in a given interval \([x_\mathrm{lo},x_\mathrm{hi}]\) and assuming that f is defined only in an unknown smaller interval \([x_a,x_b]\) where \(x_\mathrm{lo} \leq x_a \leq x_b \leq x_\mathrm{hi}\).

f is a unary invokable that takes a double which is the current value at which to evaluate f. f returns a std::optional<double> which evaluates to false if the function is undefined at the supplied point.

Assumes that there is only one root in the interval.

Assumes that if \(f(x_1)\) and \(f(x_2)\) are both defined for some \((x_1,x_2)\), then \(f(x)\) is defined for all \(x\) between \(x_1\) and \(x_2\).

On input, assumes that the root lies in the interval [lower_bound,upper_bound]. Optionally takes a guess for the location of the root. If guess is supplied, then the function is evaluated first at guess and upper_bound before trying lower_bound: this means that it is optimal if guess underestimates the actual root and if upper_bound is less likely to be undefined than lower_bound.

On return, lower_bound and upper_bound are replaced with values that bracket the root and for which the function is defined, and f_at_lower_bound and f_at_upper_bound are replaced with f evaluated at those bracketing points.

bracket_possibly_undefined_function_in_interval throws an error if all points are valid but of the same sign (because that would indicate multiple roots, but we assume only one root), if no root exists, or if the interval over which the sign changes is sufficiently small relative to the given interval that the maximum number of iterations is exceeded before the root is bracketed.
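
A minimal sketch (assuming <cmath>, <optional>, and make_not_null are available; the function and interval are illustrative):

// f(x) = sqrt(x) - 1 is undefined for x < 0 and has its single root at x = 1.
const auto f = [](const double x) -> std::optional<double> {
  if (x < 0.0) {
    return std::nullopt;  // signal that f is undefined here
  }
  return std::sqrt(x) - 1.0;
};
double lower_bound = -4.0;
double upper_bound = 5.0;
double f_at_lower_bound = 0.0;  // overwritten on return
double f_at_upper_bound = 0.0;  // overwritten on return
RootFinder::bracket_possibly_undefined_function_in_interval(
    make_not_null(&lower_bound), make_not_null(&upper_bound),
    make_not_null(&f_at_lower_bound), make_not_null(&f_at_upper_bound), f, 0.5);
// On return [lower_bound, upper_bound] brackets x = 1, f is defined at both
// ends, and the bracket can be passed directly to RootFinder::toms748.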

◆ definite_integral()

template<size_t Dim>
double definite_integral ( const DataVector integrand,
const Mesh< Dim > &  mesh 
)

Compute the definite integral of a function over a manifold.

Given a function \(f\), compute its integral \(I\) with respect to the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\). E.g., in 1 dimension, \(I = \int_{-1}^1 f d\xi\).

The integral w.r.t. a different set of coordinates \(\boldsymbol{x} = \boldsymbol{x}(\boldsymbol{\xi})\) can be computed by pre-multiplying \(f\) by the Jacobian determinant \(J = \det d\boldsymbol{x}/d\boldsymbol{\xi}\) of the mapping \(\boldsymbol{x}(\boldsymbol{\xi})\). Note that, in the \(\boldsymbol{x}\) coordinates, the domain of integration is the image of the logical cube (square in 2D, interval in 1D) under the mapping.

The integral is computed by quadrature, using the quadrature rule for the basis associated with the collocation points.

Parameters
  • integrand the function to integrate.
  • mesh the Mesh defining the grid points on the manifold.
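
For example (a sketch assuming the Mesh constructor, the Spectral basis and quadrature enums, and the logical_coordinates function are available):

// Integrate f(xi) = xi^2 over the logical interval [-1, 1]; the exact answer
// is 2/3, and Gauss-Lobatto quadrature on 4 points is exact for it.
const Mesh<1> mesh{4, Spectral::Basis::Legendre,
                   Spectral::Quadrature::GaussLobatto};
const DataVector xi = get<0>(logical_coordinates(mesh));
const DataVector integrand = xi * xi;
const double integral = definite_integral(integrand, mesh);  // == 2/3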

◆ find_generalized_eigenvalues()

void find_generalized_eigenvalues ( gsl::not_null< DataVector * >  eigenvalues_real_part,
gsl::not_null< DataVector * >  eigenvalues_imaginary_part,
gsl::not_null< Matrix * >  eigenvectors,
Matrix  matrix_a,
Matrix  matrix_b 
)

Solve the generalized eigenvalue problem for two matrices.

This function uses the lapack routine dggev (http://www.netlib.org/lapack/explore-3.1.1-html/dggev.f.html) to solve the generalized eigenvalue problem \(A v_a =\lambda_a B v_a \) for the generalized eigenvalues \(\lambda_a\) and corresponding eigenvectors \(v_a\). matrix_a and matrix_b are each a Matrix; they correspond to square matrices \(A\) and \(B\) of the same dimension \(N\). eigenvalues_real_part is a DataVector of size \(N\) that will store the real parts of the eigenvalues, and eigenvalues_imaginary_part is a DataVector of size \(N\) that will store the imaginary parts of the eigenvalues. Complex eigenvalues always form complex-conjugate pairs, and the \(j\) and \(j+1\) eigenvalues will have the forms \(a+ib\) and \(a-ib\), respectively. The eigenvectors are returned as the columns of a square Matrix of dimension \(N\) called eigenvectors. If eigenvalue \(j\) is real, then column \(j\) of eigenvectors is the corresponding eigenvector. If eigenvalues \(j\) and \(j+1\) are a complex-conjugate pair, then the eigenvector for eigenvalue \(j\) is (column j) + \(i\) (column j+1), and the eigenvector for eigenvalue \(j+1\) is (column j) - \(i\) (column j+1).
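
A minimal sketch (assuming nested initializer-list construction of Matrix and that make_not_null is available):

// With A = diag(2, 3) and B the identity, the generalized eigenvalues are
// simply 2 and 3, both real.
Matrix matrix_a{{2.0, 0.0}, {0.0, 3.0}};
Matrix matrix_b{{1.0, 0.0}, {0.0, 1.0}};
DataVector eigenvalues_real_part(2);
DataVector eigenvalues_imaginary_part(2);
Matrix eigenvectors(2, 2);
find_generalized_eigenvalues(make_not_null(&eigenvalues_real_part),
                             make_not_null(&eigenvalues_imaginary_part),
                             make_not_null(&eigenvectors), matrix_a, matrix_b);
// eigenvalues_real_part holds {2.0, 3.0} (in some order) and
// eigenvalues_imaginary_part holds zeros.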

◆ gsl_multiroot()

template<size_t Dim, typename Function , Requires< gsl_multiroot_detail::is_jacobian_callable_v< Function, std::array< double, Dim > > > = nullptr>
std::array< double, Dim > RootFinder::gsl_multiroot ( const Function &  func,
const std::array< double, Dim > &  initial_guess,
const StoppingCondition condition,
const size_t  maximum_iterations,
const Verbosity  verbosity = Verbosity::Silent,
const double  maximum_absolute_tolerance = 0.0,
const Method  method = Method::Newton 
)

A multidimensional root finder supporting Newton and Hybrid methods, as well as modified methods based on these.

This root finder accepts function objects with and without a callable jacobian member function. The call operator both accepts and returns a std::array<double, Dim> of the appropriate dimension for the domain and range of the function on which the root find is performed.

If a jacobian function is provided, it must accept a std::array<double, Dim> and return a std::array<std::array<double, Dim>, Dim> representing the derivative of the call operator, with

\begin{equation} \text{jacobian[i][j]} = \frac{\partial f_i}{\partial x_j}. \end{equation}

Whether the jacobian is provided determines the details of the implementation of the root-finding method that is selected by the user with the Method enum: that is, whether the jacobian is computed analytically via the jacobian member function or numerically via a finite difference approximation.

Note
GSL does not provide a finite difference version of its modified Newton method, so the unmodified one is used instead when the user uses the Method::Newton method.

The user can select one of two possible criteria for convergence, StoppingCondition::Residual, where the sum of the absolute values of the components of the residual vector f are compared against the value provided to absolute_tolerance, and StoppingCondition::Convergence, where the size of the most recent step taken in the root-finding iteration is compared against absolute_tolerance + relative_tolerance * |x_i|, for each component. In either case, a maximum_absolute_tolerance may be specified if the user anticipates that the convergence criterion specified with StoppingCondition will be too strict for a few points out of a population of points found with a sequence of root finds.

See GSL's documentation for multidimensional rootfinding for reference.

Parameters
  • func Function whose root is to be found.
  • initial_guess Contains initial guess.
  • maximum_iterations The maximum number of iterations.
  • verbosity Whether to print diagnostic messages.
  • maximum_absolute_tolerance Acceptable absolute tolerance when root finder doesn't converge. You may wish to use this if there are only a few "problematic" points where it is difficult to do a precise root find.
  • method The method to use. See the documentation for the Method enum.
  • condition The convergence condition to use. See the documentation for the StoppingCondition enum.
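
A sketch of a function object with an analytic jacobian (how the stopping condition is constructed here, via StoppingConditions::Residual with an absolute tolerance, is an assumption; see the RootFinder::StoppingConditions namespace for the actual options):

// Solve f_0 = x^2 + y^2 - 2 = 0 and f_1 = x - y = 0, whose root near the
// initial guess is (1, 1).
struct Circle {
  std::array<double, 2> operator()(const std::array<double, 2>& x) const {
    return {{x[0] * x[0] + x[1] * x[1] - 2.0, x[0] - x[1]}};
  }
  std::array<std::array<double, 2>, 2> jacobian(
      const std::array<double, 2>& x) const {
    // jacobian[i][j] = d f_i / d x_j
    return {{{{2.0 * x[0], 2.0 * x[1]}}, {{1.0, -1.0}}}};
  }
};

const std::array<double, 2> initial_guess{{0.9, 1.2}};
const std::array<double, 2> root = RootFinder::gsl_multiroot(
    Circle{}, initial_guess,
    RootFinder::StoppingConditions::Residual{1.0e-12}, 100);
// root is approximately {1.0, 1.0}.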

◆ indefinite_integral() [1/2]

template<size_t Dim, typename VectorType >
VectorType indefinite_integral ( const VectorType &  integrand,
const Mesh< Dim > &  mesh,
size_t  dim_to_integrate 
)

Compute the indefinite integral of a function along the dimension dim_to_integrate, applying a zero boundary condition on each stripe.

Integrates with respect to one of the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\).

The integral w.r.t. a different set of coordinates \(\boldsymbol{x} = \boldsymbol{x}(\boldsymbol{\xi})\) can be computed by pre-multiplying integrand by the Jacobian determinant \(J = \det d\boldsymbol{x}/d\boldsymbol{\xi}\) of the mapping \(\boldsymbol{x}(\boldsymbol{\xi})\). The integration is still performed along one logical-coordinate direction, indicated by dim_to_integrate.

Requires: number of points in integrand and mesh are equal.
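
For example (a sketch assuming Mesh, the Spectral enums, and logical_coordinates are available):

// Integrate f(xi) = -2 xi along the only logical dimension; the
// antiderivative that vanishes on the stripe boundary is 1 - xi^2.
const Mesh<1> mesh{4, Spectral::Basis::Legendre,
                   Spectral::Quadrature::GaussLobatto};
const DataVector xi = get<0>(logical_coordinates(mesh));
const DataVector integrand = -2.0 * xi;
const DataVector integral = indefinite_integral(integrand, mesh, 0);
// integral == 1 - xi^2 at the grid points (to roundoff).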

◆ indefinite_integral() [2/2]

template<size_t Dim, typename VectorType >
void indefinite_integral ( gsl::not_null< VectorType * >  integral,
const VectorType &  integrand,
const Mesh< Dim > &  mesh,
size_t  dim_to_integrate 
)

Compute the indefinite integral of a function along the dimension dim_to_integrate, applying a zero boundary condition on each stripe.

Integrates with respect to one of the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\).

The integral w.r.t. a different set of coordinates \(\boldsymbol{x} = \boldsymbol{x}(\boldsymbol{\xi})\) can be computed by pre-multiplying integrand by the Jacobian determinant \(J = \det d\boldsymbol{x}/d\boldsymbol{\xi}\) of the mapping \(\boldsymbol{x}(\boldsymbol{\xi})\). The integration is still performed along one logical-coordinate direction, indicated by dim_to_integrate.

Requires: number of points in integrand and mesh are equal.

◆ largest_root_between_values_within_roundoff()

template<typename T >
T largest_root_between_values_within_roundoff ( const T &  a,
const T &  b,
const T &  c,
double  min_value,
double  max_value 
)

Returns the largest root of a quadratic equation \(ax^2 + bx + c = 0\) that is between min_value and max_value, within roundoff.

Returns: A root of a quadratic equation.

Requires: That there are two real roots.

Requires: At least one root is between min_value and max_value, to roundoff.

◆ linear_regression()

template<typename T >
LinearRegressionResult intrp::linear_regression ( const T &  x_values,
const T &  y_values 
)

A linear regression function.

A wrapper for gsl linear regression. Fits the data to \(y = m x + b\). Returns a struct containing \((b, m, \delta b, \delta m)\), where \(\delta b\) and \(\delta m\) are the error bars in \(b\) and \(m\). The error bars are computed assuming unknown errors in \(y\).

linear_regression could be implemented by calling LinearLeastSquares using Order=1, but we choose instead to implement linear_regression in a simpler way, by calling a simpler gsl function that involves no memory allocations, no copying, and no pow functions.
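
A minimal usage sketch (the LinearRegressionResult field names in the final comment are assumptions):

// The points lie exactly on y = 2 x + 1, so the fit recovers slope 2 and
// intercept 1 with (numerically) zero error bars.
const DataVector x_values{0.0, 1.0, 2.0, 3.0};
const DataVector y_values{1.0, 3.0, 5.0, 7.0};
const auto fit = intrp::linear_regression(x_values, y_values);
// e.g. fit.slope ~= 2.0 and fit.intercept ~= 1.0, with fit.delta_slope and
// fit.delta_intercept holding the error bars.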

◆ linearize() [1/4]

template<size_t Dim>
DataVector linearize ( const DataVector u,
const Mesh< Dim > &  mesh 
)

Truncate u to a linear function in each dimension.

For example, in 2D: \(u^{Lin} = U_0 + U_x x + U_y y + U_{xy} xy\)

Warning
the gsl::not_null variant assumes *result is of the correct size.

◆ linearize() [2/4]

template<size_t Dim>
DataVector linearize ( const DataVector u,
const Mesh< Dim > &  mesh,
size_t  d 
)

Truncate u to a linear function in the given dimension.

Parameters

  • u the function to linearize.
  • mesh the Mesh of the grid on the manifold on which u is located.
  • d the dimension that is to be linearized.
Warning
the gsl::not_null variant assumes *result is of the correct size.

◆ linearize() [3/4]

template<size_t Dim>
void linearize ( gsl::not_null< DataVector * >  result,
const DataVector u,
const Mesh< Dim > &  mesh 
)

Truncate u to a linear function in each dimension.

For example, in 2D: \(u^{Lin} = U_0 + U_x x + U_y y + U_{xy} xy\)

Warning
the gsl::not_null variant assumes *result is of the correct size.

◆ linearize() [4/4]

template<size_t Dim>
void linearize ( gsl::not_null< DataVector * >  result,
const DataVector u,
const Mesh< Dim > &  mesh,
size_t  d 
)

Truncate u to a linear function in the given dimension.

Parameters

  • u the function to linearize.
  • mesh the Mesh of the grid on the manifold on which u is located.
  • d the dimension that is to be linearized.
Warning
the gsl::not_null variant assumes *result is of the correct size.

◆ logical_partial_derivative() [1/3]

template<typename SymmList , typename IndexList , size_t Dim>
auto logical_partial_derivative ( const Tensor< DataVector, SymmList, IndexList > &  u,
const Mesh< Dim > &  mesh 
) -> TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, Frame::ElementLogical >

Computes the logical partial derivative of a tensor, prepending the spatial derivative index, e.g. for \(\partial_i T_{a}{}^{b}\) the C++ call is get(i, a, b).

There is an overload that accepts a buffer of size mesh.number_of_grid_points() or larger. When the buffer is passed, this function performs no memory allocations, which helps improve performance.

If you have a Variables with several tensors you need to differentiate you should use the logical_partial_derivatives function that operates on Variables since that'll be more efficient.
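
A sketch for a Scalar on a 2D mesh (assuming logical_coordinates and the Tensor constructor that forwards the number of grid points to its components):

// u(xi, eta) = xi * eta, so the logical derivatives are eta and xi.
const Mesh<2> mesh{3, Spectral::Basis::Legendre,
                   Spectral::Quadrature::GaussLobatto};
const auto coords = logical_coordinates(mesh);
Scalar<DataVector> u{mesh.number_of_grid_points()};
get(u) = get<0>(coords) * get<1>(coords);
const auto du = logical_partial_derivative(u, mesh);
// du has a lower element-logical index prepended to the (empty) index
// structure of u: get<0>(du) ~= eta and get<1>(du) ~= xi at the grid points.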

◆ logical_partial_derivative() [2/3]

template<typename SymmList , typename IndexList , size_t Dim>
void logical_partial_derivative ( gsl::not_null< TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, Frame::ElementLogical > * >  logical_derivative_of_u,
const Tensor< DataVector, SymmList, IndexList > &  u,
const Mesh< Dim > &  mesh 
)

Computes the logical partial derivative of a tensor, prepending the spatial derivative index, e.g. for \(\partial_i T_{a}{}^{b}\) the C++ call is get(i, a, b).

There is an overload that accepts a buffer of size mesh.number_of_grid_points() or larger. When the buffer is passed, this function performs no memory allocations, which helps improve performance.

If you have a Variables with several tensors you need to differentiate you should use the logical_partial_derivatives function that operates on Variables since that'll be more efficient.

◆ logical_partial_derivative() [3/3]

template<typename SymmList , typename IndexList , size_t Dim>
void logical_partial_derivative ( gsl::not_null< TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, Frame::ElementLogical > * >  logical_derivative_of_u,
gsl::not_null< gsl::span< double > * >  buffer,
const Tensor< DataVector, SymmList, IndexList > &  u,
const Mesh< Dim > &  mesh 
)

Computes the logical partial derivative of a tensor, prepending the spatial derivative index, e.g. for \(\partial_i T_{a}{}^{b}\) the C++ call is get(i, a, b).

There is an overload that accepts a buffer of size mesh.number_of_grid_points() or larger. When the buffer is passed, this function performs no memory allocations, which helps improve performance.

If you have a Variables with several tensors you need to differentiate you should use the logical_partial_derivatives function that operates on Variables since that'll be more efficient.

◆ logical_partial_derivatives() [1/2]

template<typename DerivativeTags , typename VariableTags , size_t Dim>
auto logical_partial_derivatives ( const Variables< VariableTags > &  u,
const Mesh< Dim > &  mesh 
) -> std::array< Variables< DerivativeTags >, Dim >

Compute the partial derivatives of each variable with respect to the element logical coordinate.

Requires: DerivativeTags to be the head of VariableTags

Returns a Variables with a spatial tensor index appended to the front of each tensor within u and each Tag wrapped with a Tags::deriv.

Template Parameters
  • DerivativeTags the subset of VariableTags for which derivatives are computed.

◆ logical_partial_derivatives() [2/2]

template<typename DerivativeTags , typename VariableTags , size_t Dim>
void logical_partial_derivatives ( gsl::not_null< std::array< Variables< DerivativeTags >, Dim > * >  logical_partial_derivatives_of_u,
const Variables< VariableTags > &  u,
const Mesh< Dim > &  mesh 
)

Compute the partial derivatives of each variable with respect to the element logical coordinate.

Requires: DerivativeTags to be the head of VariableTags

Returns a Variables with a spatial tensor index appended to the front of each tensor within u and each Tag wrapped with a Tags::deriv.

Template Parameters
  • DerivativeTags the subset of VariableTags for which derivatives are computed.

◆ mean_value()

template<size_t Dim>
double mean_value ( const DataVector f,
const Mesh< Dim > &  mesh 
)

Compute the mean value of a function over a manifold.

Given a function \(f\), compute its mean value \(\bar{f}\) with respect to the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\). E.g., in 1 dimension, \(\bar{f} = \int_{-1}^1 f d\xi \Big/ \int_{-1}^1 d\xi\).

Note
The mean w.r.t. a different set of coordinates \(\boldsymbol{x} = \boldsymbol{x}(\boldsymbol{\xi})\) can't be directly computed using this function. Before calling mean_value, \(f\) must be pre-multiplied by the Jacobian determinant \(J = \det d\boldsymbol{x}/d\boldsymbol{\xi}\) of the mapping \(\boldsymbol{x}(\boldsymbol{\xi})\). Additionally, the output of mean_value must be multiplied by a factor \(2^{\text{d}} / \int J d^{\text{d}}\xi\) (in \(d\) dimensions), to account for the different volume of the manifold in the \(\boldsymbol{x}\) coordinates.
Parameters
  • f the function to average.
  • mesh the Mesh defining the grid points on the manifold.
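
A sketch of the correction described in the note, for the 1D affine map \(x = 2\xi\) with constant Jacobian determinant \(J = 2\) (the setup is illustrative):

// Mean of f = xi^2 with respect to x = 2*xi: pre-multiply f by J = 2, call
// mean_value, then rescale by 2^d / (integral of J dxi) = 2 / 4.
const Mesh<1> mesh{4, Spectral::Basis::Legendre,
                   Spectral::Quadrature::GaussLobatto};
const DataVector xi = get<0>(logical_coordinates(mesh));
const DataVector f = xi * xi;
const DataVector jacobian_det_times_f = 2.0 * f;
const double mean_in_x = mean_value(jacobian_det_times_f, mesh) * 2.0 / 4.0;
// mean_in_x == 1/3, the mean of (x/2)^2 over x in [-2, 2].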

◆ mean_value_on_boundary() [1/5]

template<size_t Dim>
double mean_value_on_boundary ( const DataVector f,
const Mesh< Dim > &  mesh,
size_t  d,
Side  side 
)

Compute the mean value of a function over a boundary of a manifold.

Given a function \(f\), compute its mean value \(\bar{f}\), over a boundary, with respect to the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\).

See also
mean_value for notes about means w.r.t. other coordinates.
  • f the function to average.
  • mesh the Mesh defining the grid points on the manifold.
  • d the dimension which is sliced away to get the boundary.
  • side whether it is the lower or upper boundary in the d-th dimension.
  • boundary_buffer is a pointer to a DataVector of size mesh.slice_away(d).number_of_grid_points() used as a temporary buffer when slicing the data to the boundary.
  • volume_and_slice_indices a pair of (volume_index_for_point, slice_index_for_point) computed using the SliceIterator. Because SliceIterator is somewhat expensive, if computing the mean value on the same boundary for many different tensor components, prefer computing the slice indices once.

◆ mean_value_on_boundary() [2/5]

template<size_t Dim>
double mean_value_on_boundary ( gsl::not_null< DataVector * >  boundary_buffer,
const DataVector f,
const Mesh< Dim > &  mesh,
size_t  d,
Side  side 
)

Compute the mean value of a function over a boundary of a manifold.

Given a function \(f\), compute its mean value \(\bar{f}\), over a boundary, with respect to the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\).

See also
mean_value for notes about means w.r.t. other coordinates.
  • f the function to average.
  • mesh the Mesh defining the grid points on the manifold.
  • d the dimension which is sliced away to get the boundary.
  • side whether it is the lower or upper boundary in the d-th dimension.
  • boundary_buffer is a pointer to a DataVector of size mesh.slice_away(d).number_of_grid_points() used as a temporary buffer when slicing the data to the boundary.
  • volume_and_slice_indices a pair of (volume_index_for_point, slice_index_for_point) computed using the SliceIterator. Because SliceIterator is somewhat expensive, if computing the mean value on the same boundary for many different tensor components, prefer computing the slice indices once.

◆ mean_value_on_boundary() [3/5]

template<size_t Dim>
double mean_value_on_boundary ( gsl::not_null< DataVector * >  boundary_buffer,
gsl::span< std::pair< size_t, size_t > >  volume_and_slice_indices,
const DataVector f,
const Mesh< Dim > &  mesh,
size_t  d,
Side   
)

Compute the mean value of a function over a boundary of a manifold.

Given a function \(f\), compute its mean value \(\bar{f}\), over a boundary, with respect to the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\).

See also
mean_value for notes about means w.r.t. other coordinates.
  • f the function to average.
  • mesh the Mesh defining the grid points on the manifold.
  • d the dimension which is sliced away to get the boundary.
  • side whether it is the lower or upper boundary in the d-th dimension.
  • boundary_buffer is a pointer to a DataVector of size mesh.slice_away(d).number_of_grid_points() used as a temporary buffer when slicing the data to the boundary.
  • volume_and_slice_indices a pair of (volume_index_for_point, slice_index_for_point) computed using the SliceIterator. Because SliceIterator is somewhat expensive, if computing the mean value on the same boundary for many different tensor components, prefer computing the slice indices once.

◆ mean_value_on_boundary() [4/5]

double mean_value_on_boundary ( gsl::not_null< DataVector * >  ,
const DataVector f,
const Mesh< 1 > &  mesh,
size_t  d,
Side  side 
)

Compute the mean value of a function over a boundary of a manifold.

Given a function \(f\), compute its mean value \(\bar{f}\), over a boundary, with respect to the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\).

See also
mean_value for notes about means w.r.t. other coordinates.
  • f the function to average.
  • mesh the Mesh defining the grid points on the manifold.
  • d the dimension which is sliced away to get the boundary.
  • side whether it is the lower or upper boundary in the d-th dimension.
  • boundary_buffer is a pointer to a DataVector of size mesh.slice_away(d).number_of_grid_points() used as a temporary buffer when slicing the data to the boundary.
  • volume_and_slice_indices a pair of (volume_index_for_point, slice_index_for_point) computed using the SliceIterator. Because SliceIterator is somewhat expensive, if computing the mean value on the same boundary for many different tensor components, prefer computing the slice indices once.

◆ mean_value_on_boundary() [5/5]

double mean_value_on_boundary ( gsl::not_null< DataVector * >  ,
gsl::span< std::pair< size_t, size_t > >  ,
const DataVector f,
const Mesh< 1 > &  mesh,
size_t  d,
Side  side 
)

Compute the mean value of a function over a boundary of a manifold.

Given a function \(f\), compute its mean value \(\bar{f}\), over a boundary, with respect to the logical coordinates \(\boldsymbol{\xi} = (\xi, \eta, \zeta)\).

See also
mean_value for notes about means w.r.t. other coordinates.
  • f the function to average.
  • mesh the Mesh defining the grid points on the manifold.
  • d the dimension which is sliced away to get the boundary.
  • side whether it is the lower or upper boundary in the d-th dimension.
  • boundary_buffer is a pointer to a DataVector of size mesh.slice_away(d).number_of_grid_points() used as a temporary buffer when slicing the data to the boundary.
  • volume_and_slice_indices a pair of (volume_index_for_point, slice_index_for_point) computed using the SliceIterator. Because SliceIterator is somewhat expensive, if computing the mean value on the same boundary for many different tensor components, prefer computing the slice indices once.

◆ newton_raphson()

void RootFinder::newton_raphson ( )
delete

For the nonlinear solver, see NonlinearSolver::newton_raphson. We do not provide a Newton-Raphson root finder. The Boost implementation is buggy and can return the wrong answer, and it is not clear that performance would be better than RootFinder::toms748 in practice.

Newton-Raphson is asymptotically faster than TOMS748 in the ideal case, but that assumes that function evaluations take the same amount of time for both solvers, while in reality the derivatives are often as or even more expensive than the values. Additionally, for realistic problems, convergence is usually fast enough that neither solver reaches the asymptotic regime. Newton-Raphson also has the advantage of requiring less work internally in the solver implementation, but, again, for realistic problems solver overhead is usually dwarfed by the function evaluations.

◆ partial_derivative() [1/3]

template<typename SymmList , typename IndexList , size_t Dim, typename DerivativeFrame >
void partial_derivative ( const gsl::not_null< TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, DerivativeFrame > * >  du,
const Tensor< DataVector, SymmList, IndexList > &  u,
const Mesh< Dim > &  mesh,
const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &  inverse_jacobian 
)

Compute the partial derivative of a Tensor with respect to the coordinates of DerivativeFrame.

Returns a Tensor with a spatial tensor index appended to the front of the input Tensor.

If you have a Variables with several tensors you need to differentiate, you should use the partial_derivatives function that operates on Variables since that'll be more efficient.

◆ partial_derivative() [2/3]

template<typename SymmList , typename IndexList , size_t Dim, typename DerivativeFrame >
void partial_derivative ( const gsl::not_null< TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, DerivativeFrame > * >  du,
const TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, Frame::ElementLogical > &  logical_partial_derivative_of_u,
const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &  inverse_jacobian 
)

Compute the partial derivative of a Tensor with respect to the coordinates of DerivativeFrame.

Returns a Tensor with a spatial tensor index appended to the front of the input Tensor.

If you have a Variables with several tensors you need to differentiate, you should use the partial_derivatives function that operates on Variables since that'll be more efficient.

◆ partial_derivative() [3/3]

template<typename SymmList , typename IndexList , size_t Dim, typename DerivativeFrame >
auto partial_derivative ( const Tensor< DataVector, SymmList, IndexList > &  u,
const Mesh< Dim > &  mesh,
const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &  inverse_jacobian 
) -> TensorMetafunctions::prepend_spatial_index< Tensor< DataVector, SymmList, IndexList >, Dim, UpLo::Lo, DerivativeFrame >

Compute the partial derivative of a Tensor with respect to the coordinates of DerivativeFrame.

Returns a Tensor with a spatial tensor index appended to the front of the input Tensor.

If you have a Variables with several tensors you need to differentiate, you should use the partial_derivatives function that operates on Variables since that'll be more efficient.

◆ partial_derivatives() [1/3]

template<typename DerivativeTags , typename VariableTags , size_t Dim, typename DerivativeFrame >
auto partial_derivatives ( const Variables< VariableTags > &  u,
const Mesh< Dim > &  mesh,
const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &  inverse_jacobian 
) -> Variables< db::wrap_tags_in< Tags::deriv, DerivativeTags, tmpl::size_t< Dim >, DerivativeFrame > >

Compute the partial derivatives of each variable with respect to the coordinates of DerivativeFrame.

Either compute partial derivatives of all variables in VariableTags, or of a subset of the VariableTags. The subset of tags (DerivativeTags) must be the head of VariableTags.

The return-by-reference overload infers all template parameters from the arguments. The tensor types in the output buffer must have a spatial index appended to the front.

The return-by-value overload requires that the DerivativeTags are specified explicitly as the first template parameter. It returns a Variables with the DerivativeTags wrapped in Tags::deriv.
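
A sketch of the return-by-value overload (the tag MyField, the constant inverse Jacobian for the affine map \(x = 2\xi\), and the Tensor constructor that forwards the number of grid points to each component are illustrative assumptions):

struct MyField : db::SimpleTag {
  using type = Scalar<DataVector>;
};

const Mesh<1> mesh{5, Spectral::Basis::Legendre,
                   Spectral::Quadrature::GaussLobatto};
const size_t num_points = mesh.number_of_grid_points();
Variables<tmpl::list<MyField>> u{num_points};
get(get<MyField>(u)) = get<0>(logical_coordinates(mesh));  // u = xi = x / 2

InverseJacobian<DataVector, 1, Frame::ElementLogical, Frame::Inertial> inv_jac{
    num_points};
get<0, 0>(inv_jac) = 0.5;  // d(xi)/dx for the map x = 2*xi

const auto du = partial_derivatives<tmpl::list<MyField>>(u, mesh, inv_jac);
// get<Tags::deriv<MyField, tmpl::size_t<1>, Frame::Inertial>>(du) is
// approximately 0.5 everywhere, i.e. d(x/2)/dx.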

◆ partial_derivatives() [2/3]

template<typename ResultTags , typename DerivativeTags , size_t Dim, typename DerivativeFrame >
void partial_derivatives ( gsl::not_null< Variables< ResultTags > * >  du,
const std::array< Variables< DerivativeTags >, Dim > &  logical_partial_derivatives_of_u,
const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &  inverse_jacobian 
)

Compute the partial derivatives of each variable with respect to the coordinates of DerivativeFrame.

Either compute partial derivatives of all variables in VariableTags, or of a subset of the VariableTags. The subset of tags (DerivativeTags) must be the head of VariableTags.

The return-by-reference overload infers all template parameters from the arguments. The tensor types in the output buffer must have a spatial index appended to the front.

The return-by-value overload requires that the DerivativeTags are specified explicitly as the first template parameter. It returns a Variables with the DerivativeTags wrapped in Tags::deriv.

◆ partial_derivatives() [3/3]

template<typename ResultTags , typename VariableTags , size_t Dim, typename DerivativeFrame >
void partial_derivatives ( gsl::not_null< Variables< ResultTags > * >  du,
const Variables< VariableTags > &  u,
const Mesh< Dim > &  mesh,
const InverseJacobian< DataVector, Dim, Frame::ElementLogical, DerivativeFrame > &  inverse_jacobian 
)

Compute the partial derivatives of each variable with respect to the coordinates of DerivativeFrame.

Either compute partial derivatives of all variables in VariableTags, or of a subset of the VariableTags. The subset of tags (DerivativeTags) must be the head of VariableTags.

The return-by-reference overload infers all template parameters from the arguments. The tensor types in the output buffer must have a spatial index appended to the front.

The return-by-value overload requires that the DerivativeTags are specified explicitly as the first template parameter. It returns a Variables with the DerivativeTags wrapped in Tags::deriv.

◆ positive_root()

double positive_root ( double  a,
double  b,
double  c 
)

Returns the positive root of a quadratic equation \(ax^2 + bx + c = 0\).

Returns: The positive root of a quadratic equation.

Requires: That there are two real roots, of which only one is positive.
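
For example:

// x^2 - x - 2 = (x - 2)(x + 1) has roots -1 and 2; only 2 is positive.
const double root = positive_root(1.0, -1.0, -2.0);  // root == 2.0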

◆ raw_transpose()

template<typename T >
void raw_transpose ( const gsl::not_null< T * >  result,
const T *const  data,
const size_t  chunk_size,
const size_t  number_of_chunks 
)

Function to compute transposed data.

Transpose the data pointed to by data, writing the result to the location pointed to by result. See the transpose function for a safer interface and for the meaning of the other arguments.
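
A sketch of the safer transpose interface mentioned above, acting on a DataVector:

// Two chunks of three elements each: [a0 a1 a2 | b0 b1 b2]. Transposing with
// chunk_size = 3 and number_of_chunks = 2 interleaves the chunks.
const DataVector data{1.0, 2.0, 3.0, 10.0, 20.0, 30.0};
const DataVector transposed = transpose(data, 3, 2);
// transposed == {1.0, 10.0, 2.0, 20.0, 3.0, 30.0}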

◆ smallest_root_greater_than_value_within_roundoff()

template<typename T >
T smallest_root_greater_than_value_within_roundoff ( const T &  a,
const T &  b,
const T &  c,
double  value 
)

Returns the smallest root of a quadratic equation \(ax^2 + bx + c = 0\) that is greater than the given value, within roundoff.

Returns: A root of a quadratic equation.

Requires: That there are two real roots.

Requires: At least one root is greater than the given value, to roundoff.

◆ toms748() [1/4]

template<bool UseSimd = true, bool AssumeFinite = false, typename Function >
DataVector RootFinder::toms748 ( const Function &  f,
const DataVector lower_bound,
const DataVector upper_bound,
const DataVector f_at_lower_bound,
const DataVector f_at_upper_bound,
const double  absolute_tolerance,
const double  relative_tolerance,
const size_t  max_iterations = 100 
)

Finds the root of the function f with the TOMS_748 method on each element in a DataVector, where function values are supplied at the lower and upper bounds.

Supplying the function values is an optimization that saves two function evaluations per point. These values are typically available because the caller usually checks that a root is bracketed before calling toms748.

Note
If AssumeFinite is true, the code assumes all numbers are finite and that a > 0 is equivalent to sign(a) > 0; it also assumes that products such as fa * fb are finite. This reduces runtime but will cause bugs if the numbers are not finite.
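
A minimal sketch, assuming an invokable f of the same form as f_lambda in the next overload's example (callable with either a double or a simd batch plus an index) and DataVectors lower and upper that bracket the roots:

const double abs_tol = 1e-14;
const double rel_tol = 1e-14;
DataVector f_at_lower(lower.size());
DataVector f_at_upper(upper.size());
for (size_t i = 0; i < lower.size(); ++i) {
  // These values are typically left over from a bracketing check.
  f_at_lower[i] = f(lower[i], i);
  f_at_upper[i] = f(upper[i], i);
}
const DataVector roots = RootFinder::toms748(f, lower, upper, f_at_lower,
                                             f_at_upper, abs_tol, rel_tol);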

◆ toms748() [2/4]

template<bool UseSimd = true, bool AssumeFinite = false, typename Function >
DataVector RootFinder::toms748 ( const Function &  f,
const DataVector lower_bound,
const DataVector upper_bound,
const double  absolute_tolerance,
const double  relative_tolerance,
const size_t  max_iterations = 100 
)

Finds the root of the function f with the TOMS_748 method on each element in a DataVector.

f is a binary invokable that takes a double as its first argument and a size_t as its second. The double is the current value at which to evaluate f, and the size_t is the current index into the DataVectors. Below is an example of how to find the roots of different functions by indexing into a lambda-captured DataVector using the size_t passed to f.

const double abs_tol = 1e-15;
const double rel_tol = 1e-15;
// Each element of `constant` defines its own function constant[i] - x^2,
// whose root is bracketed by [lower[i], upper[i]].
const DataVector upper{2.0, 3.0, -sqrt(2.0) + abs_tol, -sqrt(2.0)};
const DataVector lower{sqrt(2.0) - abs_tol, sqrt(2.0), -2.0, -3.0};
const DataVector constant{2.0, 4.0, 2.0, 4.0};
const auto f_lambda = [&constant](const auto x, const size_t i) {
  // With UseSimd = true the invokable is called with simd batches, so load
  // the captured data accordingly; the scalar branch handles plain doubles.
  if constexpr (simd::is_batch<std::decay_t<decltype(x)>>::value) {
    return simd::load_unaligned(&(constant[i])) - square(x);
  } else {
    return constant[i] - square(x);
  }
};
const auto root_no_function_values =
    RootFinder::toms748<true>(f_lambda, lower, upper, abs_tol, rel_tol);

For each index i into the DataVector, the TOMS_748 algorithm searches for a root in the interval [lower_bound[i], upper_bound[i]], and will throw if this interval does not bracket a root, i.e. if f(lower_bound[i], i) * f(upper_bound[i], i) > 0.

See the Boost documentation for more details.

Requires: Function f is invokable with a double and a size_t

Note
If AssumeFinite is true, the code assumes all numbers are finite and that a > 0 is equivalent to sign(a) > 0; it also assumes that products such as fa * fb are finite. This reduces runtime but will cause bugs if the numbers are not finite.

Throws: convergence_error if, for any index, the requested tolerance is not met after max_iterations iterations.

◆ toms748() [3/4]

template<bool AssumeFinite = false, typename Function , typename T >
T RootFinder::toms748 ( const Function &  f,
const T  lower_bound,
const T  upper_bound,
const simd::scalar_type_t< T >  absolute_tolerance,
const simd::scalar_type_t< T >  relative_tolerance,
const size_t  max_iterations = 100,
const simd::mask_type_t< T >  ignore_filter = static_cast<simd::mask_type_t<T>>(0) 
)

Finds the root of the function f with the TOMS_748 method, where function values are not supplied at the lower and upper bounds.

Note
If AssumeFinite is true, the code assumes all numbers are finite and that a > 0 is equivalent to sign(a) > 0; it also assumes that products such as fa * fb are finite. This reduces runtime but will cause bugs if the numbers are not finite.

◆ toms748() [4/4]

template<bool AssumeFinite = false, typename Function , typename T >
T RootFinder::toms748 ( const Function &  f,
const T  lower_bound,
const T  upper_bound,
const T  f_at_lower_bound,
const T  f_at_upper_bound,
const simd::scalar_type_t< T >  absolute_tolerance,
const simd::scalar_type_t< T >  relative_tolerance,
const size_t  max_iterations = 100,
const simd::mask_type_t< T >  ignore_filter = static_cast<simd::mask_type_t<T>>(0) 
)

Finds the root of the function f with the TOMS_748 method.

f is a unary invokable that takes a double, which is the current value at which to evaluate f. An example is below.

const double abs_tol = 1e-15;
const double rel_tol = 1e-15;
const double upper = 2.0;
const double lower = sqrt(2.0) - abs_tol; // bracket surrounds root
const auto f_lambda = [](double x) { return 2.0 - square(x); };
auto root = RootFinder::toms748(f_lambda, lower, upper, abs_tol, rel_tol);

The TOMS_748 algorithm searches for a root in the interval [lower_bound, upper_bound], and will throw if this interval does not bracket a root, i.e. if f(lower_bound) * f(upper_bound) > 0.

The arguments f_at_lower_bound and f_at_upper_bound are optional, and are the function values at lower_bound and upper_bound. These function values are often known because the user typically checks if a root is bracketed before calling toms748; passing the function values here saves two function evaluations.
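
Continuing the example above, a minimal sketch of passing these values after a bracketing check:

const double f_at_lower = f_lambda(lower);
const double f_at_upper = f_lambda(upper);
// A root is bracketed when the function changes sign across the interval.
CHECK(f_at_lower * f_at_upper <= 0.0);
root = RootFinder::toms748(f_lambda, lower, upper, f_at_lower, f_at_upper,
                           abs_tol, rel_tol);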

Note
If AssumeFinite is true, the code assumes all numbers are finite and that a > 0 is equivalent to sign(a) > 0; it also assumes that products such as fa * fb are finite. This reduces runtime but will cause bugs if the numbers are not finite.

Requires: Function f is invokable with a double

Throws: convergence_error if the requested tolerance is not met after max_iterations iterations.

◆ transpose() [1/2]

template<typename U , typename T >
void transpose ( const gsl::not_null< T * >  result,
const U &  u,
const size_t  chunk_size,
const size_t  number_of_chunks 
)

Function to compute transposed data.

The primary use of this function is to rearrange the memory layout so that another function can operate on contiguous chunks of memory.

Requires: result.size() to be the product of number_of_chunks and chunk_size, u.size() to be equal to or greater than result.size(), and that both result and u have a data() member function.

Details

The container u holds a contiguous array of data, treated as a sequence of number_of_chunks contiguous sets of entries of size chunk_size. The output result has its data arranged such that the first number_of_chunks elements in result will be the first element of each chunk of u. The last number_of_chunks elements in result will be the last (i.e. chunk_size-th) element of each chunk of u. If u.size() is greater than result.size() the extra elements of u are ignored.

Note
This is equivalent to treating the first part of u as a matrix and filling result (or the returned object) with the transpose of that matrix.

If u represents a block of data indexed by \((x, y, z, \ldots)\) with the first index varying fastest, transpose serves to rotate the indices. If the extents are \((X, Y, Z, \ldots)\), with product \(N\), transpose(u, X, N/X) reorders the data to be indexed \((y, z, \ldots, x)\), transpose(u, X*Y, N/X/Y) reorders the data to be indexed \((z, \ldots, x, y)\), etc.

Example

const DataVector matrix{ 1., 2., 3.,
                         4., 5., 6.,
                         7., 8., 9.,
                        10., 11., 12.};
CHECK(transpose(matrix, 3, 4) == DataVector{1., 4., 7., 10.,
                                            2., 5., 8., 11.,
                                            3., 6., 9., 12.});
const size_t chunk_size_vars = 8;
const size_t n_grid_pts = 2 * chunk_size_vars;
Variables<two_vars<2>> variables(n_grid_pts, 0.);
for (size_t i = 0; i < variables.size(); ++i) {
  // clang-tidy: pointer arithmetic
  variables.data()[i] = i * i;  // NOLINT
}
const size_t number_of_chunks_vars = variables.size() / chunk_size_vars;
auto transposed_vars = variables;
transpose(make_not_null(&transposed_vars), variables, chunk_size_vars,
          number_of_chunks_vars);
for (size_t i = 0; i < chunk_size_vars; ++i) {
  for (size_t j = 0; j < number_of_chunks_vars; ++j) {
    // clang-tidy: pointer arithmetic
    CHECK(variables.data()[i + chunk_size_vars * j] ==            // NOLINT
          transposed_vars.data()[j + number_of_chunks_vars * i]); // NOLINT
  }
}
const size_t chunk_size = 8;
const size_t number_of_chunks = 2;
const size_t n_pts = chunk_size * number_of_chunks;
DataVector data(n_pts);
for (size_t i = 0; i < data.size(); ++i) {
  data[i] = static_cast<double>(i * i);
}
DataVector transposed_data(n_pts, 0.);
transposed_data = transpose(data, chunk_size, number_of_chunks);
for (size_t i = 0; i < chunk_size; ++i) {
  for (size_t j = 0; j < number_of_chunks; ++j) {
    CHECK(data[i + chunk_size * j] ==
          transposed_data[j + number_of_chunks * i]);
  }
}
Variables<one_var<2>> partial_vars(n_grid_pts, 0.);
get<Var1<2>>(partial_vars) = get<Var1<2>>(variables);
Variables<one_var<2>> partial_transpose(n_grid_pts, 0.);
const size_t partial_number_of_chunks = 2 * number_of_chunks_vars / 3;
transpose(make_not_null(&partial_transpose), variables, chunk_size_vars,
          partial_number_of_chunks);
for (size_t i = 0; i < chunk_size_vars; ++i) {
  for (size_t j = 0; j < partial_number_of_chunks; ++j) {
    // clang-tidy: pointer arithmetic
    CHECK(partial_transpose
              .data()[j + partial_number_of_chunks * i] ==  // NOLINT
          variables.data()[i + chunk_size_vars * j]);       // NOLINT
    CHECK(partial_transpose
              .data()[j + partial_number_of_chunks * i] ==  // NOLINT
          partial_vars.data()[i + chunk_size_vars * j]);    // NOLINT
  }
}
Template Parameters
U   the type of data to be transposed
T   the type of the transposed data

◆ transpose() [2/2]

template<typename U , typename T = U>
T transpose ( const U &  u,
const size_t  chunk_size,
const size_t  number_of_chunks 
)

Function to compute transposed data.

The primary use of this function is to rearrange the memory layout so that another function can operate on contiguous chunks of memory.

Requires: result.size() to be the product of number_of_chunks and chunk_size, u.size() to be equal to or greater than result.size(), and that both result and u have a data() member function.

Details

The container u holds a contiguous array of data, treated as a sequence of number_of_chunks contiguous sets of entries of size chunk_size. The output result has its data arranged such that the first number_of_chunks elements in result will be the first element of each chunk of u. The last number_of_chunks elements in result will be the last (i.e. chunk_size-th) element of each chunk of u. If u.size() is greater than result.size() the extra elements of u are ignored.

Note
This is equivalent to treating the first part of u as a matrix and filling result (or the returned object) with the transpose of that matrix.

If u represents a block of data indexed by \((x, y, z, \ldots)\) with the first index varying fastest, transpose serves to rotate the indices. If the extents are \((X, Y, Z, \ldots)\), with product \(N\), transpose(u, X, N/X) reorders the data to be indexed \((y, z, \ldots, x)\), transpose(u, X*Y, N/X/Y) reorders the data to be indexed \((z, \ldots, x, y)\), etc.

Example

const DataVector matrix{ 1., 2., 3.,
                         4., 5., 6.,
                         7., 8., 9.,
                        10., 11., 12.};
CHECK(transpose(matrix, 3, 4) == DataVector{1., 4., 7., 10.,
                                            2., 5., 8., 11.,
                                            3., 6., 9., 12.});
const size_t chunk_size_vars = 8;
const size_t n_grid_pts = 2 * chunk_size_vars;
Variables<two_vars<2>> variables(n_grid_pts, 0.);
for (size_t i = 0; i < variables.size(); ++i) {
  // clang-tidy: pointer arithmetic
  variables.data()[i] = i * i;  // NOLINT
}
const size_t number_of_chunks_vars = variables.size() / chunk_size_vars;
auto transposed_vars = variables;
transpose(make_not_null(&transposed_vars), variables, chunk_size_vars,
          number_of_chunks_vars);
for (size_t i = 0; i < chunk_size_vars; ++i) {
  for (size_t j = 0; j < number_of_chunks_vars; ++j) {
    // clang-tidy: pointer arithmetic
    CHECK(variables.data()[i + chunk_size_vars * j] ==            // NOLINT
          transposed_vars.data()[j + number_of_chunks_vars * i]); // NOLINT
  }
}
const size_t chunk_size = 8;
const size_t number_of_chunks = 2;
const size_t n_pts = chunk_size * number_of_chunks;
DataVector data(n_pts);
for (size_t i = 0; i < data.size(); ++i) {
  data[i] = static_cast<double>(i * i);
}
DataVector transposed_data(n_pts, 0.);
transposed_data = transpose(data, chunk_size, number_of_chunks);
for (size_t i = 0; i < chunk_size; ++i) {
  for (size_t j = 0; j < number_of_chunks; ++j) {
    CHECK(data[i + chunk_size * j] ==
          transposed_data[j + number_of_chunks * i]);
  }
}
Variables<one_var<2>> partial_vars(n_grid_pts, 0.);
get<Var1<2>>(partial_vars) = get<Var1<2>>(variables);
Variables<one_var<2>> partial_transpose(n_grid_pts, 0.);
const size_t partial_number_of_chunks = 2 * number_of_chunks_vars / 3;
transpose(make_not_null(&partial_transpose), variables, chunk_size_vars,
          partial_number_of_chunks);
for (size_t i = 0; i < chunk_size_vars; ++i) {
  for (size_t j = 0; j < partial_number_of_chunks; ++j) {
    // clang-tidy: pointer arithmetic
    CHECK(partial_transpose
              .data()[j + partial_number_of_chunks * i] ==  // NOLINT
          variables.data()[i + chunk_size_vars * j]);       // NOLINT
    CHECK(partial_transpose
              .data()[j + partial_number_of_chunks * i] ==  // NOLINT
          partial_vars.data()[i + chunk_size_vars * j]);    // NOLINT
  }
}
Template Parameters
U   the type of data to be transposed
T   the type of the transposed data

◆ weak_divergence()

template<typename... ResultTags, typename... FluxTags, size_t Dim>
void weak_divergence ( const gsl::not_null< Variables< tmpl::list< ResultTags... > > * >  divergence_of_fluxes,
const Variables< tmpl::list< FluxTags... > > &  fluxes,
const Mesh< Dim > &  mesh,
const InverseJacobian< DataVector, Dim, Frame::ElementLogical, Frame::Inertial > &  det_jac_times_inverse_jacobian 
)

Compute the weak form divergence of fluxes.

In a discontinuous Galerkin scheme we integrate the equations against the basis functions over the element. For the flux divergence term this gives:

\begin{align*} \int_{\Omega}d^n x \phi_{\breve{\imath}}\partial_i F^i, \end{align*}

where the basis functions are denoted by \(\phi_{\breve{\imath}}\).

Integrating by parts we get

\begin{align*} \int_{\Omega}d^n x\, \phi_{\breve{\imath}}\partial_i F^i = -\int_{\Omega}d^n x\, F^i \partial_i \phi_{\breve{\imath}} + \int_{\partial\Omega} d^{(n-1)}\Sigma\, n_i F^i \phi_{\breve{\imath}} \end{align*}

Next we expand the flux \(F^i\) in terms of the basis functions, yielding

\begin{align*} - \int_{\Omega}d^n x\,F^i_{\breve{\jmath}} \phi_{\breve{\jmath}} \partial_i \phi_{\breve{\imath}} + \int_{\partial\Omega} d^{(n-1)}\Sigma\, n_i F^i_{\breve{\jmath}} \phi_{\breve{\jmath}} \phi_{\breve{\imath}} \end{align*}

This function computes the volume term:

\begin{align*} \frac{1}{w_\breve{\imath}} \int_{\Omega}d^n x \, F^i_{\breve{\jmath}} \phi_{\breve{\jmath}} \partial_i \phi_{\breve{\imath}} \end{align*}

Note
When using Gauss-Lobatto points the numerical values of the divergence are the same for the strong and weak divergence at the interior points. When using Gauss points they are only the same at the central grid point and only when an odd number of grid points is used.
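
A minimal usage sketch, assuming the Variables of fluxes, the element's mesh, and det_jac_times_inverse_jacobian are already available, and that div_fluxes is a Variables buffer with the matching result tags and the same number of grid points:

// Computes the weak-form volume term of the flux divergence into div_fluxes.
weak_divergence(make_not_null(&div_fluxes), fluxes, mesh,
                det_jac_times_inverse_jacobian);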