Algorithms to solve linear systems of equations. More...
Namespaces  
LinearSolver::InnerProductImpls  
Implementations of LinearSolver::inner_product.  
LinearSolver  
Functionality for solving linear systems of equations.  
LinearSolver::Tags  
The DataBox tags associated with the linear solver.  
Classes  
struct  LinearSolver::ConjugateGradient< Metavariables > 
A conjugate gradient solver for linear systems of equations \(Ax=b\) where the operator \(A\) is symmetric. More...  
struct  LinearSolver::ConvergenceCriteria 
Criteria that determine whether the linear solve has converged. More...  
struct  LinearSolver::Gmres< Metavariables > 
A GMRES solver for nonsymmetric linear systems of equations \(Ax=b\). More...  
struct  LinearSolver::IterationId 
Identifies a step in the linear solver algorithm. More...  
Functions  
template<typename Lhs , typename Rhs >  
double  LinearSolver::inner_product (const Lhs &lhs, const Rhs &rhs) noexcept 
The local part of the Euclidean inner product on the vector space w.r.t. which the addition and scalar multiplication of both Lhs and Rhs is defined. More...  
In a way, the linear solver is for elliptic systems what time stepping is for the evolution code. This is because the DG scheme for an elliptic system reduces to a linear system of equations of the type \(Ax=b\), where \(A\) is a global matrix representing the DG discretization of the problem. Since there is one equation for each node in the computational domain, it quickly becomes infeasible to numerically invert the global matrix \(A\). Instead, we solve the problem iteratively so that we never need to construct \(A\) globally, but only its action \(Ax\), which can be evaluated locally by virtue of the DG formulation. This action of the operator is what we have to supply in each step of the iterative algorithms implemented here. It is where most of the computational cost goes and usually involves computing a volume contribution for each element and communicating fluxes with neighboring elements. Since the iterative algorithms typically scale badly with increasing grid size, a preconditioner \(P\) is needed to make \(P^{-1}A\) easier to invert.
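To illustrate why only the operator action \(Ax\) is needed, here is a minimal matrix-free conjugate gradient sketch. This is not the SpECTRE implementation: it is serial, uses `std::vector` in place of the DataBox infrastructure, and receives the operator action as a plain callable, but it shows that the algorithm never needs \(A\) as an assembled matrix.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Matrix-free conjugate gradient for a symmetric positive-definite operator.
// The operator A enters only through its action `apply_a`, mirroring how the
// iterative algorithms described above only require the operator applied to
// the operand in each step.
std::vector<double> conjugate_gradient(
    const std::function<std::vector<double>(const std::vector<double>&)>&
        apply_a,
    const std::vector<double>& b, const size_t max_iterations,
    const double tolerance) {
  const size_t n = b.size();
  std::vector<double> x(n, 0.);
  std::vector<double> r = b;  // residual r = b - A x, with initial guess x = 0
  std::vector<double> p = r;  // search direction
  const auto dot = [](const std::vector<double>& u,
                      const std::vector<double>& v) {
    double s = 0.;
    for (size_t i = 0; i < u.size(); ++i) {
      s += u[i] * v[i];
    }
    return s;
  };
  double r_dot_r = dot(r, r);
  for (size_t iter = 0;
       iter < max_iterations and r_dot_r > tolerance * tolerance; ++iter) {
    // The only place the operator is needed: one action per iteration.
    const auto a_p = apply_a(p);
    const double alpha = r_dot_r / dot(p, a_p);
    for (size_t i = 0; i < n; ++i) {
      x[i] += alpha * p[i];
      r[i] -= alpha * a_p[i];
    }
    const double r_dot_r_new = dot(r, r);
    const double beta = r_dot_r_new / r_dot_r;
    for (size_t i = 0; i < n; ++i) {
      p[i] = r[i] + beta * p[i];
    }
    r_dot_r = r_dot_r_new;
  }
  return x;
}
```

In the distributed setting described in this module, each inner product in this loop would additionally require a global reduction over all elements (see `LinearSolver::inner_product` below).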
In the iterative algorithms we usually don't work with the physical field \(x\) directly. Instead, we need to apply the operator to an internal variable defined by the respective algorithm. This variable is exposed with the LinearSolver::Tags::Operand prefix, and the algorithm expects that the computed operator action is written into db::add_tag_prefix<LinearSolver::Tags::OperatorAppliedTo, LinearSolver::Tags::Operand<...>> in each step.
Each linear solver is expected to expose the following compile-time interface:

component_list
: A tmpl::list that collects the additional parallel components this linear solver uses. The executables will append these to their own component_list.

tags
: A type that follows the same structure as those that initialize other parts of the DataBox in InitializeElement.hpp files. This means it exposes simple_tags, compute_tags and a static initialize function so that it can be chained into the DataBox initialization.

perform_step
: The action to be executed after the linear operator has been applied to the operand and written to the DataBox (see above). It converges the fields towards their solution and updates the operand before handing responsibility back to the algorithm for the next application of the linear operator.
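The shape of this interface can be sketched as follows. This is a hypothetical illustration only: the tmpl::list here is replaced by a minimal stand-in so the example is self-contained, and the tag and action bodies are empty placeholders, not the actual SpECTRE types.

```cpp
#include <type_traits>

// Minimal stand-in for the metaprogramming type list, so this sketch
// compiles without the actual library headers.
namespace tmpl {
template <typename... Ts>
struct list {};
}  // namespace tmpl

// Hypothetical parallel component used by the solver (placeholder).
struct SolverReductionComponent {};

// A sketch of the compile-time interface a linear solver is expected to
// expose, following the three items listed above.
struct MyLinearSolver {
  // Additional parallel components this solver uses; executables append
  // these to their own component_list.
  using component_list = tmpl::list<SolverReductionComponent>;

  // Chained into the DataBox initialization: exposes simple_tags,
  // compute_tags and a static initialize function.
  struct tags {
    using simple_tags = tmpl::list<>;
    using compute_tags = tmpl::list<>;
    template <typename DataBox>
    static DataBox initialize(DataBox box) {
      // A real solver would add its internal variables (e.g. the operand
      // and residual) to the box here.
      return box;
    }
  };

  // Action run after the operator has been applied to the operand; a real
  // solver would update the fields and the operand here.
  struct perform_step {};
};
```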

template<typename Lhs , typename Rhs >
double LinearSolver::inner_product (const Lhs &lhs, const Rhs &rhs) noexcept

The local part of the Euclidean inner product on the vector space w.r.t. which the addition and scalar multiplication of both Lhs and Rhs is defined.
The linear solver works under the following assumptions:

- lhs and rhs can each be interpreted as the local chunk of a vector of the same vector space \(V\). Local means there are vectors \(q, p\in V\) such that lhs and rhs represent the components of these vectors w.r.t. a subset \(B_i\) of a basis \(B\subset V\).
- The * and + operators of Lhs and Rhs implement the scalar multiplication and addition in the vector space locally, i.e. restricted to \(B_i\) in the above sense.

In practice this means that the full vectors \(p\) and \(q\) can be distributed on many elements, where each element only holds local chunks lhs and rhs of the components. Scalar multiplication and addition can be performed locally as expected, but computing the full inner product requires a global reduction over all elements that sums their local inner_products.