// See LICENSE.txt for details.

/// \file
/// Defines all Doxygen groups.

#pragma once

/*!
 * \defgroup ActionsGroup Actions
 * \brief A collection of steps used in algorithms.
 */

/*!
 * \defgroup AnalyticSolutionsGroup Analytic Solutions
 * \brief Analytic solutions to the equations implemented in \ref
 * EvolutionSystemsGroup and \ref EllipticSystemsGroup.
 */

/*!
 * \defgroup BoundaryConditionsGroup Boundary Conditions
 * A collection of boundary conditions used for evolutions.
 */

/*!
 * \defgroup CharmExtensionsGroup Charm++ Extensions
 * \brief Classes and functions used to make Charm++ easier and safer to use.
 */

/*!
 * \defgroup ComputationalDomainGroup Computational Domain
 * \brief The building blocks used to describe the computational domain.
 *
 * ### Description
 * The VolumeDim-dimensional computational Domain is constructed from a set of
 * non-overlapping Block%s. Each Block is a distorted VolumeDim-dimensional
 * hypercube. Each codimension-1 boundary of a Block is either part of the
 * external boundary of the computational domain, or is identical to a boundary
 * of exactly one other Block. Each Block is subdivided into one or more
 * Element%s that may be changed dynamically if AMR is enabled.
 */

/*!
 * \defgroup ConservativeGroup Conservative System Evolution
 * \brief Contains generic functions used for evolving conservative
 * systems.
 */

/*!
 * \defgroup ConstantExpressionsGroup Constant Expressions
 * \brief Contains an assortment of constexpr functions
 *
 * ### Description
 * Contains an assortment of constexpr functions that are useful for
 * metaprogramming, or efficient mathematical computations, such as
 * exponentiating to an integer power, where the power is known at compile
 * time.
 */

/*!
 * \defgroup ControlSystemGroup Control System
 * \brief Contains control system elements
 *
 * The control system manages the time-dependent mapping between frames, such as
 * the fixed computational frame (grid frame) and the inertial frame. The
 * time-dependent parameters of the mapping are adjusted by a feedback control
 * system in order to follow the dynamical evolution of objects such as horizons
 * of black holes or surfaces of neutron stars. For example, in binary black
 * hole simulations the map is typically a composition of maps that include
 * translation, rotation, scaling, shape, etc.
 * Each map under the governance of the control system has an associated
 * time-dependent map parameter \f$\lambda(t)\f$ that is a piecewise Nth order
 * polynomial. At discrete times (called reset times), the control system resets
 * the Nth time derivative of \f$\lambda(t)\f$ to a new constant value, in order
 * to minimize an error function \f$Q(t)\f$ that is specific to each map. At
 * each reset time, the Nth derivative of \f$\lambda(t)\f$ is set to a function
 * \f$U(t)\f$, called the control signal, that is determined by \f$Q(t)\f$ and
 * its time derivatives and time integral. Note that \f$\lambda(t)\f$,
 * \f$U(t)\f$, and \f$Q(t)\f$ can be vectors.
 *
 * The key components of the control system are:
 * - FunctionsOfTime: each map has an associated FunctionOfTime that represents
 *   the map parameter \f$\lambda(t)\f$ and relevant time derivatives.
 * - ControlError: each map has an associated ControlError that computes
 *   the error, \f$Q(t)\f$. Note that for each map, \f$Q(t)\f$ is defined to
 *   follow the convention that \f$dQ = -d\lambda\f$ as \f$Q \rightarrow 0\f$.
 * - Averager: an averager can be used to average out the noise in the 'raw'
 *   \f$Q(t)\f$ returned by the ControlError.
 * - Controller: the map controller computes the control signal \f$U(t)\f$ from
 *   \f$Q(t)\f$ and its time integral and time derivatives.
 *   The control is accomplished by setting the Nth derivative of
 *   \f$\lambda(t)\f$ to \f$U(t)\f$. Two common controllers are PID
 *   (proportional/integral/derivative)
 *   \f[U(t) = a_{0}\int_{t_{0}}^{t} Q(t') dt' + a_{1}Q(t)
 *   + a_{2}\frac{dQ}{dt}\f]
 *   or PND (proportional/N derivatives)
 *   \f[ U(t) = \sum_{k=0}^{N} a_{k} \frac{d^kQ}{dt^k} \f]
 *   The coefficients \f$a_{k}\f$ in the computation of \f$U(t)\f$ are chosen
 *   at each time such that the error \f$Q(t)\f$ will be critically damped
 *   on a timescale of \f$\tau\f$ (the damping time),
 *   i.e. \f$Q(t) \propto e^{-t/\tau}\f$.
 * - TimescaleTuner: each map has a TimescaleTuner that dynamically adjusts
 *   the damping timescale \f$\tau\f$ appropriately to keep the error \f$Q(t)\f$
 *   within some specified error bounds. Note that the reset time interval,
 *   \f$\Delta t\f$, is a constant fraction of this damping timescale,
 *   i.e. \f$\Delta t = \alpha \tau\f$ (empirically, we have found
 *   \f$\alpha=0.3\f$ to be a good choice).
 *
 * For additional details describing our control system approach, see
 * \cite Hemberger2012jz.
 */

/*!
 * \defgroup CoordinateMapsGroup Coordinate Maps
 * \brief Functions for mapping coordinates between different frames
 *
 * Coordinate maps provide the maps themselves, the inverse maps, along
 * with the Jacobian and inverse Jacobian of the maps.
 */

/*!
 * \defgroup CoordMapsTimeDependentGroup Coordinate Maps, Time-dependent
 * \brief Functions for mapping time-dependent coordinates between different
 * frames
 *
 * Coordinate maps provide the maps themselves, the inverse maps, the Jacobian
 * and inverse Jacobian of the maps, and the frame velocity (time derivative of
 * the map).
 */

/*!
 * \defgroup DataBoxGroup DataBox
 * \brief Documentation, functions, metafunctions, and classes necessary for
 * using DataBox
 *
 * DataBox is a heterogeneous compile-time associative container with lazy
 * evaluation of functions. DataBox can not only store data, but can also store
 * functions that depend on other data inside the DataBox. The functions will be
 * evaluated when the data they return is requested. The result is cached, and
 * if a dependency of the function is modified the cache is invalidated.
 *
 * #### Simple and Compute Tags and Their Items
 *
 * The compile-time keys are structs called tags, while the values are called
 * items. Tags are quite minimal, containing only the information necessary to
 * store the data and evaluate functions. There are two different types of tags
 * that a DataBox can hold: simple tags and compute tags. Simple tags are for
 * data that is inserted into the DataBox at the time of creation, while compute
 * tags are for data that will be computed from a function when the compute item
 * is retrieved. If a compute item is never retrieved from the DataBox then it
 * is never evaluated.
 *
 * Simple tags must have a member type alias type that is the type of the data
 * to be stored and a static std::string name() method that returns the name
 * of the tag. Simple tags must inherit from db::SimpleTag.
 *
 * Compute tags must also have a static std::string name() method that returns
 * the name of the tag, but they cannot have a type type alias. Instead,
 * compute tags must have a static member function or static member function
 * pointer named function. function can be a function template if necessary.
 * The function must take all its arguments by const reference. The
 * arguments to the function are retrieved using tags from the DataBox that the
 * compute tag is in. The tags for the arguments are set in the member type
 * alias argument_tags, which must be a tmpl::list of the tags corresponding
 * to each argument. Note that the order of the tags in argument_tags is
 * the order that they will be passed to the function. Compute tags must inherit
 * from db::ComputeTag.
 *
 * Here is an example of a simple tag:
 *
 * \snippet Test_DataBox.cpp databox_tag_example
 *
 * and an example of a compute tag with a function pointer:
 *
 * \snippet Test_DataBox.cpp databox_compute_item_tag_example
 *
 * If the compute item's function is defined inline in the tag then the compute
 * item is of the form:
 *
 * \snippet Test_DataBox.cpp compute_item_tag_function
 *
 * Compute tags can also have their functions be overloaded on the type of
 * their arguments, or be overloaded on the number of arguments, and compute
 * tag functions can also be function templates. These features can all be
 * combined to produce extremely generic compute tags. The below compute tag
 * takes as template parameters a parameter pack of integers, which is used to
 * specify several of the arguments. The function is overloaded for the single
 * argument case, and a variadic function template is provided for the multiple
 * arguments case. Note that in practice few compute tags will be this complex.
 *
 * \snippet Test_BaseTags.cpp compute_template_base_tags
 *
 * #### Subitems and Prefix Tags
 *
 * A simple or compute tag might also hold a collection of data, such as a
 * container of Tensors. In many cases you will want to be able to retrieve
 * individual elements of the collection from the DataBox without having to
 * first retrieve the collection. The infrastructure that allows for this is
 * called *Subitems*. The subitems of the parent tag must refer to a subset of
 * the data inside the parent tag, e.g. one Tensor in the collection. If the
 * parent tag is Parent and the subitems tags are Sub<0>, Sub<1>, then when
 * Parent is added to the DataBox, so are Sub<0> and Sub<1>. This means
 * the retrieval mechanisms described below will work on Parent, Sub<0>, and
 * Sub<1>.
 *
 * Subitems specify requirements on the tags they act on. For example, there
 * could be a requirement that all tags with a certain type are to be treated
 * as Subitems. Let's say that the Parent tag holds a Variables, and
 * Variables can be used with the Subitems infrastructure to add the nested
 * Tensors. Then all tags that hold a Variables will have their subitems
 * added into the DataBox. To add a new type as a subitem the db::Subitems
 * struct must be specialized. See the documentation of db::Subitems for more
 * details.
 *
 * The DataBox also supports *prefix tags*, which are commonly used for items
 * that are related to a different item by some operation. Specifically, say
 * you have a tag MyTensor and you want to also have the time derivative of
 * MyTensor, then you can use the prefix tag dt to get dt<MyTensor>. The
 * benefit of a prefix tag over, say, a separate tag dtMyTensor is that prefix
 * tags can be added and removed by the compute tags acting on the original tag.
 * Prefix tags can also be composed, so a second time derivative would be
 * dt<dt<MyTensor>>. The net result of the prefix tags infrastructure is that
 * the compute tag that returns dt<MyTensor> only needs to know its input
 * tags; it knows how to name its output based on that. In addition to the
 * normal things a simple or a compute tag must hold, prefix tags must have a
 * nested type alias tag, which is the tag being prefixed. Prefix tags must
 * also inherit from db::PrefixTag in addition to inheriting from
 * db::SimpleTag or db::ComputeTag.
 *
 * #### Creating a DataBox
 *
 * You should never call the constructor of a DataBox directly. DataBox
 * construction is quite complicated and the helper functions db::create and
 * db::create_from should be used instead. db::create is used to construct a
 * new DataBox. It takes two typelists as explicit template parameters, the
 * first being a list of the simple tags to add and the second being a list of
 * compute tags to add. If no compute tags are being added then only the simple
 * tags list must be specified. The tags lists should be passed as
 * db::create<db::AddSimpleTags<simple_tags...>,
 * db::AddComputeTags<compute_tags...>>. The arguments to db::create are the
 * initial values of the simple tags and must be passed in the same order as the
 * tags in the db::AddSimpleTags list. If the type of an argument passed to
 * db::create does not match the type of the corresponding simple tag a static
 * assertion will trigger. Here is an example of how to use db::create:
 *
 * \snippet Test_DataBox.cpp create_databox
 *
 * To create a new DataBox from an existing one use the db::create_from
 * function. The only time a new DataBox needs to be created is when tags need
 * to be removed or added. Like db::create, db::create_from also takes
 * typelists as explicit template parameters. The first template parameter is
 * the list of tags to be removed, which is passed using db::RemoveTags, the
 * second is the list of simple tags to add, and the third is the list of
 * compute tags to add. If tags are only removed then only the first template
 * parameter needs to be specified. If tags are being removed and only simple
 * tags are being added then only the first two template parameters need to be
 * specified. Here is an example of removing a tag or compute tag:
 *
 * \snippet Test_DataBox.cpp create_from_remove
 *
 * Simple tags and compute tags are added analogously, by listing them in the
 * db::AddSimpleTags and db::AddComputeTags template parameters and passing
 * initial values for any added simple tags as arguments to db::create_from.
 *
 * #### Accessing and Mutating Items
 *
 * To retrieve an item from a DataBox use the db::get function. db::get
 * will always return a const reference to the object stored in the DataBox
 * and will also have full type information available. This means you are able
 * to use const auto& when retrieving tags from the DataBox. For example,
 * \snippet Test_DataBox.cpp using_db_get
 *
 * If you want to mutate the value of a simple item in the DataBox use
 * db::mutate. Any compute item that depends on the mutated item will have its
 * cached value invalidated and be recomputed the next time it is retrieved from
 * the DataBox. db::mutate takes a parameter pack of tags to mutate as
 * explicit template parameters, a gsl::not_null of the DataBox whose items
 * will be mutated, an invokable, and extra arguments to forward to the
 * invokable. The invokable takes the arguments passed from the DataBox by
 * const gsl::not_null while the extra arguments are forwarded as-is. The
 * invokable is not allowed to retrieve anything from the DataBox, so any
 * needed items must first be retrieved with db::get and passed as extra
 * arguments. For example,
 *
 * \snippet Test_DataBox.cpp databox_mutate_example
 *
 * In addition to retrieving items using db::get and mutating them using
 * db::mutate, there is a facility to invoke an invokable with tags from the
 * DataBox. db::apply takes a tmpl::list of tags as an explicit template
 * parameter, retrieves the corresponding items from the DataBox passed in,
 * and then invokes the invokable with those items. Similarly,
 * db::mutate_apply invokes the invokable but allows for mutating some of
 * the tags. See the documentation of db::apply and db::mutate_apply for
 * examples of how to use them.
 *
 * #### The Base Tags Mechanism
 *
 * Retrieving items by tags should not require knowing whether the item being
 * retrieved was computed using a compute tag or simply added using a simple
 * tag. The framework that handles this falls under the umbrella term
 * *base tags*. The reason is that a compute tag can inherit from a simple tag
 * with the same item type, and then calls to db::get with the simple tag can
 * be used to retrieve the compute item as well. That is, say you have a compute
 * tag ArrayCompute that derives from the simple tag Array; then you can
 * retrieve both ArrayCompute and Array by calling
 * db::get<Array>(box). The base tags mechanism requires that only one Array
 * tag be present in the DataBox, otherwise a static assertion is triggered.
 *
 * The inheritance idea can be generalized further with what are called base
 * tags. A base tag is an empty struct that inherits from db::BaseTag. Any
 * simple or compute item that derives from the base tag can be retrieved
 * using db::get. Consider the following VectorBase and Vector tag:
 *
 * \snippet Test_BaseTags.cpp vector_base_definitions
 *
 * It is possible to retrieve Vector<1> from the DataBox using
 * VectorBase<1>. Most importantly, base tags can also be used in compute tag
 * arguments, as follows:
 *
 * \snippet Test_BaseTags.cpp compute_template_base_tags
 *
 * As shown in the code example, the base tag mechanism works with function
 * template compute tags, enabling generic programming to be combined with the
 * lazy evaluation and automatic dependency analysis offered by the DataBox. To
 * really demonstrate the power of base tags, let's also have ArrayComputeBase
 * inherit from a simple tag Array, which inherits from a base tag ArrayBase
 * as follows:
 *
 * \snippet Test_BaseTags.cpp array_base_definitions
 *
 * To start, let's create a DataBox that holds a Vector<0> and an
 * ArrayComputeBase<0> (the concrete tag must be used when creating the
 * DataBox, not the base tag), retrieve the tags using the base tag mechanism,
 * including mutating Vector<0>, and then verify that the dependencies are
 * handled correctly.
 *
 * \snippet Test_BaseTags.cpp base_simple_and_compute_mutate
 *
 * Notice that we are able to retrieve ArrayComputeBase<0> with ArrayBase<0>
 * and Array<0>. We were also able to mutate Vector<0> using
 * VectorBase<0>.
 *
 * We can even remove tags using their base tags with db::create_from:
 *
 * \snippet Test_BaseTags.cpp remove_using_base
 *
 * The base tags infrastructure even works with Subitems. Even if you mutate the
 * subitem of a parent using a base tag, the appropriate compute item caches
 * will be invalidated.
 *
 * \note All of the base tags infrastructure works for db::get, db::mutate,
 * db::apply and db::mutate_apply.
 */

/*!
 * \defgroup DataBoxTagsGroup DataBox Tags
 * \brief Structures and metafunctions for labeling the contents of DataBoxes
 */

/*!
 * \defgroup DataStructuresGroup Data Structures
 * \brief Various useful data structures used in SpECTRE
 */

/*!
 * \defgroup DiscontinuousGalerkinGroup Discontinuous Galerkin
 * \brief Functions and classes specific to the Discontinuous Galerkin
 * algorithm.
 */

/*!
 * \defgroup DomainCreatorsGroup Domain Creators
 * A collection of domain creators for specifying the initial computational
 * domain geometry.
 */

/*!
 * \defgroup EllipticSystemsGroup Elliptic Systems
 * \brief All available elliptic systems and information on how to implement
 * elliptic systems
 *
 * \details Actions and parallel components may require an elliptic system to
 * expose the following types:
 *
 * - volume_dim: The number of spatial dimensions
 * - fields_tag: A \ref DataBoxGroup tag that represents the fields being
 *   solved for.
 * - variables_tag: The variables to compute DG volume contributions and
 *   fluxes for. Use db::add_tag_prefix<LinearSolver::Tags::Operand, fields_tag>
 *   unless you have a reason not to.
 * - compute_operator_action: A struct that computes the bulk contribution to
 *   the DG operator. Must expose a tmpl::list of argument_tags and a static
 *   apply function that takes the following arguments in this order:
 *   - First, the types of the tensors in
 *     db::add_tag_prefix<Metavariables::temporal_id::step_prefix,
 *     variables_tag> (which represent the linear operator applied to the
 *     variables) as not-null pointers.
 *   - Followed by the types of the argument_tags as constant references.
 *
 * Actions and parallel components may also require the Metavariables to expose
 * the following types:
 *
 * - system: See above.
 * - temporal_id: A DataBox tag that identifies steps in the algorithm.
 *   Generally use LinearSolver::Tags::IterationId.
 */

/*!
 * \defgroup EquationsOfStateGroup Equations of State
 * \brief The various available equations of state
 */

/*!
 * \defgroup ErrorHandlingGroup Error Handling
 * Macros and functions used for handling errors
 */

/*!
 * \defgroup EventsAndTriggersGroup Events and Triggers
 * \brief Classes and functions related to events and triggers
 */

/*!
 * \defgroup EvolutionSystemsGroup Evolution Systems
 * \brief All available evolution systems and information on how to implement
 * evolution systems
 *
 * \details Actions and parallel components may require an evolution system to
 * expose the following types:
 *
 * - volume_dim: The number of spatial dimensions
 * - variables_tag: The evolved variables to compute DG volume contributions
 *   and fluxes for.
 * - compute_time_derivative: A struct that computes the bulk contribution to
 *   the DG discretization of the time derivative. Must expose a tmpl::list of
 *   argument_tags and a static apply function that takes the following
 *   arguments in this order:
 *   - First, the types of the tensors in
 *     db::add_tag_prefix<Metavariables::temporal_id::step_prefix,
 *     variables_tag> (which represent the time derivatives of the variables)
 *     as not-null pointers.
 *   - The types of the argument_tags as constant references.
 *
 * Actions and parallel components may also require the Metavariables to expose
 * the following types:
 *
 * - system: See above.
 * - temporal_id: A DataBox tag that identifies steps in the algorithm.
 *   Generally use Tags::TimeId.
 */

/*!
 * \defgroup ExecutablesGroup Executables
 * \brief A list of executables and how to use them
 *
 * <table class="doxtable">
 * <tr>
 * <th>Executable Name </th><th>Description </th>
 * </tr>
 * <tr>
 * <td> \ref ParallelInfoExecutablePage "ParallelInfo" </td>
 * <td> Executable for checking number of nodes, cores, etc.</td>
 * </tr>
 * </table>
 */

/*!
 * \defgroup FileSystemGroup File System
 * \brief A light-weight file system library.
 */

/*!
 * \defgroup GeneralRelativityGroup General Relativity
 * \brief Contains functions used in General Relativistic simulations
 */

/*!
 * \defgroup HDF5Group HDF5
 * \brief Functions and classes for manipulating HDF5 files
 */

/*!
 * \defgroup OptionTagsGroup Input File Options
 * \brief Tags used for options parsed from the input file.
 *
 * These can be stored in the ConstGlobalCache or passed to the initialize
 * function of a parallel component.
 */

/*!
 * \defgroup LinearSolverGroup Linear Solver
 * \brief Algorithms to solve linear systems of equations
 *
 * \details In a way, the linear solver is for elliptic systems what time
 * stepping is for the evolution code. This is because the DG scheme for an
 * elliptic system reduces to a linear system of equations of the type
 * \f$Ax=b\f$, where \f$A\f$ is a global matrix representing the DG
 * discretization of the problem. Since this is one equation for each node in
 * the computational domain it becomes infeasible to numerically invert the
 * global matrix \f$A\f$. Instead, we solve the problem iteratively so that we
 * never need to construct \f$A\f$ globally, but only its action \f$Ax\f$,
 * which can be evaluated locally by virtue of the DG formulation. This action
 * of the operator is what we have to supply in each step of the iterative
 * algorithms implemented here. It is where most of the computational cost goes
 * and usually involves computing a volume contribution for each element and
 * communicating fluxes with neighboring elements. Since the iterative
 * algorithms typically scale badly with increasing grid size, a preconditioner
 * \f$P\f$ is needed in order to make \f$P^{-1}A\f$ easier to invert.
 *
 * In the iterative algorithms we usually don't work with the physical field
 * \f$x\f$ directly. Instead we need to apply the operator to an internal
 * variable defined by the respective algorithm. This variable is exposed as the
 * LinearSolver::Tags::Operand prefix, and the algorithm expects that the
 * computed operator action is written into
 * db::add_tag_prefix<LinearSolver::Tags::OperatorAppliedTo,
 * LinearSolver::Tags::Operand<...>> in each step.
 *
 * Each linear solver is expected to expose the following compile-time
 * interface:
 * - component_list: A tmpl::list that collects the additional parallel
 *   components this linear solver uses. The executables will append these to
 *   their own component_list.
 * - tags: A type that follows the same structure as those that initialize
 *   other parts of the DataBox in InitializeElement.hpp files. This means it
 *   exposes simple_tags, compute_tags and a static initialize function so
 *   that it can be chained into the DataBox initialization.
 * - perform_step: The action to be executed after the linear operator has
 *   been applied to the operand and written to the DataBox (see above). It will
 *   converge the fields towards their solution and update the operand before
 *   handing responsibility back to the algorithm for the next application of
 *   the linear operator:
 *   \snippet LinearSolverAlgorithmTestHelpers.hpp action_list
 */

/// \defgroup LoggingGroup Logging
/// \brief Functions for logging progress of running code

/// \defgroup MathFunctionsGroup Math Functions
/// \brief Useful analytic functions

/*!
 * \defgroup NumericalAlgorithmsGroup Numerical Algorithms
 * \brief Generic numerical algorithms
 */

/*!
 * \defgroup NumericalFluxesGroup Numerical Fluxes
 * \brief The set of available numerical fluxes
 */

/*!
 * \defgroup ObserversGroup Observers
 * \brief Observing/writing data to disk.
 */

/*!
 * \defgroup OptionParsingGroup Option Parsing
 * Things related to parsing YAML input files.
 */

/*!
 * \defgroup ParallelGroup Parallelization
 * \brief Functions, classes and documentation related to parallelization and
 * Charm++

SpECTRE builds a layer on top of Charm++ that performs various safety checks and
initialization for the user that can otherwise lead to difficult-to-debug
undefined behavior. The central concept is what is called a %Parallel
Component. A %Parallel Component is a struct with several type aliases that
is used by SpECTRE to set up the Charm++ chares and allowed communication
patterns. %Parallel Components are input arguments to the compiler, which then
writes the parallelization infrastructure that you requested for the executable.
There is no restriction on the number of %Parallel Components, though
practically it is best to have around 10 at most.

Here is an overview of what is described in detail in the sections below:

- Metavariables: Provides high-level configuration to the compiler, e.g. the
  physical system to be simulated.
- Phase: Defines distinct simulation phases separated by a global
  synchronization point, e.g. Initialize, Evolve and Exit.
- Algorithm: In each phase, iterates over a list of actions until the current
  phase ends.
- %Parallel component: Maintains and executes its algorithm.
- Action: Performs a computational task, e.g. evaluating the right hand side of
  the time evolution equations. May require data to be received from another
  action potentially being executed on a different core or node.

### The Metavariables Class

SpECTRE takes a different approach to input options passed to an executable than
is common. SpECTRE not only reads an input file at runtime but also has many
choices made at compile time. The compile time options are specified by what is
referred to as the metavariables. What exactly the metavariables struct
specifies depends somewhat on the executable, but all metavariables structs must
specify the following:

- help: a static constexpr OptionString that will be printed as part of the
  help message. It should describe the executable and basic usage of it, as well
  as any non-standard options that must be specified in the metavariables and
  their current values. An example of a help string for one of the testing
  executables is:
  \snippet Test_AlgorithmCore.cpp help_string_example
- component_list: a tmpl::list of the parallel components (described below)
  that are to be created. Most evolution executables will have the
  DgElementArray parallel component listed. An example of a component_list
  for one of the test executables is:
  \snippet Test_AlgorithmCore.cpp component_list_example
- const_global_cache_tag_list: a (possibly empty) tmpl::list of the
  OptionTags that are needed by the metavariables.
- Phase: an enum class that must contain at least Initialization and
  Exit. Phases are described in the next section.
- determine_next_phase: a static function with the signature
  \code
  static Phase determine_next_phase(
      const Phase& current_phase,
      const Parallel::CProxy_ConstGlobalCache<EvolutionMetavars>& cache_proxy)
      noexcept;
  \endcode
  What this function does is described below in the discussion of phases.

There are also several optional members:

- input_file: a static constexpr OptionString that is the default name of
  the input file that is to be read. This can be overridden at runtime by
  passing the --input-file argument to the executable.
- ignore_unrecognized_command_line_options: a static constexpr bool that
  defaults to false. If set to true then unrecognized command line options
  are ignored. Ignoring unrecognized options is generally only necessary for
  tests where arguments for the testing framework, Catch, are passed to the
  executable.

647 ### Phases of an Execution
648
649 Global synchronization points, where all cores wait for each other, are
650 undesirable for scalability reasons. However, they are sometimes inevitable for
651 algorithmic reasons. That is, in order to actually get a correct solution you
652 need to have a global synchronization. SpECTRE executables can have multiple
653 phases, where after each phase a global synchronization occurs. By global
654 synchronization we mean that no parallel components are executing or have more
655 tasks to execute: everything is waiting on a task to perform.
656
657 Every executable must have at least two phases, Initialization and Exit. The
658 next phase is decided by the static member function determine_next_phase in
659 the metavariables. Currently this function has access to the phase that is
660 ending, and also the global cache. In the future we will add support for
661 receiving data from various components to allow for more complex decision
662 making. Here is an example of a determine_next_phase function and the Phase
663 enum class:
664 \snippet Test_AlgorithmCore.cpp determine_next_phase_example
665
666 In contrast, an evolution executable might have phases Initialization,
667 SetInitialData, Evolve, and Exit, but have a similar switch or if-else
668 logic in the determine_next_phase function. The first phase that is entered is
669 always Initialization. During the Initialization phase the initialize
function is called on all parallel components. Once all parallel components'
initialize functions are complete, the next phase is determined and the
execute_next_phase function is then called on all the parallel components.
673
674 At the end of an execution the Exit phase has the executable wait to make sure
675 no parallel components are performing or need to perform any more tasks, and
676 then exits. An example where this approach is important is if we are done
677 evolving a system but still need to write data to disk. We do not want to exit
678 the simulation until all data has been written to disk, even though we've
679 reached the final time of the evolution.
680
681 ### The Algorithm
682
683 Since most numerical algorithms repeat steps until some criterion such as the
684 final time or convergence is met, SpECTRE's parallel components are designed to
685 do such iterations for the user. An Algorithm executes an ordered list of
686 actions until one of the actions cannot be evaluated, typically because it is
687 waiting on data from elsewhere. When an algorithm can no longer evaluate actions
688 it passively waits by handing control back to Charm++. Once an algorithm
689 receives data, typically done by having another parallel component call the
690 receive_data function, the algorithm will try again to execute the next
691 action. If the algorithm is still waiting on more data then the algorithm will
692 again return control to Charm++ and passively wait for more data. This is
693 repeated until all required data is available. The actions that are iterated
694 over by the algorithm are called iterable actions and are described below.
695
696 \note
697 Currently all Algorithms must execute the same actions (described below) in all
phases. This restriction is also planned to be relaxed if the need arises.
699
700 ### %Parallel Components
701
702 Each %Parallel Component struct must have the following type aliases:
703 1. using chare_type is set to one of:
704  1. Parallel::Algorithms::Singletons have one object in the entire execution
705  of the program.
706  2. Parallel::Algorithms::Arrays hold zero or more elements, each of which
707  is an object distributed to some core. An array can grow and shrink in
708  size dynamically if need be and can also be bound to another array. A
709  bound array has the same number of elements as the array it is bound to,
710  and elements with the same ID are on the same core. See Charm++'s chare
711  arrays for details.
712  3. Parallel::Algorithms::Groups are arrays with
713  one element per core which are not able to be moved around between
714  cores. These are typically useful for gathering data from array elements
715  on their core, and then processing or reducing the data further. See
716  [Charm++'s](http://charm.cs.illinois.edu/help) group chares for details.
717  4. Parallel::Algorithms::Nodegroups are similar to
718  groups except that there is one element per node. For Charm++ SMP (shared
719  memory parallelism) builds, a node corresponds to the usual definition of
720  a node on a supercomputer. However, for non-SMP builds nodes and cores are
  equivalent. We ensure that all entry method calls done through the
  Algorithm's simple_action and receive_data functions are threadsafe.
  User-controlled threading is possible by calling the non-entry
  method member function threaded_action.
725 2. using metavariables is set to the Metavariables struct that stores the
726  global metavariables. It is often easiest to have the %Parallel
727  Component struct have a template parameter Metavariables that is the
728  global metavariables struct. Examples of this technique are given below.
729 3. using action_list is set to a tmpl::list of the %Actions (described
730  below) that the Algorithm running on the %Parallel Component executes. The
731  %Actions are executed in the order that they are given in the tmpl::list.
732 4. using initial_databox is set to the type of the DataBox that will be passed
733  to the first Action of the action_list. Typically it is the output of some
734  simple action called during the Initialization Phase.
735 5. using options is set to a (possibly empty) tmpl::list of the option
736  structs. The options are read in from the input file specified in the main
737  Metavariables struct. After being read in they are passed to the
738  initialize function of the parallel component, which is described below.
739 6. using const_global_cache_tag_list is set to a tmpl::list of OptionTags
740  that are required by the parallel component. This is usually obtained from
741  the action_list using the Parallel::get_const_global_cache_tags
742  metafunction.
743
744 \note Array parallel components must also specify the type alias using
745 array_index, which is set to the type that indexes the %Parallel Component
746 Array. Charm++ allows arrays to be 1 through 6 dimensional or be indexed by a
747 custom type. The Charm++ provided indexes are wrapped as
748 Parallel::ArrayIndex1D through Parallel::ArrayIndex6D. When writing custom
749 array indices, the [Charm++ manual](http://charm.cs.illinois.edu/help) tells you
750 to write your own CkArrayIndex, but we have written a general implementation
that provides this functionality; all that you need to provide is a
plain-old-data ([POD](http://en.cppreference.com/w/cpp/concept/PODType))
struct that is at most the size of 3 integers.
755
%Parallel Components have a static initialize function that effectively
serves as the constructor of the component. The signature of the
initialize function must be:
759 \code
760 static void initialize(
761  Parallel::CProxy_ConstGlobalCache<metavariables>& global_cache, opts...);
762 \endcode
763 The initialize function is called by the Main %Parallel Component when
764 the execution starts and will typically call a simple %Action
765 to set up the initial state of the Algorithm, similar to what a constructor
766 does for classes. The initialize function also receives arguments that
767 are read from the input file which were specified in the options typelist
768 described above. The options are usually used to initialize the %Parallel
769 Component's DataBox, or even the component itself. An example of initializing
770 the component itself would be using the value of an option to control the size
771 of the %Parallel Component Array. The initialize functions of different
772 %Parallel Components are called in random order and so it is not safe to have
773 them depend on each other.
774
775 Each parallel component must also decide what to do in the different phases of
776 the execution. This is controlled by an execute_next_phase function with
777 signature:
778 \code
779 static void execute_next_phase(
780  const typename metavariables::Phase next_phase,
781  const Parallel::CProxy_ConstGlobalCache<metavariables>& global_cache);
782 \endcode
783 The determine_next_phase function in the Metavariables determines the next
784 phase, after which the execute_next_phase function gets called. The
785 execute_next_phase function determines what the %Parallel Component should do
786 during the next phase. For example, it may simply call perform_algorithm, call
787 a series of simple actions, perform a reduction over an Array, or not do
788 anything at all. Note that perform_algorithm performs the same actions (the
789 ones in action_list) no matter what Phase it is called in.
790
791 An example of a singleton %Parallel Component is:
792 \snippet Test_AlgorithmParallel.cpp singleton_parallel_component
793
794 An example of an array %Parallel Component is:
795 \snippet Test_AlgorithmParallel.cpp array_parallel_component
796 Elements are inserted into the Array by using the Charm++ insert member
797 function of the CProxy for the array. The insert function is documented in
798 the Charm++ manual. In the above Array example array_proxy is a CProxy and
799 so all the documentation for Charm++ array proxies applies. SpECTRE always
creates empty Arrays with the constructor and requires users to insert as
many elements as they want, on whichever cores they want them placed. Note
802 that load balancing calls may result in Array elements being moved.
803
804 ### %Actions
805
806 For those familiar with Charm++, actions should be thought of as effectively
807 being entry methods. They are functions that can be invoked on a remote object
(chare/parallel component) using a CProxy (see the [Charm++
manual](http://charm.cs.illinois.edu/help)). The CProxy is retrieved from the
ConstGlobalCache using the parallel component struct and the
811 Parallel::get_parallel_component() function. %Actions are structs with a
812 static apply method and come in three variants: simple actions, iterable
813 actions, and reduction actions. One important thing to note
814 is that actions cannot return any data to the caller of the remote method.
815 Instead, "returning" data must be done via callbacks or a callback-like
816 mechanism.
817
818 The simplest signature of an apply method is for iterable actions:
819 \snippet Test_AlgorithmCore.cpp apply_iterative
820 The return type is discussed at the end of each section describing a particular
821 type of action. Simple actions can have additional arguments but must have at
822 least the arguments shown above. Reduction actions must have the above arguments
823 and an argument taken by value that is of the type the reduction was made over.
824 The db::DataBox should be thought of as the member data of the parallel
825 component while the actions are the member functions. The combination of a
826 db::DataBox and actions allows building up classes with arbitrary member data
827 and methods using template parameters and invocation of actions. This approach
828 allows us to eliminate the need for users to work with Charm++'s interface
829 files, which can be error prone and difficult to use.
830
831 The ConstGlobalCache is passed to each action so that the action has access
832 to global data and is able to invoke actions on other parallel components. The
833 ParallelComponent template parameter is the tag of the parallel component that
invoked the action. A proxy to the calling parallel component can then be
retrieved from the ConstGlobalCache. The remaining arguments to apply are
slightly different for different types of actions, so they will be discussed
837 below. However, one thing that is disallowed for all actions is calling an
838 action locally from within an action on the same parallel component.
Specifically, it is disallowed to invoke an action through a pointer to the
local component obtained from ckLocal().

843 Here ckLocal() is a Charm++ provided method that returns a pointer to the
local (currently executing) parallel component. See the [Charm++
manual](http://charm.cs.illinois.edu/help) for more details.
However, you are able to queue a new action to be executed later on the same
847 parallel component by getting your own parallel component from the
848 ConstGlobalCache (Parallel::get_parallel_component<ParallelComponent>(cache)).
849 The difference between the two calls is that by calling an action through the
850 parallel component you will first finish the series of actions you are in, then
851 when they are complete Charm++ will call the next queued action.
852
853 Array, group, and nodegroup parallel components can have actions invoked in two
ways. The first is a broadcast, where the action is called on all elements of
the array.
856
858
859 The second case is invoking an action on a specific array element by using the
860 array element's index. The below example shows how a broadcast would be done
861 manually by looping over all elements in the array:
862
863 \snippet Test_AlgorithmParallel.cpp call_on_indexed_array
864
865 Note that in general you will not know what all the elements in the array are
866 and so a broadcast is the correct method of sending data to or invoking an
867 action on all elements of an array parallel component.
868
869 The array_index argument passed to all apply methods is the index into the
870 parallel component array. If the parallel component is not an array the value
871 and type of array_index is implementation defined and cannot be relied on. The
872 ActionList type is the tmpl::list of iterable actions run on the algorithm.
873 That is, it is equal to the action_list type alias in the parallel component.
874
875 #### 1. Simple %Actions
876
877 Simple actions are designed to be called in a similar fashion to member
878 functions of classes. They are the direct analog of entry methods in Charm++
879 except that the member data is stored in the db::DataBox that is passed in as
880 the first argument. There are a couple of important things to note with simple
881 actions:
882
883 1. A simple action must return void but can use db::mutate to change values
884  of items in the DataBox if the DataBox is taken as a non-const reference.
885  There is one exception: if the input DataBox is empty, then the
886  simple action can return a DataBox of type initial_databox. That is, an
887  action taking an empty DataBox and returning the initial_databox is
888  effectively constructing the DataBox in its initial state.
889 2. A simple action is instantiated once for an empty
890  db::DataBox<tmpl::list<>>, once for a DataBox of type
891  initial_databox (listed in the parallel component), and once for each
892  returned DataBox from the iterable actions in the action_list in the
893  parallel component. In some cases you will need specific items to be in the
894  DataBox otherwise the action won't compile. To restrict which DataBoxes can
895  be passed you should use Requires in the action's apply function
896  template parameter list. For example,
897  \snippet Test_AlgorithmCore.cpp requires_action
898  where the conditional checks if any element in the parameter pack DbTags is
899  CountActionsCalled.
900
901
902 A simple action that does not take any arguments can be called using a CProxy
903 from the ConstGlobalCache as follows:
904
905 \snippet Test_AlgorithmCore.cpp simple_action_call
906
907 If the simple action takes arguments then the arguments must be passed to the
908 simple_action method as a std::tuple (because Charm++ doesn't yet support
909 variadic entry method templates). For example,
910
911 \snippet Test_AlgorithmNodelock.cpp simple_action_with_args
912
913 Multiple arguments can be passed to the std::make_tuple call.
914
915 \note
916 You must be careful about type deduction when using std::make_tuple because
917 std::make_tuple(0) will be of type std::tuple<int>, which will not work if
918 the action is expecting to receive a size_t as its extra argument. Instead,
919 you can get a std::tuple<size_t> in one of two ways. First, you can pass in
920 std::tuple<size_t>(0), second you can include the header
921 Utilities/Literals.hpp and then pass in std::make_tuple(0_st).
922
923 #### 2. Iterable %Actions
924
925 %Actions in the algorithm that are part of the action_list are
926 executed one after the other until one of them cannot be evaluated. Iterable
927 actions may have an is_ready method that returns true or false depending
928 on whether or not the action is ready to be evaluated. If no is_ready method
929 is provided then the action is assumed to be ready to be evaluated. The
930 is_ready method typically checks that required data from other parallel
931 components has been received. For example, it may check that all data from
932 neighboring elements has arrived to be able to continue integrating in time.
An is_ready method takes the same first four arguments as the apply method
shown above (the DataBox, the inboxes, the ConstGlobalCache, and the array
index) and returns a bool.

The inboxes argument is a collection of the tags passed to receive_data and
is specified in the iterable action's member type alias inbox_tags, which must
be a tmpl::list. Each tag in inbox_tags must have two member type aliases: a
temporal_id, which is used to identify when the data was sent, and a type,
which is the type of the data stored in the inboxes. The type is typically a
std::unordered_map<temporal_id, DATA>. In the discussed scenario of waiting
for neighboring elements to send their data, the DATA type would be a
std::unordered_map<TheElementIndex, DataSent>. Having DATA be a
std::unordered_multiset is currently also supported. The receive tag is listed
in the action's inbox_tags type alias, and the action's is_ready function
checks whether all of the expected data has arrived.

Only once all of the expected data has been received is the iterable action
executed, not before.
960
961 \warning
962 It is the responsibility of the iterable action to remove data from the inboxes
963 that will no longer be needed. The removal of unneeded data should be done in
964 the apply function.
965
966 Iterable actions can change the type of the DataBox by adding or removing
967 elements/tags from the DataBox. The only requirement is that the last action in
968 the action_list returns a DataBox that is the same type as the
969 initial_databox. Iterable actions can also request that the algorithm no
970 longer be executed, and choose which action in the ActionList/action_list to
971 execute next. This is all done via the return value from the apply function.
972 The apply function for iterable actions must return a std::tuple of one,
973 two, or three elements. The first element of the tuple is the new DataBox,
974 which can be the same as the type passed in or a DataBox with different tags.
975 Most iterable actions will simply return:
976
977 \snippet Test_AlgorithmParallel.cpp return_forward_as_tuple
978
979 By returning the DataBox as a reference in a std::tuple we avoid any
unnecessary copying of the DataBox. The second element is an optional bool that
controls whether or not the algorithm is terminated. If the bool is true then
the algorithm is terminated; by default it is false. Here is an example of how
983 to return a DataBox with the same type that is passed in and also terminate
984 the algorithm:
985
986 \snippet Test_AlgorithmParallel.cpp return_with_termination
987
988 Notice that we again return a reference to the DataBox, which is done to avoid
989 any copying. After an algorithm has been terminated it can be restarted by
990 passing false to the set_terminate method followed by calling the
991 perform_algorithm or receive_data methods.
992
993 The third optional element in the returned std::tuple is a size_t whose
994 value corresponds to the index of the action to be called next in the
995 action_list. The metafunction tmpl::index_of<list, element> can be used to
get a tmpl::integral_constant whose value is the index of the type element
in the typelist list. For example,
998
999 \snippet Test_AlgorithmCore.cpp out_of_order_action
1000
1001 Again a reference to the DataBox is returned, while the termination bool and
1002 next action size_t are returned by value. The metafunction call
1003 tmpl::index_of<ActionList, iterate_increment_int0>::%value returns a size_t
whose value is the index of the action iterate_increment_int0 in the
action_list.
1005 The indexing of actions in the action_list starts at 0.
1006
1007 Iterable actions are invoked as part of the algorithm and so the only way
1008 to request they be invoked is by having the algorithm run on the parallel
component. The algorithm can be explicitly evaluated by calling the
1010 perform_algorithm method:
1011
1012 \snippet Test_AlgorithmCore.cpp perform_algorithm
1013
1014 The algorithm is also evaluated by calling the receive_data function, either
on an entire array or singleton (this does a broadcast), or on an individual
element of the array. Here is an example of calling individual elements:
1021
1022 \snippet Test_AlgorithmParallel.cpp call_on_indexed_array
1023
1024 The receive_data function always takes a ReceiveTag, which is set in the
action's inbox_tags type alias as described above. The first argument is the
1026 temporal identifier, and the second is the data to be sent.
1027
1028 Normally when remote functions are invoked they go through the Charm++ runtime
1029 system, which adds some overhead. The receive_data function tries to elide
1030 the call to the Charm++ RTS for calls into array components. Charm++ refers to
1031 these types of remote calls as "inline entry methods". With the Charm++ method
1032 of eliding the RTS, the code becomes susceptible to stack overflows because
1033 of infinite recursion. The receive_data function is limited to at most 64 RTS
1034 elided calls, though in practice reaching this limit is rare. When the limit is
1035 reached the remote method invocation is done through the RTS instead of being
1036 elided.
1037
1038 #### 3. Reduction %Actions
1039
Finally, there are reduction actions, which are used when reducing data over an
array. For example, you may want to know the sum of an int from every
1042 element in the array. You can do this as follows:
1043
1044 \snippet Test_AlgorithmReduction.cpp contribute_to_reduction_example
1045
1046 This reduces over the parallel component
1047 ArrayParallelComponent<Metavariables>, reduces to the parallel component
1048 SingletonParallelComponent<Metavariables>, and calls the action
1049 ProcessReducedSumOfInts after the reduction has been performed. The reduction
1050 action is:
1051
1052 \snippet Test_AlgorithmReduction.cpp reduce_sum_int_action
1053
1054 As you can see, the last argument to the apply function is of type int, and
1055 is the reduced value.
1056
You can also broadcast the result back to an array, even to yourself.
1058
1060
1061 It is often necessary to reduce custom data types, such as std::vector or
1062 std::unordered_map. Charm++ supports such custom reductions, and so does our
1063 layer on top of Charm++.
Custom reductions require one additional step beyond calling
contribute_to_reduction: writing a reduction function to reduce the custom
data. We provide a generic type that can be used in custom reductions,
Parallel::ReductionData, which takes a series of Parallel::ReductionDatum as
template parameters and ReductionDatum::value_types as the arguments to the
constructor. Each ReductionDatum takes up to four template parameters (two
are required). The first is the type of data to reduce, and the second is a
binary invokable that is called at each step of the reduction to combine two
messages. The last two template parameters are used after the reduction has
completed. The third parameter is an n-ary invokable that is called once the
reduction is complete, whose first argument is the result of the reduction.
The additional arguments can be the value_type of any ReductionDatum that
appears before the current one in the ReductionData. The fourth template
parameter of ReductionDatum is used to specify which data should be passed:
it is a std::index_sequence indexing into the ReductionData.
1079
1080 The action that is invoked with the result of the reduction is:
1081
1082 \snippet Test_AlgorithmReduction.cpp custom_reduction_action
1083
1084 Note that it takes a Parallel::ReductionData object as its last argument.
1085
1086 \warning
1087 All elements of the array must call the same reductions in the same order. It is
1088 defined behavior to do multiple reductions at once as long as all contribute
1089 calls on all array elements occurred in the same order. It is undefined behavior
1090 if the contribute calls are made in different orders on different array
1091 elements.
1092
1093 ### Charm++ Node and Processor Level Initialization Functions
1094
1095 Charm++ allows running functions once per core and once per node before the
1096 construction of any parallel components. This is commonly used for setting up
1097 error handling and enabling floating point exceptions. Other functions could
also be run. Which functions are run on each node and core is set by defining
std::vector<void (*)()> objects named charm_init_node_funcs and
charm_init_proc_funcs that hold pointers to the functions to be called.
1101 For example,
1102 \snippet Test_AlgorithmCore.cpp charm_init_funcs_example
1103
1104 Finally, the user must include the Parallel/CharmMain.tpp file at the end of
the main executable cpp file. The end of an executable's main cpp file will
1106 then typically look as follows:
1107 \snippet Test_AlgorithmParallel.cpp charm_include_example
1108  */
1109
1110 /*!
1111  * \defgroup PeoGroup Performance, Efficiency, and Optimizations
1112  * \brief Classes and functions useful for performance optimizations.
1113  */
1114
1115 /*!
1116  * \defgroup PrettyTypeGroup Pretty Type
1117  * \brief Pretty printing of types
1118  */
1119
1120 /*!
1121  * \defgroup PythonBindingsGroup Python Bindings
1122  * \brief Classes and functions useful when writing python bindings.
1123  *
1124  * See the \ref spectre_writing_python_bindings "Writing Python Bindings"
1125  * section of the dev guide for details on how to write python bindings.
1126  */
1127
1128 /*!
1129  * \defgroup SlopeLimitersGroup Slope Limiters
1130  * \brief Slope limiters to control shocks and surfaces in the solution.
1131  */
1132
1133 /*!
1134  * \defgroup SpectralGroup Spectral
1135  * Things related to spectral transformations.
1136  */
1137
1138 /*!
1139  * \defgroup SurfacesGroup Surfaces
1140  * Things related to surfaces.
1141  */
1142
1143 /*!
1144  * \defgroup SwshGroup Spin-weighted spherical harmonics
1145  * Utilities, tags, and metafunctions for using and manipulating spin-weighted
1146  * spherical harmonics
1147  */
1148
1149 /*!
1150  * \defgroup TensorGroup Tensor
1151  * Tensor use documentation.
1152  */
1153
1154 /*!
1155  * \defgroup TensorExpressionsGroup Tensor Expressions
1156  * Tensor Expressions allow writing expressions of
1157  * tensors in a way similar to what is used with pen and paper.
1158  *
1159  * Tensor expressions are implemented using (smart) expression templates. This
1160  * allows a domain specific language making expressions such as
1161  * \code
1162  * auto T = evaluate<Indices::_a_t, Indices::_b_t>(F(Indices::_b,
1163  * Indices::_a));
1164  * \endcode
1165  * possible.
1166  */
1167
1168 /*!
1169  * \defgroup TestingFrameworkGroup Testing Framework
1170  * \brief Classes, functions, macros, and instructions for developing tests
1171  *
1172  * \details
1173  *
1174  * SpECTRE uses the testing framework
1175  * [Catch](https://github.com/philsquared/Catch). Catch supports a variety of
1176  * different styles of tests including BDD and fixture tests. The file
1177  * cmake/SpectreAddCatchTests.cmake parses the source files and adds the found
1178  * tests to ctest with the correct properties specified by tags and attributes.
1179  *
1180  * ### Usage
1181  *
1182  * To run the tests, type ctest in the build directory. You can specify
1183  * a regex to match the test name using ctest -R Unit.Blah, or run all
1184  * tests with a certain tag using ctest -L tag.
1185  *
1186  * ### Comparing double-precision results
1187  *
1188  * To compare two floating-point numbers that may differ by round-off, use the
1189  * helper object approx. This is an instance of Catch's comparison class
1190  * Approx in which the relative tolerance for comparisons is set to roughly
1191  * \f$10^{-14}\f$ (i.e. std::numeric_limits<double>::%epsilon()*100).
1192  * When possible, we recommend using approx for fuzzy comparisons as follows:
1193  * \example
1194  * \snippet Test_TestingFramework.cpp approx_default
1195  *
1196  * For checks that need more control over the precision (e.g. an algorithm in
1197  * which round-off errors accumulate to a higher level), we recommend using
1198  * the approx helper with a one-time tolerance adjustment. A comment
1199  * should explain the reason for the adjustment:
1200  * \example
1201  * \snippet Test_TestingFramework.cpp approx_single_custom
1202  *
1203  * For tests in which the same precision adjustment is re-used many times, a new
1204  * helper object can be created from Catch's Approx with a custom precision:
1205  * \example
1206  * \snippet Test_TestingFramework.cpp approx_new_custom
1207  *
1208  * Note: We provide the approx object because Catch's Approx defaults to a
1209  * very loose tolerance (std::numeric_limits<float>::%epsilon()*100, or
1210  * roughly \f$10^{-5}\f$ relative error), and so is poorly-suited to checking
1211  * many numerical algorithms that rely on double-precision accuracy. By
1212  * providing a tighter tolerance with approx, we avoid having to redefine the
1213  * tolerance in every test.
1214  *
1215  * ### Attributes
1216  *
1217  * Attributes allow you to modify properties of the test. Attributes are
1218  * specified as follows:
1219  * \code
1220  * // [[TimeOut, 10]]
1221  * // [[OutputRegex, The error message expected from the test]]
1222  * SPECTRE_TEST_CASE("Unit.Blah", "[Unit]") {
1223  * \endcode
1224  *
1225  * Available attributes are:
1226  *
1227  * <table class="doxtable">
1228  * <tr>
1229  * <th>Attribute </th><th>Description </th>
1230  * </tr>
1231  * <tr>
1232  * <td>TimeOut </td>
1233  * <td>override the default timeout and set the timeout to N seconds. This
1234  * should be set very sparingly since unit tests are designed to be
1235  * short. If your test is too long you should consider testing smaller
1236  * portions of the code if possible, or writing an integration test instead.
1237  * </td>
1238  * </tr>
1239  * <tr>
1240  * <td>OutputRegex </td>
1241  * <td>
1242  * When testing failure modes the exact error message must be tested, not
1243  * just that the test failed. Since the string passed is a regular
1244  * expression you must escape any regex tokens. For example, to match
 * some (word) and you must specify the string some \(word\) and.
1246  * If your error message contains a newline, you can match it using the
1247  * dot operator ., which matches any character.
1248  * </td>
1249  * </tr>
1250  * </table>
1251  *
1252  * \example
1253  * \snippet Test_H5.cpp willfail_example_for_dev_doc
1254  *
1255  * ### Testing static assert
1256  *
1257  * You are able to test that a static_assert is being triggered using
1258  * the compilation failure test framework. When creating a new static_assert
1259  * test you must be sure to not have it in the same file as the runtime tests
1260  * since the file will not compile. The new file, say
1261  * Test_StaticAssertDataBox.cpp must be added to the
1262  * SPECTRE_COMPILATION_TESTS CMake variable, not SPECTRE_TESTS. Here is
1263  * an example of how to write a compilation failure test:
1264  *
1265  * \snippet TestCompilationFramework.cpp compilation_test_example
1266  *
1267  * Each individual test must be inside an #%ifdef COMPILATION_TEST_.* block
1268  * and each compilation test cpp file must contain
1269  * FILE_IS_COMPILATION_TEST outside of any #%ifdefs and at the end of
1270  * the file.
1271  *
1272  * Specific compiler versions can be specified for which the regex changes.
1273  * That is, the compiler version specified and all versions newer than that
1274  * will use the regex, until a newer compiler version is specified. For
 * example, the code below prints a different static_assert message for
 * pre-GCC 6 and for GCC 6 and newer.
1277  *
1278  * \snippet TestCompilationFramework.cpp gnu_versions_example
1279  *
 * ### Debugging Tests in GDB or LLDB
 *
 * Several tests fail intentionally at the executable level to test error
 * handling like ASSERT statements in the code. CTest is aware of which
 * tests should fail and passes them. If you want to debug an individual
 * test in a debugger, run it on its own using the %RunTests executable
 * (in dg-charm-build/bin/RunTests), specifying the name of the test as
 * the first argument. For example, to run just the "Unit.Gradient" test,
 * run ./bin/RunTests Unit.Gradient. To use a debugger, launch it first;
 * for example, if you are using LLDB, run lldb ./bin/RunTests and then
 * run the executable inside the debugger with run Unit.Gradient.
 */

/*!
 * \defgroup TimeGroup Time
 * \brief Code related to the representation of time during simulations.
 *
 * The time covered by a simulation is divided up into a sequence of
 * adjacent, non-overlapping (except at endpoints) intervals referred
 * to as "slabs". The boundaries between slabs can be placed at
 * arbitrary times. Slabs, represented in the code by the Slab class,
 * provide comparison operators that agree with this definition as a
 * sequence of intervals. Slabs that do not jointly belong to any such
 * sequence should not be compared.
 *
 * A specific time is represented by the Time class, which encodes
 * the slab containing the time and the fraction of the slab that has
 * elapsed as an exact rational. Times are comparable according to
 * their natural time ordering, except for times belonging to
 * incomparable slabs.
 *
 * Differences in time within a slab are represented as exact
 * fractions of that slab by the TimeDelta class. TimeDeltas are only
 * meaningful within a single slab, with the exception that the ratio
 * of objects with different slabs may be taken, resulting in an
 * inexact floating-point result. Longer intervals of time are
 * represented using floating-point values.
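 *
 * As a minimal sketch (hypothetical stand-in types, not the real Slab,
 * Time, and TimeDelta classes), the exact-rational representation can be
 * pictured as:
 *
 * ```cpp
 * #include <cassert>
 *
 * // Hypothetical stand-ins: a slab is an interval, and a time within it
 * // is an exact rational fraction of the slab.
 * struct Slab { double start, end; };
 * struct Fraction { long num, den; };  // den > 0, kept exact
 *
 * // Compare two elapsed fractions of the *same* slab with exact integer
 * // arithmetic; times in unrelated slabs are not comparable.
 * bool fraction_less(const Fraction a, const Fraction b) {
 *   return a.num * b.den < b.num * a.den;
 * }
 *
 * int main() {
 *   // 1/4 of the slab elapsed vs 1/3 elapsed: exact, no rounding.
 *   assert(fraction_less(Fraction{1, 4}, Fraction{1, 3}));
 *   // A ratio across two different slabs would instead fall back to an
 *   // inexact floating-point result.
 * }
 * ```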
 */

/*!
 * \defgroup TimeSteppersGroup Time Steppers
 * A collection of ODE integrators primarily used for time stepping.
 */

/*!
 * \defgroup TypeTraitsGroup Type Traits
 * A collection of useful type traits, including C++14 and C++17 additions to
 * the standard library.
 */

/*!
 * \defgroup UtilitiesGroup Utilities
 * \brief A collection of useful classes, functions and metafunctions.
 */

/*!
 * \defgroup VariableFixingGroup Variable Fixing
 * \brief A collection of different variable fixers ranging in sophistication.
 *
 * Build-up of numerical error can cause physical quantities to evolve
 * toward non-physical values. For example, pressure and density may become
 * negative, which subsequently leads to failures in the numerical inversion
 * schemes used to recover the corresponding conservative values. A rough
 * fix that enforces that physical quantities stay physical is to simply
 * change them by hand when needed. This can be done at various degrees of
 * sophistication, but in general the fixed quantities make up a negligible
 * amount of the physics of the simulation; a rough fix is vastly preferred
 * to a simulation that fails to complete due to nonphysical quantities.
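 *
 * A minimal sketch of the idea (hypothetical floor value, not one of the
 * real variable fixers):
 *
 * ```cpp
 * #include <algorithm>
 * #include <cassert>
 *
 * // Hypothetical floor; real fixers choose this based on the problem.
 * constexpr double density_floor = 1.0e-12;
 *
 * // Clamp a quantity that must stay positive to a small positive floor.
 * double fix_to_floor(const double value, const double floor) {
 *   return std::max(value, floor);
 * }
 *
 * int main() {
 *   // Numerical error drove the density slightly negative; fix it by hand.
 *   assert(fix_to_floor(-1.0e-16, density_floor) == density_floor);
 *   // Physical values pass through unchanged.
 *   assert(fix_to_floor(0.5, density_floor) == 0.5);
 * }
 * ```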
1350  */