Parallelization, Charm++, and Core Concepts

Introduction

SpECTRE builds a layer on top of Charm++ that performs various safety checks and initialization for the user that can otherwise lead to difficult-to-debug undefined behavior. The central concept is what is called a Parallel Component. A Parallel Component is a struct with several type aliases that is used by SpECTRE to set up the Charm++ chares and allowed communication patterns. Parallel Components are input arguments to the compiler, which then writes the parallelization infrastructure that you requested for the executable. There is no restriction on the number of Parallel Components, though practically it is best to have around 10 at most.

Here is an overview of what is described in detail in the sections below:

  • Metavariables: Provides high-level configuration to the compiler, e.g. the physical system to be simulated.
  • Phases: Defines distinct simulation phases separated by a global synchronization point, e.g. Initialization, Evolve and Exit.
  • Algorithm: In each phase, repeatedly iterates over a list of actions until the current phase ends.
  • Parallel component: Maintains and executes its algorithm.
  • Action: Performs a computational task, e.g. evaluating the right hand side of the time evolution equations. May require data to be received from another action potentially being executed on a different core or node.

The Metavariables Class

SpECTRE takes a different approach to input options passed to an executable than is common. SpECTRE not only reads an input file at runtime but also has many choices made at compile time. The compile time options are specified by what is referred to as the metavariables. What exactly the metavariables struct specifies depends on the executable, but all metavariables structs must specify the following:

  • help: a static constexpr Options::String that will be printed as part of the help message. It should describe the executable and basic usage of it, as well as any non-standard options that must be specified in the metavariables and their current values. An example of a help string for one of the testing executables is:
    static constexpr Options::String help =
    "An executable for testing the core functionality of the Algorithm. "
    "Actions that do not perform any operations (no-ops), invoking simple "
    "actions, mutating data in the DataBox, adding and removing items from "
    "the DataBox, receiving data from other parallel components, and "
    "out-of-order execution of Actions are all tested. All tests are run "
    "just by running the executable, no input file or command line arguments "
    "are required";
  • component_list: a tmpl::list of the parallel components (described below) that are to be created. Most evolution executables will have the DgElementArray parallel component listed. An example of a component_list for one of the test executables is:
    using component_list = tmpl::list<NoOpsComponent<TestMetavariables>,
                                      MutateComponent<TestMetavariables>,
                                      ReceiveComponent<TestMetavariables>,
                                      AnyOrderComponent<TestMetavariables>>;
  • using const_global_cache_tags: a tmpl::list of tags that are used to place const items in the GlobalCache. The alias may be omitted if the list is empty.
  • using mutable_global_cache_tags: a tmpl::list of tags that are used to place mutable items in the GlobalCache. The alias may be omitted if the list is empty.
  • Phase: an enum class that must contain at least Initialization and Exit. Phases are described in the next section.
  • determine_next_phase: a static function with the signature
    static Phase determine_next_phase(
        const Phase& current_phase,
        const Parallel::CProxy_GlobalCache<EvolutionMetavars>&
            cache_proxy) noexcept;
    What this function does is described below in the discussion of phases.

There are also several optional members:

  • input_file: a static constexpr Options::String that is the default name of the input file that is to be read. This can be overridden at runtime by passing the --input-file argument to the executable.
  • ignore_unrecognized_command_line_options: a static constexpr bool that defaults to false. If set to true then unrecognized command line options are ignored. Ignoring unrecognized options is generally only necessary for tests where arguments for the testing framework, Catch, are passed to the executable.
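Putting these requirements together, a minimal metavariables struct might look like the following sketch (the component name and the bare-bones phase list are hypothetical, chosen only for illustration):

struct TestMetavariables {
  using component_list = tmpl::list<NoOpsComponent<TestMetavariables>>;

  static constexpr Options::String help =
      "A minimal executable illustrating the required metavariables members.";

  enum class Phase { Initialization, Exit };

  static Phase determine_next_phase(
      const Phase& /*current_phase*/,
      const Parallel::CProxy_GlobalCache<TestMetavariables>&
      /*cache_proxy*/) noexcept {
    // With only the two required phases, Initialization is always
    // followed by Exit.
    return Phase::Exit;
  }
};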

Phases of an Execution

Global synchronization points, where all cores wait for each other, are undesirable for scalability reasons. However, they are sometimes inevitable for algorithmic reasons. That is, in order to actually get a correct solution you need to have a global synchronization. SpECTRE executables can have multiple phases, where after each phase a global synchronization occurs. By global synchronization we mean that no parallel components are executing or have more tasks to execute: everything is waiting to be told what tasks to perform next.

Every executable must have at least two phases, Initialization and Exit. The next phase is decided by the static member function determine_next_phase in the metavariables. Currently this function has access to the phase that is ending, and also the global cache. In the future we will add support for receiving data from various components to allow for more complex decision making. Here is an example of a determine_next_phase function and the Phase enum class:

enum class Phase {
  Initialization,
  NoOpsStart,
  NoOpsFinish,
  MutateStart,
  MutateFinish,
  ReceiveStart,
  ReceiveFinish,
  AnyOrderStart,
  AnyOrderFinish,
  Exit
};

static Phase determine_next_phase(
    const Phase& current_phase,
    const Parallel::CProxy_GlobalCache<
        TestMetavariables>& /*cache_proxy*/) noexcept {
  switch (current_phase) {
    case Phase::Initialization:
      return Phase::NoOpsStart;
    case Phase::NoOpsStart:
      return Phase::NoOpsFinish;
    case Phase::NoOpsFinish:
      return Phase::MutateStart;
    case Phase::MutateStart:
      return Phase::MutateFinish;
    case Phase::MutateFinish:
      return Phase::ReceiveStart;
    case Phase::ReceiveStart:
      return Phase::ReceiveFinish;
    case Phase::ReceiveFinish:
      return Phase::AnyOrderStart;
    case Phase::AnyOrderStart:
      return Phase::AnyOrderFinish;
    case Phase::AnyOrderFinish:
      [[fallthrough]];
    case Phase::Exit:
      return Phase::Exit;
    default:
      ERROR("Unknown Phase...");
  }
  return Phase::Exit;
}

In contrast, an evolution executable might have phases Initialization, SetInitialData, Evolve, and Exit, but have a similar switch or if-else logic in the determine_next_phase function. The first phase that is entered is always Initialization. During the Initialization phase the Parallel::GlobalCache is created, all non-array components are created, and empty array components are created. Next, the function allocate_array_components_and_execute_initialization_phase is called which allocates the elements of each array component, and then starts the Initialization phase on all parallel components. Once all parallel components' Initialization phase is complete, the next phase is determined and the execute_next_phase function is called on all the parallel components.
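As a sketch, a determine_next_phase for an evolution executable's phases (Initialization, SetInitialData, Evolve, Exit) might step through them linearly; the strictly linear progression shown here is an assumption for illustration:

static Phase determine_next_phase(
    const Phase& current_phase,
    const Parallel::CProxy_GlobalCache<EvolutionMetavars>&
    /*cache_proxy*/) noexcept {
  switch (current_phase) {
    case Phase::Initialization:
      return Phase::SetInitialData;
    case Phase::SetInitialData:
      return Phase::Evolve;
    case Phase::Evolve:
      [[fallthrough]];
    case Phase::Exit:
      return Phase::Exit;
    default:
      ERROR("Unknown Phase...");
  }
  return Phase::Exit;
}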

At the end of an execution the Exit phase has the executable wait to make sure no parallel components are performing or need to perform any more tasks, and then exits. An example where this approach is important is if we are done evolving a system but still need to write data to disk. We do not want to exit the simulation until all data has been written to disk, even though we've reached the final time of the evolution.

Warning
Currently deadlocks are treated as successful termination. In the future, checks against deadlocks will be performed before terminating.

The Algorithm

Since most numerical algorithms repeat steps until some criterion such as the final time or convergence is met, SpECTRE's parallel components are designed to do such iterations for the user. An Algorithm executes an ordered list of actions until one of the actions cannot be evaluated, typically because it is waiting on data from elsewhere. When an algorithm can no longer evaluate actions it passively waits by handing control back to Charm++. Once an algorithm receives data, typically done by having another parallel component call the receive_data function, the algorithm will try again to execute the next action. If the algorithm is still waiting on more data then the algorithm will again return control to Charm++ and passively wait for more data. This is repeated until all required data is available. The actions that are iterated over by the algorithm are called iterable actions and are described below. Since the action list is phase dependent we refer to them generally as phase-dependent action lists (PDALs, pronounced "pedals").

Parallel Components

Each Parallel Component struct must have the following type aliases:

  1. using chare_type is set to one of:
    1. Parallel::Algorithms::Singletons have one object in the entire execution of the program.
    2. Parallel::Algorithms::Arrays hold zero or more elements, each of which is an object distributed to some core. An array can grow and shrink in size dynamically if need be and can also be bound to another array. A bound array has the same number of elements as the array it is bound to, and elements with the same ID are on the same core. See Charm++'s chare arrays for details.
    3. Parallel::Algorithms::Groups are arrays with one element per core which are not able to be moved around between cores. These are typically useful for gathering data from array elements on their core, and then processing or reducing the data further. See Charm++'s group chares for details.
    4. Parallel::Algorithms::Nodegroups are similar to groups except that there is one element per node. For Charm++ SMP (shared memory parallelism) builds, a node corresponds to the usual definition of a node on a supercomputer. However, for non-SMP builds nodes and cores are equivalent. We ensure that all entry method calls done through the Algorithm's simple_action and receive_data functions are threadsafe. User controlled threading is possible by calling the non-entry method member function threaded_action.
  2. using metavariables is set to the Metavariables struct that stores the global metavariables. It is often easiest to have the Parallel Component struct have a template parameter Metavariables that is the global metavariables struct. Examples of this technique are given below.
  3. using phase_dependent_action_list is set to a tmpl::list of Parallel::PhaseActions<PhaseType, Phase, tmpl::list<Actions...>> where each PhaseAction represents a PDAL that will be executed on the parallel component during the specified phase. The Actions are executed in the order that they are given in the tmpl::lists of the PDALs, but the phases need not be run in linear order. However, db::DataBox types are constructed assuming the phases are performed from first in the phase_dependent_action_list to the last. Simple actions (described below) can be executed in any phase. If there are no iterable actions in a phase then a PhaseAction need not be specified for that phase. However, at least one PhaseAction, even if it is empty, must be specified.
  4. using initialization_tags which is a tmpl::list of all the tags that will be inserted into the initial db::DataBox of each component. These tags are db::SimpleTags that have a using option_tags type alias and a static function create_from_options (see the example below). This list can usually be constructed from the initialization actions of the component (i.e. the list of actions in the PhaseAction list for the Initialization phase) using the helper function Parallel::get_initialization_tags (see the examples of components below). Each initialization action may specify a type alias using initialization_tags which is a tmpl::list of tags that will be fetched from the db::DataBox by the action. All initialization_tags are removed from the db::DataBox of the component at the end of the Initialization phase, except for tags listed in a type alias using initialization_tags_to_keep that may appear in each initialization action.
  5. using const_global_cache_tags is set to a tmpl::list of tags that are required by the allocate_array function of an array component, or simple actions called on the parallel component. These tags correspond to const items that are stored in the Parallel::GlobalCache (of which there is one copy per Charm++ node). The alias can be omitted if the list is empty. (See array_allocation_tags below for specifying tags needed for the allocate_array function, but will not be added to the Parallel::GlobalCache.)
  6. using mutable_global_cache_tags is set to a tmpl::list of tags that correspond to mutable items that are stored in the Parallel::GlobalCache (of which there is one copy per Charm++ core). The alias can be omitted if the list is empty.
Note
Array parallel components must also specify the type alias using array_index, which is set to the type that indexes the Parallel Component Array. Charm++ allows arrays to be 1 through 6 dimensional or be indexed by a custom type. The Charm++ provided indexes are wrapped as Parallel::ArrayIndex1D through Parallel::ArrayIndex6D. When writing custom array indices, the Charm++ manual tells you to write your own CkArrayIndex, but we have written a general implementation that provides this functionality (see Parallel::ArrayIndex); all that you need to provide is a plain-old-data (POD) struct of the size of at most 3 integers.
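For illustration, a hypothetical custom array index satisfying that size requirement could be a simple POD struct such as:

// Hypothetical POD index for an array indexed by (block, element-in-block).
// It is at most the size of 3 integers, as required by Parallel::ArrayIndex.
struct BlockElementIndex {
  int block_id;
  int element_in_block;
};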

Parallel array components have a static allocate_array function that is used to construct the elements of the array. The signature of the allocate_array functions must be:

static void allocate_array(
    Parallel::CProxy_GlobalCache<metavariables>& global_cache,
    const tuples::tagged_tuple_from_typelist<initialization_tags>&
        initialization_items) noexcept;

The allocate_array function is called by the Main parallel component when the execution starts and will typically insert elements into array parallel components. If the allocate_array function depends upon input options, the array component must specify a using array_allocation_tags type alias that is a tmpl::list of tags which are db::SimpleTags that have a using option_tags type alias and a static function create_from_options. An example is:

struct LinearOperator : db::SimpleTag {
  using type = DenseMatrix<double>;
  using option_tags = tmpl::list<OptionTags::LinearOperator>;

  static constexpr bool pass_metavariables = false;
  static DenseMatrix<double> create_from_options(
      const DenseMatrix<double>& linear_operator) noexcept {
    return linear_operator;
  }
};
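The OptionTags::LinearOperator tag referenced above is not shown in the example; a sketch of what such an option tag might look like is (the type and help string here are assumptions):

namespace OptionTags {
// Hypothetical option tag parsed from the input file.
struct LinearOperator {
  using type = DenseMatrix<double>;
  static constexpr Options::String help = "The matrix to apply.";
};
}  // namespace OptionTags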

The allocate_array functions of different array components are called in random order and so it is not safe to have them depend on each other.

Each parallel component must also decide what to do in the different phases of the execution. This is controlled by an execute_next_phase function with signature:

static void execute_next_phase(
    const typename metavariables::Phase next_phase,
    const Parallel::CProxy_GlobalCache<metavariables>& global_cache);

The determine_next_phase function in the Metavariables determines the next phase, after which the execute_next_phase function gets called. The execute_next_phase function determines what the parallel component should do during the next phase. Typically the execute_next_phase function should just call start_phase(phase) on the parallel component. In the future execute_next_phase may be removed.

An example of a singleton parallel component is:

template <class Metavariables>
struct SingletonParallelComponent {
  using chare_type = Parallel::Algorithms::Singleton;
  using metavariables = Metavariables;
  using phase_dependent_action_list = tmpl::list<
      Parallel::PhaseActions<typename Metavariables::Phase,
                             Metavariables::Phase::PerformSingletonAlgorithm,
                             tmpl::list<SingletonActions::CountReceives>>>;
  using initialization_tags = Parallel::get_initialization_tags<
      Parallel::get_initialization_actions_list<phase_dependent_action_list>>;

  static void execute_next_phase(
      const typename Metavariables::Phase next_phase,
      const Parallel::CProxy_GlobalCache<Metavariables>&
          global_cache) noexcept {
    if (next_phase == Metavariables::Phase::PerformSingletonAlgorithm) {
      auto& local_cache = *(global_cache.ckLocalBranch());
      Parallel::get_parallel_component<SingletonParallelComponent>(local_cache)
          .perform_algorithm();
      return;
    }
  }
};

An example of an array parallel component is:

template <class Metavariables>
struct ArrayParallelComponent {
  using chare_type = Parallel::Algorithms::Array;
  using metavariables = Metavariables;
  using phase_dependent_action_list = tmpl::list<
      Parallel::PhaseActions<typename Metavariables::Phase,
                             Metavariables::Phase::Initialization,
                             tmpl::list<ArrayActions::Initialize>>,
      Parallel::PhaseActions<
          typename Metavariables::Phase,
          Metavariables::Phase::PerformArrayAlgorithm,
          tmpl::list<ArrayActions::AddIntValue10, ArrayActions::IncrementInt0,
                     ArrayActions::RemoveInt0, ArrayActions::SendToSingleton>>>;
  using initialization_tags = Parallel::get_initialization_tags<
      Parallel::get_initialization_actions_list<phase_dependent_action_list>>;
  using array_index = int;

  static void allocate_array(
      Parallel::CProxy_GlobalCache<Metavariables>& global_cache,
      const tuples::tagged_tuple_from_typelist<initialization_tags>&
      /*initialization_items*/) noexcept {
    auto& local_cache = *(global_cache.ckLocalBranch());
    auto& array_proxy =
        Parallel::get_parallel_component<ArrayParallelComponent>(local_cache);

    for (int i = 0, which_proc = 0,
             number_of_procs = Parallel::number_of_procs();
         i < number_of_1d_array_elements; ++i) {
      array_proxy[i].insert(global_cache, {}, which_proc);
      which_proc = which_proc + 1 == number_of_procs ? 0 : which_proc + 1;
    }
    array_proxy.doneInserting();
  }

  static void execute_next_phase(
      const typename Metavariables::Phase next_phase,
      Parallel::CProxy_GlobalCache<Metavariables>& global_cache) noexcept {
    auto& local_cache = *(global_cache.ckLocalBranch());
    if (next_phase == Metavariables::Phase::PerformArrayAlgorithm) {
      Parallel::get_parallel_component<ArrayParallelComponent>(local_cache)
          .perform_algorithm();
    }
  }
};

Elements are inserted into the array by using the Charm++ insert member function of the CProxy for the array. The insert function is documented in the Charm++ manual. In the above Array example array_proxy is a CProxy and so all the documentation for Charm++ array proxies applies. SpECTRE always creates empty arrays with the constructor and requires users to insert however many elements they want and on which cores they want them to be placed. Note that load balancing calls may result in array elements being moved.

Actions

For those familiar with Charm++, actions should be thought of as effectively being entry methods. They are functions that can be invoked on a remote object (chare/parallel component) using a CProxy (see the Charm++ manual), which is retrieved from the Parallel::GlobalCache using the parallel component struct and the Parallel::get_parallel_component() function. Actions are structs with a static apply method and come in three variants: simple actions, iterable actions, and reduction actions. One important thing to note is that actions cannot return any data to the caller of the remote method. Instead, "returning" data must be done via callbacks or a callback-like mechanism.

The simplest signature of an apply method is for iterable actions:

template <typename DbTags, typename... InboxTags, typename Metavariables,
          typename ArrayIndex, typename ActionList,
          typename ParallelComponent>
static auto apply(db::DataBox<DbTags>& box,
                  tuples::TaggedTuple<InboxTags...>& inboxes,
                  const Parallel::GlobalCache<Metavariables>& cache,
                  const ArrayIndex& /*array_index*/,
                  const ActionList /*meta*/,
                  const ParallelComponent* const /*meta*/) noexcept;

The return type is discussed at the end of each section describing a particular type of action. Simple actions can have additional arguments, do not receive the inboxes or ActionList, and take the ParallelComponent as an explicit first template parameter. Reduction actions have the same signature as simple actions except that the additional arguments must be of the types reduced over. The db::DataBox should be thought of as the member data of the parallel component while the actions are the member functions. The combination of a db::DataBox and actions allows building up classes with arbitrary member data and methods using template parameters and invocation of actions. This approach allows us to eliminate the need for users to work with Charm++'s interface files, which can be error prone and difficult to use.

The Parallel::GlobalCache is passed to each action so that the action has access to global data and is able to invoke actions on other parallel components. The ParallelComponent template parameter is the tag of the parallel component that invoked the action. A proxy to the calling parallel component can then be retrieved from the Parallel::GlobalCache. The remote entry method invocations are slightly different for different types of actions, so they will be discussed below. However, one thing that is disallowed for all actions is calling an action locally from within an action on the same parallel component. Specifically, the following is an error:

auto& local_parallel_component =
    *Parallel::get_parallel_component<ParallelComponent>(cache).ckLocal();
Parallel::simple_action<error_call_single_action_from_action>(
    local_parallel_component);

Here ckLocal() is a Charm++ provided method that returns a pointer to the local (currently executing) parallel component. See the Charm++ manual for more information. However, you are able to queue a new action to be executed later on the same parallel component by getting your own parallel component from the Parallel::GlobalCache (Parallel::get_parallel_component<ParallelComponent>(cache)). The difference between the two calls is that by calling an action through the parallel component you will first finish the series of actions you are in, then when they are complete Charm++ will call the next queued action.
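In contrast, the following pattern is allowed; it queues a (hypothetical) simple action my_simple_action on the current parallel component through its proxy rather than invoking it directly on the local object:

// Allowed: invoking through the proxy queues the action with Charm++, so it
// runs only after the currently executing series of actions has finished.
Parallel::simple_action<my_simple_action>(
    Parallel::get_parallel_component<ParallelComponent>(cache));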

Array, group, and nodegroup parallel components can have actions invoked in two ways. First is a broadcast where the action is called on all elements of the array:

auto& group_parallel_component = Parallel::get_parallel_component<
    GroupParallelComponent<Metavariables>>(cache);
Parallel::receive_data<Tags::IntReceiveTag>(
    group_parallel_component,
    db::get<Tags::CountActionsCalled>(box) + 100 * array_index,
    db::get<Tags::CountActionsCalled>(box));

The second case is invoking an action on a specific array element by using the array element's index. The below example shows how a broadcast would be done manually by looping over all elements in the array:

auto& array_parallel_component =
    Parallel::get_parallel_component<ArrayParallelComponent<Metavariables>>(
        cache);
for (int i = 0; i < number_of_1d_array_elements; ++i) {
  Parallel::receive_data<Tags::IntReceiveTag>(array_parallel_component[i],
                                              0, 101, true);
}

Note that in general you will not know what all the elements in the array are and so a broadcast is the correct method of sending data to or invoking an action on all elements of an array parallel component.

The array_index argument passed to all apply methods is the index into the parallel component array. If the parallel component is not an array the value and type of array_index is implementation defined and cannot be relied on. The ActionList type is the tmpl::list of iterable actions in the current phase. That is, it is equal to the action_list type alias in the current PDAL.

1. Simple Actions

Simple actions are designed to be called in a similar fashion to member functions of classes. They are the direct analog of entry methods in Charm++ except that the member data is stored in the db::DataBox that is passed in as the first argument. A simple action must return void but can use db::mutate to change values of items in the db::DataBox if the db::DataBox is taken as a non-const reference. In some cases you will need specific items to be in the db::DataBox otherwise the action won't compile. To restrict which db::DataBoxes can be passed you should use Requires in the action's apply function template parameter list. For example,

template <typename ParallelComponent, typename... DbTags,
          typename Metavariables, typename ArrayIndex,
          Requires<tmpl2::flat_any_v<
              std::is_same_v<CountActionsCalled, DbTags>...>> = nullptr>
static void apply(db::DataBox<tmpl::list<DbTags...>>& box,
                  Parallel::GlobalCache<Metavariables>& cache,
                  const ArrayIndex& /*array_index*/) noexcept {

where the conditional checks if any element in the parameter pack DbTags is CountActionsCalled.

A simple action that does not take any arguments can be called using a CProxy from the Parallel::GlobalCache as follows:

Parallel::simple_action<add_remove_test::finalize>(
Parallel::get_parallel_component<MutateComponent>(local_cache));

If the simple action takes arguments, they are passed to Parallel::simple_action after the parallel component on which to invoke it:

Parallel::simple_action<nodegroup_receive>(local_nodegroup, array_index);
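As a sketch of the receiving side, a simple action that takes an argument and mutates the DataBox might look as follows; the action name and the tag Tags::Counter are hypothetical. It could then be invoked as Parallel::simple_action<increment_counter>(proxy, 5).

struct increment_counter {
  template <typename ParallelComponent, typename... DbTags,
            typename Metavariables, typename ArrayIndex>
  static void apply(db::DataBox<tmpl::list<DbTags...>>& box,
                    Parallel::GlobalCache<Metavariables>& /*cache*/,
                    const ArrayIndex& /*array_index*/,
                    const int increment) noexcept {
    // Use db::mutate to change an item in the DataBox.
    db::mutate<Tags::Counter>(
        make_not_null(&box),
        [increment](const gsl::not_null<int*> counter) noexcept {
          *counter += increment;
        });
  }
};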

2. Iterable Actions

Actions in the algorithm that are part of the current PDAL are executed one after the other until one of them cannot be evaluated. Iterable actions may have an is_ready method that returns true or false depending on whether or not the action is ready to be evaluated. If no is_ready method is provided then the action is assumed to be ready to be evaluated. The is_ready method typically checks that required data from other parallel components has been received. For example, it may check that all data from neighboring elements has arrived to be able to continue integrating in time. The signature of an is_ready method must be:

template <typename DbTags, typename... InboxTags, typename Metavariables,
          typename ArrayIndex>
static bool is_ready(
    const db::DataBox<DbTags>& box,
    const tuples::TaggedTuple<InboxTags...>& inboxes,
    const Parallel::GlobalCache<Metavariables>& cache,
    const ArrayIndex& /*array_index*/) noexcept

The inboxes are a collection of the data passed to receive_data under the tags specified in the iterable action's member type alias inbox_tags, which must be a tmpl::list. Each inbox tag must have two member type aliases: a temporal_id, which is used to identify when the data was sent, and a type, which is the type of the data to be stored in the inbox. The types are typically a std::unordered_map<temporal_id, DATA>. In the discussed scenario of waiting for neighboring elements to send their data, the DATA type would be a std::unordered_map<TheElementId, DataSent>. Inbox tags must also specify a static void insert_into_inbox() function. For example,

struct IntReceiveTag {
  using temporal_id = int;
  using type = std::unordered_map<temporal_id, std::unordered_set<int>>;

  template <typename Inbox, typename ReceiveDataType>
  static void insert_into_inbox(const gsl::not_null<Inbox*> inbox,
                                const temporal_id& temporal_id_v,
                                ReceiveDataType&& data) noexcept {
    (*inbox)[temporal_id_v].insert(std::forward<ReceiveDataType>(data));
  }
};

For common types of DATA, such as a map, a data structure with an insert function, a data structure with a push_back function, or copy/move assignment that is used to insert the received data, inserters are available in Parallel::InboxInserters. For example, there is Parallel::InboxInserters::Map for map data structures. The inbox tag can inherit publicly off the inserters to gain the required insertion capabilities:

struct IntReceiveTag
    : public Parallel::InboxInserters::MemberInsert<IntReceiveTag> {
  using temporal_id = TestAlgorithmArrayInstance;
  using type = std::unordered_map<temporal_id, std::unordered_set<int>>;
};

The inbox_tags type alias for the action is:

using inbox_tags = tmpl::list<Tags::IntReceiveTag>;

and the is_ready function is:

template <typename DbTags, typename... InboxTags, typename Metavariables,
          typename ArrayIndex>
static bool is_ready(
    const db::DataBox<DbTags>& /*box*/,
    const tuples::TaggedTuple<InboxTags...>& inboxes,
    const Parallel::GlobalCache<Metavariables>& /*cache*/,
    const ArrayIndex& /*array_index*/) noexcept {
  auto& int_receives = tuples::get<Tags::IntReceiveTag>(inboxes);
  return int_receives.size() == 70;
}

Once all of the ints have been received, the iterable action is executed, not before.

Warning
It is the responsibility of the iterable action to remove data from the inboxes that will no longer be needed. The removal of unneeded data should be done in the apply function.
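A sketch of such cleanup inside an iterable action's apply function (the current_temporal_id variable is hypothetical):

// Extract the data for the current temporal_id and erase the inbox entry so
// the inbox does not grow without bound.
auto& inbox = tuples::get<Tags::IntReceiveTag>(inboxes);
const auto received_data = std::move(inbox[current_temporal_id]);
inbox.erase(current_temporal_id);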

Iterable actions can change the type of the db::DataBox by adding or removing elements/tags from the db::DataBox. The only requirement is that the last action in each PDAL returns a db::DataBox that is the same type for each iteration. Iterable actions can also request that the algorithm no longer be executed, and control which action in the current PDAL will be executed next. This is all done via the return value from the apply function. The apply function for iterable actions must return a std::tuple of one, two, or three elements. The first element of the tuple is the new db::DataBox, which can be the same as the type passed in or a db::DataBox with different tags. Most iterable actions will simply return:

return std::forward_as_tuple(std::move(box));

By returning the db::DataBox as a reference in a std::tuple we avoid any unnecessary copying of the db::DataBox. The second element of the tuple is an optional bool that controls whether or not the algorithm is terminated. If the bool is true then the algorithm is terminated; by default it is false. Here is an example of how to return a db::DataBox with the same type that is passed in and also terminate the algorithm:

return std::tuple<db::DataBox<DbTags>&&, bool>(std::move(box), true);

Notice that we again return a reference to the db::DataBox, which is done to avoid any copying. After an algorithm has been terminated it can be restarted by passing false to the set_terminate method or by calling receive_data(..., true). Since the order in which messages are received is undefined in most cases the receive_data(..., true) call should be used to restart the algorithm.

The third optional element in the returned std::tuple is a size_t whose value corresponds to the index of the action to be called next in the PDAL. The metafunction tmpl::index_of<list, element> can be used to get a tmpl::integral_constant with the value of the index of the element element in the typelist list. For example,

return std::tuple<db::DataBox<DbTags>&&, bool, size_t>(
    std::move(box), true,
    tmpl::index_of<ActionList, iterate_increment_int0>::value + 1);

Again a reference to the db::DataBox is returned, while the termination bool and next-action size_t are returned by value. The metafunction call tmpl::index_of<ActionList, iterate_increment_int0>::value returns a size_t whose value is the index of the action iterate_increment_int0 in the PDAL. The indexing of actions in the PDAL starts at 0.

Iterable actions are invoked as part of the algorithm and so the only way to request they be invoked is by having the algorithm run on the parallel component. The algorithm can be explicitly evaluated in a new phase by calling start_phase(Phase::TheCurrentPhase):

Parallel::get_parallel_component<NoOpsComponent>(local_cache)
.start_phase(next_phase);

Alternatively, to evaluate the algorithm without changing phases the perform_algorithm() method can be used:

Parallel::get_parallel_component<SingletonParallelComponent>(local_cache)
.perform_algorithm();

By passing true to perform_algorithm the algorithm will be restarted if it was terminated.
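For example, a terminated algorithm can be restarted with:

Parallel::get_parallel_component<SingletonParallelComponent>(local_cache)
    .perform_algorithm(true);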

The algorithm is also evaluated by calling the receive_data function, either on an entire array or singleton (this does a broadcast), or on an individual element of the array. Here is an example of a broadcast call:

auto& group_parallel_component = Parallel::get_parallel_component<
    GroupParallelComponent<Metavariables>>(cache);
Parallel::receive_data<Tags::IntReceiveTag>(
    group_parallel_component,
    db::get<Tags::CountActionsCalled>(box) + 100 * array_index,
    db::get<Tags::CountActionsCalled>(box));

and of calling individual elements:

auto& array_parallel_component =
    Parallel::get_parallel_component<ArrayParallelComponent<Metavariables>>(
        cache);
for (int i = 0; i < number_of_1d_array_elements; ++i) {
  Parallel::receive_data<Tags::IntReceiveTag>(array_parallel_component[i],
                                              0, 101, true);
}

The receive_data function always takes a ReceiveTag, which is set in the action's inbox_tags type alias as described above. The first argument is the temporal identifier and the second is the data to be sent. The optional third bool argument (true in the example above) restarts the algorithm if it has been terminated, as described earlier.

Normally when remote functions are invoked they go through the Charm++ runtime system, which adds some overhead. The receive_data function tries to elide the call to the Charm++ RTS for calls into array components. Charm++ refers to these types of remote calls as "inline entry methods". With the Charm++ method of eliding the RTS, the code becomes susceptible to stack overflows because of infinite recursion. The receive_data function is limited to at most 64 RTS elided calls, though in practice reaching this limit is rare. When the limit is reached the remote method invocation is done through the RTS instead of being elided.

3. Reduction Actions

Finally, there are reduction actions, which are used when reducing data over an array. For example, you may want to know the sum of an int from every element in the array. You can do this as follows:

Parallel::ReductionData<Parallel::ReductionDatum<int, funcl::Plus<>>>
    my_send_int{array_index};
Parallel::contribute_to_reduction<ProcessReducedSumOfInts>(
    my_send_int, my_proxy, singleton_proxy);

This reduces over the parallel component ArrayParallelComponent<Metavariables>, reduces to the parallel component SingletonParallelComponent<Metavariables>, and calls the action ProcessReducedSumOfInts after the reduction has been performed. The reduction action is:

struct ProcessReducedSumOfInts {
  template <typename ParallelComponent, typename DbTags,
            typename Metavariables, typename ArrayIndex>
  static void apply(db::DataBox<DbTags>& /*box*/,
                    const Parallel::GlobalCache<Metavariables>& /*cache*/,
                    const ArrayIndex& /*array_index*/,
                    const int& value) noexcept {
    SPECTRE_PARALLEL_REQUIRE(number_of_1d_array_elements *
                                 (number_of_1d_array_elements - 1) / 2 ==
                             value);
  }
};

As you can see, the last argument to the apply function is of type int, and is the reduced value.

You can also broadcast the result back to an array, even yourself. For example,

Parallel::contribute_to_reduction<ProcessReducedSumOfInts>(
my_send_int, my_proxy, array_proxy);

It is often necessary to reduce custom data types, such as std::vector or std::unordered_map. Charm++ supports such custom reductions, and so does our layer on top of Charm++. Custom reductions require one additional step to calling contribute_to_reduction, which is writing a reduction function to reduce the custom data. We provide a generic type that can be used in custom reductions, Parallel::ReductionData, which takes a series of Parallel::ReductionDatum as template parameters and ReductionDatum::value_types as the arguments to the constructor. Each ReductionDatum takes up to four template parameters (two are required). The first is the type of data to reduce, and the second is a binary invokable that is called at each step of the reduction to combine two messages. The last two template parameters are used after the reduction has completed. The third parameter is an n-ary invokable that is called once the reduction is complete, whose first argument is the result of the reduction. The additional arguments can be any ReductionDatum::value_type in the ReductionData that are before the current one. The fourth template parameter of ReductionDatum is used to specify which data should be passed. It is a std::index_sequence indexing into the ReductionData.

The action that is invoked with the result of the reduction is:

struct ProcessCustomReductionAction {
  template <typename ParallelComponent, typename DbTags,
            typename Metavariables, typename ArrayIndex>
  static void apply(db::DataBox<DbTags>& /*box*/,
                    Parallel::GlobalCache<Metavariables>& /*cache*/,
                    const ArrayIndex& /*array_index*/, int reduced_int,
                    std::unordered_map<std::string, int> reduced_map,
                    std::vector<int>&& reduced_vector) noexcept {
    SPECTRE_PARALLEL_REQUIRE(reduced_int == 10);
    SPECTRE_PARALLEL_REQUIRE(reduced_map.at("unity") ==
                             number_of_1d_array_elements - 1);
    SPECTRE_PARALLEL_REQUIRE(reduced_map.at("double") ==
                             2 * number_of_1d_array_elements - 2);
    SPECTRE_PARALLEL_REQUIRE(reduced_map.at("negative") == 0);
    SPECTRE_PARALLEL_REQUIRE(
        reduced_vector ==
        (std::vector<int>{-reduced_int * number_of_1d_array_elements *
                              (number_of_1d_array_elements - 1) / 2,
                          -reduced_int * number_of_1d_array_elements * 10,
                          8 * reduced_int * number_of_1d_array_elements}));
  }
};

Note that it takes objects of the types that the reduction was done over as additional arguments.
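For concreteness, a contribute call for such a custom reduction might be sketched as follows. The combiners shown (funcl::Plus, funcl::Merge, funcl::ElementWise) and the variable names are illustrative assumptions; consult the funcl documentation for the invokables actually available:

Parallel::contribute_to_reduction<ProcessCustomReductionAction>(
    Parallel::ReductionData<
        // The ints from each element are summed.
        Parallel::ReductionDatum<int, funcl::Plus<>>,
        // The maps from each element are merged (combiner is an assumption).
        Parallel::ReductionDatum<std::unordered_map<std::string, int>,
                                 funcl::Merge<>>,
        // The vectors are combined element-wise.
        Parallel::ReductionDatum<std::vector<int>,
                                 funcl::ElementWise<funcl::Plus<>>>>{
        my_int, my_map, my_vector},
    my_proxy, array_proxy);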

Warning
All elements of the array must call the same reductions in the same order. It is defined behavior to do multiple reductions at once as long as all contribute calls on all array elements occurred in the same order. It is undefined behavior if the contribute calls are made in different orders on different array elements.

Mutable items in the GlobalCache

Most items in the GlobalCache are constant, and are specified by type aliases called const_global_cache_tags as described above. However, the GlobalCache can also store mutable items. Because of asynchronous execution, care must be taken when mutating items in the GlobalCache, as described below.

A mutable item can be of any type, as long as that type is something that can be checked for whether it is "up-to-date". Here "up-to-date" means that the item can be safely used (even read-only) without needing to be mutated first. For example, a mutable item might be a function of time that knows the range of times for which it is valid; the mutable item is then deemed up-to-date if it will be called for a time within its range of validity, and it is deemed not up-to-date if it will be called for a time outside its range of validity. Thus the up-to-date status of a mutable item is determined by both the state of the item itself and by the code that wishes to use that item.
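As a sketch of this idea, a hypothetical function-of-time item could carry its own expiration time, and code using it would deem it up-to-date only for times at or before that expiration:

// Hypothetical mutable cache item: a linear function of time that is valid
// only for times t <= expiration_time.
struct FunctionOfTime {
  double coefficient;
  double expiration_time;
  double operator()(const double t) const noexcept { return coefficient * t; }
};

// Code that wants the value at time t deems the item up-to-date only if
// t <= expiration_time; otherwise it must wait for a Parallel::mutate that
// extends the range of validity.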

1. Specification of mutable GlobalCache items

Mutable GlobalCache items are specified by a type alias mutable_global_cache_tags, which is treated the same way as const_global_cache_tags for const items.

2. Use of mutable GlobalCache items

1. Checking if the item is up-to-date

Because execution is asynchronous, any code that uses a mutable item in the GlobalCache must first check whether that item is up-to-date. The information about whether an item is up-to-date is assumed to be stored in the item itself. For example, a mutable object stored in the GlobalCache might have type std::map<temporal_id,T> (for some type T), and then any code that uses the stored object can check whether an entry exists for a particular temporal_id. To avoid race conditions, it is important that up-to-date checks are based on something that is independent of the order of mutation (like a temporal_id, and not like checking the size of a vector).

To check an item, use the function Parallel::mutable_cache_item_is_ready, which returns a bool indicating whether the item is up-to-date. If the item is up-to-date, then it can be used. Parallel::mutable_cache_item_is_ready takes a lambda as an argument. This lambda is passed a single argument: a const reference to the item being retrieved. The lambda should determine whether the item is up-to-date. If so, it should return a default-constructed std::optional<CkCallback>; if not, it should return a std::optional<CkCallback> to a callback function that will be called on the next Parallel::mutate of that item. The callback will typically check again if the item is up-to-date and if so will execute some code that gets the item via Parallel::get.

For the case of iterable actions, Parallel::mutable_cache_item_is_ready is typically called from the is_ready function of the iterable action, and the callback is perform_algorithm(). In the example below, the vector is considered up-to-date if it is non-empty:

template <typename DbTags, typename... InboxTags, typename Metavariables,
          typename ArrayIndex>
static bool is_ready(const db::DataBox<DbTags>& /*box*/,
                     const tuples::TaggedTuple<InboxTags...>& /*inboxes*/,
                     const Parallel::GlobalCache<Metavariables>& cache,
                     const ArrayIndex& /*array_index*/) noexcept {
  ++number_of_calls_to_use_stored_double_is_ready;
  auto& this_proxy = Parallel::get_parallel_component<
      UseMutatedCacheComponent<Metavariables>>(cache);
  auto callback = CkCallback(
      Parallel::index_from_parallel_component<
          UseMutatedCacheComponent<Metavariables>>::perform_algorithm(),
      this_proxy);
  return ::Parallel::mutable_cache_item_is_ready<Tags::VectorOfDoubles>(
      cache,
      [&callback](const std::vector<double>& VectorOfDoubles)
          -> std::optional<CkCallback> {
        return VectorOfDoubles.empty() ? std::optional<CkCallback>(callback)
                                       : std::optional<CkCallback>{};
      });
}

Note that Parallel::mutable_cache_item_is_ready is called on a local core and does no parallel communication.

2. Retrieving the item

The item is retrieved using Parallel::get just like for constant items. For example, to retrieve the item Tags::VectorOfDoubles:

SPECTRE_PARALLEL_REQUIRE(Parallel::get<Tags::VectorOfDoubles>(cache) ==
expected_result);

Note that Parallel::get is called on a local core and does no parallel communication.

Whereas we support getting non-mutable items in the GlobalCache from a DataBox via db::get, we intentionally do not support db::get of mutable items in the GlobalCache from a DataBox. The reason is that mutable items should be retrieved only after a Parallel::mutable_cache_item_is_ready check, and being able to retrieve a mutable item from a DataBox makes it difficult to enforce that check, especially when automatically-executing compute items are considered.

3. Modifying a mutable GlobalCache item

To modify a mutable item, pass Parallel::mutate two template parameters: the tag to mutate, and a struct with an apply function that does the mutating. Parallel::mutate takes two arguments: a proxy to the GlobalCache, and a tuple that is passed into the mutator function. For the following example,

Parallel::mutate<Tags::VectorOfDoubles, MutationFunctions::add_stored_double>(
    cache.thisProxy, 42.0);

the mutator function is defined as below:

namespace MutationFunctions {
struct add_stored_double {
  static void apply(const gsl::not_null<std::vector<double>*> data,
                    const double new_value) noexcept {
    data->emplace_back(new_value);
  }
};
}  // namespace MutationFunctions

Parallel::mutate broadcasts to every core, where it calls the mutator function and then calls all the callbacks that have been set on that core by Parallel::mutable_cache_item_is_ready. The Parallel::mutate operation is guaranteed to be thread-safe without any further action by the developer.

Charm++ Node and Processor Level Initialization Functions

Charm++ allows running functions once per core and once per node before the construction of any parallel components. This is commonly used for setting up error handling and enabling floating point exceptions. Other functions could also be run. Which functions are run on each node and core is set by specifying a std::vector<void (*)()> called charm_init_node_funcs and charm_init_proc_funcs with function pointers to the functions to be called. For example,

static const std::vector<void (*)()> charm_init_node_funcs{
    &setup_error_handling};
static const std::vector<void (*)()> charm_init_proc_funcs{
    &enable_floating_point_exceptions};

Finally, the user must include the Parallel/CharmMain.tpp file at the end of the main executable cpp file. So the end of an executable's main cpp file will typically look as follows:

static const std::vector<void (*)()> charm_init_node_funcs{
    &setup_error_handling, &disable_openblas_multithreading};
static const std::vector<void (*)()> charm_init_proc_funcs{
    &enable_floating_point_exceptions};

using charmxx_main_component = Parallel::Main<TestMetavariables>;

#include "Parallel/CharmMain.tpp"  // IWYU pragma: keep