SpECTRE
v2024.08.03
A distributed object (Charm++ Chare) that executes a series of Actions and is capable of sending and receiving data. Acts as an interface to Charm++. More...
#include <DistributedObject.hpp>
Public Types | |
using | all_actions_list = tmpl::flatten< tmpl::list< typename PhaseDepActionListsPack::action_list... > > |
List of Actions in the order that generates the DataBox types. | |
using | metavariables = typename ParallelComponent::metavariables |
The metavariables class passed to the Algorithm. | |
using | inbox_tags_list = Parallel::get_inbox_tags< all_actions_list > |
List of all the Tags that can be received into the Inbox. | |
using | array_index = typename get_array_index< typename ParallelComponent::chare_type >::template f< ParallelComponent > |
The type of the object used to uniquely identify the element of the array, group, or nodegroup. The default depends on the component, see ParallelComponentHelpers. | |
using | parallel_component = ParallelComponent |
using | chare_type = typename parallel_component::chare_type |
The type of the Chare. | |
using | cproxy_type = typename chare_type::template cproxy< parallel_component, array_index > |
The Charm++ proxy object type. | |
using | cbase_type = typename chare_type::template cbase< parallel_component, array_index > |
The Charm++ base object type. | |
using | phase_dependent_action_lists = tmpl::list< PhaseDepActionListsPack... > |
using | inbox_type = tuples::tagged_tuple_from_typelist< inbox_tags_list > |
using | all_cache_tags = get_const_global_cache_tags< metavariables > |
using | distributed_object_tags = typename Tags::distributed_object_tags< metavariables, array_index > |
using | databox_type = db::compute_databox_type< tmpl::flatten< tmpl::list< distributed_object_tags, typename parallel_component::simple_tags_from_options, Tags::GlobalCacheImplCompute< metavariables >, Tags::ResourceInfoReference< metavariables >, db::wrap_tags_in< Tags::FromGlobalCache, all_cache_tags >, Algorithm_detail::action_list_simple_tags< parallel_component >, Algorithm_detail::action_list_compute_tags< parallel_component > > > > |
Public Member Functions | |
template<class... InitializationTags> | |
DistributedObject (const Parallel::CProxy_GlobalCache< metavariables > &global_cache_proxy, tuples::TaggedTuple< InitializationTags... > initialization_items) | |
Constructor used by Main to initialize the algorithm. | |
DistributedObject (const Parallel::CProxy_GlobalCache< metavariables > &global_cache_proxy, Parallel::Phase current_phase, std::unordered_map< Parallel::Phase, size_t > phase_bookmarks, const std::unique_ptr< Parallel::Callback > &callback) | |
Constructor used to dynamically add a new element of an array. The callback is executed after the element is created. | |
DistributedObject (CkMigrateMessage *) | |
Charm++ migration constructor, used after a chare is migrated. | |
std::string | print_types () const |
Print the expanded type aliases. | |
std::string | print_state () const |
Print the current state of the algorithm. | |
std::string | print_inbox () const |
Print the current contents of the inboxes. | |
std::string | print_databox () const |
Print the current contents of the DataBox. | |
const auto & | get_inboxes () const |
Get read access to all the inboxes. | |
auto & | get_node_lock () |
void | pup (PUP::er &p) override |
template<typename Action , typename Arg > | |
void | reduction_action (Arg arg) |
Calls the apply function Action after a reduction has been completed. More... | |
template<typename Action , typename... Args> | |
void | simple_action (std::tuple< Args... > args) |
Explicitly call the action Action . | |
template<typename Action > | |
void | simple_action () |
template<typename Action , typename... Args> | |
Action::return_type | local_synchronous_action (Args &&... args) |
Call the Action synchronously, returning a result without any parallelization. The action is called immediately and control flow returns to the caller immediately upon completion. More... | |
template<typename ReceiveTag , typename ReceiveDataType > | |
void | receive_data (typename ReceiveTag::temporal_id instance, ReceiveDataType &&t, bool enable_if_disabled=false) |
Receive data and store it in the Inbox, and try to continue executing the algorithm. More... | |
template<typename ReceiveTag , typename MessageType > | |
void | receive_data (MessageType *message) |
void | start_phase (const Parallel::Phase next_phase) |
Start execution of the phase-dependent action list in next_phase . If next_phase has already been visited, execution will resume at the point where the previous execution of the same phase left off. | |
Phase | phase () const |
Get the current phase. | |
const std::unordered_map< Parallel::Phase, size_t > & | phase_bookmarks () const |
Get the phase bookmarks. More... | |
constexpr void | set_terminate (const bool t) |
Tell the Algorithm it should no longer execute the algorithm. This does not mean that the execution of the program is terminated, but only that the algorithm has terminated. An algorithm can be restarted by passing enable_if_disabled = true to the receive_data method or by calling perform_algorithm(true). | |
constexpr bool | get_terminate () const |
Check if an algorithm should continue being evaluated. | |
template<typename ThisAction , typename PhaseIndex , typename DataBoxIndex > | |
bool | invoke_iterable_action () |
void | contribute_termination_status_to_main () |
Does a reduction over the component of the reduction status sending the result to Main's did_all_elements_terminate member function. | |
const std::string & | deadlock_analysis_next_iterable_action () const |
Returns the name of the last "next iterable action" to be run before a deadlock occurred. | |
template<typename Action , typename... Args, Requires<((void) sizeof...(Args), std::is_same_v< Parallel::Algorithms::Nodegroup, chare_type >)> = nullptr> | |
void | threaded_action (std::tuple< Args... > args) |
Call an Action on a local nodegroup requiring the Action to handle thread safety. More... | |
template<typename Action > | |
void | threaded_action () |
Call an Action on a local nodegroup requiring the Action to handle thread safety. More... | |
void | perform_algorithm () |
Start evaluating the algorithm until it is stopped by an action. | |
void | perform_algorithm (const bool restart_if_terminated) |
Start evaluating the algorithm until it is stopped by an action. | |
int | number_of_procs () const |
Wrappers for Charm++ informational functions. More... | |
int | my_proc () const |
Index of my processing element. | |
int | number_of_nodes () const |
Number of nodes. | |
int | my_node () const |
Index of my node. | |
int | procs_on_node (const int node_index) const |
Number of processing elements on the given node. | |
int | my_local_rank () const |
The local index of my processing element on my node. This is in the interval 0, ..., procs_on_node(my_node()) - 1. | |
int | first_proc_on_node (const int node_index) const |
Index of first processing element on the given node. | |
int | node_of (const int proc_index) const |
Index of the node for the given processing element. | |
int | local_rank_of (const int proc_index) const |
The local index for the given processing element on its node. | |
A distributed object (Charm++ Chare) that executes a series of Actions and is capable of sending and receiving data. Acts as an interface to Charm++.
Charm++ chares can be one of four types, which is specified by the type alias chare_type
inside the ParallelComponent
. The four available types of Algorithms are:
- Parallel::Algorithms::Singleton
- Parallel::Algorithms::Array
- Parallel::Algorithms::Group
- Parallel::Algorithms::Nodegroup
An Algorithm is a distributed object, a Charm++ chare, that repeatedly executes a series of Actions. An Action is a struct that has a static apply function. Note that any of the arguments can be const or non-const references except array_index, which must be a const&.
The code in src/Parallel/CharmMain.tpp registers all entry methods, and if one is not properly registered then a static_assert explains how to register it. If, due to a bug in the implementation, an entry method is neither registered nor caught by a static_assert, Charm++ will give an error of the following form:
    registration happened after init
    Entry point: simple_action(), addr: 0x555a3d0e2090
    ------------- Processor 0 Exiting: Called CmiAbort ------------
    Reason: Did you forget to instantiate a templated entry method in a .ci file?
If you encounter this issue please file a bug report supplying everything necessary to reproduce the issue.
Action::return_type Parallel::DistributedObject< ParallelComponent, tmpl::list< PhaseDepActionListsPack... > >::local_synchronous_action(Args&&... args)
Call the Action synchronously, returning a result without any parallelization. The action is called immediately and control flow returns to the caller immediately upon completion.
Action must have a type alias return_type specifying its return type. This constraint is to simplify the variant visitation logic for the DataBox.
Wrappers for Charm++ informational functions. Returns the number of processing elements.
Get the phase bookmarks.
These are used to allow a phase to be resumed at a specific step in its iterable action list after PhaseControl is used to temporarily switch to other phases.
void Parallel::DistributedObject< ParallelComponent, tmpl::list< PhaseDepActionListsPack... > >::receive_data(typename ReceiveTag::temporal_id instance, ReceiveDataType&& t, bool enable_if_disabled = false)
Receive data and store it in the Inbox, and try to continue executing the algorithm.
When an algorithm has terminated, it can be restarted by passing enable_if_disabled = true. This allows long-term disabling and re-enabling of algorithms.
void Parallel::DistributedObject< ParallelComponent, tmpl::list< PhaseDepActionListsPack... > >::reduction_action(Arg arg)
Calls the apply function of Action after a reduction has been completed.
The apply function must take arg as its last argument.
void Parallel::DistributedObject< ParallelComponent, tmpl::list< PhaseDepActionListsPack... > >::threaded_action(std::tuple< Args... > args)
Call an Action on a local nodegroup requiring the Action to handle thread safety.
The Parallel::NodeLock of the nodegroup is passed to the Action instead of the action_list, as a const gsl::not_null<Parallel::NodeLock*>&. The node lock can be locked with the Parallel::NodeLock::lock() function and unlocked with Parallel::NodeLock::unlock(). Parallel::NodeLock::try_lock() is also provided in case something useful can be done if the lock couldn't be acquired.