SpECTRE v2024.04.12
Parallel::ResourceInfo< Metavariables > Struct Template Reference

Holds resource info for all singletons and for avoiding placing array elements/singletons on the global proc 0.

#include <ResourceInfo.hpp>

Classes

struct  AvoidGlobalProc0
 
struct  Singletons
 

Public Types

using options = tmpl::push_front< tmpl::conditional_t< tmpl::size< singletons >::value !=0, tmpl::list< Singletons >, tmpl::list<> >, AvoidGlobalProc0 >
 

Public Member Functions

 ResourceInfo (const bool avoid_global_proc_0, const std::optional< SingletonPack< singletons > > &singleton_pack, const Options::Context &context={})
 The main constructor. All other constructors that take options will call this one. This constructor holds all checks able to be done during option parsing.
 
 ResourceInfo (const bool avoid_global_proc_0, const Options::Context &context={})
 This constructor is used when only AvoidGlobalProc0 is specified, but no SingletonInfoHolders are specified. Calls the main constructor with an empty SingletonPack.
 
 ResourceInfo (const ResourceInfo &)=default
 
ResourceInfo & operator= (const ResourceInfo &)=default
 
 ResourceInfo (ResourceInfo &&)=default
 
ResourceInfo & operator= (ResourceInfo &&)=default
 
void pup (PUP::er &p)
 
bool avoid_global_proc_0 () const
 Returns whether we should avoid placing array elements and singletons on the global zeroth proc. Default false.
 
template<typename Component >
auto get_singleton_info () const
 Returns a SingletonInfoHolder corresponding to Component.
 
const std::unordered_set< size_t > & procs_to_ignore () const
 Returns a std::unordered_set<size_t> of processors that array components should avoid placing elements on. This should be passed to the allocate_array function of the array component.
 
const std::set< size_t > & procs_available_for_elements () const
 Returns a std::set<size_t> that has all processors available to put elements on, meaning processors that aren't ignored.
 
template<typename Component >
size_t proc_for () const
 Returns the proc that the singleton Component should be placed on.
 
template<typename Cache >
void build_singleton_map (const Cache &cache)
 Actually builds the singleton map and allocates all the singletons.
 

Static Public Attributes

static constexpr Options::String help
 

Friends

template<typename Metavars >
bool operator== (const ResourceInfo< Metavars > &lhs, const ResourceInfo< Metavars > &rhs)
 

Detailed Description

template<typename Metavariables>
struct Parallel::ResourceInfo< Metavariables >

Holds resource info for all singletons and for avoiding placing array elements/singletons on the global proc 0.

Details

This can be used for placing all singletons in an executable.

If you have no singletons, you'll need the following block in the input file (where you can set the value of AvoidGlobalProc0 to true or false):

ResourceInfo:
  AvoidGlobalProc0: true

If you have singletons, but do not want to assign any of them to a specific proc or make any of them exclusive on a proc, you'll need the following block in the input file (again, you can set the value of AvoidGlobalProc0 to true or false):

ResourceInfo:
  AvoidGlobalProc0: true
  Singletons: Auto

Otherwise, you will need a block in the input file like the one below, specifying the options for each singleton:

ResourceInfo:
  AvoidGlobalProc0: true
  Singletons:
    MySingleton1:
      Proc: 2
      Exclusive: true
    MySingleton2: Auto

where MySingleton1 is the pretty_type::name of the singleton component, and the options for each singleton are described in Parallel::SingletonInfoHolder. (You can use Auto for each singleton whose proc you want chosen automatically and that should be non-exclusive, like MySingleton2.)

Several consistency checks are done during option parsing to avoid user error. However, some checks can't be done during option parsing because the number of nodes/procs is needed to determine if there is an inconsistency. These checks are done during runtime, just before the map of singletons is created.

To automatically place singletons, we use a custom algorithm that distributes singletons evenly over the nodes and, on each node, evenly over that node's procs. This helps keep communication costs down by spreading the workload over all of the communication cores (one communication core per Charm++ node) and ensures that our resources are maximally utilized (i.e. no single core ends up holding all of the singletons).

Defining some terminology for singletons:

  - requested: a specific processor was requested in the input file.
  - auto: the processor should be chosen automatically.
  - exclusive: no other singletons or array elements should be placed on this singleton's processor.
  - nonexclusive: other singletons or array elements may be placed on this singleton's processor.

The algorithm that distributes the singletons is as follows:

  1. Allocate all singletons that requested specific processors, both exclusive and nonexclusive. This is done during option parsing.
  2. Allocate auto exclusive singletons, distributing the total number of exclusive singletons (auto + requested) as evenly as possible over the number of nodes. We say "as evenly as possible" because this depends on the requested exclusive singletons. For example, if we have 4 nodes with 5 cores per node, the number of requested exclusive singletons on each node is (0, 1, 4, 1), and we have 3 auto exclusive singletons to place, the best distribution of exclusive singletons we can achieve given our constraints is (2, 2, 4, 1). Clearly this is not the most even distribution imaginable, but it is the most even one achievable given the starting distribution from the input file.
  3. Allocate auto nonexclusive singletons, distributing the total number of nonexclusive singletons (auto + requested): first, as evenly as possible over the number of nodes; then, on each node, as evenly as possible over the processors on that node. The same disclaimer about "as evenly as possible" from the previous step applies here.

The goal of this algorithm is to mimic, as best as possible, how a human would distribute this workload. It isn't perfect, but is a significant improvement over placing singletons on one proc after another starting from global proc 0.
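
As an illustration of the "as evenly as possible" rule in steps 2 and 3, here is a minimal sketch of the idea. It is not SpECTRE's actual implementation; the function name distribute_evenly and the greedy strategy are assumptions made for illustration. It places each automatically-assigned singleton on the node that currently holds the fewest singletons:

#include <algorithm>
#include <cstddef>
#include <vector>

// Greedily place each auto singleton on the node that currently has the
// fewest singletons. Returns the node index chosen for each auto singleton.
std::vector<size_t> distribute_evenly(std::vector<size_t> counts_per_node,
                                      const size_t num_auto_singletons) {
  std::vector<size_t> chosen_nodes{};
  for (size_t i = 0; i < num_auto_singletons; ++i) {
    const auto min_it =
        std::min_element(counts_per_node.begin(), counts_per_node.end());
    ++(*min_it);
    chosen_nodes.push_back(
        static_cast<size_t>(min_it - counts_per_node.begin()));
  }
  return chosen_nodes;
}

With the requested exclusive counts (0, 1, 4, 1) and 3 auto exclusive singletons from the example in step 2, this sketch reproduces the final distribution (2, 2, 4, 1).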

Member Function Documentation

◆ build_singleton_map()

template<typename Metavariables >
template<typename Cache >
void Parallel::ResourceInfo< Metavariables >::build_singleton_map ( const Cache &  cache)

Actually builds the singleton map and allocates all the singletons.

Details

This could be done in the constructor; however, since we need the number of nodes to do some sanity checks, it can't be. If an executable is run with the --check-options flag, we will be running on 1 proc and 1 node, so some of the checks done in this function would fail. Unfortunately, that means the checks that require knowing the number of nodes occur at runtime instead of during option parsing. This is why the singleton_map_has_been_set_ bool is necessary and why we check whether this function has been called in most other member functions.

To avoid a cyclic dependency between the GlobalCache and ResourceInfo, we template this function on the cache type rather than use the GlobalCache explicitly, because the GlobalCache depends on ResourceInfo.

This function should only be called once.
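
A minimal usage sketch follows, assuming a cache object that satisfies the interface this function expects. The helper name allocate_my_singletons and the singleton alias Metavariables::my_singleton are hypothetical and not part of the SpECTRE API:

#include <cstddef>

#include <ResourceInfo.hpp>

// Hypothetical helper for illustration only.
template <typename Metavariables, typename Cache>
void allocate_my_singletons(
    Parallel::ResourceInfo<Metavariables>& resource_info, const Cache& cache) {
  // Build the singleton map exactly once, after the number of nodes/procs
  // is known.
  resource_info.build_singleton_map(cache);
  // Only after this call are per-singleton queries such as proc_for valid.
  const size_t proc =
      resource_info.template proc_for<typename Metavariables::my_singleton>();
  (void)proc;  // e.g. used when inserting the singleton on that proc
}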

Member Data Documentation

◆ help

template<typename Metavariables >
static constexpr Options::String Parallel::ResourceInfo< Metavariables >::help
Initial value:
= {
"Resource options for a simulation. This information will be used when "
"placing Array and Singleton parallel components on the requested "
"resources."}

The documentation for this struct was generated from the following file:

ResourceInfo.hpp