SpECTRE v2025.08.19
ParallelComponent representing a set of points to be interpolated to and a function to call upon interpolation to those points.
#include <InterpolationTarget.hpp>
Static Public Member Functions
- static std::string name()
- static void execute_next_phase(Parallel::Phase next_phase, Parallel::CProxy_GlobalCache<metavariables>& global_cache)

Static Public Attributes
- static constexpr bool checkpoint_data = true
Each InterpolationTarget will communicate with the Interpolator.
InterpolationTargetTag must conform to the intrp::protocols::InterpolationTargetTag protocol.
The metavariables must contain the following type aliases:
- a tmpl::list of tags that define a Variables sent from all Elements to the local Interpolator.
- a tmpl::list of all InterpolationTargetTags.

The metavariables must also contain certain static constexpr members.
Each set of points to be interpolated onto is labeled by a temporal_id. If any step of the interpolation procedure ever uses a time-dependent CoordinateMap, then it needs to grab FunctionOfTimes from the GlobalCache. Before doing so, it must verify that those FunctionOfTimes are up-to-date for the given temporal_id.
Note that once the FunctionOfTime has been verified to be up-to-date for a particular temporal_id at one step in the interpolation procedure, all subsequent steps of the interpolation procedure for that same temporal_id need not worry about re-verifying the FunctionOfTime. Therefore, we need only focus on the first step in the interpolation procedure that needs FunctionOfTimes: computing the points on which to interpolate.
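The verify-once behavior described above can be modeled as a small cache keyed on temporal_id. The following is an illustrative, self-contained C++ sketch, not SpECTRE code: the function names and the use of `double` as a temporal_id are assumptions made for the example.

```cpp
#include <functional>
#include <set>

// Illustrative stand-in (assumption, not SpECTRE API): a temporal_id is a double.
using TemporalId = double;

// Returns true if the FunctionOfTimes are known to be up-to-date for
// temporal_id. Once a temporal_id has been verified at one step of the
// interpolation procedure, later steps for that same temporal_id skip
// the check entirely.
bool ensure_functions_of_time_up_to_date(
    const TemporalId temporal_id, std::set<TemporalId>& verified_ids,
    const std::function<bool(TemporalId)>& functions_of_time_are_ready) {
  if (verified_ids.count(temporal_id) == 1) {
    return true;  // already verified at an earlier step; no re-check needed
  }
  if (functions_of_time_are_ready(temporal_id)) {
    verified_ids.insert(temporal_id);
    return true;
  }
  return false;  // caller must wait until the FunctionOfTimes are updated
}
```

Here the expensive readiness check runs at most once per temporal_id; all subsequent steps hit the `verified_ids` cache.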
Each InterpolationTarget has a function InterpolationTargetTag::compute_target_points that returns the points to be interpolated onto, expressed in the frame InterpolationTargetTag::compute_target_points::frame. Then the function block_logical_coordinates (and eventually element_logical_coordinates) is called to convert those points to the element logical frame to do the interpolation. If InterpolationTargetTag::compute_target_points::frame is different from the grid frame, and if the CoordinateMap is time-dependent, then block_logical_coordinates grabs FunctionOfTimes from the GlobalCache. Therefore, any Action calling block_logical_coordinates must wait until the FunctionOfTimes in the GlobalCache are up-to-date for the temporal_id being passed into block_logical_coordinates.
Here we describe the logic used in all the Actions that call block_logical_coordinates.
Recall that InterpolationTarget can be used with the Interpolator ParallelComponent (as for the horizon finder), or by having the Elements interpolate directly (as for most Observers). Here we discuss the case when the Interpolator is used; the other case is discussed below.
Ensuring the FunctionOfTimes are up-to-date is done via two Tags in the DataBox and a helper Action. When interpolation is requested for a new temporal_id (e.g. by intrp::Events::Interpolate), the temporal_id is added to Tags::PendingTemporalIds, which holds a std::deque<temporal_id>, and represents temporal_ids that we want to interpolate onto, but for which FunctionOfTimes are not necessarily up-to-date. We also keep another list of temporal_ids: Tags::TemporalIds, for which FunctionOfTimes are guaranteed to be up-to-date.
The action Actions::VerifyTemporalIdsAndSendPoints moves temporal_ids from PendingTemporalIds to TemporalIds as appropriate, and if any temporal_ids have been so moved, it computes the block_logical_coordinates of the target points and sends them to the Interpolator ParallelComponent. The logic is illustrated in pseudocode below. Recall that some InterpolationTargets are sequential (i.e. you cannot interpolate onto one temporal_id until interpolation on previous ones is done, as for the apparent horizon finder), and some are non-sequential (i.e. you can interpolate in any order).
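The following self-contained C++ sketch illustrates the logic just described. It is not SpECTRE code: `function_of_time_is_ready`, `send_points_to_interpolator`, the `TargetState` struct, and the use of `double` as a temporal_id are all assumptions made for the example.

```cpp
#include <deque>
#include <functional>

// Illustrative stand-in (assumption, not SpECTRE API): a temporal_id is a double.
using TemporalId = double;

struct TargetState {
  std::deque<TemporalId> pending_temporal_ids;  // FunctionOfTimes not yet verified
  std::deque<TemporalId> temporal_ids;          // FunctionOfTimes verified up-to-date
  bool sequential = false;  // e.g. true for the apparent horizon finder
};

// Sketch of Actions::VerifyTemporalIdsAndSendPoints: move ids whose
// FunctionOfTimes are verified up-to-date from PendingTemporalIds to
// TemporalIds, and send points to the Interpolator for each id so moved.
void verify_temporal_ids_and_send_points(
    TargetState& state,
    const std::function<bool(TemporalId)>& function_of_time_is_ready,
    const std::function<void(TemporalId)>& send_points_to_interpolator) {
  if (state.sequential) {
    // Sequential targets interpolate one temporal_id at a time: only start
    // a new one if none is currently in progress.
    if (state.temporal_ids.empty() and not state.pending_temporal_ids.empty() and
        function_of_time_is_ready(state.pending_temporal_ids.front())) {
      state.temporal_ids.push_back(state.pending_temporal_ids.front());
      state.pending_temporal_ids.pop_front();
      send_points_to_interpolator(state.temporal_ids.back());
    }
    return;
  }
  // Non-sequential targets may interpolate in any order: move every id whose
  // FunctionOfTimes are up-to-date and send points for each of them.
  auto it = state.pending_temporal_ids.begin();
  while (it != state.pending_temporal_ids.end()) {
    if (function_of_time_is_ready(*it)) {
      state.temporal_ids.push_back(*it);
      send_points_to_interpolator(*it);
      it = state.pending_temporal_ids.erase(it);
    } else {
      ++it;
    }
  }
}
```

In the non-sequential branch, every ready id is promoted in one pass; in the sequential branch, at most one id is promoted, and only when no interpolation is in flight.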
Note that VerifyTemporalIdsAndSendPoints always exits in one of three ways; in one of these, all temporal_ids have been moved to TemporalIds and there are no PendingTemporalIds left.

We now describe the logic of the Actions that use VerifyTemporalIdsAndSendPoints.
Actions::AddTemporalIdsToInterpolationTarget is called by intrp::Events::Interpolate to trigger interpolation for new temporal_ids. Its logic is as follows, in pseudocode:
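A minimal, self-contained C++ sketch of this logic is shown below. It is not SpECTRE code: the function names and the `double` temporal_id are assumptions made for the example. New temporal_ids always start in PendingTemporalIds, since their FunctionOfTimes have not yet been verified.

```cpp
#include <deque>
#include <functional>

// Illustrative stand-in (assumption, not SpECTRE API): a temporal_id is a double.
using TemporalId = double;

// Sketch of Actions::AddTemporalIdsToInterpolationTarget: queue the new ids
// as pending, then trigger a verification pass if one is not already underway.
void add_temporal_ids_to_interpolation_target(
    std::deque<TemporalId>& pending_temporal_ids,
    const std::deque<TemporalId>& new_temporal_ids,
    const std::function<void()>& verify_temporal_ids_and_send_points) {
  const bool pending_was_empty = pending_temporal_ids.empty();
  for (const TemporalId id : new_temporal_ids) {
    pending_temporal_ids.push_back(id);
  }
  // If PendingTemporalIds was already non-empty, a verification pass is
  // already responsible for draining it; otherwise kick one off, which will
  // move up-to-date ids to TemporalIds and send their points.
  if (pending_was_empty and not pending_temporal_ids.empty()) {
    verify_temporal_ids_and_send_points();
  }
}
```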
Actions::InterpolationTargetReceiveVars is called by the Interpolator when it is finished interpolating the current temporal_id. For the sequential case, it needs to start interpolating for the next temporal_id. The logic is, in pseudocode:
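The sequential branch of this logic can be sketched as follows in self-contained C++. This is not SpECTRE code: the function names, the callback signature, and the `double` temporal_id are assumptions made for the example.

```cpp
#include <deque>
#include <functional>

// Illustrative stand-in (assumption, not SpECTRE API): a temporal_id is a double.
using TemporalId = double;

// Sketch of the sequential branch of Actions::InterpolationTargetReceiveVars,
// invoked when the Interpolator has finished the current temporal_id.
void interpolation_target_receive_vars(
    std::deque<TemporalId>& temporal_ids,
    const std::function<void(TemporalId)>& run_callback,
    const std::function<void()>& verify_temporal_ids_and_send_points) {
  // For a sequential target the finished id is the front of TemporalIds.
  const TemporalId finished = temporal_ids.front();
  temporal_ids.pop_front();
  run_callback(finished);  // e.g. a horizon-find callback on the result
  // Only now may interpolation for the next id begin, so re-run verification
  // to promote the next PendingTemporalId and send its points.
  verify_temporal_ids_and_send_points();
}
```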
intrp::callbacks::FindApparentHorizon calls block_logical_coordinates when it needs to start a new iteration of the horizon finder at the same temporal_id, so one might think you need to worry about up-to-date FunctionOfTimes. But since intrp::callbacks::FindApparentHorizon always works on the same temporal_id for which the FunctionOfTimes have already been verified as up-to-date from the last iteration, no special consideration of FunctionOfTimes need be done here.
In the case where the Elements interpolate directly, without an Interpolator, things are easier, because the target points are always time-independent in the frame compute_target_points::frame.
Actions::EnsureFunctionOfTimeUpToDate verifies that the FunctionOfTimes are up-to-date at the DgElementArray's current time.
Actions::EnsureFunctionOfTimeUpToDate is placed in the DgElementArray's PDAL before any use of interpolation.
Actions::InterpolationTargetSendTimeIndepPointsToElements is invoked on InterpolationTarget during the Registration PDAL, to send time-independent point information to Elements. It sends the result of compute_target_points to all Elements.
Note that this may need to be revisited because every Element has a copy of every target point, which may use a lot of memory. An alternative is for each Element to invoke an Action on each InterpolationTarget (presumably from an Event) at each time, and then the InterpolationTarget invokes another Action to send points to only those Elements that contain the points; this alternative uses less memory but much more communication. Another alternative would be to place the points in the GlobalCache (one copy per node) since the points need be computed only once.