\cond NEVER
Distributed under the MIT License.
See LICENSE.txt for details.
\endcond
# Using Variables in SpECTRE {#variables_foundations}

# What is a Variables and Why Use Them?
A Variables is a data structure that holds a contiguous block of memory with
Tensors pointing into it. Temporary tags in a Variables let you declare
temporary tensors and scalars so that all allocations needed for a computation
happen at one time. Since physical memory is shared between CPU cores,
processes can't allocate in parallel: they might try to allocate the same
chunk of memory. As more CPU cores are used, this becomes a bottleneck,
slowing down or stopping other processes while memory is being allocated.
Using a Variables to allocate all needed memory at once avoids this
contention, allowing the computation to run smoothly and uninterrupted.

# Defining a Variables of Temporary Tags
To define a Variables, you'll need the TempTensor and Variables headers:
```cpp
#include "DataStructures/Tags/TempTensor.hpp"
#include "DataStructures/Variables.hpp"
```
These give you access to the temporary Scalars and Tensors we'll need to
allocate. You can define a Variables that allocates one Scalar with something
like this:
```cpp
Variables<tmpl::list<::Tags::TempScalar<0>>> temp_buffer{
    get<0, 0>(spatial_metric).size()};
```
Here, the Variables we've defined, `temp_buffer`, takes a tmpl::list with a
TempScalar inside as its template argument; this allocates a single temporary
scalar. The size and DataType of the TempScalar are deduced from what's inside
the braces; you can provide the size of any tensor or std::array with the
correct number of points.
Now, to use the allocation you've made, you can do:
```cpp
auto& useful_scalar = get<::Tags::TempScalar<0>>(temp_buffer);
```

# Real Use Example
Now that we've got the basics, using them to allocate multiple Scalars and
Tensors is quite easy. For instance, let's say I need to allocate 2 scalars,
a spatial vector, and 2 lower-index rank-2 tensors for my function:
```cpp
Variables<tmpl::list<::Tags::TempScalar<0>, ::Tags::TempScalar<1>,
                     ::Tags::TempI<0, 3, Frame::Inertial>,
                     ::Tags::Tempij<0, 3, Frame::Inertial>,
                     ::Tags::Tempij<1, 3, Frame::Inertial>>>
    temp_buffer{get<0, 0>(spatial_metric).size()};
```
When we allocate more than one of the same type of scalar or tensor (here, the
rank-2 lower-index tensors), we distinguish the allocations by the integer
inside the angle brackets. To use each individual allocation, we can do
something like:
```cpp
auto& cool_scalar1 = get<::Tags::TempScalar<0>>(temp_buffer);
auto& cool_scalar2 = get<::Tags::TempScalar<1>>(temp_buffer);
auto& cool_tensor1 = get<::Tags::Tempij<0, 3, Frame::Inertial>>(temp_buffer);
auto& cool_tensor2 = get<::Tags::Tempij<1, 3, Frame::Inertial>>(temp_buffer);
```

# Tips
In the interest of reducing memory allocations, there are certain scenarios
where you can reuse old allocations that are no longer needed by your
computation.

To see this, let's say you're trying to make two unit vectors. You might start
by saying you'll need two different vectors (rank-1 upper-index tensors) and
two different scalars for the magnitude of each vector. The way we'd allocate
for this is:
```cpp
Variables<tmpl::list<::Tags::TempScalar<0>, ::Tags::TempScalar<1>,
                     ::Tags::TempI<0, 3, Frame::Inertial>,
                     ::Tags::TempI<1, 3, Frame::Inertial>>>
    temp_buffer{get<0, 0>(spatial_metric).size()};
```
However, doing this allocates more memory than we actually need.
Once we finish calculating the first unit vector, the memory we've allocated
for its scalar magnitude just sits there unused. We can instead reuse the
TempScalar<0> allocation when calculating the second unit vector, without
having to allocate another TempScalar. Allocating an extra scalar is not very
expensive, but with tensors the memory required really adds up, so this is
just another way to help make SpECTRE a bit more efficient.