\cond NEVER
Distributed under the MIT License.
See LICENSE.txt for details.
\endcond
# Installation {#installation}

\tableofcontents

This page details how to install SpECTRE on personal machines and on clusters
that have no official support (yet).

- For details on installing SpECTRE on a number of clusters that we support
  please refer to:
  \subpage installation_on_clusters
- For configuring SpECTRE please refer to:
  \subpage spectre_build_system
- For instructions on installing SpECTRE on Apple Silicon Macs please refer to:
  \subpage installation_on_apple_silicon
- For information on our versioning scheme and public releases please refer to:
  \subpage versioning_and_releases

### Running containerized releases

#### CLI Entrypoint

A quick way to run the code without installing anything at all is with our
containerized releases:

```
docker run sxscollaboration/spectre --help
```

You can also use [Apptainer/Singularity](https://apptainer.org) instead of
Docker, which works better on computing clusters and is more convenient because
it shares the host's file system:

```
apptainer run docker://sxscollaboration/spectre --help
```

The entrypoint to this container is the SpECTRE
\ref tutorial_cli "command-line interface (CLI)".
For example, you can generate initial data for a simulation of merging black
holes and plot the result like this:

```
apptainer run docker://sxscollaboration/spectre bbh generate-id \
  -q 1 --chi-A 0 0 0 --chi-B 0 0 0 -D 16 -w 0.015 -a 0 -o ./bbh_id
apptainer run docker://sxscollaboration/spectre plot slice \
  bbh_id/BbhVolume*.h5 -C 0,0,0 -n 0,0,1 -u 0,1,0 -X 24 24 \
  -y ConformalFactor -o plot.pdf
```

The containers currently provide precompiled code only for Linux x86_64
platforms, and only a limited set of executables is precompiled. The features
available in the precompiled containers are:

- Generating initial data
- Running CCE (see \ref tutorial_cce)
- Running Python support code with the SpECTRE CLI (see \ref tutorial_cli)

#### Starting a container {#start_deploy_container}

If you'd rather use an image to start a container, you can run

```
docker run --name spectre -i --entrypoint /bin/bash \
  -t sxscollaboration/spectre:deploy
```

\note The `--entrypoint /bin/bash` is important so you don't run the CLI.

### Running static binaries

Another way of running the code without installing anything is with our
precompiled static binaries, which are published on GitHub:

- Releases with precompiled executables:
  https://github.com/sxs-collaboration/spectre/releases

These are currently compiled only for Linux x86_64 platforms and for the Intel
Haswell architecture, so they should be compatible with machines newer than
mid-2013.

We only publish a limited set of precompiled static binaries that are useful as
stand-alone tools, such as the CCE executables (see \ref tutorial_cce).
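
For example, you could download and run one of these executables like this. The
release tag and asset name below are placeholders; copy the actual ones from
the releases page:

```sh
# Placeholder URL: substitute the release tag and asset name listed on GitHub.
wget https://github.com/sxs-collaboration/spectre/releases/download/<release>/<asset>
chmod +x <asset>   # or unpack it first, if the asset is an archive
./<asset> --help
```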

### Quick-start guide for code development with Docker and Visual Studio Code

If you're new to writing code for SpECTRE and would like to jump right into a
working development environment, a good place to start is our
\subpage dev_guide_quick_start_docker_vscode.

### Quick-start installation {#quick_start_install}

The easiest way of installing SpECTRE natively on a new machine is this:

1. Collect dependencies. You need a C++ compiler (GCC or Clang), CMake,
   BLAS/LAPACK, Boost, GSL, HDF5, and Python installed. For details on these
   required dependencies see \ref build_dependencies. On many computing clusters
   they are available as modules. On personal machines you can install them with
   a package manager.
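
   For instance, on a Debian/Ubuntu system something like the following
   installs the required packages (illustrative; package names can differ
   between distributions and releases):

   ```sh
   # Illustrative package list for apt-based systems; adjust names as needed.
   sudo apt-get install g++ gfortran cmake git libopenblas-dev \
     libboost-all-dev libgsl-dev libhdf5-dev python3 python3-pip
   ```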

2. Clone the SpECTRE repository:

   ```sh
   git clone git@github.com:sxs-collaboration/spectre.git
   export SPECTRE_HOME=$PWD/spectre
   ```

3. Install Charm++:

   ```sh
   git clone https://github.com/UIUC-PPL/charm
   cd charm
   git checkout v7.0.0
   git apply $SPECTRE_HOME/support/Charm/v7.0.0.patch
   ./build charm++ <version> --with-production --build-shared
   export CHARM_ROOT=$PWD/<version>
   ```

   Choose the `<version>` from [this list in the Charm++
   documentation](https://github.com/charmplusplus/charm?tab=readme-ov-file#how-to-choose-a-version).
   For example, choose `multicore-linux-x86_64` on a Linux laptop,
   `multicore-darwin-arm8` on an Apple Silicon laptop, and `mpi-linux-x86_64` on
   a standard computing cluster (you will also need MPI for this). See
   \ref building-charm for details.
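
   For instance, on a Linux laptop the two `<version>` placeholders above would
   be filled in like this (a sketch; substitute the target that matches your
   machine):

   ```sh
   # Example for a Linux multicore build; pick the target for your machine.
   ./build charm++ multicore-linux-x86_64 --with-production --build-shared
   export CHARM_ROOT=$PWD/multicore-linux-x86_64
   ```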

4. Configure and build SpECTRE:

   ```sh
   cd $SPECTRE_HOME
   mkdir build
   cd build
   cmake \
     -D CMAKE_C_COMPILER=<clang or gcc> \
     -D CMAKE_CXX_COMPILER=<clang++ or g++> \
     -D CMAKE_Fortran_COMPILER=gfortran \
     -D CMAKE_BUILD_TYPE=<Debug or Release> \
     -D CHARM_ROOT=$CHARM_ROOT \
     -D SPECTRE_FETCH_MISSING_DEPS=ON \
     -D MEMORY_ALLOCATOR=SYSTEM \
     $SPECTRE_HOME
   ```

   See \ref building-spectre for details and \ref common_cmake_flags for a list
   of possible configuration options. For example, set `-D ENABLE_OPENMP=ON`
   to enable OpenMP parallelization for the exporter library.

   Now you can compile executables (again, see \ref building-spectre for
   details). For example:

   ```sh
   make -j12 cli
   make -j12 BundledExporter
   ```

### Installation with Spack

You can also install SpECTRE with the [Spack](https://github.com/spack/spack)
package manager:

```sh
git clone https://github.com/spack/spack
source ./spack/share/spack/setup-env.sh
spack compiler find
spack external find
spack install spectre executables=ExportCoordinates3D \
  ^charmpp backend=multicore
```

You probably want to customize your installation, e.g., to select a particular
version of SpECTRE, the executables you want to install, additional options such
as Python bindings, or the Charm++ backend. You can display all possible options
with:

```sh
spack info spectre # or charmpp, etc.
```

Refer to the [Spack documentation](https://spack.readthedocs.io/en/latest/) for
more information.

\warning We have not found the Spack installation particularly stable since the
Spack package manager is still under active development.

## Detailed installation instructions

The remainder of this page details the installation procedure for SpECTRE.

### Dependencies {#build_dependencies}

\note You don't need to install any of these dependencies by hand if you
use a container or follow the \ref quick_start_install.

#### Required:

* [GCC](https://gcc.gnu.org/) 9.1 or later,
  [Clang](https://clang.llvm.org/) 13.0 or later (see
  [here](https://apt.llvm.org/) for how to get newer versions of clang through
  apt), or AppleClang 13.0.0 or later
* [CMake](https://cmake.org/) 3.18.0 or later
* [Git](https://git-scm.com/)
* BLAS & LAPACK (e.g. [OpenBLAS](http://www.openblas.net))
* [Boost](http://www.boost.org/) 1.60.0 or later
* [GSL](https://www.gnu.org/software/gsl/) \cite Gsl
* [GNU make](https://www.gnu.org/software/make/)
* [HDF5](https://support.hdfgroup.org/HDF5/) (non-MPI version on macOS)
  \cite Hdf5
* [Python](https://www.python.org/) 3.8 or later.
* [Charm++](http://charm.cs.illinois.edu/) 7.0.0 or later (support for later
  versions is experimental). See also \ref building-charm.
  \cite Charmpp1 \cite Charmpp2 \cite Charmpp3

The following dependencies will be fetched automatically if you set
`SPECTRE_FETCH_MISSING_DEPS=ON`:

* [Blaze](https://bitbucket.org/blaze-lib/blaze/overview) v3.8.
  When installing manually, it can be beneficial to install Blaze with CMake so
  some configuration options are determined automatically, such as cache sizes.
  \cite Blaze1 \cite Blaze2
* [Catch2](https://github.com/catchorg/Catch2) 3.4.0 or later.
  You can also install Catch2 from your package manager or do a standard CMake
  build and installation (as detailed in the [Catch2
  docs](https://github.com/catchorg/Catch2/blob/devel/docs/cmake-integration.md#installing-catch2-from-git-repository)).
  Compile with `CMAKE_POSITION_INDEPENDENT_CODE=ON`.
* [LIBXSMM](https://github.com/libxsmm/libxsmm) version 1.16.1 or later.
  \cite Libxsmm
* [yaml-cpp](https://github.com/jbeder/yaml-cpp) version 0.7.0 or later.
  Building with shared library support is recommended when installing from
  source. \cite Yamlcpp
* Python dependencies listed in `support/Python/requirements.txt`.
  Install with `pip3 install -r support/Python/requirements.txt`.
  Make sure you are working in a [Python venv](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment)
  before installing packages.
  Alternatively, you can set `BOOTSTRAP_PY_DEPS=ON` when configuring a build
  with CMake to install missing Python packages into the build directory
  automatically.
  <details>
  \include support/Python/requirements.txt
  </details>
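
As a concrete example of the venv workflow mentioned above (the directory name
is just an illustration):

```sh
# Create and activate a virtual environment, then install the Python deps.
python3 -m venv ~/spectre-venv
source ~/spectre-venv/bin/activate
pip3 install -r support/Python/requirements.txt
```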

#### Optional:

* [Pybind11](https://pybind11.readthedocs.io) 2.7.0 or later for SpECTRE Python
  bindings. Included in `support/Python/requirements.txt`. \cite Pybind11
* [jemalloc](https://github.com/jemalloc/jemalloc)
* [Doxygen](https://www.doxygen.nl/index.html) 1.9.1 to 1.9.6 — to
  generate documentation
* Python dev dependencies listed in `support/Python/dev_requirements.txt`
  — for documentation pre- and post-processing, formatting code, etc.
  Install with `pip3 install -r support/Python/dev_requirements.txt`.
  Make sure you are working in a [Python venv](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment)
  before installing packages.
  <details>
  \include support/Python/dev_requirements.txt
  </details>
* [Google Benchmark](https://github.com/google/benchmark) - to do
  microbenchmarking inside the SpECTRE framework. v1.2 or newer is required.
* [LCOV](http://ltp.sourceforge.net/coverage/lcov.php) and
  [gcov](https://gcc.gnu.org/onlinedocs/gcc/Gcov.html) — to check code test
  coverage
* [PAPI](http://icl.utk.edu/papi/) — to access hardware performance counters
* [ClangFormat](https://clang.llvm.org/docs/ClangFormat.html) — to format C++
  code in a clear and consistent fashion
* [Clang-Tidy](http://clang.llvm.org/extra/clang-tidy/) — to "lint" C++ code
* [Scotch](https://gitlab.inria.fr/scotch/scotch) - to build the `ScotchLB`
  graph-partition-based load balancer in Charm++.
* [ffmpeg](https://www.ffmpeg.org/) - for animating 1d simulations with
  matplotlib
* [xsimd](https://github.com/xtensor-stack/xsimd) 11.0.1 or newer - for manual
  vectorization
* [libbacktrace](https://github.com/ianlancetaylor/libbacktrace) - to show
  source files and line numbers in backtraces of errors and asserts. Available
  by default on many systems, so you may not have to install it at all. The
  CMake configuration will tell you if you have libbacktrace installed.
* [ParaView](https://www.paraview.org/) - for visualization \cite Paraview1
  \cite Paraview2 . Make sure your ParaView installation uses the same (major
  and minor) version of Python as the rest of the build.
* [SpEC](https://www.black-holes.org/code/SpEC.html) - to load SpEC data.
  Compile the exporter in SpEC's `Support/ApplyObservers/Exporter/` directory
  (see the `Makefile` in that directory). Also make sure to compile SpEC with
  the same compiler and MPI as SpECTRE to avoid compatibility issues.

#### Bundled:

* [Brigand](https://github.com/edouarda/brigand)
* [libsharp](https://github.com/Libsharp/libsharp) \cite Libsharp

## Clone the SpECTRE repository

First, clone the [SpECTRE repository](https://github.com/sxs-collaboration/spectre)
to a directory of your choice. In the following we will refer to it as
SPECTRE_ROOT. You may `git clone` from GitHub, in which case SPECTRE_ROOT will
be `<your_current_directory>/spectre`. That is, inside SPECTRE_ROOT are `docs`,
`src`, `support`, `tests` etc. You can also download the source and extract it
to your desired working directory, making sure not to leave out hidden files
when you `cp` or `mv` the source files.
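
For example, cloning from GitHub and recording the location in an environment
variable used in the commands below:

```sh
# Clone over HTTPS; use the SSH URL instead if you have SSH keys set up.
git clone https://github.com/sxs-collaboration/spectre.git
export SPECTRE_ROOT=$PWD/spectre
```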

## Using Docker to obtain a SpECTRE environment {#docker_install}

A [Docker](https://www.docker.com/) image is available from
[DockerHub](https://hub.docker.com/r/sxscollaboration/spectre/) and can
be used to build SpECTRE on a personal machine.

**Note**: If you have SELinux active on your system you must figure out how to
enable sharing files with the host OS. If you receive errors that you do not
have permission to access a shared directory it is likely that your system has
SELinux enabled. One option is to disable SELinux at the expense of reducing
the security of your system.

To build with the Docker image:

1. Install [Docker Desktop](https://docs.docker.com/get-docker/). For Linux, if
   you want to be able to run the following steps without `sudo`, follow the
   [post-installation guide](https://docs.docker.com/engine/install/linux-postinstall/)
   to add a non-root user.

2. Retrieve the Docker image (you may need `sudo` in front of this command)
   ```
   docker pull sxscollaboration/spectre:dev
   ```
3. Start the Docker container (you may need `sudo`)
   ```
   docker run -v $SPECTRE_ROOT/:$SPECTRE_ROOT/ --name spectre_dev \
     -i -t sxscollaboration/spectre:dev /bin/bash
   ```
   - `-v $SPECTRE_ROOT/:$SPECTRE_ROOT/` binds the directory `$SPECTRE_ROOT`
     (which is an environment variable you must set up, or just use the actual
     path) outside the container to `$SPECTRE_ROOT` inside the container. In
     this way, files in `$SPECTRE_ROOT` on your host system (outside the
     container) become accessible within the container through the directory
     SPECTRE_ROOT inside the container. If you wonder why the same SPECTRE_ROOT
     needs to be used both inside and outside the container (which is why
     `$SPECTRE_ROOT` is repeated in the command above, separated by a colon),
     please see the notes below regarding the `-v` flag.
   - The `--name spectre_dev` is optional. If you don't name your container,
     docker will generate an arbitrary name.
   - On macOS you can significantly increase the performance of file system
     operations by appending the flag `:delegated` to `-v`, e.g.
     `-v $SPECTRE_ROOT/:$SPECTRE_ROOT/:delegated` (see
     https://docs.docker.com/docker-for-mac/osxfs-caching/).
   - The `-i` flag is for interactive mode, which will drop you into the
     container.
   - It can be useful to expose a port to the host so you can run servers such
     as [Jupyter](https://jupyter.org/index.html) for accessing the Python
     bindings (see \ref spectre_using_python) or a Python web server to view
     the documentation. To do so, append the `-p` option, e.g. `-p 8000:8000`
     (see the example after this list).

   You will end up in a bash shell in the docker container as root (you need to
   be root). Within the container, the files in `$SPECTRE_ROOT` are available
   and Charm++ is installed in `/work/charm_7_0_0`. For the following steps,
   stay inside the docker container as root.
4. Proceed with [building SpECTRE](#building-spectre).
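
For example, a `docker run` command that combines the options above, binding
the source directory and exposing a port for Jupyter (adjust the port to your
needs):

```sh
docker run -v $SPECTRE_ROOT/:$SPECTRE_ROOT/ -p 8000:8000 \
  --name spectre_dev -i -t sxscollaboration/spectre:dev /bin/bash
```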

**Notes:**
* Everything in your build directory is owned by root, and is
  accessible only within the container.
* You should edit source files in SPECTRE_ROOT in a separate terminal
  outside the container, and use the container only for compiling and
  running the code.
* If you exit the container (e.g. ctrl-d),
  your compilation directories are still saved, as are any other changes to
  the container that you have made.
  To restart the container, try the following commands
  (you may need `sudo`):
  1. `docker ps -a`,
     to list all containers with their CONTAINER_IDs and CONTAINER_NAMEs,
  2. `docker start -i CONTAINER_NAME` or `docker start -i CONTAINER_ID`,
     to restart your container (above, the CONTAINER_NAME was spectre_dev).
* When the Docker container gets updated, you can stop it with
  `docker stop CONTAINER_NAME`, remove it with `docker rm CONTAINER_NAME`
  and then start at step 2 above to run it again.
* You can run more than one shell in the same container, for instance
  one shell for compiling with gcc and another for compiling
  with clang.
  To add a new shell, run `docker exec -it CONTAINER_NAME /bin/bash`
  (or `docker exec -it CONTAINER_ID /bin/bash`) from
  a terminal outside the container.
* In step 3 above, technically docker allows you to say
  `-v $SPECTRE_ROOT/:/my/new/path` to map `$SPECTRE_ROOT` outside the
  container to any path you want inside the container, but **do not do this**.
  Compiling inside the container sets up git hooks in SPECTRE_ROOT that
  contain hardcoded pathnames to SPECTRE_ROOT *as seen from inside the
  container*. So if your source paths inside and outside the container are
  different, commands like `git commit` run *from outside the container* will
  die with `No such file or directory`.
* If you want to use Docker within VSCode, take a look at our
  [quick start guide](../DevGuide/QuickStartDockerVSCode.md) for using Docker
  with VSCode.

## Using Singularity to obtain a SpECTRE environment

[Singularity](https://sylabs.io) is a container alternative
to Docker with better security and nicer integration.

To build SpECTRE with Singularity you must:

1. Build [Singularity](https://sylabs.io) and add it to your
   `$PATH`.
2. `cd` to the directory where you want to store the SpECTRE Singularity image,
   source, and build directories; let's call it WORKDIR. The WORKDIR must be
   somewhere in your home directory. If this does not work for you, follow the
   Singularity instructions on setting up additional [bind
   points](https://sylabs.io/guides/3.7/user-guide/bind_paths_and_mounts.html)
   (version 3.7; for other versions, see the [docs](https://sylabs.io/docs/)).
   Once inside the WORKDIR, clone SpECTRE into `WORKDIR/SPECTRE_ROOT`.
3. Run `sudo singularity build spectre.img
   docker://sxscollaboration/spectre:dev`.
   You can also use `spectre:ci` instead of `spectre:dev` if you want more
   compilers installed.

   If you get the error message that `makesquashfs` did not have enough space
   to create the image you need to set a different `SINGULARITY_TMPDIR`. This
   can be done by running: `sudo SINGULARITY_TMPDIR=/path/to/new/tmp singularity
   build spectre.img docker://sxscollaboration/spectre:dev`. Normally
   `SINGULARITY_TMPDIR` is `/tmp`, but building the image will temporarily need
   almost 8GB of space.

   You can control where Singularity stores the downloaded image files from
   DockerHub by specifying the `SINGULARITY_CACHEDIR` environment variable. The
   default is `$HOME/.singularity/`. Note that `$HOME` is `/root` when running
   using `sudo`.
4. To start the container run `singularity shell spectre.img` and you
   will be dropped into a bash shell.
5. Proceed with [building SpECTRE](#building-spectre).

**Notes:**
- You should edit source files in SPECTRE_ROOT in a separate terminal
  outside the container, and use the container only for compiling and running
  the code.
- If you don't have the same Python version in your environment outside the
  container as the version inside the container, this will create problems
  with git hooks. The Singularity container uses python3.8 by default. Thus, it
  is up to the user to ensure that they are using the same Python version
  inside and outside the container. To use a different Python version in the
  container add `-D Python_EXECUTABLE=/path/to/python` to the cmake command,
  where `/path/to/python` is usually `/usr/bin/pythonX` and `X` is the version
  you want.
- Unlike Docker, Singularity does not keep the state between runs. However, it
  shares the home directory with the host OS so you should do all your work
  somewhere in your home directory.
- To run more than one container just do `singularity shell spectre.img` in
  another terminal.
- Since the data you modify lives on the host OS there is no need to worry
  about losing any data, needing to clean up old containers, or sharing data
  between containers and the host.

## Using Spack to set up a SpECTRE environment

SpECTRE's dependencies can be installed with
[Spack](https://github.com/spack/spack), a package manager tailored for HPC use.
[Install Spack](https://spack.readthedocs.io/en/latest/getting_started.html) by
cloning it into `SPACK_DIR` (a directory of your choice). Then, enable Spack's
shell support with `source SPACK_DIR/share/spack/setup-env.sh`. Consider adding
this line to your `.bash_profile`, `.bashrc`, or similar. Refer to [Spack's
getting started guide](https://spack.readthedocs.io/en/latest/getting_started.html)
for more information.

Once you have Spack installed, one way to install the SpECTRE dependencies is
with a [Spack environment](https://spack.readthedocs.io/en/latest/environments.html):

\include support/DevEnvironments/spack.yaml

You can also install the Spack packages listed in the environment file above
with a plain `spack install` if you prefer.
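
For example, a sketch of the environment workflow (the environment name is
illustrative):

```sh
# Create an environment from the file above, activate it, and install.
spack env create spectre-deps support/DevEnvironments/spack.yaml
spack env activate spectre-deps
spack concretize --reuse
spack install
```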

**Notes:**
- Spack allows very flexible configurations and we recommend you read the
  [documentation](https://spack.readthedocs.io) if you require features such as
  packages installed with different compilers.
- For security, it is good practice to make Spack [use the system's
  OpenSSL](https://spack.readthedocs.io/en/latest/getting_started.html#openssl)
  rather than allow it to install a new copy.
- To avoid reinstalling lots of system-provided packages with Spack, use the
  `spack external find` feature and the `--reuse` flag to `spack concretize` (or
  `spack install`). You can also install some of the dependencies with your
  system's package manager in advance, e.g., with `apt` or `brew`. If they are
  not picked up by `spack external find` automatically, register them with Spack
  manually. See the [Spack documentation on external
  packages](https://spack.readthedocs.io/en/latest/build_settings.html#external-packages)
  for details.
- Spack works well with a module environment, such as
  [LMod](https://github.com/TACC/Lmod). See the [Spack documentation on
  modules](https://spack.readthedocs.io/en/latest/module_file_support.html) for
  details.

## Building Charm++ {#building-charm}

If you are not using a container, haven't installed Charm++ with Spack, or want
to install Charm++ manually for other reasons, follow the installation
instructions in the [Charm++ repository](https://github.com/UIUC-PPL/charm)
and in their [documentation](https://charm.readthedocs.io/en/latest/quickstart.html#installing-charm).
Here are a few notes:

- Once you have cloned the [Charm++ repository](https://github.com/UIUC-PPL/charm),
  run `git checkout v7.0.0` to switch to a supported, stable release of
  Charm++.
- Apply the appropriate patch (if there is one) for the version from
  `${SPECTRE_ROOT}/support/Charm`. For example, if you have Charm++ v7.0.0
  then the patch will be `v7.0.0.patch`.
- Choose the `LIBS` target to compile. This is needed so that we can support the
  more sophisticated load balancers in SpECTRE executables.
- On a personal machine the correct target architecture is likely
  `multicore-linux-x86_64`, or `multicore-darwin-x86_64` on macOS. On an HPC
  system the correct Charm++ target architecture depends on the machine's
  inter-node communication architecture. It might take some experimenting to
  figure out which Charm++ configuration provides the best performance.
- Compile Charm++ with support for shared libraries by appending the option
  `--build-shared` to the `./build` command or pass `BUILD_SHARED=ON` to the
  CMake configuration (see the [Charm++ installation
  instructions](https://github.com/UIUC-PPL/charm#building-dynamic-libraries)).
- When compiling Charm++ you can specify the compiler using, for example,
  ```
  ./build LIBS ARCH clang
  ```
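
Putting these notes together, a possible build sequence for a Linux multicore
machine looks like the following (a sketch; adjust the target architecture and
number of cores for your system):

```sh
# Clone, check out the supported release, patch, and build the LIBS target.
git clone https://github.com/UIUC-PPL/charm
cd charm
git checkout v7.0.0
git apply ${SPECTRE_ROOT}/support/Charm/v7.0.0.patch
./build LIBS multicore-linux-x86_64 --with-production --build-shared -j4
```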

## Building SpECTRE {#building-spectre}

Once you have set up your development environment you can compile SpECTRE.
Follow these steps:

1. Create a build directory where you would like to compile SpECTRE. In the
   Docker container you could create, e.g., `/work/spectre-build`. It can be
   useful to add a descriptive label to the name of the build directory since
   you may create more later, e.g., `build-clang-Debug`. Then, `cd` into the
   build directory.
2. Determine the location of your Charm++ installation. In the Docker container
   it is `/work/charm_7_0_0/multicore-linux-x86_64-gcc` for GCC builds and
   `/work/charm_7_0_0/mpi-linux-x86_64-smp-clang` for clang builds. For Spack
   installations you can determine it with
   `spack location --install-dir charmpp`. We refer to the install directory as
   `CHARM_ROOT` below.
3. In your new SpECTRE build directory, configure the build with CMake:
   ```
   cmake -D CHARM_ROOT=$CHARM_ROOT SPECTRE_ROOT
   ```
   Add options to the `cmake` command to configure the build, select
   compilers, etc. For instance, to build with clang you may run:
   ```
   cmake -D CMAKE_CXX_COMPILER=clang++ \
     -D CMAKE_C_COMPILER=clang \
     -D CMAKE_Fortran_COMPILER=gfortran \
     -D CHARM_ROOT=$CHARM_ROOT \
     SPECTRE_ROOT
   ```
   See \ref common_cmake_flags for documentation on possible configuration
   options.
4. When the CMake configuration is done, you are ready to build target
   executables.
   - You can see the list of available targets by running `make list` (or
     `ninja list` if you are using the Ninja generator) or by using tab
     completion. Compile targets with `make -jN TARGET` (or `ninja -jN TARGET`),
     where `N` is the number of cores to build on in parallel (e.g. `-j4`).
     Note that the Ninja generator allows you to compile individual source
     files too.
   - Compile the `unit-tests` target and run `ctest -L unit` to run the unit
     tests, as in the example after this list. Compile `test-executables` and
     run `ctest` to run all tests, including executables. To compile
     `test-executables` you may have to reduce the number of cores you build on
     in parallel to avoid running out of memory.
   - To use the command-line interface (CLI), compile the `cli` target (see
     \ref tutorial_cli).
   - To use the Python bindings, compile the `all-pybindings` target (see
     \ref spectre_using_python).
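
For example, to build and run the unit tests on four cores (adjust `-j` to the
number of cores available on your machine):

```sh
# Build the unit tests, then run them with ctest.
make -j4 unit-tests
ctest -L unit
```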

## Code Coverage Analysis

For any coverage analysis you will need to have LCOV installed on the system.
For documentation coverage analysis you will also need to install
[coverxygen](https://github.com/psycofdj/coverxygen) and for test coverage
analysis [gcov](https://gcc.gnu.org/onlinedocs/gcc/Gcov.html).

If you have these installed (which is already done if
you are using the docker container), you can look at code coverage as follows:

1. On a gcc build, pass `-D COVERAGE=ON` to `cmake`
2. `make unit-test-coverage`
3. The output is in `docs/html/unit-test-coverage`.