SpECTRE v2022.01.03
Installation

This page details the installation procedure for SpECTRE on personal computers. For instructions on installing SpECTRE on clusters please refer to the Installation on Clusters page. Refer to the Versioning and releases page for information on specific versions to install.

Quick-start guide for code development with Docker and Visual Studio Code

If you're new to writing code for SpECTRE and would like to jump right into a working development environment, a good place to start is our Code development quick-start with Docker and Visual Studio Code. If you prefer setting up your development environment differently, read on!

Dependencies

Note: You don't need to install any of these dependencies by hand, or by using yum, apt, or other package managers; it is much easier to instead use Singularity, Docker, or Spack (see the corresponding sections below) to obtain an environment that includes all of these dependencies.

Required:

  • GCC 7.0 or later, Clang 8.0 or later, or AppleClang 11.0.0 or later
  • CMake 3.12.0 or later
  • Charm++ 6.10.2, or 7.0.0 or later (experimental)
  • Git
  • BLAS (e.g. OpenBLAS)
  • Blaze v3.8
  • Boost 1.60.0 or later
  • Brigand
  • Catch 2.8.0 or later, but not 3.x as SpECTRE doesn't support v3 yet (If installing from source, it is easiest to use single-header installation)
  • GSL
  • HDF5 (non-mpi version on macOS)
  • jemalloc
  • LAPACK
  • libsharp, built with OpenMP and MPI support disabled, since all of SpECTRE's parallelism should be handled by Charm++
  • LIBXSMM version 1.16.1 or later
  • yaml-cpp version 0.6.3 is recommended. Building with shared library support is also recommended.
  • Python 2.7, or 3.5 or later
  • NumPy 1.10 or later
  • SciPy
  • matplotlib
  • PyYAML
  • h5py

Optional:

  • Pybind11 2.6.0 or later - for SpECTRE Python bindings
  • Doxygen 1.9.1 or later - to generate documentation
  • Python with BeautifulSoup4 and Pybtex - for documentation post-processing
  • Google Benchmark 1.2 or later - to do microbenchmarking inside the SpECTRE framework
  • LCOV and gcov - to check code test coverage
  • coverxygen - to check documentation coverage
  • PAPI - to access hardware performance counters
  • ClangFormat - to format C++ code in a clear and consistent fashion
  • Clang-Tidy - to "lint" C++ code
  • Cppcheck - to analyze C++ code
  • yapf 0.29.0 - to format Python code
  • Scotch - to build the ScotchLB graph-partition-based load balancer in Charm++
  • ffmpeg - for animating 1d simulations with matplotlib

Clone the SpECTRE repository

First, clone the SpECTRE repository to a directory of your choice; in the following we refer to it as SPECTRE_ROOT. If you git clone from GitHub, SPECTRE_ROOT will be <your_current_directory>/spectre. That is, SPECTRE_ROOT contains docs, src, support, tests, etc. You can also download a source archive and extract it to your desired working directory, taking care not to leave out hidden files when you cp or mv the source files.
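
For example, cloning from GitHub might look like the following sketch (the repository URL is the standard SXS Collaboration location; adjust it if you work from a fork):

```shell
# Clone the SpECTRE repository from GitHub; this creates ./spectre
git clone https://github.com/sxs-collaboration/spectre.git

# SPECTRE_ROOT is then the freshly created directory
cd spectre
export SPECTRE_ROOT=$(pwd)
```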

Using Docker to obtain a SpECTRE environment

A Docker image is available from DockerHub and can be used to build SpECTRE on a personal machine.

Note: The Docker image or the Singularity image (see below) are the recommended ways of using SpECTRE on a personal Linux machine. Because of the wide variety of operating systems available today it is not possible for us to support all configurations. However, using Spack as outlined below is a supported alternative to Docker or Singularity images.

Note: If you have SELinux active on your system you must figure out how to enable sharing files with the host OS. If you receive errors that you do not have permission to access a shared directory it is likely that your system has SELinux enabled. One option is to disable SELinux at the expense of reducing the security of your system.

To build with the Docker image:

  1. Retrieve the Docker image (you may need sudo in front of this command)
    docker pull sxscollaboration/spectrebuildenv:latest
  2. Start the Docker container (you may need sudo)

    docker run -v SPECTRE_ROOT:SPECTRE_ROOT --name CONTAINER_NAME \
    -i -t sxscollaboration/spectrebuildenv:latest /bin/bash
    • -v SPECTRE_ROOT:SPECTRE_ROOT binds the directory SPECTRE_ROOT outside the container to SPECTRE_ROOT inside the container. In this way, files in SPECTRE_ROOT on your host system (outside the container) become accessible within the container through the directory SPECTRE_ROOT inside the container. If you wonder why the same SPECTRE_ROOT must appear on both sides of the colon, i.e., why the same path is used inside and outside the container, please see one of the notes below regarding the -v flag.
    • The --name CONTAINER_NAME is optional, where CONTAINER_NAME is a name of your choice. If you don't name your container, docker will generate an arbitrary name.
    • On macOS you can significantly increase the performance of file system operations by appending the flag :delegated to -v, e.g. -v SPECTRE_ROOT:SPECTRE_ROOT:delegated (see https://docs.docker.com/docker-for-mac/osxfs-caching/).
    • It can be useful to expose a port to the host so you can run servers such as Jupyter for accessing the Python bindings (see Using SpECTRE's Python modules) or a Python web server to view the documentation. To do so, append the -p option, e.g. -p 8000:8000.

    You will end up in a bash shell in the docker container, as root (you need to be root). Within the container, the files in SPECTRE_ROOT are available and Charm++ is installed in /work/charm_6_10_2. For the following steps, stay inside the docker container as root.

  3. Proceed with building SpECTRE.

Notes:

  • Everything in your build directory is owned by root, and is accessible only within the container.
  • You should edit source files in SPECTRE_ROOT in a separate terminal outside the container, and use the container only for compiling and running the code.
  • If you exit the container (e.g. ctrl-d), your compilation directories are still saved, as are any other changes to the container that you have made. To restart the container, try the following commands (you may need sudo):
    1. docker ps -a, to list all containers with their CONTAINER_IDs and CONTAINER_NAMEs,
    2. docker start -i CONTAINER_NAME or docker start -i CONTAINER_ID, to restart your container.
  • When the Docker container gets updated, you can stop it with docker stop CONTAINER_NAME, remove it with docker rm CONTAINER_NAME and then start at step 2 above to run it again.
  • You can run more than one shell in the same container, for instance one shell for compiling with gcc and another for compiling with clang. To add a new shell, run docker exec -it CONTAINER_NAME /bin/bash (or docker exec -it CONTAINER_ID /bin/bash) from a terminal outside the container.
  • In step 2 above, Docker technically allows you to pass -v SPECTRE_ROOT:/my/new/path to map SPECTRE_ROOT outside the container to any path you want inside the container, but do not do this. Compiling inside the container sets up git hooks in SPECTRE_ROOT that contain hardcoded pathnames to SPECTRE_ROOT as seen from inside the container. So if your source paths inside and outside the container differ, commands like git commit run from outside the container will fail with No such file or directory.
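
As a summary, a typical Docker workflow might look like the following sketch (the container name spectre-dev is an example choice; prepend sudo to each command if your setup requires it):

```shell
# Fetch the build environment image
docker pull sxscollaboration/spectrebuildenv:latest

# Create and enter a container, mounting the source tree at the same path
# inside and outside the container
docker run -v $SPECTRE_ROOT:$SPECTRE_ROOT --name spectre-dev \
  -i -t sxscollaboration/spectrebuildenv:latest /bin/bash

# Later: restart and re-enter the stopped container
docker start -i spectre-dev

# Open a second shell in the same running container
docker exec -it spectre-dev /bin/bash
```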

Using Singularity to obtain a SpECTRE environment

Singularity is a container alternative to Docker with better security and nicer integration.

To build SpECTRE with Singularity you must:

  1. Build Singularity and add it to your $PATH
  2. cd to the directory where you want to store the SpECTRE Singularity image, source, and build directories; let's call it WORKDIR. The WORKDIR must be somewhere in your home directory. If this does not work for you, follow the Singularity instructions on setting up additional bind points (these instructions are for version 3.7; for other versions, see the docs). Once inside WORKDIR, clone SpECTRE into WORKDIR/SPECTRE_ROOT.
  3. Run sudo singularity build spectre.img docker://sxscollaboration/spectrebuildenv:latest.

    If you get an error message that mksquashfs did not have enough space to create the image, you need to set a different SINGULARITY_TMPDIR. This can be done by running: sudo SINGULARITY_TMPDIR=/path/to/new/tmp singularity build spectre.img docker://sxscollaboration/spectrebuildenv:latest. Normally SINGULARITY_TMPDIR is /tmp, but building the image temporarily needs almost 8GB of space.

    You can control where Singularity stores the downloaded image files from DockerHub by specifying the SINGULARITY_CACHEDIR environment variable. The default is $HOME/.singularity/. Note that $HOME is /root when running using sudo.

  4. To start the container run singularity shell spectre.img and you will be dropped into a bash shell.
  5. Proceed with building SpECTRE.

Notes:

  • You should edit source files in SPECTRE_ROOT in a separate terminal outside the container, and use the container only for compiling and running the code.
  • If the Python version in your environment outside the container differs from the version inside the container, git hooks will run into problems. The Singularity container uses python3.8 by default, so it is up to you to ensure that you use the same Python version inside and outside the container. To use a different Python version in the container, add -D Python_EXECUTABLE=/path/to/python to the cmake command, where /path/to/python is usually /usr/bin/pythonX and X is the version you want.
  • Unlike Docker, Singularity does not keep the state between runs. However, it shares the home directory with the host OS so you should do all your work somewhere in your home directory.
  • To run more than one container just do singularity shell spectre.img in another terminal.
  • Since the data you modify lives on the host OS there is no need to worry about losing any data, needing to clean up old containers, or sharing data between containers and the host.
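
Putting the steps together, a Singularity session might look like this sketch (WORKDIR stands for the directory you chose above):

```shell
# From WORKDIR, build the image from the DockerHub build environment
sudo singularity build spectre.img docker://sxscollaboration/spectrebuildenv:latest

# Start a shell in the container; your home directory is shared with the host
singularity shell spectre.img

# Inside the container, configure and build SpECTRE in a directory under
# your home directory (see "Building SpECTRE" below)
```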

Using Spack to set up a SpECTRE environment

SpECTRE's dependencies can be installed with Spack, a package manager tailored for HPC use. Install Spack by cloning it into SPACK_DIR (a directory of your choice). Then, enable Spack's shell support with source SPACK_DIR/share/spack/setup-env.sh. Consider adding this line to your .bash_profile, .bashrc, or similar. Refer to Spack's getting started guide for more information.
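
Concretely, the Spack setup described above might look like this (SPACK_DIR is a placeholder for the directory you choose):

```shell
# Clone Spack into a directory of your choice
git clone https://github.com/spack/spack.git $SPACK_DIR

# Enable Spack's shell support; consider adding this line to your
# .bash_profile, .bashrc, or similar
source $SPACK_DIR/share/spack/setup-env.sh
```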

Once you have Spack installed, one way to install the SpECTRE dependencies is with a Spack environment:

# Distributed under the MIT License.
# See LICENSE.txt for details.
# SpECTRE development environment that can be installed with the Spack package
# manager, both on clusters and personal machines.
#
# To install this environment, first clone
# [Spack](https://github.com/spack/spack) and refer to the
# [docs](https://spack.readthedocs.io/) for an introduction to Spack. Then:
#
# $ spack env create YOUR_ENV_NAME support/DevEnvironments/spack.yaml
# $ spack env activate YOUR_ENV_NAME -p
#
# Now you can adjust the environment for your system. You may want to `spack
# remove` and `spack add` some packages, e.g., to customize the Charm++
# installation or to omit packages provided by your system or installed via
# another package manager. To generate the list of packages that will be
# installed, run:
#
# $ spack concretize -f [--reuse]
#
# You may want to run `spack external find` and concretize with `--reuse` to
# avoid reinstalling a bunch of system-provided packages. When you are happy
# with the concretized environment, run:
#
# $ spack install
#
# All dependencies will be installed in the Spack build tree and linked into the
# environment. Now you can run CMake, build SpECTRE, etc. To pass options like
# `CHARM_ROOT` to CMake, if necessary, you can find the location of installed
# packages with `spack location --install-dir`.
#
# Since the `spack` command is quite slow, you can also generate a module file
# that is much faster to source:
#
# $ spack env loads -r
#
# Now you can activate the environment by sourcing the generated module file.
#
# See the [Spack docs on environments](https://spack.readthedocs.io/en/latest/environments.html)
# for more information.
spack:
  specs:
  - blaze@3.8
  - 'boost@1.60:1.72'
  - brigand@master
  - 'catch2@2.8:'
  # Charm++:
  # - The 'multicore' backend runs with shared memory on a single node. On
  #   clusters you should choose one of the multi-node backends instead.
  - charmpp@6.10.2 backend=multicore
  - 'cmake@3.12:'
  - 'doxygen@1.9.2:'
  - git
  - gsl
  - hdf5 -mpi
  - jemalloc
  - libsharp -mpi -openmp
  - 'libxsmm@1.16.1:'
  - openblas
  - 'python@3.7:'
  - py-h5py -mpi
  - py-numpy
  - py-pip
  - py-scipy
  - 'py-pybind11@2.6:'
  - yaml-cpp
  concretization: together
  view: true

You can also install the Spack packages listed in the environment file above with a plain spack install if you prefer.

Notes:

  • Spack allows very flexible configurations and we recommend reading the documentation if you require features such as packages installed with different compilers.
  • For security, it is good practice to make Spack use the system's OpenSSL rather than allow it to install a new copy.
  • To avoid reinstalling lots of system-provided packages with Spack, use the spack external find feature and the --reuse flag to spack concretize (or spack install). You can also install some of the dependencies with your system's package manager in advance, e.g., with apt or brew. If they are not picked up by spack external find automatically, register them with Spack manually. See the Spack documentation on external packages for details.
  • Spack works well with a module environment, such as LMod. See the Spack documentation on modules for details.

Building Charm++

If you are not using a container, haven't installed Charm++ with Spack, or want to install Charm++ manually for other reasons, follow the installation instructions in the Charm++ repository and in their documentation. Here are a few notes:

  • Once you have cloned the Charm++ repository, run git checkout v6.10.2 to switch to a supported, stable release of Charm++.
  • Choose the LIBS target to compile. This is needed so that we can support the more sophisticated load balancers in SpECTRE executables.
  • On a personal machine the correct target architecture is likely multicore-linux-x86_64, or multicore-darwin-x86_64 on macOS. On an HPC system the correct Charm++ target architecture depends on the machine's inter-node communication architecture. It might take some experimenting to figure out which Charm++ configuration provides the best performance.
  • When compiling Charm++ you can specify the compiler using, for example,
    ./build LIBS ARCH clang
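
Putting these notes together, a manual Charm++ build might look like the following sketch (the multicore-linux-x86_64 target and the clang compiler are example choices; pick the architecture and compiler appropriate for your machine):

```shell
# Clone Charm++ and check out the supported release
git clone https://github.com/UIUC-PPL/charm.git
cd charm
git checkout v6.10.2

# Build the LIBS target for a single-node (shared-memory) architecture,
# compiling with clang on 4 parallel jobs
./build LIBS multicore-linux-x86_64 clang -j4
```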

Building SpECTRE

Once you have set up your development environment you can compile SpECTRE. Follow these steps:

  1. Create a build directory where you would like to compile SpECTRE. In the Docker container you could create, e.g., /work/spectre-build. It can be useful to add a descriptive label to the name of the build directory since you may create more later, e.g., build-clang-Debug. Then, cd into the build directory.
  2. Determine the location of your Charm++ installation. In the Docker container it is /work/charm_6_10_2/multicore-linux-x86_64-gcc for GCC builds and /work/charm_6_10_2/multicore-linux-x86_64-clang for clang builds. For Spack installations you can determine it with spack location --install-dir charmpp. We refer to the install directory as CHARM_ROOT below.
  3. In your new SpECTRE build directory, configure the build with CMake:
    cmake -D CHARM_ROOT=$CHARM_ROOT SPECTRE_ROOT
    Add options to the cmake command to configure the build, select compilers, etc. For instance, to build with clang you may run:
    cmake -D CMAKE_CXX_COMPILER=clang++ \
    -D CMAKE_C_COMPILER=clang \
    -D CMAKE_Fortran_COMPILER=gfortran \
    -D CHARM_ROOT=$CHARM_ROOT \
    SPECTRE_ROOT
    See Commonly Used CMake flags for documentation on possible configuration options.
  4. When cmake configuration is done, you are ready to build target executables.
    • You can see the list of available targets by running make list (or ninja list if you are using the Ninja generator) or by using tab completion. Compile targets with make -jN TARGET (or ninja -jN TARGET), where N is the number of cores to build on in parallel (e.g. -j4). Note that the Ninja generator allows you to compile individual source files too.
    • Compile the unit-tests target and run ctest -L unit to run unit tests. Compile test-executables and run ctest to run all tests, including executables. To compile test-executables you may have to reduce the number of cores you build on in parallel to avoid running out of memory.
    • To compile the Python bindings, add the option -D BUILD_PYTHON_BINDINGS=ON to the cmake command and compile the all-pybindings target (see Using SpECTRE's Python modules).
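
The steps above can be sketched as follows (the paths /work/spectre-build-gcc-Debug and the gcc Charm++ location are examples taken from the Docker container; substitute your own):

```shell
# 1. Create and enter a build directory with a descriptive name
mkdir -p /work/spectre-build-gcc-Debug
cd /work/spectre-build-gcc-Debug

# 2. Point CMake at the Charm++ installation
export CHARM_ROOT=/work/charm_6_10_2/multicore-linux-x86_64-gcc

# 3. Configure the build
cmake -D CHARM_ROOT=$CHARM_ROOT $SPECTRE_ROOT

# 4. Build and run the unit tests on 4 cores
make -j4 unit-tests
ctest -L unit
```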

Code Coverage Analysis

For any coverage analysis you will need to have LCOV installed on the system. For documentation coverage analysis you will also need to install coverxygen and for test coverage analysis gcov.

If you have these installed (which is already done if you are using the docker container), you can look at code coverage as follows:

  1. On a gcc build, pass -D COVERAGE=ON to cmake
  2. make unit-test-coverage
  3. The output is in docs/html/unit-test-coverage.
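
In the same command style as the previous sections, the coverage workflow might look like this sketch (assuming a gcc build directory configured as described above):

```shell
# Configure a gcc build with coverage instrumentation enabled
cmake -D COVERAGE=ON -D CHARM_ROOT=$CHARM_ROOT $SPECTRE_ROOT

# Build the coverage report target
make unit-test-coverage

# The HTML report is written to docs/html/unit-test-coverage
```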