\cond NEVER
Distributed under the MIT License.
See LICENSE.txt for details.
\endcond
# Installation on Clusters {#installation_on_clusters}

\tableofcontents

The installation instructions are the same for most systems because we use
shell scripts to set up the environment for each supercomputer. We describe the
generic installation instructions once, and note special instructions only
where necessary for a particular system. If you have already built SpECTRE and
just want to load the modules, source the shell file for your system and run
`spectre_load_modules`.
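
For example, reloading the environment for an existing build might look like
the sketch below; `wheeler` is used purely as an illustration, so substitute
the file for your own system.
```
# Reload the environment for an existing build; `wheeler` is just an
# example system name.
. $SPECTRE_HOME/support/Environments/wheeler_gcc.sh
spectre_load_modules
```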

\note Sample submit scripts for some systems are available in
`support/SubmitScripts`.

## General Instructions

1. Run `export SPECTRE_HOME=/path/to/where/you/want/to/clone`
2. Clone SpECTRE using `git clone SPECTRE_URL $SPECTRE_HOME`
3. Run `cd $SPECTRE_HOME && mkdir build && cd build`
4. Run `. $SPECTRE_HOME/support/Environments/SYSTEM_TO_RUN_ON_gcc.sh`, where
   `SYSTEM_TO_RUN_ON` is replaced by the name of the system as described in the
   relevant section below.
5. If you haven't already installed the dependencies, run
   `export SPECTRE_DEPS=/path/to/where/you/want/the/deps`.
   Then run `spectre_setup_modules $SPECTRE_DEPS`. This will take a while to
   finish. Near the end, the command will tell you how to make the modules
   available by providing a `module use` command. Make sure you provide an
   absolute path to `spectre_setup_modules`.
6. Run `module use $SPECTRE_DEPS/modules`
7. Run `spectre_run_cmake`. If you get module loading errors, run
   `spectre_unload_modules` and try running `spectre_run_cmake` again. CMake
   should then configure successfully.
8. Build the targets you are interested in by running, e.g.
   `make -j4 test-executables`
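
Putting the steps above together, a complete from-scratch session might look
like the following sketch. The paths and the system name are placeholders, and
`SPECTRE_URL` stands for the clone URL from step 2.
```
export SPECTRE_HOME=/path/to/where/you/want/to/clone
git clone SPECTRE_URL $SPECTRE_HOME
cd $SPECTRE_HOME && mkdir build && cd build
# `frontera` is just an example system name here.
. $SPECTRE_HOME/support/Environments/frontera_gcc.sh
export SPECTRE_DEPS=/path/to/where/you/want/the/deps
spectre_setup_modules $SPECTRE_DEPS   # skip if deps are already installed
module use $SPECTRE_DEPS/modules
spectre_run_cmake
make -j4 test-executables
```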
      39             : 
## Anvil at Purdue University

You should build and run tests on a compute node. You can get a compute node by
running
```
sinteractive -N1 -n 20 -p debug -t 60:00
```
Avoid running `module purge` because this also removes various default modules
that are necessary for proper operation. Instead, use `module restore`.
Currently the tests can only be run in serial (e.g. `ctest -j1`) because all
the MPI jobs end up being launched on the same core.
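
Concretely, a build-and-test session on Anvil might proceed as in this sketch;
the environment file name `anvil_gcc.sh` is an assumption following the
general `SYSTEM_TO_RUN_ON_gcc.sh` naming pattern.
```
sinteractive -N1 -n 20 -p debug -t 60:00   # get a compute node
module restore                             # instead of `module purge`
# The environment file name below is an assumed instance of the
# `SYSTEM_TO_RUN_ON_gcc.sh` pattern.
. $SPECTRE_HOME/support/Environments/anvil_gcc.sh
spectre_load_modules
cd $SPECTRE_HOME/build
ctest -j1                                  # tests must run in serial
```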

## Frontera at TACC

Follow the general instructions, using `frontera` for `SYSTEM_TO_RUN_ON`.

Processes running on the head nodes have restrictions on memory use
that will prevent linking the main executables. It is better to
compile on an interactive node. Interactive nodes can be requested
with the `idev` command.
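
For example, an interactive session might be requested as below; the queue
name and time limit are assumptions, so check the TACC documentation for
current values.
```
# Request an interactive node with `idev`; the queue name and time
# limit here are assumptions, not values from the SpECTRE docs.
idev -p development -t 02:00:00
```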

For unknown reasons, incremental builds work poorly on Frontera: running
`make` will often unnecessarily recompile SpECTRE libraries.

## Wheeler at Caltech

Follow the general instructions using `wheeler` for `SYSTEM_TO_RUN_ON`, except
you do not need to install any dependencies, so you can skip steps 5 and 6. You
can optionally compile using LLVM/Clang by sourcing `wheeler_clang.sh` instead
of `wheeler_gcc.sh`.
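
A minimal Clang configure, assuming the preinstalled Wheeler dependencies,
might look like:
```
cd $SPECTRE_HOME/build
. $SPECTRE_HOME/support/Environments/wheeler_clang.sh
spectre_run_cmake   # steps 5 and 6 are skipped on Wheeler
```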

If you are running jobs on a Wheeler interactive compute node, make sure that
when you allocate the node with `srun` you use the `-c <CPUS_PER_TASK>`
option, not the `-n <NUMBER_OF_TASKS>` option. If you use `-n
<NUMBER_OF_TASKS>` and pass the number of cores for `NUMBER_OF_TASKS`, you
will get multiple MPI ranks on your node and the run will hang.
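
For example (the core count and time limit below are assumptions; adjust them
for your needs):
```
# Correct: allocate cores per task with `-c`, not tasks with `-n`.
srun -N 1 -c 24 -t 02:00:00 --pty /bin/bash
```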

## CaltechHPC at Caltech

Follow the general instructions, using `caltech_hpc` for `SYSTEM_TO_RUN_ON`,
except you do not need to install any dependencies, so you can skip steps 5 and
6. There are also two different types of compute nodes on CaltechHPC:

1. Skylake Intel nodes
2. Icelake Intel nodes

Each type of compute node has its own environment file, so be sure to choose
the one you want. When you go to build, you will need to get an interactive
node, because login nodes limit the amount of memory accessible to individual
users to below the amount necessary to build SpECTRE.

To ensure you get an entire node to build on, use the following commands.

1. For Skylake Intel nodes
```
srun --partition=expansion -t 02:00:00 -N 1 -c 56 -D . --pty /bin/bash
```
2. For Icelake Intel nodes
```
srun --partition=expansion --constraint=icelake -t 02:00:00 \
    -N 1 -c 64 -D . --pty /bin/bash
```

If you are part of the SXS collaboration, you can add `-A sxs` to bill the SXS
allocation and `--reservation=sxs` to use our reserved nodes. However, our
reserved nodes are Skylake nodes only, so adding the reservation flag won't
work for the Icelake nodes.
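
For example, an SXS member requesting a reserved Skylake node would combine
the flags above:
```
srun --partition=expansion -A sxs --reservation=sxs -t 02:00:00 \
    -N 1 -c 56 -D . --pty /bin/bash
```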

Be sure to re-source the correct environment files once you get the interactive
node shell.
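
For instance (the file name below is an assumption following the general
`SYSTEM_TO_RUN_ON_gcc.sh` pattern; use the file for your node type):
```
# Re-source the environment on the interactive node; the exact file
# depends on the node type (Skylake vs. Icelake).
. $SPECTRE_HOME/support/Environments/caltech_hpc_gcc.sh
spectre_load_modules
```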

## Ocean at Fullerton

Follow the general instructions, using `ocean` for `SYSTEM_TO_RUN_ON`. You do
not need to install any dependencies, so you can skip steps 5 and 6.

## Mbot at Cornell

Follow the general instructions, using `mbot` for `SYSTEM_TO_RUN_ON`. You do
not need to install any dependencies, so you can skip steps 5 and 6.
