spectre¶
SpECTRE version: 2024.09.29
spectre [OPTIONS] COMMAND [ARGS]...
Options
- --version¶
Show the version and exit.
- --machine¶
Show the machine we’re running on and exit.
- --debug¶
Enable debug logging.
- --silent¶
Disable all logging.
- -b, --build-dir <build_dir>¶
Prepend a build directory to the PATH so subprocesses can find executables in it. Without this option, executables are found in the current PATH, and fall back to the build directory in which this Python script is installed.
- --profile¶
Enable profiling. Expect slower execution due to profiling overhead. A summary of the results is printed to the terminal. Use the ‘–output-profile’ option to write the results to a file.
- --output-profile <output_profile>¶
Write profiling results to a file. The file can be opened by profiling visualization tools such as ‘pstats’ or ‘gprof2dot’. See the Python ‘cProfile’ docs for details.
- -c, --config-file <config_file>¶
Configuration file in YAML format. Can provide defaults for command-line options and additional configuration. To specify options for subcommands, list them in a section with the same name as the subcommand. All options that are listed in the help string for a subcommand are supported. Unless otherwise specified in the help string, use the name of the option with dashes replaced by underscores. Example:
status:
  starttime: now-2days
  state_styles:
    RUNNING: blink
plot:
  dat:
    stylesheet: path/to/stylesheet.mplstyle
The path of the config file can also be specified by setting the ‘SPECTRE_CONFIG_FILE’ environment variable.
- Default:
'~/.config/spectre.yaml'
Environment variables
- SPECTRE_CONFIG_FILE
Provide a default for
-c
bbh¶
Pipeline for binary black hole simulations.
spectre bbh [OPTIONS] COMMAND [ARGS]...
find-horizon¶
Find an apparent horizon in volume data.
spectre bbh find-horizon [OPTIONS] H5_FILES...
Options
- -d, --subfile-name <subfile_name>¶
Name of subfile within H5 file containing volume data to plot. Optional if the H5 files have only one subfile.
- -l, --list-vars¶
Print available variables and exit.
- -y, --var <vars_patterns>¶
Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. Can be specified multiple times.
- --list-observations, --list-times¶
Print all available observation times and exit.
- --step <step>¶
Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.
- --time <time>¶
Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.
- -l, --l-max <l_max>¶
Required Max l-mode for the horizon search.
- -r, --initial-radius <initial_radius>¶
Required Initial coordinate radius of the horizon.
- -C, --center <center>¶
Required Coordinate center of the horizon.
- --output-surfaces-file <output_surfaces_file>¶
H5 output file where the horizon Ylm coefficients will be written. Can be a new or existing file.
- --output-coeffs-subfile <output_coeffs_subfile>¶
Name of the subfile in the ‘output_surfaces_file’ where the horizon Ylm coefficients will be written. These can be used to reconstruct the horizon, e.g. to initialize excisions in domains.
- --output-coords-subfile <output_coords_subfile>¶
Name of the subfile in the ‘output_surfaces_file’ where the horizon coordinates will be written. These can be used for visualization.
- --output-reductions-file <output_reductions_file>¶
H5 output file where the reduction quantities on the horizon will be written, e.g. masses and spins. Can be a new or existing file.
- --output-quantities-subfile <output_quantities_subfile>¶
Name of the subfile in the ‘output_reductions_file’ where the horizon quantities will be written, e.g. masses and spins.
Arguments
- H5_FILES¶
Required argument(s)
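As an illustrative sketch (the file names, subfile name, and parameter values below are placeholders, not recommendations), a horizon search at the last observation time might look like:
spectre bbh find-horizon VolumeData*.h5 -d VolumeData --step last --l-max 12 --initial-radius 1.0 --center 0,0,0 --output-surfaces-file Horizons.h5 --output-coeffs-subfile Horizon_Coefficients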
generate-id¶
Generate initial data for a BBH simulation.
Parameters for the initial data will be inserted into the ‘id_input_file_template’. The remaining options are forwarded to the ‘schedule’ command. See ‘schedule’ docs for details.
The orbital parameters can be computed with the function ‘initial_orbital_parameters’ in ‘support.Pipelines.EccentricityControl.InitialOrbitalParameters’.
- Intrinsic parameters:
- mass_a: Mass of the larger black hole.
- mass_b: Mass of the smaller black hole.
- dimensionless_spin_a: Dimensionless spin of the larger black hole, chi_A.
- dimensionless_spin_b: Dimensionless spin of the smaller black hole, chi_B.
- Orbital parameters:
- separation: Coordinate separation D of the black holes.
- orbital_angular_velocity: Omega_0.
- radial_expansion_velocity: adot_0.
- Control parameters:
- center_of_mass_offset: Offset from the Newtonian center of mass. (default: [0., 0., 0.])
- linear_velocity: Velocity added to the shift boundary condition. (default: [0., 0., 0.])
- Scheduling options:
- id_input_file_template: Input file template where parameters are inserted.
- control: If set to True, a postprocessing control loop will adjust the input parameters to drive the horizon masses and spins to the specified values. If set to False, the horizon masses and spins in the generated data will differ from the input parameters. (default: False)
- evolve: Set to True to evolve the initial data after generation.
- pipeline_dir: Directory where steps in the pipeline are created. Required when ‘evolve’ is set to True. The initial data will be created in a subdirectory ‘001_InitialData’.
- run_dir: Directory where the initial data is generated. Mutually exclusive with ‘pipeline_dir’.
- out_file_name: Optional. Name of the log file. (Default: “spectre.out”)
spectre bbh generate-id [OPTIONS]
Options
- -q, --mass-ratio <mass_ratio>¶
Required Mass ratio of the binary, defined as q = M_A / M_B >= 1.
- --dimensionless-spin-A, --chi-A <dimensionless_spin_a>¶
Required Dimensionless spin of the larger black hole, chi_A.
- --dimensionless-spin-B, --chi-B <dimensionless_spin_b>¶
Required Dimensionless spin of the smaller black hole, chi_B.
- -D, --separation <separation>¶
Coordinate separation D of the black holes.
- -w, --orbital-angular-velocity <orbital_angular_velocity>¶
Orbital angular velocity Omega_0.
- -a, --radial-expansion-velocity <radial_expansion_velocity>¶
Radial expansion velocity adot_0, which is the radial velocity divided by the radius.
- -e, --eccentricity <eccentricity>¶
Eccentricity of the orbit. Specify together with _one_ of the other orbital parameters. Currently only an eccentricity of 0 is supported (circular orbit).
- -l, --mean-anomaly-fraction <mean_anomaly_fraction>¶
Mean anomaly of the orbit divided by 2 pi, so it is a number between 0 and 1. The value 0 corresponds to the pericenter of the orbit (closest approach), and the value 0.5 corresponds to the apocenter of the orbit (farthest distance).
- --num-orbits <num_orbits>¶
Number of orbits until merger. Specify together with a zero eccentricity to compute initial orbital parameters for a circular orbit.
- --time-to-merger <time_to_merger>¶
Time to merger. Specify together with a zero eccentricity to compute initial orbital parameters for a circular orbit.
- -L, --refinement-level <refinement_level>¶
h-refinement level.
- Default:
1
- -P, --polynomial-order <polynomial_order>¶
p-refinement level.
- Default:
6
- --id-input-file-template <id_input_file_template>¶
Input file template for the initial data.
- Default:
PosixPath('/__w/spectre/spectre/build/bin/python/spectre/Pipelines/Bbh/InitialData.yaml')
- --control, --no-control¶
Control BBH physical parameters.
- Default:
True
- --evolve¶
Evolve the initial data after generation. When this flag is specified, you must also specify a pipeline directory (-d), instead of a run directory (-o).
- -d, --pipeline-dir <pipeline_dir>¶
Directory where steps in the pipeline are created.
- -E, --executable <executable>¶
The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.
- Default:
'executable listed in input file'
- -o, --run-dir <run_dir>¶
The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with ‘–segments-dir’ / ‘-O’.
- -O, --segments-dir <segments_dir>¶
The directory in which to create the next segment. Requires ‘–from-checkpoint’ or ‘–from-last-checkpoint’ unless starting the first segment.
- --copy-executable, --no-copy-executable¶
Copy the executable to the run or segments directory. (1) When no flag is specified: If ‘–run-dir’ / ‘-o’ is set, don’t copy. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. (2) When ‘–copy-executable’ is specified: If ‘–run-dir’ / ‘-o’ is set, copy to the run directory. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. Still don’t copy to individual segments. (3) When ‘–no-copy-executable’ is specified: Never copy.
- -C, --clean-output¶
Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.
- -f, --force¶
Overwrite existing files in the ‘–run-dir’ / ‘-o’. You may also want to use ‘–clean-output’.
- --scheduler <scheduler>¶
The scheduler invoked to queue jobs on the machine.
- Default:
'none'
- --no-schedule¶
Run the executable directly, without scheduling it.
- --submit-script-template <submit_script_template>¶
Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).
- Default:
'/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh'
- -J, --job-name <job_name>¶
A short name for the job (see main help text for possible placeholders).
- Default:
'executable name'
- -j, -c, --num-procs <num_procs>¶
Number of worker threads. Mutually exclusive with ‘–num-nodes’ / ‘-N’.
- -N, --num-nodes <num_nodes>¶
Number of nodes
- --queue <queue>¶
Name of the queue.
- -t, --time-limit <time_limit>¶
Wall time limit. Must be compatible with the chosen queue.
- -p, --param <extra_params>¶
Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a ‘key=value’ pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like ‘0…3’, an exclusive range like ‘0..3’ or ‘0..<3’, or an exponentiated value or range like ‘2**3’ or ‘10**4…6’. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the ‘job_name’ and in the ‘run_dir’ or ‘segment_dir’, and when scheduling ranges of runs you probably should.
- --submit, --no-submit¶
Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.
- --context-file-name <context_file_name>¶
Name of the context file that supports resubmissions.
- Default:
'SchedulerContext.yaml'
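As a rough sketch of a typical invocation (the pipeline directory and the comma-separated spin format are assumptions for this example):
spectre bbh generate-id -q 1.5 --chi-A 0,0,0 --chi-B 0,0,0 --eccentricity 0 --num-orbits 20 --evolve -d ./Pipeline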
postprocess-id¶
Postprocess initial data after generation.
This function is called automatically after the initial data has been generated (see the ‘Next’ section in the ‘InitialData.yaml’ input file), or manually by pointing the ID_INPUT_FILE_PATH to the input file of the initial data run. Also specify ‘id_run_dir’ if the initial data was run in a different directory than where the input file is.
This function does the following:
Find apparent horizons in the data to determine quantities like the masses and spins of the black holes. These quantities are stored in the given ‘horizons_file’ in subfiles ‘Ah{A,B}.dat’. In addition, the horizon surface coordinates and coefficients are written to the ‘horizons_file’ in subfiles ‘Ah{A,B}/Coordinates’ and ‘Ah{A,B}/Coefficients’.
If ‘control’ is set to True, run a control loop such that masses and spins of the horizons match the input parameters. See ControlId.py for details.
Start the inspiral if ‘evolve’ is set to True.
- Arguments:
- id_input_file_path: Path to the input file of the initial data run.
- id_run_dir: Directory of the initial data run. Paths in the input file are relative to this directory. If not provided, the directory of the input file is used.
- horizon_l_max: Maximum l-mode for the horizon search.
- horizons_file: Path to the file where the horizon data is written to. Default is ‘Horizons.h5’ in the ‘id_run_dir’.
- control: Control BBH physical parameters (default: True).
- control_residual_tolerance: Residual tolerance used for control.
- control_max_iterations: Maximum number of iterations allowed for control.
- control_refinement_level: h-refinement level used for control.
- control_polynomial_order: p-refinement used for control.
- control_params: Dictionary used to customize control. See ControlId.py for details.
- evolve: Evolve the initial data after postprocessing (default: False).
- pipeline_dir: Directory where steps in the pipeline are created. Required if ‘evolve’ is set to True.
spectre bbh postprocess-id [OPTIONS] ID_INPUT_FILE_PATH
Options
- -i, --id-run-dir <id_run_dir>¶
Directory of the initial data run. Paths in the input file are relative to this directory.
- Default:
'directory of the ID_INPUT_FILE_PATH'
- --control <control>¶
Control BBH physical parameters during postprocessing.
- Default:
True
- --evolve¶
Evolve the initial data after postprocessing.
- -d, --pipeline-dir <pipeline_dir>¶
Directory where steps in the pipeline are created.
- --horizon-l-max <horizon_l_max>¶
Maximum l-mode for the horizon search.
- Default:
16
- --horizons-file <horizons_file>¶
Path to the file where the horizon data is written to.
- Default:
'Horizons.h5 in the ID_RUN_DIR'
- -E, --executable <executable>¶
The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.
- Default:
'executable listed in input file'
- -o, --run-dir <run_dir>¶
The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with ‘–segments-dir’ / ‘-O’.
- -O, --segments-dir <segments_dir>¶
The directory in which to create the next segment. Requires ‘–from-checkpoint’ or ‘–from-last-checkpoint’ unless starting the first segment.
- --copy-executable, --no-copy-executable¶
Copy the executable to the run or segments directory. (1) When no flag is specified: If ‘–run-dir’ / ‘-o’ is set, don’t copy. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. (2) When ‘–copy-executable’ is specified: If ‘–run-dir’ / ‘-o’ is set, copy to the run directory. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. Still don’t copy to individual segments. (3) When ‘–no-copy-executable’ is specified: Never copy.
- -C, --clean-output¶
Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.
- -f, --force¶
Overwrite existing files in the ‘–run-dir’ / ‘-o’. You may also want to use ‘–clean-output’.
- --scheduler <scheduler>¶
The scheduler invoked to queue jobs on the machine.
- Default:
'none'
- --no-schedule¶
Run the executable directly, without scheduling it.
- --submit-script-template <submit_script_template>¶
Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).
- Default:
'/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh'
- -J, --job-name <job_name>¶
A short name for the job (see main help text for possible placeholders).
- Default:
'executable name'
- -j, -c, --num-procs <num_procs>¶
Number of worker threads. Mutually exclusive with ‘–num-nodes’ / ‘-N’.
- -N, --num-nodes <num_nodes>¶
Number of nodes
- --queue <queue>¶
Name of the queue.
- -t, --time-limit <time_limit>¶
Wall time limit. Must be compatible with the chosen queue.
- -p, --param <extra_params>¶
Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a ‘key=value’ pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like ‘0…3’, an exclusive range like ‘0..3’ or ‘0..<3’, or an exponentiated value or range like ‘2**3’ or ‘10**4…6’. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the ‘job_name’ and in the ‘run_dir’ or ‘segment_dir’, and when scheduling ranges of runs you probably should.
- --submit, --no-submit¶
Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.
- --context-file-name <context_file_name>¶
Name of the context file that supports resubmissions.
- Default:
'SchedulerContext.yaml'
Arguments
- ID_INPUT_FILE_PATH¶
Required argument
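A minimal sketch of a manual invocation, assuming the initial data was generated in ‘./InitialData’ from an input file ‘InitialData.yaml’ (both paths are placeholders):
spectre bbh postprocess-id ./InitialData/InitialData.yaml -i ./InitialData --evolve -d ./Pipeline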
start-inspiral¶
Schedule an inspiral simulation from initial data.
Point the ID_INPUT_FILE_PATH to the input file of your initial data run, or to an ‘ID_Params.perl’ file from SpEC. Also specify ‘id_run_dir’ if the initial data was run in a different directory than where the input file is. Parameters for the inspiral will be determined from the initial data and inserted into the ‘inspiral_input_file_template’. The remaining options are forwarded to the ‘schedule’ command. See ‘schedule’ docs for details.
## Resource allocation
Runs on 4 nodes by default when scheduled on a cluster. Set ‘num_nodes’ to adjust.
spectre bbh start-inspiral [OPTIONS] ID_INPUT_FILE_PATH
Options
- -i, --id-run-dir <id_run_dir>¶
Directory of the initial data run. Paths in the input file are relative to this directory.
- Default:
'directory of the ID_INPUT_FILE_PATH'
- --inspiral-input-file-template <inspiral_input_file_template>¶
Input file template for the inspiral.
- Default:
PosixPath('/__w/spectre/spectre/build/bin/python/spectre/Pipelines/Bbh/Inspiral.yaml')
- --id-horizons-path <id_horizons_path>¶
H5 file that holds information of the horizons of the ID solve. If this file does not exist in your ID directory, run ‘spectre bbh postprocess-id’ in the ID directory to generate it. Note that this is not needed if you are starting from a SpEC ID_Params.perl file.
- Default:
"Horizons.h5 inside 'id-run-dir'"
- -L, --refinement-level <refinement_level>¶
h-refinement level.
- Default:
1
- -P, --polynomial-order <polynomial_order>¶
p-refinement level.
- Default:
8
- --continue-with-ringdown¶
Continue with the ringdown simulation once a common horizon has formed.
- --eccentricity-control¶
Run the eccentricity reduction script that finds the current eccentricity and better guesses for the input orbital parameters.
- -d, --pipeline-dir <pipeline_dir>¶
Directory where steps in the pipeline are created.
- -E, --executable <executable>¶
The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.
- Default:
'executable listed in input file'
- -o, --run-dir <run_dir>¶
The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with ‘–segments-dir’ / ‘-O’.
- -O, --segments-dir <segments_dir>¶
The directory in which to create the next segment. Requires ‘–from-checkpoint’ or ‘–from-last-checkpoint’ unless starting the first segment.
- --copy-executable, --no-copy-executable¶
Copy the executable to the run or segments directory. (1) When no flag is specified: If ‘–run-dir’ / ‘-o’ is set, don’t copy. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. (2) When ‘–copy-executable’ is specified: If ‘–run-dir’ / ‘-o’ is set, copy to the run directory. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. Still don’t copy to individual segments. (3) When ‘–no-copy-executable’ is specified: Never copy.
- -C, --clean-output¶
Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.
- -f, --force¶
Overwrite existing files in the ‘–run-dir’ / ‘-o’. You may also want to use ‘–clean-output’.
- --scheduler <scheduler>¶
The scheduler invoked to queue jobs on the machine.
- Default:
'none'
- --no-schedule¶
Run the executable directly, without scheduling it.
- --submit-script-template <submit_script_template>¶
Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).
- Default:
'/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh'
- -J, --job-name <job_name>¶
A short name for the job (see main help text for possible placeholders).
- Default:
'executable name'
- -j, -c, --num-procs <num_procs>¶
Number of worker threads. Mutually exclusive with ‘–num-nodes’ / ‘-N’.
- -N, --num-nodes <num_nodes>¶
Number of nodes
- --queue <queue>¶
Name of the queue.
- -t, --time-limit <time_limit>¶
Wall time limit. Must be compatible with the chosen queue.
- -p, --param <extra_params>¶
Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a ‘key=value’ pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like ‘0…3’, an exclusive range like ‘0..3’ or ‘0..<3’, or an exponentiated value or range like ‘2**3’ or ‘10**4…6’. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the ‘job_name’ and in the ‘run_dir’ or ‘segment_dir’, and when scheduling ranges of runs you probably should.
- --submit, --no-submit¶
Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.
- --context-file-name <context_file_name>¶
Name of the context file that supports resubmissions.
- Default:
'SchedulerContext.yaml'
Arguments
- ID_INPUT_FILE_PATH¶
Required argument
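For example, to schedule an inspiral from an existing initial data run and automatically continue with the ringdown (the paths are placeholders):
spectre bbh start-inspiral ./InitialData/InitialData.yaml -i ./InitialData --continue-with-ringdown -d ./Pipeline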
start-ringdown¶
Schedule a ringdown simulation from the inspiral.
Point the inspiral_run_dir to the last inspiral segment. Also specify ‘inspiral_input_file’ if the simulation was run in a different directory than where the input file is. Parameters for the ringdown will be determined from the inspiral and inserted into the ‘ringdown_input_file_template’. The remaining options are forwarded to the ‘schedule’ command. See ‘schedule’ docs for details.
Here ‘parameters for the ringdown’ includes the information needed to initialize the time-dependent maps, including the shape map. Common horizon shape coefficients in the ringdown distorted frame will be written to disk, and the ringdown input file will point to them.
- Arguments:
- inspiral_run_dir: Path to the last segment in the inspiral run directory.
- number_of_ahc_finds_for_fit: The number of AhC finds that will be used in the fit.
- match_time: The time to match the time-dependent maps at.
- settling_timescale: The settling timescale for the rotation and expansion maps.
- zero_coefs_eps: If the sum of a given coefficient over all ‘number_of_ahc_finds_for_fit’ is less than ‘zero_coefs_eps’, set that coefficient to 0.0 exactly.
- refinement_level: The initial h-refinement level for the ringdown.
- polynomial_order: The initial p-refinement level for the ringdown.
- inspiral_input_file: The input file used during the inspiral, defaults to the Inspiral.yaml inside the inspiral_run_dir.
- ahc_reductions_path: The full path to the BbhReductions file that contains AhC data, defaults to BbhReductions.h5 in the inspiral_run_dir.
- ahc_subfile: Subfile containing reduction data at times of AhC finds, defaults to ‘ObservationAhC_Ylm’.
- fot_vol_h5_path: The full path to any volume data containing the functions of time at the times of AhC finds, defaults to BbhVolume0.h5 in the inspiral_run_dir.
- fot_vol_subfile: Subfile containing volume data at the times of AhC finds, defaults to ‘ForContinuation’.
- path_to_output_h5: H5 file to output horizon coefficients needed for the ringdown.
- output_subfile_prefix: Subfile prefix for output data, defaults to ‘Distorted’.
- ringdown_input_file_template: YAML file to insert ringdown coefficients into.
spectre bbh start-ringdown [OPTIONS] INSPIRAL_RUN_DIR
Options
- -i, --inspiral-input-file <inspiral_input_file>¶
Path to Inspiral yaml, defaults to Inspiral.yaml in directory given.
- --ahc-reductions-path <ahc_reductions_path>¶
Path to reduction file containing AhC coefs, defaults to ‘BbhReductions.h5’ in directory given.
- --ahc-subfile <ahc_subfile>¶
Subfile path name in reduction data containing AhC coefs, defaults to ‘ObservationAhC_Ylm’
- --fot-vol-h5-path <fot_vol_h5_path>¶
Path to volume data file containing functions of time, defaults to ‘BbhVolume0.h5’ in directory given.
- --fot-vol-subfile <fot_vol_subfile>¶
Subfile in volume data with functions of time at different times, defaults to ‘ForContinuation’.
- --path-to-output-h5 <path_to_output_h5>¶
Output h5 file for shape coefs, defaults to the directory where the command was run.
- --output-subfile-prefix <output_subfile_prefix>¶
Output subfile prefix for AhC coefs, defaults to ‘Distorted’
- --number-of-ahc-finds-for-fit <number_of_ahc_finds_for_fit>¶
Required Number of AhC finds that will be used for the fit.
- --match-time <match_time>¶
Required Desired match time (volume data must contain data at this time)
- --settling-timescale <settling_timescale>¶
Required Damping timescale for settle to const
- --zero-coefs-eps <zero_coefs_eps>¶
Sets a shape coefficient to exactly 0.0 if its sum over all ‘–number-of-ahc-finds-for-fit’ finds is within zero-coefs-eps of 0.0.
- -L, --refinement-level <refinement_level>¶
h-refinement level.
- Default:
2
- -P, --polynomial-order <polynomial_order>¶
p-refinement level.
- Default:
11
- --ringdown-input-file-template <ringdown_input_file_template>¶
Input file template for the ringdown.
- Default:
PosixPath('/__w/spectre/spectre/build/bin/python/spectre/Pipelines/Bbh/Ringdown.yaml')
- -d, --pipeline-dir <pipeline_dir>¶
Directory where steps in the pipeline are created.
- -E, --executable <executable>¶
The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.
- Default:
'executable listed in input file'
- -o, --run-dir <run_dir>¶
The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with ‘–segments-dir’ / ‘-O’.
- -O, --segments-dir <segments_dir>¶
The directory in which to create the next segment. Requires ‘–from-checkpoint’ or ‘–from-last-checkpoint’ unless starting the first segment.
- --copy-executable, --no-copy-executable¶
Copy the executable to the run or segments directory. (1) When no flag is specified: If ‘–run-dir’ / ‘-o’ is set, don’t copy. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. (2) When ‘–copy-executable’ is specified: If ‘–run-dir’ / ‘-o’ is set, copy to the run directory. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. Still don’t copy to individual segments. (3) When ‘–no-copy-executable’ is specified: Never copy.
- -C, --clean-output¶
Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.
- -f, --force¶
Overwrite existing files in the ‘–run-dir’ / ‘-o’. You may also want to use ‘–clean-output’.
- --scheduler <scheduler>¶
The scheduler invoked to queue jobs on the machine.
- Default:
'none'
- --no-schedule¶
Run the executable directly, without scheduling it.
- --submit-script-template <submit_script_template>¶
Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).
- Default:
'/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh'
- -J, --job-name <job_name>¶
A short name for the job (see main help text for possible placeholders).
- Default:
'executable name'
- -j, -c, --num-procs <num_procs>¶
Number of worker threads. Mutually exclusive with ‘–num-nodes’ / ‘-N’.
- -N, --num-nodes <num_nodes>¶
Number of nodes
- --queue <queue>¶
Name of the queue.
- -t, --time-limit <time_limit>¶
Wall time limit. Must be compatible with the chosen queue.
- -p, --param <extra_params>¶
Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a ‘key=value’ pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like ‘0…3’, an exclusive range like ‘0..3’ or ‘0..<3’, or an exponentiated value or range like ‘2**3’ or ‘10**4…6’. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the ‘job_name’ and in the ‘run_dir’ or ‘segment_dir’, and when scheduling ranges of runs you probably should.
- --submit, --no-submit¶
Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.
- --context-file-name <context_file_name>¶
Name of the context file that supports resubmissions.
- Default:
'SchedulerContext.yaml'
Arguments
- INSPIRAL_RUN_DIR¶
Required argument
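A sketch of a manual invocation; the segment path and the numerical values are placeholders rather than recommendations:
spectre bbh start-ringdown ./Inspiral/Segment_0042 --number-of-ahc-finds-for-fit 10 --match-time 5000 --settling-timescale 10 -d ./Pipeline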
clean-output¶
Deletes output files specified in the input_file from the output_dir, raising an error if the expected output files were not found.
The input_file must list its expected output files in the metadata. They may contain glob patterns:
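For illustration only, such metadata might look like the following sketch (the ‘ExpectedOutput’ key and the file names are assumed for this example):
ExpectedOutput:
  - Reduction.h5
  - Volume*.h5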
spectre clean-output [OPTIONS] INPUT_FILE
Options
- -o, --output-dir <output_dir>¶
Required Output directory of the run to clean up
- -f, --force¶
Suppress all errors
Arguments
- INPUT_FILE¶
Required argument
combine-h5¶
Combines multiple HDF5 files
spectre combine-h5 [OPTIONS] COMMAND [ARGS]...
combine-h5-dat¶
Combines multiple HDF5 dat files
This executable is used for combining a series of HDF5 files, each containing one or more dat files, into a single HDF5 file. A typical use case is to join dat-containing HDF5 files from different segments of a simulation, with each segment containing values of the dat files during different time intervals.
- Arguments:
- h5files: List of H5 dat files to join.
- output: Output filename. An extension ‘.h5’ will be added if not present.
- force: If specified, overwrite output file if it already exists.
spectre combine-h5 combine-h5-dat [OPTIONS] H5FILES...
Options
- -o, --output <output>¶
Required Combined output filename.
- -f, --force¶
If the output file already exists, overwrite it.
Arguments
- H5FILES¶
Required argument(s)
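For example, to join the reduction files from several segments into one file (the file names are placeholders):
spectre combine-h5 combine-h5-dat Segment_*/Reductions.h5 -o CombinedReductions.h5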
vol¶
Combines volume data spread over multiple H5 files into a single file
The typical use case is to combine volume data from multiple nodes into a single file, if this is necessary for further processing (e.g. for the ‘extend-connectivity’ command). Note that for most use cases it is not necessary to combine the volume data into a single file, as most commands can operate on multiple input H5 files (e.g. ‘generate-xdmf’).
Note that this command does not currently combine volume data from different time steps (e.g. from multiple segments of a simulation). All input H5 files must contain the same set of observation IDs.
spectre combine-h5 vol [OPTIONS] H5FILES...
Options
- -d, --subfile-name <subfile_name>¶
subfile name of the volume file in the H5 file
- -o, --output <output>¶
Required combined output filename
- --start-time <start_time>¶
The earliest time at which to start visualizing. The start-time value is included.
- --stop-time <stop_time>¶
The time at which to stop visualizing. The stop-time value is not included.
- -b, --block <block_or_group_names>¶
Name of block or block group to analyze. Can be specified multiple times to plot several block(groups) at once.
- --check-src, --no-check-src¶
Flag to check src files. True implies src files exist and can be checked; False implies there are no src files to check.
- Default:
True
Arguments
- H5FILES¶
Required argument(s)
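For example, to merge per-node volume files into a single file (the file and subfile names are placeholders):
spectre combine-h5 vol VolumeData*.h5 -d VolumeData -o CombinedVolumeData.h5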
delete-subfiles¶
Delete subfiles from the ‘H5FILES’
spectre delete-subfiles [OPTIONS] H5FILES...
Options
- -d, --subfile <subfiles>¶
Required Subfile to delete. Can be specified multiple times to delete many subfiles at once.
- --repack, --no-repack¶
Repack the H5 files after deleting subfiles to reduce file size. Otherwise, the subfiles are deleted but the file size remains unchanged.
Arguments
- H5FILES¶
Required argument(s)
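For example, to delete a subfile from several files and reclaim the disk space (the subfile name is a placeholder):
spectre delete-subfiles VolumeData*.h5 -d VolumeData.vol --repack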
eccentricity-control¶
Compute updates based on fits to the coordinate separation for manual eccentricity control
Usage:
Select an appropriate time window without large initial transients and about 2 to 3 orbits of data.
This script uses the coordinate separations between Objects A and B to compute a finite difference approximation to the time derivative. It then fits different models to it.
For each model, the suggested updates dOmega and dadot based on Newtonian estimates are printed. Note that when all models fit the data adequately, their updates are similar. When they differ, examine the output plot to find a model that is good fit and has small residuals (especially at early times).
Finally, replace the angular velocity and expansion parameters (respectively) in the Xcts input file with the updated values, or use the suggested updates to compute them (if the initial Xcts parameters were not provided).
See ArXiv:gr-qc/0702106 and ArXiv:0710.0158 for more details.
Limitations:
These eccentricity updates work only for non-precessing binaries.
The time window is manually specified by the user.
The coordinate separation is used, instead of the proper distance.
See OmegaDoEccRemoval.py in SpEC for improved eccentricity control.
spectre eccentricity-control [OPTIONS] H5_FILE
Options
- -A, --subfile-name-aha <subfile_name_aha>¶
Name of subfile containing the apparent horizon centers for object A.
- Default:
'ApparentHorizons/ControlSystemAhA_Centers.dat'
- -B, --subfile-name-ahb <subfile_name_ahb>¶
Name of subfile containing the apparent horizon centers for object B.
- Default:
'ApparentHorizons/ControlSystemAhB_Centers.dat'
- --tmin <tmin>¶
The lower time bound for the fit. Used to remove initial junk and transients in the coordinate separations. Defaults to tmin=20 if tmax<200, or tmin=60 otherwise.
- --tmax <tmax>¶
The upper time bound for the fit. A reasonable value would include 2-3 orbits.
- --angular-velocity-from-xcts <angular_velocity_from_xcts>¶
Value of the angular velocity used in the Xcts file.
- --expansion-from-xcts <expansion_from_xcts>¶
Value of the expansion velocity (adot) used in the Xcts file.
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- H5_FILE¶
Required argument
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
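A sketch of a typical invocation on a reductions file (the file name and time window are placeholders):
spectre eccentricity-control Reductions.h5 --tmin 60 --tmax 2000 -o eccentricity.pdf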
extend-connectivity¶
Extend the connectivity inside a single HDF5 volume file.
This extends the connectivity of volume data from a SpECTRE evolution in order to fill in gaps between elements. It is intended to be used as a post-processing routine to improve the quality of visualizations. Note: This does not work with subcell or AMR systems, and the connectivity only extends within each block and not between blocks. This only works for a single HDF5 volume file. If there are multiple files, the combine-h5 command must be run first. The extend-connectivity command can then be run on the newly generated HDF5 file.
spectre extend-connectivity [OPTIONS] FILENAME
Options
- -d, --subfile-name <subfile_name>¶
Required subfile name of the volume file in the H5 file (omit file extension)
Arguments
- FILENAME¶
Required argument
extract-dat¶
Extract dat files from an H5 file
Extract all Dat files inside a SpECTRE HDF5 file. The resulting files will be put into the ‘OUT_DIR’ if specified, or printed to standard output. The directory structure will be identical to the group structure inside the HDF5 file.
spectre extract-dat [OPTIONS] FILENAME [OUT_DIR]
Options
- -j, --num-cores <num_cores>¶
Number of cores to run on.
- Default:
1
- -p, --precision <precision>¶
Precision with which to save (or print) the data.
- Default:
16
- -f, --force¶
If the output directory already exists, overwrite it.
- -l, --list¶
List all dat files in the HDF5 file and exit.
- -d, --subfile <subfiles>¶
Full path of subfile to extract (including extension). Can be specified multiple times to extract several subfiles at once. If unspecified, all subfiles will be extracted.
Arguments
- FILENAME¶
Required argument
- OUT_DIR¶
Optional argument
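For example, to extract all dat subfiles into a directory using four cores (the names are placeholders):
spectre extract-dat Reductions.h5 ./extracted -j 4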
extract-input¶
Extract input file from an H5 file
Extract InputSource.yaml from the ‘H5_FILE’ and write it to the ‘OUTPUT_FILE’, or print to stdout if OUTPUT_FILE is unspecified.
spectre extract-input [OPTIONS] H5_FILE [OUTPUT_FILE]
Arguments
- H5_FILE¶
Required argument
- OUTPUT_FILE¶
Optional argument
find-radial-surface¶
Find a radial surface where a variable equals a target value.
The surface is found by searching along radial rays that are defined by the angular collocation points of the ‘initial_guess’ Strahlkorper. Only blocks that intersect the initial guess and their radial neighbors are considered, and only those neighbors that have a distorted frame. These blocks are assumed to be wedges, so their third logical coordinate is radial.
This function is useful, for example, for finding the surface of a star and then deforming the domain to match the surface.
- Arguments:
- h5_files: The H5 files containing the volume data.
- subfile_name: Name of the volume data subfile in the ‘h5_files’.
- obs_id: Observation ID in the volume data.
- obs_time: Time of the observation.
- var_name: Name of the variable in the volume data to search for the surface.
- target: Value that the variable takes that defines the surface.
- initial_guess: Initial guess for the surface. Only blocks that intersect this Strahlkorper and their radial neighbors are considered. These blocks must be wedges.
- output_surfaces_file: Optional. H5 output file where the surface Ylm coefficients will be written. Can be a new or existing file. Requires that either ‘output_coeffs_subfile’ or ‘output_coords_subfile’ (or both) is also specified.
- output_coeffs_subfile: Optional. Name of the subfile in the ‘output_surfaces_file’ where the surface Ylm coefficients will be written. These can be used to reconstruct the surface, e.g. to deform the domain.
- output_coords_subfile: Optional. Name of the subfile in the ‘output_surfaces_file’ where the surface coordinates will be written. These can be used for visualization.
spectre find-radial-surface [OPTIONS] H5_FILES...
Options
- -d, --subfile-name <subfile_name>¶
Name of subfile within H5 file containing volume data to plot. Optional if the H5 files have only one subfile.
- -l, --list-vars¶
Print available variables and exit.
- -y, --var <vars_patterns>¶
Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. [required]
- --list-observations, --list-times¶
Print all available observation times and exit.
- --step <step>¶
Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.
- --time <time>¶
Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.
- -t, --target <target>¶
Required Target value for the surface.
- -l, --l-max <l_max>¶
Required Max l-mode for the Ylm representation of the surface.
- -r, --initial-radius <initial_radius>¶
Required Coordinate radius of the spherical initial guess for the surface.
- -C, --center <center>¶
Required Coordinate center of the Ylm representation of the surface.
- --output-surfaces-file <output_surfaces_file>¶
H5 output file where the surface Ylm coefficients will be written. Can be a new or existing file.
- --output-coeffs-subfile <output_coeffs_subfile>¶
Name of the subfile in the ‘output_surfaces_file’ where the surface Ylm coefficients will be written. These can be used to reconstruct the surface, e.g. to deform the domain.
- --output-coords-subfile <output_coords_subfile>¶
Name of the subfile in the ‘output_surfaces_file’ where the surface coordinates will be written. These can be used for visualization.
Arguments
- H5_FILES¶
Required argument(s)
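As an illustrative sketch, a search for a stellar surface where a hypothetical density variable drops to a small target value might look like this (all names and values are placeholders):
spectre find-radial-surface VolumeData*.h5 -d VolumeData --step last -y RestMassDensity -t 1e-10 --l-max 12 --initial-radius 5.0 --center 0,0,0 --output-surfaces-file Surfaces.h5 --output-coeffs-subfile StarSurface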
generate-tetrahedral-connectivity¶
Generate tetrahedral connectivity using scipy.spatial.Delaunay
Given the coordinates, this generates a tetrahedral connectivity that can be read in by ParaView or other visualization software (e.g. VisIt or yt). It uses the scipy.spatial.Delaunay class to generate the connectivity, which uses the Qhull library (github.com/qhull/qhull/), which uses the quickhull algorithm and scales as O(n^2) in the worst case. Thus, generating the connectivity can take several minutes per temporal ID. Unfortunately, depending on the particular grid point distribution, generating the connectivity may even fail in qhull, which is unfortunately difficult to fix.
You should combine the volume HDF5 files into one before running this in order to get a fully connected domain.
Note that while ParaView has a Delaunay3D filter, it is much slower than qhull, needs to be rerun every time ParaView is opened while the qhull output is stored, and the Delaunay3D filter sometimes produces nonsense connectivity that is very difficult to debug and fix.
After the tetrahedral connectivity has been written you can run ‘generate-xdmf’ with the flag ‘–use-tetrahedral-connectivity’. ParaView volume renderings are sometimes nicer with tetrahedral connectivity and this can be used to fill gaps between finite-difference or Gauss elements.
If this algorithm is too slow, one possible improvement is to apply qhull to each block of the domain and then connect the blocks to each other separately. This keeps the number of grid points lower for each invocation of qhull, which likely reduces total runtime and may also reduce or eliminate failure cases. In the ideal case we would apply qhull to each element and then connect elements that are using FD or Gauss points to their neighbors.
- Arguments:
- h5file: The HDF5 file on which to run.
- subfile_name: Volume data subfile in the H5 files.
- start_time: Optional. The earliest time at which to start visualizing. The start-time value is included.
- stop_time: Optional. The time at which to stop visualizing. The stop-time value is not included.
- stride: Optional. View only every stride’th time step.
- coordinates: Optional. Name of coordinates dataset. Default: “InertialCoordinates”.
- force: Optional. Overwrite the existing tetrahedral connectivity. Default: False.
spectre generate-tetrahedral-connectivity [OPTIONS] H5FILE
Options
- -d, --subfile-name <subfile_name>¶
Name of the volume data subfile in the H5 files. A ‘.vol’ extension is added if needed. If unspecified, and the first H5 file contains only a single ‘.vol’ subfile, choose that. Otherwise, list all ‘.vol’ subfiles and exit.
- --stride <stride>¶
View only every stride’th time step
- --start-time <start_time>¶
The earliest time at which to start visualizing. The start-time value is included.
- --stop-time <stop_time>¶
The time at which to stop visualizing. The stop-time value is included.
- --coordinates <coordinates>¶
The coordinates to use for visualization
- Default:
'InertialCoordinates'
- --force¶
Overwrite existing tetrahedral connectivity.
Arguments
- H5FILE¶
Required argument
generate-xdmf¶
Generate an XDMF file for ParaView and VisIt
Read volume data from the ‘H5FILES’ and generate an XDMF file. The XDMF file points into the ‘H5FILES’ files so ParaView and VisIt can load the volume data. To process multiple files suffixed with the node number and from multiple segments specify a glob like ‘Segment*/VolumeData*.h5’.
To load the XDMF file in ParaView you must choose the ‘Xdmf Reader’, NOT ‘Xdmf3 Reader’.
- Arguments:
- h5files: List of H5 volume data files.
- output: Output filename. A ‘.xmf’ extension is added if not present.
- subfile_name: Volume data subfile in the H5 files.
- relative_paths: If True, use relative paths in the XDMF file (default). If False, use absolute paths.
- start_time: Optional. The earliest time at which to start visualizing. The start-time value is included.
- stop_time: Optional. The time at which to stop visualizing. The stop-time value is not included.
- stride: Optional. View only every stride’th time step.
- coordinates: Optional. Name of coordinates dataset. Default: “InertialCoordinates”.
- use_tetrahedral_connectivity: Optional. Use “tetrahedral_connectivity”. Default: False.
spectre generate-xdmf [OPTIONS] H5FILES...
Options
- -o, --output <output>¶
Output file name. A ‘.xmf’ extension will be added if not present. If unspecified, the output will be written to stdout.
- -d, --subfile-name <subfile_name>¶
Name of the volume data subfile in the H5 files. A ‘.vol’ extension is added if needed. If unspecified, and the first H5 file contains only a single ‘.vol’ subfile, choose that. Otherwise, list all ‘.vol’ subfiles and exit.
- --relative-paths, --absolute-paths¶
Use relative paths or absolute paths in the XDMF file.
- Default:
True
- --stride <stride>¶
View only every stride’th time step
- --start-time <start_time>¶
The earliest time at which to start visualizing. The start-time value is included.
- --stop-time <stop_time>¶
The time at which to stop visualizing. The stop-time value is included.
- --coordinates <coordinates>¶
The coordinates to use for visualization
- Default:
'InertialCoordinates'
- --use-tetrahedral-connectivity¶
Use a tetrahedral connectivity called tetrahedral_connectivity in the HDF5 file. See the generate-tetrahedral-connectivity CLI for information on how to generate tetrahedral connectivity and what it can be useful for.
Arguments
- H5FILES¶
Required argument(s)
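For example, to generate a single XDMF file for all segments of a run (the subfile and output names are placeholders):
spectre generate-xdmf Segment*/VolumeData*.h5 -d VolumeData -o Visualization.xmf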
interpolate-to-mesh¶
Interpolates an h5 file to a desired grid
The function reads data from source_volume_data inside source_file_path, interpolates all components specified by components_to_interpolate to the grid specified by target_mesh and writes the results into target_volume_data inside target_file_path. The target_file_path can be the same as the source_file_path if the volume subfile paths are different.
Parameters¶
- source_file_path: str
the path to the source file where the source_volume_data is located
- target_mesh: spectre.Spectral.Mesh
the mesh to which the data is interpolated
- components_to_interpolate: list of str, optional
a list of all components that are to be interpolated. Accepts regular expressions. By default all tensor components are interpolated.
- target_file_path: str, optional
the path to where the interpolated data is written. By default this is set to source_file_path so the interpolated data is written to the same file, but in a different subfile specified by target_volume_data.
- source_volume_data: str, optional
the name of the .vol file inside the source file where the source data can be found.
- target_volume_data: str, optional
the name of the .vol file inside the target file where the target data is written.
- obs_start: float, optional
disregards all observations with observation value strictly before obs_start
- obs_end: float, optional
disregards all observations with observation value strictly after obs_end
- obs_stride: float, optional
will only take every obs_stride'th observation
spectre interpolate-to-mesh [OPTIONS]
Options
- --source-file-prefix <source_file_prefix>¶
Required The prefix for the .h5 source files. All files starting with the prefix followed by a number will be interpolated.
- --source-subfile-name <source_subfile_name>¶
Required The name of the volume data subfile within the source files in which the data is contained
- --target-file-prefix <target_file_prefix>¶
The prefix for the target files where the interpolated data is written. When no target file is specified, the interpolated data is written to the corresponding source file in a new volume data subfile.
- --target-subfile-name <target_subfile_name>¶
Required The name of the volume data subfile within the target files where the data will be written.
- -t, --tensor-component <tensor_components>¶
The name of the tensor to be interpolated. Accepts regular expression. Can be specified multiple times to interpolate several tensors at once. If none are specified, all tensors are interpolated.
- --target-extents <target_extents>¶
Required The extents of the target grid, as a comma-separated list without spaces. Can be different for each dimension e.g. ‘3,5,4’
- --target-basis <target_basis>¶
Required The basis of the target grid.
- Options:
Legendre | Chebyshev | FiniteDifference | SphericalHarmonic
- --target-quadrature <target_quadrature>¶
Required The quadrature of the target grid.
- Options:
Gauss | GaussLobatto | CellCentered | FaceCentered | Equiangular
- --start-time <start_time>¶
Disregard all observations with value before this point
- --stop-time <stop_time>¶
Disregard all observations with value after this point
- --stride <stride>¶
Stride through observations with this step size.
- -j, --num-jobs <num_jobs>¶
The maximum number of processes to be started. A process is spawned for each source file up to this number.
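A minimal sketch of an invocation; the file prefixes and subfile names are placeholders, and the target mesh is an arbitrary 5,5,5 Legendre Gauss-Lobatto grid chosen only for illustration:
spectre interpolate-to-mesh --source-file-prefix VolumeData --source-subfile-name VolumeData --target-subfile-name InterpolatedData --target-extents 5,5,5 --target-basis Legendre --target-quadrature GaussLobatto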
interpolate-to-points¶
Interpolate volume data to target coordinates.
spectre interpolate-to-points [OPTIONS] H5_FILES...
Options
- -d, --subfile-name <subfile_name>¶
Name of subfile within H5 file containing volume data to plot. Optional if the H5 files have only one subfile.
- -l, --list-vars¶
Print available variables and exit.
- -y, --var <vars_patterns>¶
Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. Can be specified multiple times. [required]
- --list-observations, --list-times¶
Print all available observation times and exit.
- --step <step>¶
Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.
- --time <time>¶
Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.
- -t, --target-coords-file <target_coords_file>¶
Text file with target coordinates to interpolate to. Must have ‘dim’ columns with Cartesian coordinates. Rows enumerate points. Can be the output of ‘numpy.savetxt’.
- -p, --target-coords <target_coords>¶
List target coordinates explicitly, e.g. ‘0,0,0’. Can be specified multiple times to quickly interpolate to a couple of target points.
- --extrapolate-into-excisions¶
Enables extrapolation into excision regions of the domain. This can be useful to fill the excision region with (constraint-violating but smooth) data so it can be imported into moving puncture codes.
- -o, --output <output>¶
Output text file
- --delimiter <delimiter>¶
Delimiter separating columns for both the ‘–target-coords-file’ and the ‘–output’ file.
- Default:
'whitespace'
- -j, --num-threads <num_threads>¶
Number of threads to use for interpolation. Only available if compiled with OpenMP. Parallelization is over volume data files, so this only has an effect if multiple files are specified.
- Default:
'all available cores'
Arguments
- H5_FILES¶
Required argument(s)
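For example, to interpolate a hypothetical variable to two explicit points at a given time and write the result to a text file (names and values are placeholders):
spectre interpolate-to-points VolumeData*.h5 -d VolumeData --time 1000 -y Lapse -p 0,0,0 -p 1,0,0 -o lapse_points.txt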
plot-command¶
Plot data from simulations
See subcommands for available plots.
spectre plot-command [OPTIONS] COMMAND [ARGS]...
along-line¶
Plot variables along a line through volume data
Interpolates the volume data in the H5_FILES to a line and plots the selected variables. You choose the line by specifying the start and end points.
Either select a specific observation in the volume data with ‘–step’ or ‘–time’, or specify ‘–animate’ to produce an animation over all observations.
spectre plot-command along-line [OPTIONS] H5_FILES...
Options
- -d, --subfile-name <subfile_name>¶
Name of subfile within H5 file containing volume data to plot. Optional if the H5 files have only one subfile.
- -l, --list-vars¶
Print available variables and exit.
- -y, --var <vars_patterns>¶
Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. Can be specified multiple times. [required]
- --list-observations, --list-times¶
Print all available observation times and exit.
- --step <step>¶
Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.
- --time <time>¶
Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.
- -A, --line-start <line_start>¶
Coordinates of the start of the line through the volume data. Specify as comma-separated list, e.g. ‘0,0,0’. [required]
- -B, --line-end <line_end>¶
Coordinates of the end of the line through the volume data. Specify as comma-separated list, e.g. ‘1,0,0’. [required]
- --extrapolate-into-excisions¶
Enables extrapolation into excision regions of the domain. This can be useful to fill the excision region with (constraint-violating but smooth) data so it can be imported into moving puncture codes.
- -N, --num-samples <num_samples>¶
Number of uniformly spaced samples along the line to which volume data is interpolated.
- Default:
200
- -j, --num-threads <num_threads>¶
Number of threads to use for interpolation. Only available if compiled with OpenMP. Parallelization is over volume data files, so this only has an effect if multiple files are specified.
- Default:
'all available cores'
- --x-logscale¶
Set the x-axis to log scale.
- --y-logscale¶
Set the y-axis to log scale.
- --y-bounds <y_bounds>¶
The lower and upper bounds of the y-axis.
- --animate¶
Animate over all observations.
- --interval <interval>¶
Delay between frames in milliseconds. Only used for animations.
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- H5_FILES¶
Required argument(s)
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
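For example, to plot a hypothetical variable along the x-axis at a given time and save the plot to a file (names and values are placeholders):
spectre plot-command along-line VolumeData*.h5 -d VolumeData -y Lapse --time 1000 -A 0,0,0 -B 10,0,0 -o lapse_line.pdf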
cce¶
Plot the Strain, News, and Psi0-Psi4 from the output of a SpECTRE CCE run.
The data must be in a SpECTRE Cce subfile with a ‘.cce’ extension. Multiple modes can be plotted at once, along with your choice of plotting the real part, the imaginary part, or both.
IMPORTANT: These plots are NOT in the correct BMS frame. This tool is only meant to plot the raw data produced by the SpECTRE CCE module.
spectre plot-command cce [OPTIONS] H5_FILENAME
Options
- -m, --modes <modes>¶
Required Which mode to plot. Specified as ‘l,m’ (e.g. ‘–modes 2,2’). Will plot both real and imaginary components unless ‘–real’ or ‘–imag’ are specified. Can be specified multiple times.
- --real¶
Plot only real modes. Mutually exclusive with ‘–imag’.
- --imag¶
Plot only imaginary modes. Mutually exclusive with ‘–real’.
- -r, --extraction-radius <extraction_radius>¶
Extraction radius of data to plot as an int. If there is only one Cce subfile, that one will be used and this option does not need to be specified. The expected form of the Cce subfile is ‘SpectreRXXXX.cce’ where XXXX is the zero-padded integer extraction radius. This option is ignored if the backwards compatibility option ‘–cce-group’/’-d’ is specified.
- -l, --list-extraction-radii¶
List Cce subfiles in the ‘h5_filename’ and exit.
- -d, --cce-group <backward_cce_group>¶
Option for backwards compatibility with an old version of CCE data. This is the group name of the CCE data in the ‘h5_filename’ (typically Cce). This option should only be used if your CCE data was produced with a version of SpECTRE prior to this Pull Request: https://github.com/sxs-collaboration/spectre/pull/5985.
- --x-bounds <x_bounds>¶
The lower and upper bounds of the x-axis.
- --x-label <x_label>¶
The label on the x-axis.
- -t, --title <title>¶
Title of the graph.
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- H5_FILENAME¶
Required argument
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
control-system¶
Plot diagnostic information regarding all control systems except size control. If you want size control diagnostics, use ‘spectre plot size-control’.
This tool assumes there are subfiles in each of the “reduction-files” with the path /ControlSystems/{NAME}/*.dat, where {NAME} is the name of the control system and the ‘*.dat’ files are all the components of that control system.
Shape control is a bit special because it has a large number of components. Control whether or not you plot shape, and how many of these components you plot, with the –with-shape/–without-shape, –shape-l_max, and –show-all-m options.
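For illustration, a hypothetical invocation (the reduction file name is a placeholder) that plots shape control up to l=3 with all m components could be:

    spectre plot control-system --shape-l_max 3 --show-all-m -o control_systems.pdf BbhReductions.h5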
spectre plot-command control-system [OPTIONS] REDUCTION_FILES...
Options
- --with-shape, --without-shape¶
Whether or not to plot shape control.
- Default:
True
- -l, --shape-l_max <shape_l_max>¶
The max number of spherical harmonics to show on the plot. Since higher ell can have a lot of components, it may be desirable to show fewer components. Never plots l=0,1 since we don’t control these components. Only used if ‘–with-shape’.
- Default:
2
- --show-all-m¶
When plotting shape control, for a given ell, plot all m components. Default is, for a given ell, to plot the L2 norm over all the m components. Only used if ‘–with-shape’.
- Default:
False
- --x-bounds <x_bounds>¶
The lower and upper bounds of the x-axis.
- --x-label <x_label>¶
The label on the x-axis.
- -t, --title <title>¶
Title of the graph.
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- REDUCTION_FILES¶
Required argument(s)
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
dat¶
Plot columns in ‘.dat’ datasets in H5 files
spectre plot-command dat [OPTIONS] H5_FILE
Options
- -d, --subfile-name <subfile_name>¶
The dat subfile to read. [required]
- -l, --legend-only¶
Print out the available quantities and exit.
- -y, --function <functions>¶
The quantity to plot. Can be specified multiple times to plot several quantities on a single figure. If unspecified, list all available quantities and exit. Labels of quantities can be specified as key-value pairs such as ‘Error(Psi)=$L_2(psi)$’. Remember to wrap the key-value pair in quotes on the command line to avoid issues with special characters or spaces.
- -x, --x-axis <x_axis>¶
Select the column in the dat file to use as the x-axis in the plot.
- Default:
'first column in the dat file'
- --x-label <x_label>¶
The label on the x-axis.
- Default:
'name of the x-axis column'
- --y-label <y_label>¶
The label on the y-axis.
- Default:
'no label'
- --x-logscale¶
Set the x-axis to log scale.
- --y-logscale¶
Set the y-axis to log scale.
- --x-bounds <x_bounds>¶
The lower and upper bounds of the x-axis.
- --y-bounds <y_bounds>¶
The lower and upper bounds of the y-axis.
- -t, --title <title>¶
Title of the graph.
- Default:
'subfile name'
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- H5_FILE¶
Required argument
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
elliptic-convergence¶
Plot elliptic solver convergence
spectre plot-command elliptic-convergence [OPTIONS] H5_FILE
Options
- --linear-residuals-subfile-name <linear_residuals_subfile_name>¶
The name of the subfile containing the linear solver residuals
- Default:
'GmresResiduals.dat'
- --nonlinear-residuals-subfile-name <nonlinear_residuals_subfile_name>¶
The name of the subfile containing the nonlinear solver residuals
- Default:
'NewtonRaphsonResiduals.dat'
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- H5_FILE¶
Required argument
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
memory-monitors¶
Plot the memory usage of a simulation from the MemoryMonitors data in the Reductions H5 file.
This tool assumes there is a group in each of the “reduction-files” with the path “/MemoryMonitors/” that holds dat files for each parallel component that was monitored.
Note that the totals plotted here are not necessarily the total memory usage of the simulation. The memory monitors only capture what is inside ‘pup’ functions. Any memory that cannot be captured by a ‘pup’ function will not be represented in this plot.
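A hypothetical invocation, with a placeholder reductions file, might be:

    spectre plot memory-monitors --use-gb -o memory_usage.pdf BbhReductions.h5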
spectre plot-command memory-monitors [OPTIONS] REDUCTION_FILES...
Options
- --use-mb, --use-gb¶
Plot the y-axis in Megabytes or Gigabytes
- Default:
False
- --x-label <x_label>¶
The label on the x-axis.
- Default:
'name of the x-axis column'
- --x-bounds <x_bounds>¶
The lower and upper bounds of the x-axis.
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- REDUCTION_FILES¶
Required argument(s)
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
power-monitors¶
Plot power monitors from volume data
Reads volume data in the ‘H5_FILES’ and computes power monitors, which are essentially the spectral modes in each dimension of the grid. They give an indication of how well the spectral expansion resolves fields on the grid. Power monitors are computed for all tensor components selected with the ‘–var’ / ‘-y’ option, and combined as an L2 norm.
One subplot is created for every selected ‘–block’ / ‘-b’. This can be a single block name, or a block group defined by the domain (such as all six wedges in a spherical shell). The power monitors in every logical direction of the grid are plotted for all elements in the block or block group. The logical directions are labeled “xi”, “eta” and “zeta”, and their orientation is defined by the coordinate maps in the domain. For example, see the documentation of the ‘Wedge’ map to understand which logical direction is radial in spherical shells.
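For example, a hypothetical invocation (subfile, variable, and block names are placeholders) could look like:

    spectre plot power-monitors -d VolumeData -y Lapse -b ObjectAShell -o power_monitors.pdf BbhVolume*.h5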
spectre plot-command power-monitors [OPTIONS] H5_FILES...
Options
- -d, --subfile-name <subfile_name>¶
Name of subfile within H5 file containing volume data to plot. Optional if the H5 files have only one subfile.
- -l, --list-vars¶
Print available variables and exit.
- -y, --var <vars_patterns>¶
Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. Can be specified multiple times. [required]
- --list-observations, --list-times¶
Print all available observation times and exit.
- --step <step>¶
Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.
- --time <time>¶
Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.
- --list-blocks¶
Print available blocks and block groups and exit.
- -b, --block <block_or_group_names>¶
Name of block or block group to analyze. Can be specified multiple times to plot several blocks or block groups at once.
- -e, --elements <element_patterns>¶
Include only elements that match the specified glob pattern, like ‘B*,(L1I*,L0I0,L0I0)’. Can be specified multiple times, in which case elements are included that match _any_ of the specified patterns. If unspecified, include all elements in the blocks.
- --list-elements¶
List all elements in the specified blocks subject to ‘–elements’ / ‘-e’ patterns.
- -T, --over-time¶
Plot power monitors over time.
- --skip-filtered-modes <skip_filtered_modes>¶
Skip this number of highest modes. Useful if the highest modes are filtered, zeroing them out.
- --figsize <figsize>¶
Figure size in inches.
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- H5_FILES¶
Required argument(s)
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
size-control¶
Plot diagnostic information regarding the Size control system.
This tool assumes there is a subfile in each of the “reduction-files” with the path /ControlSystems/Size{LABEL}/Diagnostics.dat, where {LABEL} is replaced with the “object-label” input option.
spectre plot-command size-control [OPTIONS] REDUCTION_FILES...
Options
- -d, --object-label <object_label>¶
Required Which object to plot. This is either ‘A’, ‘B’, or ‘None’. ‘None’ is used when there is only one black hole in the simulation.
- --x-bounds <x_bounds>¶
The lower and upper bounds of the x-axis.
- --x-label <x_label>¶
The label on the x-axis.
- -t, --title <title>¶
Title of the graph.
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- REDUCTION_FILES¶
Required argument(s)
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
slice¶
Plot variables on a slice through volume data
Interpolates the volume data in the H5_FILES to a slice and plots the selected variables. You choose the slice by specifying its center, extents, normal, and up direction.
Either select a specific observation in the volume data with ‘–step’ or ‘–time’, or specify ‘–animate’ to produce an animation over all observations.
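As a hypothetical example (file and variable names are placeholders), a slice of the lapse through the xy-plane could be rendered with:

    spectre plot slice -y Lapse -C 0,0,0 -X 20 20 -n 0,0,1 -u 0,1,0 --time 100 -o lapse_slice.png BbhVolume*.h5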
spectre plot-command slice [OPTIONS] H5_FILES...
Options
- -d, --subfile-name <subfile_name>¶
Name of subfile within H5 file containing volume data to plot. Optional if the H5 files have only one subfile.
- -l, --list-vars¶
Print available variables and exit.
- -y, --var <vars_patterns>¶
Variable to plot. List any tensor components in the volume data file, such as ‘Shift_x’. Also accepts glob patterns like ‘Shift_*’. [required]
- --list-observations, --list-times¶
Print all available observation times and exit.
- --step <step>¶
Observation step number. Specify ‘-1’ or ‘last’ for the last step in the file. Mutually exclusive with ‘–time’.
- --time <time>¶
Observation time. The observation step closest to the specified time is selected. Mutually exclusive with ‘–step’.
- -C, --slice-origin, --slice-center <slice_origin>¶
Coordinates of the center of the slice through the volume data. Specify as comma-separated list, e.g. ‘0,0,0’. [required]
- -X, --slice-extent <slice_extent>¶
Extent in both directions of the slice through the volume data, e.g. ‘-X 10 10’ for a 10x10 slice in the coordinates of the volume data. [required]
- -n, --slice-normal <slice_normal>¶
Direction of the normal of the slice through the volume data. Specify as comma-separated list, e.g. ‘0,0,1’ for a slice in the xy-plane. [required]
- -u, --slice-up <slice_up>¶
Up-direction of the slice through the volume data. Specify as comma-separated list, e.g. ‘0,1,0’ so the y-axis is the vertical axis of the plot. [required]
- --extrapolate-into-excisions¶
Enables extrapolation into excision regions of the domain. This can be useful to fill the excision region with (constraint-violating but smooth) data so it can be imported into moving puncture codes.
- -N, --num-samples <num_samples>¶
Number of uniformly spaced samples along each direction of the slice to which volume data is interpolated.
- Default:
200, 200
- -j, --num-threads <num_threads>¶
Number of threads to use for interpolation. Only available if compiled with OpenMP. Parallelization is over volume data files, so this only has an effect if multiple files are specified.
- Default:
'all available cores'
- -t, --title <title>¶
Title for the plot.
- Default:
'name of the variable'
- --y-bounds, --data-bounds <data_bounds>¶
Lower and upper bounds for the color scale of the plot.
- --animate¶
Animate over all observations.
- --interval <interval>¶
Delay between frames in milliseconds. Only used for animations.
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- H5_FILES¶
Required argument(s)
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
trajectories¶
Plot trajectories in inspiral simulation
Concatenates partial trajectories from each of the H5 reduction files and plots the full trajectories.
spectre plot-command trajectories [OPTIONS] H5_FILES...
Options
- -A, --subfile-name-aha <subfile_name_aha>¶
Name of subfile containing the apparent horizon centers for object A.
- -B, --subfile-name-ahb <subfile_name_ahb>¶
Name of subfile containing the apparent horizon centers for object B.
- --sample-rate <sample_rate>¶
Downsample data to speed up plots. E.g. a value of 10 means every 10th point is plotted.
- Default:
1
- --figsize <figsize>¶
Figure size as width and height in inches
- Default:
10.0, 10.0
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
Arguments
- H5_FILES¶
Required argument(s)
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
render-1d¶
Render 1D data
spectre render-1d [OPTIONS] H5_FILES...
Options
- -d, --subfile-name <subfile_name>¶
Name of subfile within h5 file containing 1D volume data to be rendered.
- -y, --var <vars>¶
Name of variable to plot, e.g. ‘Psi’ or ‘Error(Psi)’. Can be specified multiple times. If unspecified, plot all available variables. Labels for variables can be specified as key-value pairs such as ‘Error(Psi)=$L_2(psi)$’. Remember to wrap the key-value pair in quotes on the command line to avoid issues with special characters or spaces.
- -l, --list-vars¶
Print available variables and exit.
- --step <step>¶
If specified, renders the integer observation step instead of an animation. Set to ‘-1’ for the last step.
- --interval <interval>¶
Delay between frames in milliseconds
- --x-label <x_label>¶
The label on the x-axis.
- Default:
'name of the x-axis column'
- --y-label <y_label>¶
The label on the y-axis.
- Default:
'no label'
- --x-logscale¶
Set the x-axis to log scale.
- --y-logscale¶
Set the y-axis to log scale.
- --x-bounds <x_bounds>¶
The lower and upper bounds of the x-axis.
- --y-bounds <y_bounds>¶
The lower and upper bounds of the y-axis.
- -t, --title <title>¶
Title of the graph.
- Default:
'subfile name'
- -s, --stylesheet <stylesheet>¶
Select a matplotlib stylesheet for customization of the plot, such as linestyle cycles, linewidth, fontsize, legend, etc. Specify a filename or one of the built-in styles. See https://matplotlib.org/gallery/style_sheets/style_sheets_reference for a list of built-in styles, e.g. ‘seaborn-dark’. The stylesheet can also be set with the ‘SPECTRE_MPL_STYLESHEET’ environment variable.
- -o, --output <output>¶
Name of the output plot file. If unspecified, the plot is shown interactively, which only works on machines with a window server. If a filename is specified, its extension determines the file format, e.g. ‘plot.png’ or ‘plot.pdf’ for static plots and ‘animation.gif’ or ‘animation.mp4’ (requires ffmpeg) for animations. If no extension is given, the file format depends on the system settings (see matplotlib.pyplot.savefig docs).
- --show-collocation-points¶
- --show-element-boundaries¶
- --show-basis-polynomials¶
Arguments
- H5_FILES¶
Required argument(s)
Environment variables
- SPECTRE_MPL_STYLESHEET
Provide a default for
--stylesheet
render-3d-command¶
Renders a 3D visualization of simulation data.
See subcommands for possible renderings.
spectre render-3d-command [OPTIONS] COMMAND [ARGS]...
bbh¶
Generate Pictures from XMF files for BBH Visualizations
Generates pictures from BBH runs using the XMF files produced by the ‘generate-xdmf’ command. This script requires that the Lapse and SpatialRicciScalar were output in the volume data.
- Arguments:
- volume_xmf: Path to the volume data xmf file.
- output: Name of output file generated from ParaView. Include extensions such as ‘.png’.
- aha_xmf: Path to the apparent horizon xmf file for object A.
- ahb_xmf: Path to the apparent horizon xmf file for object B.
- camera_angle: Specified camera angle, defaults to Side if empty. Other possible angles are Top and Wide.
- color_map: Color map for the lapse, defaults to ‘Rainbow Uniform’. Other color maps include ‘Inferno (matplotlib)’, ‘Viridis (matplotlib)’, etc.
- show_grid: Shows the grid lines of the domain.
- show_time: Shows the simulation time.
To splice all the pictures into a video, try using FFmpeg
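For instance, assuming the frames were written as ‘Frame.0000.png’, ‘Frame.0001.png’, etc. (the naming is a placeholder), something like the following could work:

    ffmpeg -framerate 24 -i Frame.%04d.png -pix_fmt yuv420p movie.mp4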
spectre render-3d-command bbh [OPTIONS] VOLUME_XMF
Options
- -o, --output <output>¶
Required Output file. Include extension such as ‘.png’.
- -a, --aha-xmf <aha_xmf>¶
Optional xmf file for AhA visualization
- -b, --ahb-xmf <ahb_xmf>¶
Optional xmf file for AhB visualization
- -t, --time-step <time_step>¶
Select a time step. Specify ‘-1’ or ‘last’ to select the last time step.
- Default:
'first'
- --animate¶
Produce an animation of all time steps.
- -c, --camera-angle <camera_angle>¶
Determines which camera angle to use. Default is the Side view. Top view is right above the excisions at t = 0. Wide is further out, in between the Side and Top views.
- Options:
Side | Top | Wide
- --zoom <zoom_factor>¶
Zoom factor.
- -m, --color-map <color_map>¶
Determines how to color the domain. Common color maps are “Inferno (matplotlib)” and “Viridis (matplotlib)”. Defaults to “Rainbow Uniform”.
- --show-grid¶
Show grid lines
- --show-time¶
Show simulation time
Arguments
- VOLUME_XMF¶
Required argument
clip¶
Renders a clip normal to the z-direction.
XMF_FILE is the path to the XMF file that references the simulation data. It is typically generated by the ‘generate-xdmf’ command.
This is a quick way to get some insight into the simulation data. For more advanced renderings, open the XMF file in an interactive ParaView session, or implement rendering commands specialized for your use case.
spectre render-3d-command clip [OPTIONS] XMF_FILE
Options
- -o, --output <output>¶
Required Output file. Include extension such as ‘.png’.
- -y, --variable <variable>¶
Variable to plot. Lists available variables when not specified.
- -t, --time-step <time_step>¶
Select a time step. Specify ‘-1’ or ‘last’ to select the last time step.
- Default:
'first'
- --animate¶
Produce an animation of all time steps.
- --log¶
Plot variable in log scale.
- --show-grid¶
Show grid lines
- --zoom <zoom_factor>¶
Zoom factor.
- --clip-origin <clip_origin>¶
Origin of the clipping plane
- Default:
0.0, 0.0, 0.0
- --clip-normal <clip_normal>¶
Normal of the clipping plane
- Default:
0.0, 0.0, 1.0
Arguments
- XMF_FILE¶
Required argument
domain¶
Renders a 3D domain with elements and grid lines
This rendering is a starting point for visualizations of the domain geometry, e.g. for publications.
XMF_FILE is the path to the XMF file that references the simulation data. It is typically generated by the ‘generate-xdmf’ command. You can also provide a second XMF file with higher resolution data, which is used to render the outlines of elements to make them smoother.
spectre render-3d-command domain [OPTIONS] XMF_FILE [HI_RES_XMF_FILE]
Options
- -o, --output <output>¶
Required Output file. Include extension such as ‘.png’.
- -t, --time-step <time_step>¶
Select a time step. Specify ‘-1’ or ‘last’ to select the last time step.
- Default:
'first'
- --animate¶
Produce an animation of all time steps.
- --zoom <zoom_factor>¶
Zoom factor.
- --camera-theta <camera_theta>¶
Viewing angle from the z-axis in degrees.
- Default:
0.0
- --camera-phi <camera_phi>¶
Viewing angle around the z-axis in degrees.
- Default:
0.0
- --clip-origin, --slice-origin <clip_origin>¶
Origin of the clipping plane
- Default:
0.0, 0.0, 0.0
- --clip-normal, --slice-normal <clip_normal>¶
Normal of the clipping plane
- Default:
0.0, 0.0, 1.0
- --slice, --clip¶
Use a slice instead of a clip.
- Default:
False
- --background-color <background_color>¶
Background color in RGB fractions (white is 1 1 1, black is 0 0 0).
- Default:
1.0, 1.0, 1.0
Arguments
- XMF_FILE¶
Required argument
- HI_RES_XMF_FILE¶
Optional argument
resubmit¶
Create the next segment in the SEGMENTS_DIR and schedule it
- Arguments:
- segments_dir: Path to the segments directory, or path to the last segment
in the segments directory. The next segment will be created here.
- context_file_name: Optional. Name of the file that stores the context
for resubmissions in the ‘run_dir’. This file gets created by ‘spectre.support.schedule’. (Default: “SchedulerContext.yaml”)
- Returns: The ‘subprocess.CompletedProcess’ representing the process
that scheduled the run. Returns ‘None’ if no run was scheduled.
spectre resubmit [OPTIONS] SEGMENTS_DIRS...
Options
- --submit, --no-submit¶
Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.
- --context-file-name <context_file_name>¶
Name of the context file that supports resubmissions.
- Default:
'SchedulerContext.yaml'
Arguments
- SEGMENTS_DIRS¶
Required argument(s)
run-next¶
Run the next entrypoint specified in the input file metadata
Invokes the Python function specified in the ‘Next’ section of the input file metadata. It can be specified like this:
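A minimal sketch of such a ‘Next’ section, using the ‘Run’ and ‘With’ keys described in the Arguments below (the arguments listed under ‘With’ are placeholders), could look like:

    Next:
      Run: spectre.Pipelines.Bbh.Ringdown:start_ringdown
      With:
        inspiral_input_file_path: __file__
        pipeline_dir: None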
The function will be invoked in the cwd directory (’–input-run-dir’ / ‘-i’), which defaults to the directory of the input file. The following special values can be used for the arguments:
‘__file__’: The (absolute) path of the input file.
‘None’: The Python value None.
- Arguments:
- next_entrypoint: The Python function to run. Must be a dictionary with the following keys:
- “Run”: The Python module and function to run, separated by a colon. For example, “spectre.Pipelines.Bbh.Ringdown:start_ringdown”.
- “With”: A dictionary of arguments to pass to the function.
- input_file_path: Path to the input file that specified the entrypoint.
Used to resolve ‘__file__’ in the entrypoint arguments.
- cwd: The working directory in which to run the entrypoint. Used to
resolve relative paths in the entrypoint arguments.
spectre run-next [OPTIONS] INPUT_FILE_PATH
Options
- -i, --input-run-dir <input_run_dir>¶
Directory where the input file ran. Paths in the input file are relative to this directory.
- Default:
'directory of the INPUT_FILE_PATH'
Arguments
- INPUT_FILE_PATH¶
Required argument
schedule¶
Schedule executable runs with an input file
Configures the input file, submit script, etc. in the ‘run_dir’, and then invokes the ‘scheduler’ to submit the run (typically “sbatch”). You can also bypass the scheduler and run the executable directly by setting the ‘scheduler’ to ‘None’.
# Selecting the executable
Specify either a path to the executable, or just its name if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.
By default, the executable and submit scripts will be copied to the segments directory to support resubmissions (see below). See the ‘copy_executable’ argument docs for details on controlling this behavior.
# Segments and run directories
You can set either the ‘run_dir’ or the ‘segments_dir’ to specify where the executable will run (but not both). If you specify a ‘run_dir’, the executable will run in it directly. If you specify a ‘segments_dir’, a new segment will be created and used as the ‘run_dir’. Segments are named with incrementing integers and continue the run from the previous segment. For example, the following is a typical ‘segments_dir’:
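For illustration, a segments directory might look like this (the segment names shown are illustrative):

    Segments/
    ├── Segment_0000/
    ├── Segment_0001/
    └── Segment_0002/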
You can omit the ‘run_dir’ if the current working directory already contains the input file.
# Placeholders
The input file, submit script, ‘run_dir’, ‘segments_dir’, and ‘job_name’ can have placeholders like ‘{{ num_nodes }}’. They must conform to the [Jinja template format](https://jinja.palletsprojects.com/en/3.0.x/templates/). The placeholders are resolved in the following stages, and the parameters available as placeholders depend on the stage:
- ‘job_name’ (if specified)
- ‘run_dir’ and ‘segments_dir’
- Input file & submit script
The parameters used to render the submit script are stored in a context file (named ‘context_file_name’) in the ‘run_dir’ to support resubmissions. The context file is used by ‘spectre.support.resubmit’ to schedule the next segment using the same parameters.
# Scheduling multiple runs
You can pass ranges of parameters to the ‘–params’ of this function to schedule multiple runs using the same input file template. For example, you can do an h-convergence test by using a placeholder for the refinement level in your input file:
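A hypothetical input file fragment (the YAML keys are illustrative; only the ‘{{ ref_level }}’ placeholder matters here) could be:

    DomainCreator:
      Brick:
        InitialRefinement: [{{ ref_level }}, {{ ref_level }}, {{ ref_level }}]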
When a parameter in ‘–params’ is an iterable, the ‘schedule’ function will recurse for every element in the iterable. For example, you can schedule multiple runs for a convergence test like this:
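A hypothetical command line for such a convergence test (paths are placeholders; the range syntax follows the ‘–param’ / ‘-p’ option below) could be:

    spectre schedule Input.yaml -p ref_level=0...3 -o 'runs/Lev{{ ref_level }}'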
- Arguments:
- input_file_template: Path to an input file. It will be copied to the
‘run_dir’. It can be a Jinja template (see above).
- scheduler: ‘None’ to run the executable directly, or a scheduler such as
“sbatch” to submit the run to a queue.
- no_schedule: Optional. If ‘True’, override the ‘scheduler’ to ‘None’.
Useful to specify on the command line where the ‘scheduler’ defaults to “sbatch” on clusters.
- executable: Path or name of the executable to run. If unspecified, use the
‘Executable’ set in the input file metadata.
- run_dir: The directory to which input file, submit script, etc. are
copied, and relative to which the executable will run. Can be a Jinja template (see above).
- segments_dir: The directory in which a new segment is created as the
‘run_dir’. Mutually exclusive with ‘run_dir’. Can be a Jinja template (see above).
- copy_executable: Copy the executable to the run or segments directory.
- By default (when set to ‘None’):
If ‘–run-dir’ / ‘-o’ is set, don’t copy.
If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission.
- When set to ‘True’:
If ‘–run-dir’ / ‘-o’ is set, copy to the run directory.
If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. Still don’t copy to individual segments.
- When set to ‘False’: Never copy.
- job_name: Optional. A string describing the job.
Can be a Jinja template (see above). (Default: executable name)
- submit_script_template: Optional. Path to a submit script. It will be
copied to the ‘run_dir’ if a ‘scheduler’ is set. Can be a Jinja template (see above). (Default: value of ‘default_submit_script_template’)
- from_checkpoint: Optional. Path to a checkpoint directory.
- input_file_name: Optional. Filename of the input file in the ‘run_dir’. (Default: basename of the ‘input_file_template’)
- submit_script_name: Optional. Filename of the submit script. (Default:
“Submit.sh”)
- out_file_name: Optional. Name of the log file. (Default:
“spectre.out”)
- context_file_name: Optional. Name of the file that stores the context
for resubmissions in the run_dir. Used by spectre.support.resubmit. (Default: “SchedulerContext.yaml”)
- submit: Optional. If ‘True’, automatically submit jobs using the
‘scheduler’. If ‘False’, skip the job submission. If ‘None’, prompt for confirmation before submitting.
- clean_output: Optional. When ‘True’, use
‘spectre.tools.CleanOutput.clean_output’ to clean up existing output files in the ‘run_dir’ before scheduling the run. (Default: ‘False’)
- force: Optional. When ‘True’, overwrite input file and submit script
in the ‘run_dir’ instead of raising an error when they already exist.
- extra_params: Optional. Dictionary of extra parameters passed to input
file and submit script templates. Parameters can also be passed as keyword arguments to this function instead.
- Returns: The ‘subprocess.CompletedProcess’ representing either the process
that scheduled the run, or the process that ran the executable if ‘scheduler’ is ‘None’. Returns ‘None’ if no or multiple runs were scheduled.
spectre schedule [OPTIONS] INPUT_FILE_TEMPLATE
Options
- -E, --executable <executable>¶
The executable to run. Can be a path, or just the name of the executable if it’s in the ‘PATH’. If unspecified, the ‘Executable’ listed in the input file metadata is used.
- Default:
'executable listed in input file'
- -o, --run-dir <run_dir>¶
The directory to which input file, submit script, etc. are copied, relative to which the executable will run, and to which output files are written. Defaults to the current working directory if the input file is already there. Mutually exclusive with ‘–segments-dir’ / ‘-O’.
- -O, --segments-dir <segments_dir>¶
The directory in which to create the next segment. Requires ‘–from-checkpoint’ or ‘–from-last-checkpoint’ unless starting the first segment.
- --copy-executable, --no-copy-executable¶
Copy the executable to the run or segments directory. (1) When no flag is specified: If ‘–run-dir’ / ‘-o’ is set, don’t copy. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. (2) When ‘–copy-executable’ is specified: If ‘–run-dir’ / ‘-o’ is set, copy to the run directory. If ‘–segments-dir’ / ‘-O’ is set, copy to segments directory to support resubmission. Still don’t copy to individual segments. (3) When ‘–no-copy-executable’ is specified: Never copy.
- -C, --clean-output¶
Clean up existing output files in the run directory before running the executable. See the ‘spectre clean-output’ command for details.
- -f, --force¶
Overwrite existing files in the ‘–run-dir’ / ‘-o’. You may also want to use ‘–clean-output’.
- --scheduler <scheduler>¶
The scheduler invoked to queue jobs on the machine.
- Default:
'none'
- --no-schedule¶
Run the executable directly, without scheduling it.
- --submit-script-template <submit_script_template>¶
Path to a submit script. It will be copied to the ‘run_dir’. It can be a [Jinja template](https://jinja.palletsprojects.com/en/3.0.x/templates/) (see main help text for possible placeholders).
- Default:
'/__w/spectre/spectre/build/bin/python/spectre/support/SubmitTemplate.sh'
- -J, --job-name <job_name>¶
A short name for the job (see main help text for possible placeholders).
- Default:
'executable name'
- -j, -c, --num-procs <num_procs>¶
Number of worker threads. Mutually exclusive with ‘–num-nodes’ / ‘-N’.
- -N, --num-nodes <num_nodes>¶
Number of nodes
- --queue <queue>¶
Name of the queue.
- -t, --time-limit <time_limit>¶
Wall time limit. Must be compatible with the chosen queue.
- -p, --param <extra_params>¶
Forward an additional parameter to the input file and submit script templates. Can be specified multiple times. Each entry must be a ‘key=value’ pair, where the key is the parameter name. The value can be an int, float, string, a comma-separated list, an inclusive range like ‘0…3’, an exclusive range like ‘0..3’ or ‘0..<3’, or an exponentiated value or range like ‘2**3’ or ‘10**4…6’. If a parameter is a list or range, multiple runs are scheduled recursively. You can also use the parameter in the ‘job_name’ and in the ‘run_dir’ or ‘segment_dir’, and when scheduling ranges of runs you probably should.
- --submit, --no-submit¶
Submit jobs automatically. If neither option is specified, a prompt will ask for confirmation before a job is submitted.
- --context-file-name <context_file_name>¶
Name of the context file that supports resubmissions.
- Default:
'SchedulerContext.yaml'
- --from-checkpoint <from_checkpoint>¶
Restart from this checkpoint.
- --from-last-checkpoint <from_last_checkpoint>¶
Restart from the last checkpoint in this directory.
Arguments
- INPUT_FILE_TEMPLATE¶
Required argument
simplify-traces¶
Process Charm++ Projections trace files
Process Charm++ Projections ‘.sts’ (not ‘.sum.sts’) files to make the entry method and Chare names easier to read in the GUI. Long template names are not rendered fully, making it impossible to figure out which Action and Chare were being measured. The standard entry methods like ‘invoke_iterable_action’ and ‘simple_action’ are simplified by default, but further textual and regular expression replacements can be specified in a JSON file.
The output of this command will be written to the ‘OUTPUT_FILE’ if specified, to stdout if unspecified, or edited in place if the ‘-i’ flag is specified. Note that you will need to replace Charm++’s .sts file with the output file and that the names must match.
spectre simplify-traces [OPTIONS] PROJECTIONS_FILE [OUTPUT_FILE]
Options
- -i, --in-place¶
Edit the ‘PROJECTIONS_FILE’ in place. A backup of the file is written to the ‘OUTPUT_FILE’ if specified.
- -r, --replacements-json-file <replacements_json_file>¶
A JSON file listing textual and regular expression replacements. The file must specify “BasicReplace” and “RegexReplace” dictionaries. Each dictionary has keys that are the name of the replacement (unused in any searches). For BasicReplace the value is a list of two-element lists; the first entry in each nested two-element list is the string to replace and the second is what to replace it with. An example entry is:
"Actions::MutateApply": [["Actions::MutateApply<", ""], [">()", "()"]]
where, if a line contains “Actions::MutateApply<”, both it and “>()” are replaced. The “RegexReplace” dictionary is structured similarly, but the entire regex match is replaced.
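A minimal sketch of a complete replacements file, assuming the ‘RegexReplace’ entries follow the same two-element structure (the regex entry shown is a placeholder), could be:

    {
      "BasicReplace": {
        "Actions::MutateApply": [["Actions::MutateApply<", ""], [">()", "()"]]
      },
      "RegexReplace": {
        "StripTemplateArgs": [["<[^<>]*>", ""]]
      }
    }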
Arguments
- PROJECTIONS_FILE¶
Required argument
- OUTPUT_FILE¶
Optional argument
status¶
Gives an overview of simulations running on this machine.
spectre status [OPTIONS]
Options
- -u, --uid, --user <user>¶
User name or user ID. See documentation for ‘sacct -u’ for details.
- Default:
'you'
- -a, --allusers¶
Show jobs for all users. See documentation for ‘sacct -a’ for details.
- -p, --show-paths¶
Show job working directory and input file paths.
- -U, --show-unidentified¶
Also show jobs that were not identified as SpECTRE executables.
- -D, --show-deleted¶
Also show jobs that ran in directories that are now deleted.
- -A, --show-all-segments¶
Show all segments instead of just the latest.
- -s, --state <state>¶
Show only jobs with this state, e.g. running (r) or completed (cd). See documentation for ‘sacct -s’ for details.
- -S, --starttime <starttime>¶
Show jobs eligible after this time, e.g. ‘now-2days’. See documentation for ‘sacct -S’ for details.
- Default:
'start of today'
- -w, --watch <refresh_rate>¶
On a new screen, refresh the job list every ‘refresh_rate’ seconds. Exit with Ctrl+C.
- --state-styles <state_styles>¶
Dictionary mapping sacct states to rich modifiers for how each state will be printed. Rather than always having to specify a dict on the command line, you can add this to the spectre config file.
An example for the config file would be
    status:
      state_styles:
        RUNNING: '[green]'
        COMPLETED: '[bold][red]'
See ‘spectre -h’ for its path.
- -c, --columns <columns>¶
List of columns that will be printed for all jobs (executable-specific columns will be added in addition to this list). This can also be specified in the config file with the name ‘columns’. Note that if the ‘–allusers’ option is not specified, then the “User” column will be omitted. Specify the columns as a comma-separated list: State,JobId,Nodes. If you want to have spaces in the list, wrap it in single quotes: ‘State, JobId, Nodes’.
- Default:
'State,End,User,JobID,JobName,Elapsed,Cores,Nodes'
- -e, --available-columns¶
Print a list of all available columns to use.
transform-volume-data¶
Transform volume data with Python functions
Run Python functions (kernels) over all volume data in the ‘H5FILES’ and write the output data back into the same files. You can use any Python function as a kernel that takes tensors as input arguments and returns a tensor (from ‘spectre.DataStructures.Tensor’). The function must be annotated with tensor types, like this:
    def shift_magnitude(
        shift: tnsr.I[DataVector, 3],
        spatial_metric: tnsr.ii[DataVector, 3],
    ) -> Scalar[DataVector]:
        # ...
Any pybind11 binding of a C++ function will also work, as long as it takes only supported types as arguments. Supported types are tensors, as well as structural information such as the mesh, coordinates, and Jacobians. See the ‘parse_kernel_arg’ function for all supported argument types, and ‘parse_kernel_output’ for all supported return types.
The kernels can be loaded from any available Python module. Specify them as ‘path.to.module:function_name’ or ‘path.to.module.class:function_name’. You can also load a Python file that defines kernels with the ‘–exec’ / ‘-e’ option and then just specify the kernel as ‘function_name’. Examples of useful kernels:
- Anything in ‘spectre.PointwiseFunctions’
- ‘spectre.Spectral.Mesh3D:extents’
- ‘spectre.NumericalAlgorithms.LinearOperators:relative_truncation_error’ and ‘absolute_truncation_error’
## Input and output dataset names
You will be prompted to specify dataset names for input and output tensors, unless you specify them with ‘–input-name/-i’. For example, if you specify ‘-i shift=Shift’ for the kernel function above, the code will read the datasets ‘Shift(_x,_y,_z)’ from the volume data and pass them to the kernel function for the ‘shift’ argument.
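Putting this together, a hypothetical invocation (file, subfile, and dataset names are placeholders) could look like:

    spectre transform-volume-data -d VolumeData -e my_kernels.py -k shift_magnitude \
        -i shift=Shift -i spatial_metric=SpatialMetric BbhVolume*.h5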
spectre transform-volume-data [OPTIONS] H5FILES...
Options
- -d, --subfile-name <subfile_name>¶
Name of volume data subfile within each h5 file.
- -k, --kernel <kernels>¶
Python function to apply to the volume data. Specify as ‘path.to.module:function_name’, where the module must be available to import. Alternatively, specify just ‘function_name’ if the function is defined in one of the ‘–exec’ / ‘-e’ files. Can be specified multiple times.
- -e, --exec <exec_files>¶
Python file to execute before loading kernels. Load kernels from this file with the ‘–kernel’ / ‘-k’ option. Can be specified multiple times.
- -i, --input-name <map_input_names>¶
Map of function argument name to dataset name in the volume data file. Specify key-value pair like ‘spatial_metric=SpatialMetric’. Can be specified multiple times. If unspecified, the argument name is transformed to CamelCase.
- --start-time <start_time>¶
The earliest time at which to start processing data. The start-time value is included.
- --stop-time <stop_time>¶
The time at which to stop processing data. The stop-time value is included.
- --stride <stride>¶
Process only every stride-th time step.
- --integrate¶
Compute the volume integral over the kernels instead of writing them back into the data files. Specify ‘–output’ / ‘-o’ to write the integrals to a file.
- -o, --output <output>¶
Output file for integrals. Either a ‘.txt’ file or a ‘.h5’ file. Also requires the ‘–output-subfile’ option if a ‘.h5’ file is used. Only used if the ‘–integrate’ flag is set.
- --output-subfile <output_subfile>¶
Subfile name in the ‘–output’ / ‘-o’ file, if it is an H5 file.
- -f, --force¶
Overwrite existing data.
Arguments
- H5FILES¶
Required argument(s)
validate¶
Check an input file for parse errors
spectre validate [OPTIONS] INPUT_FILE_PATH
Options
- -E, --executable <executable>¶
Name or path of the executable. If unspecified, the ‘Executable:’ in the input file metadata is used.
Arguments
- INPUT_FILE_PATH¶
Required argument