Thorn CarpetIOHDF5 provides HDF5-based output for the Carpet mesh refinement driver in Cactus. This document explains how to use CarpetIOHDF5 and contains a specification of the HDF5 file format, which was adapted from John Shalf's FlexIO library.
Having encountered various problems with the Carpet I/O thorn CarpetIOFlexIO and the underlying FlexIO library, Erik Schnetter decided to write this thorn, CarpetIOHDF5, which bypasses any intermediate binary I/O layer and writes directly in the HDF5 file format.
CarpetIOHDF5 provides output for the Carpet mesh refinement driver within the Cactus Code. Christian D. Ott added a file reader (analogous to Erik Schnetter's implementation in CarpetIOFlexIO) as well as checkpoint/recovery functionality to CarpetIOHDF5. Thomas Radke has taken over maintenance of this I/O thorn and is continuously working on fixing known bugs and improving the code's functionality and efficiency.
The CarpetIOHDF5 I/O method can output any type of CCTK grid variable (grid scalars, grid functions, and grid arrays of arbitrary dimension); data is written into separate files named "<varname>.h5". It implements both serial and full parallel I/O: data files can be written/read either by processor 0 only or by all processors. Such data files can be used for further postprocessing (e.g. visualisation with OpenDX or DataVault) or fed back into Cactus via the filereader capabilities of thorn IOUtil.
This document aims to give the user a first introduction to using CarpetIOHDF5. It also documents the HDF5 file layout used.
Parameters to control the CarpetIOHDF5 I/O method are:
IOHDF5::out_every (steerable)
How often to do periodic CarpetIOHDF5 output. If this parameter is set in the parameter file, it
will override the setting of the shared IO::out_every parameter. The output frequency can also
be set for individual variables using the out_every option in an option string appended to the
IOHDF5::out_vars parameter.
IOHDF5::out_dt (steerable)
Output in intervals of that much coordinate time (overrides IO::out_dt).
IOHDF5::out_criterion (steerable)
Criterion to select output intervals (overrides IO::out_criterion).
IOHDF5::out_vars (steerable)
The list of variables to output using the CarpetIOHDF5 I/O method. The variables must
be given by their fully qualified variable or group name. The special keyword all requests
CarpetIOHDF5 output for all variables. Multiple names must be separated by whitespaces.
Each group/variable name can have an option string attached in which you can specify a different output frequency for that individual variable, a set of individual refinement levels to be output, the compression level, or an individual output mode, e.g.
IOHDF5::out_vars = "wavetoy::phi{ out_every = 4 refinement_levels = { 1 2 } }"
Option strings currently supported by CarpetIOHDF5 are: out_every, out_unchunked, refinement_levels, and compression_level.
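For instance, the following setting (a sketch only, reusing the wavetoy variables from the example above) combines several of these options:

IOHDF5::out_vars = "wavetoy::phi{ out_every = 2 compression_level = 9 }
                    wavetoy::psi{ out_unchunked = 'yes' refinement_levels = { 0 1 } }"

Here wavetoy::phi would be written every 2 iterations with gzip compression level 9, while wavetoy::psi would be written unchunked and only for refinement levels 0 and 1.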
IOHDF5::out_dir
The directory in which to place the CarpetIOHDF5 output files. If the directory doesn't exist at startup, it will be created. If parallel output is enabled and the directory name contains the substring "%u", that substring is replaced by the processor ID; by this means each processor can have its own output directory.
If this parameter is set to an empty string, CarpetIOHDF5 output goes to the standard output directory as specified in IO::out_dir.
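For example, with parallel output enabled, a setting such as

IOHDF5::out_dir = "hdf5-data-proc%u"

(the directory name here is chosen purely for illustration) would make processor 3 write its output files into the directory hdf5-data-proc3.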
IOHDF5::compression_level (steerable)
Compression level to use for writing HDF5 datasets. Automatic gzip dataset compression can be enabled
by setting this integer parameter to values between 1 and 9 (inclusive), with increasing values requesting
higher compression rates (at the cost of additional runtime for outputting HDF5 data); a value of zero
(which is the default setting for this parameter) disables compression. The output compression level can
also be set for individual variables using the compression_level option in an option string appended to
the IOHDF5::out_vars parameter.
IO::out_single_precision (steerable)
Whether to output double-precision data in single precision.
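For example, adding

IO::out_single_precision = "yes"

to the parameter file makes the HDF5 output methods write CCTK_REAL data in single precision, at a correspondingly reduced file size for double-precision runs.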
Depending on the output mode parameter settings (IO::out_mode, IO::out_unchunked, IO::out_proc_every) of thorn IOUtil, thorn CarpetIOHDF5 will output distributed grid variables either
in serial from processor 0 into a single unchunked file
IO::out_mode = "onefile" IO::out_unchunked = "yes"
in serial from processor 0 into a single chunked file
IO::out_mode = "onefile" IO::out_unchunked = "no"
in parallel, that is, into separate chunked files (one per processor) containing the individual processors’ patches of the distributed grid variable
IO::out_mode = "proc"
Unchunked means that an entire Cactus grid array (gathered across all processors) is stored in a single HDF5 dataset whereas chunked means that all the processor-local patches of this array are stored as separate HDF5 datasets (called chunks). Consequently, for unchunked data all interprocessor ghostzones are excluded from the output. In contrast, for chunked data the interprocessor ghostzones are included in the output.
When visualising chunked datasets, they probably need to be recombined for a global view of the data. This recombination needs to be done within the visualisation tool (see also below); Cactus itself does not provide a recombiner utility program for CarpetIOHDF5's output files.
The default is to output distributed grid variables in parallel, each processor writing a file <varname>.file_<processor ID>.h5. The chunked/unchunked mode can also be set individually in a key/value option string (with the key out_unchunked and possible string values "true|false|yes|no") appended to a group/variable name in the out_vars parameter, e.g.
IOHDF5::out_vars = "wavetoy::phi{out_unchunked = 'true'} grid::coordinates"
will cause the variable phi to be output into a single unchunked file whereas other variables will still be output into separate chunked files (assuming the output mode is left to its default). Grid scalars and DISTRIB = CONST grid arrays are always output as unchunked data on processor 0 only.
Parallel output in a parallel simulation will ensure maximum I/O performance. Note that changing the output mode to serial I/O might only be necessary if the data analysis and visualisation tools cannot deal with chunked output files. Cactus itself, as well as many of the tools to visualise Carpet HDF5 data (see http://www.cactuscode.org/Visualization), can process both chunked and unchunked data. For instance, to visualise parallel output datafiles with DataVault, you would just send all the individual files to the DV server: hdf5todv phi.file_*.h5. In OpenDX the ImportCarpetIOHDF5 module can be given any filename from the set of parallel chunked files; the module will determine the total number of files in the set automatically and read them all.
Periodic output of grid variables is usually specified via I/O parameters in the parameter file and then automatically triggered by the flesh scheduler at each iteration step after analysis. If output should also be triggered at a different time, one can do that from within an application thorn by invoking one of the CCTK_OutputVar*() I/O routines provided by the flesh I/O API (see chapter B8.2 “IO” in the Cactus Users Guide). In this case, the application thorn routine which calls CCTK_OutputVar*() must be scheduled in level mode.
It should be noted here that, due to a restriction in the naming scheme of objects in an HDF5 data file, CarpetIOHDF5 can output a given grid variable at a given refinement level only once per timestep. Attempts by application thorns to trigger output of the same variable multiple times during an iteration will result in a runtime warning and have no further effect. If output for a variable is also required at intermediate timesteps, this can be achieved by calling CCTK_OutputVarAs*() with a different alias name; output for the same variable is then written into different HDF5 files based on the alias argument.
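As a rough sketch (the thorn name, variable, and alias below are made up for illustration; CCTK_OutputVarAs() is the flesh I/O API call mentioned above), such a routine could look like this:

#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"

/* Hypothetical routine, assumed to be scheduled in level mode
   (e.g. at CCTK_ANALYSIS), which triggers extra HDF5 output. */
void MyThorn_ExtraOutput(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;
  DECLARE_CCTK_PARAMETERS;

  /* Output wavetoy::phi once more during this iteration, under a different
     alias so that it ends up in a separate HDF5 file. */
  const int ierr = CCTK_OutputVarAs(cctkGH, "wavetoy::phi", "phi_intermediate");
  if (ierr < 0)
  {
    CCTK_VWarn(CCTK_WARN_ALERT, __LINE__, __FILE__, CCTK_THORNSTRING,
               "Failed to output wavetoy::phi (error code %d)", ierr);
  }
}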
Thorn CarpetIOHDF5 can also be used to create HDF5 checkpoint files and to recover from such files later on. In addition it can read HDF5 datafiles back in using the generic filereader interface described in the thorn documentation of IOUtil.
Checkpoint routines are scheduled at several timebins so that you can save the current state of your simulation after the initial data phase, during evolution, or at termination. Checkpointing for thorn CarpetIOHDF5 is enabled by setting the parameter IOHDF5::checkpoint = "yes".
A recovery routine is registered with thorn IOUtil in order to restart a new simulation from a given HDF5 checkpoint. The very same recovery mechanism is used to implement a filereader functionality to feed back data into Cactus.
Checkpointing and recovery are controlled by corresponding checkpoint/recovery parameters of thorn IOUtil (for a description of these parameters please refer to this thorn’s documentation).
This utility program extracts 1D lines and 2D slices from 3D HDF5 datasets produced by CarpetIOHDF5 and outputs them in CarpetIOASCII format (suitable to be further processed by gnuplot).
The hdf5toascii_slicer program is contained in the src/utils/ subdirectory of thorn CarpetIOHDF5. It is built with
make <configuration>-utils
where the executable ends up in the subdirectory exe/<configuration>/.
For details on how to use the hdf5toascii_slicer program, run it with no command-line options (or with the --help option).
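For example, assuming the utility was built into exe/<configuration>/ as described above,

./exe/<configuration>/hdf5toascii_slicer --help

prints its usage information.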
hdf5toascii_slicer can be used on either chunked or unchunked data:
If the HDF5 data is unchunked, then hdf5toascii_slicer will output unchunked ASCII data.
If the HDF5 data is chunked, then hdf5toascii_slicer will output chunked ASCII data reflecting whatever subset of the HDF5 files is provided. That is, for example, the command
hdf5toascii_slicer my_variable.file_2.h5 my_variable.file_4.h5
will output ASCII data only for those parts of the Carpet grid which lived on processors 2 and 4. It is probably more useful to use the command
hdf5toascii_slicer my_variable.file_*.h5
which will output ASCII data for the entire Carpet grid.
This utility program extracts selected datasets from any given HDF5 output file, which may be useful when only certain parts (e.g. a specific timestep) of large files are required (e.g. for copying to some other location for further processing).
The hdf5_extract program is contained in the src/utils/ subdirectory of thorn CactusPUGHIO/IOHDF5. It is built with
make <configuration>-utils
where the executable ends up in the subdirectory exe/<configuration>/.
# how often to output and where output files should go
IO::out_every = 2
IO::out_dir   = "wavetoy-data"

# request output for wavetoy::psi at every other iteration,
# for wavetoy::phi every 4th iteration on refinement levels 1 and 2
IOHDF5::out_vars = "wavetoy::phi{ out_every = 4 refinement_levels = { 1 2 } } wavetoy::psi"

# we want unchunked output
# (because the visualisation tool cannot deal with chunked data files)
IO::out_mode      = "onefile"
IO::out_unchunked = 1
# how often to output
IO::out_every = 2

# each processor writes to its own output directory
IOHDF5::out_dir = "wavetoy-data-proc%u"

# request output for wavetoy::psi at every other iteration,
# for wavetoy::phi every 4th iteration on refinement levels 1 and 2
IOHDF5::out_vars = "wavetoy::phi{ out_every = 4 refinement_levels = { 1 2 } } wavetoy::psi"

# we want parallel chunked output (note that this already is the default)
IO::out_mode = "proc"
# say how often we want to checkpoint, how many checkpoints should be kept,
# how the checkpoints should be named, and where they should be written to
IO::checkpoint_every = 100
IO::checkpoint_keep  = 2
IO::checkpoint_file  = "wavetoy"
IO::checkpoint_dir   = "wavetoy-checkpoints"

# enable checkpointing for CarpetIOHDF5
IOHDF5::checkpoint = "yes"

#######################################################

# recover from the latest checkpoint found
IO::recover_file = "wavetoy"
IO::recover_dir  = "wavetoy-checkpoints"
IO::recover      = "auto"
# which data files to import and where to find them
IO::filereader_ID_files = "phi psi"
IO::filereader_ID_dir   = "wavetoy-data"

# what variables and which timestep to read
# (if this parameter is left empty, all variables and timesteps found
#  in the data files will be read)
IO::filereader_ID_vars  = "WaveToyMoL::phi{ cctk_iteration = 0 } WaveToyMoL::psi"
checkpoint (private BOOLEAN, default: no)
    Do checkpointing with CarpetIOHDF5?

checkpoint_every_divisor (private INT, default: -1)
    Checkpoint if (iteration % out_every) == 0
    Range:  1:*   Every so many iterations
            -1:0  Disable periodic checkpointing

checkpoint_next (private BOOLEAN, default: no)
    Checkpoint at next iteration?

compression_level (private INT, default: (none))
    Compression level to use for writing HDF5 data
    Range:  0:9   Higher numbers compress better, a value of zero disables compression

minimum_size_for_compression (private INT, default: 32768)
    Only compress datasets larger than this many bytes
    Range:  0:*   This should be large enough so that compression gains outweigh the overhead

one_file_per_group (private BOOLEAN, default: no)
    Write one file per group instead of per variable

one_file_per_proc (private BOOLEAN, default: no)
    Write one file per process instead of per variable

open_one_input_file_at_a_time (private BOOLEAN, default: no)
    Open only one HDF5 file at a time when reading data from multiple chunked checkpoint/data files
    Range:  no    Open all input files first, then import data (most efficient)
            yes   Process input files one after another (reduces memory requirements)
out0d_criterion (private KEYWORD, default: default)
    Criterion to select 0D HDF5 slice output intervals, overrides out_every
    Range:  default    Use IO::out_criterion
            never      Never output
            iteration  Output every so many iterations
            divisor    Output if iteration mod divisor == 0
            time       Output every that much coordinate time

out0d_dir (private STRING, default: (none))
    Name of 0D HDF5 slice output directory, overrides IO::out_dir
    Range:  ""    Empty: use IO::out_dir
            .+    Not empty: directory name

out0d_dt (private REAL, default: -2)
    How often to do 0D HDF5 slice output, overrides IO::out_dt
    Range:  (0:*  In intervals of that much coordinate time
            0     As often as possible
            -1    Disable output
            -2    Default to IO::out_dt

out0d_every (private INT, default: -2)
    How often to do 0D HDF5 slice output, overrides out_every
    Range:  1:*   Output every so many time steps
            -1:0  No output
            -2    Use IO::out_every

out0d_point_x (private REAL, default: (none))
    x coordinate for 0D points
    Range:  *:*

out0d_point_xi (private INT, default: (none))
    x-index (counting from 0) for 0D points
    Range:  0:*

out0d_point_y (private REAL, default: (none))
    y coordinate for 0D points
    Range:  *:*

out0d_point_yi (private INT, default: (none))
    y-index (counting from 0) for 0D points
    Range:  0:*

out0d_point_z (private REAL, default: (none))
    z coordinate for 0D points
    Range:  *:*

out0d_point_zi (private INT, default: (none))
    z-index (counting from 0) for 0D points
    Range:  0:*

out0d_vars (private STRING, default: (none))
    Variables to output in 0D HDF5 file format
    Range:  List of group or variable names
out1d_criterion (private KEYWORD, default: default)
    Criterion to select 1D HDF5 slice output intervals, overrides out_every
    Range:  default    Use IO::out_criterion
            never      Never output
            iteration  Output every so many iterations
            divisor    Output if (iteration % out_every) == 0
            time       Output every that much coordinate time

out1d_d (private BOOLEAN, default: yes)
    Do output along the diagonal

out1d_dir (private STRING, default: (none))
    Name of 1D HDF5 slice output directory, overrides IO::out_dir
    Range:  ""    Empty: use IO::out_dir
            .+    Not empty: directory name

out1d_dt (private REAL, default: -2)
    How often to do 1D HDF5 slice output, overrides IO::out_dt
    Range:  (0:*  In intervals of that much coordinate time
            0     As often as possible
            -1    Disable output
            -2    Default to IO::out_dt

out1d_every (private INT, default: -2)
    How often to do 1D HDF5 slice output, overrides out_every
    Range:  1:*   Output every so many time steps
            -1:0  No output
            -2    Use IO::out_every

out1d_vars (private STRING, default: (none))
    Variables to output in 1D HDF5 file format
    Range:  List of group or variable names

out1d_x (private BOOLEAN, default: yes)
    Do 1D HDF5 slice output in the x-direction

out1d_xline_y (private REAL, default: (none))
    y coordinate for 1D lines in x-direction
    Range:  *:*

out1d_xline_yi (private INT, default: (none))
    y-index (counting from 0) for 1D lines in x-direction
    Range:  0:*

out1d_xline_z (private REAL, default: (none))
    z coordinate for 1D lines in x-direction
    Range:  *:*

out1d_xline_zi (private INT, default: (none))
    z-index (counting from 0) for 1D lines in x-direction
    Range:  0:*

out1d_y (private BOOLEAN, default: yes)
    Do 1D HDF5 slice output in the y-direction

out1d_yline_x (private REAL, default: (none))
    x coordinate for 1D lines in y-direction
    Range:  *:*

out1d_yline_xi (private INT, default: (none))
    x-index (counting from 0) for 1D lines in y-direction
    Range:  0:*

out1d_yline_z (private REAL, default: (none))
    z coordinate for 1D lines in y-direction
    Range:  *:*

out1d_yline_zi (private INT, default: (none))
    z-index (counting from 0) for 1D lines in y-direction
    Range:  0:*

out1d_z (private BOOLEAN, default: yes)
    Do 1D HDF5 slice output in the z-direction

out1d_zline_x (private REAL, default: (none))
    x coordinate for 1D lines in z-direction
    Range:  *:*

out1d_zline_xi (private INT, default: (none))
    x-index (counting from 0) for 1D lines in z-direction
    Range:  0:*

out1d_zline_y (private REAL, default: (none))
    y coordinate for 1D lines in z-direction
    Range:  *:*

out1d_zline_yi (private INT, default: (none))
    y-index (counting from 0) for 1D lines in z-direction
    Range:  0:*
out2d_criterion (private KEYWORD, default: default)
    Criterion to select 2D HDF5 slice output intervals, overrides out_every
    Range:  default    Use IO::out_criterion
            never      Never output
            iteration  Output every so many iterations
            divisor    Output if (iteration % out_every) == 0
            time       Output every that much coordinate time

out2d_dir (private STRING, default: (none))
    Name of 2D HDF5 slice output directory, overrides IO::out_dir
    Range:  ""    Empty: use IO::out_dir
            .+    Not empty: directory name

out2d_dt (private REAL, default: -2)
    How often to do 2D HDF5 slice output, overrides IO::out_dt
    Range:  (0:*  In intervals of that much coordinate time
            0     As often as possible
            -1    Disable output
            -2    Default to IO::out_dt

out2d_every (private INT, default: -2)
    How often to do 2D HDF5 slice output, overrides out_every
    Range:  1:*   Output every so many time steps
            -1:0  No output
            -2    Use IO::out_every

out2d_vars (private STRING, default: (none))
    Variables to output in 2D HDF5 file format
    Range:  List of group or variable names

out2d_xy (private BOOLEAN, default: yes)
    Do 2D HDF5 slice output in the xy-direction

out2d_xyplane_z (private REAL, default: (none))
    z coordinate for 2D planes in xy-direction
    Range:  *:*

out2d_xyplane_zi (private INT, default: (none))
    z-index (counting from 0) for 2D planes in xy-direction
    Range:  0:*

out2d_xz (private BOOLEAN, default: yes)
    Do 2D HDF5 slice output in the xz-direction

out2d_xzplane_y (private REAL, default: (none))
    y coordinate for 2D planes in xz-direction
    Range:  *:*

out2d_xzplane_yi (private INT, default: (none))
    y-index (counting from 0) for 2D planes in xz-direction
    Range:  0:*

out2d_yz (private BOOLEAN, default: yes)
    Do 2D HDF5 slice output in the yz-direction

out2d_yzplane_x (private REAL, default: (none))
    x coordinate for 2D planes in yz-direction
    Range:  *:*

out2d_yzplane_xi (private INT, default: (none))
    x-index (counting from 0) for 2D planes in yz-direction
    Range:  0:*
out3d_criterion (private KEYWORD, default: default)
    Criterion to select 3D HDF5 slice output intervals, overrides out_every
    Range:  default    Use IO::out_criterion
            never      Never output
            iteration  Output every so many iterations
            divisor    Output if (iteration % out_every) == 0
            time       Output every that much coordinate time

out3d_dir (private STRING, default: (none))
    Name of 3D HDF5 slice output directory, overrides IO::out_dir
    Range:  ""    Empty: use IO::out_dir
            .+    Not empty: directory name

out3d_dt (private REAL, default: -2)
    How often to do 3D HDF5 slice output, overrides IO::out_dt
    Range:  (0:*  In intervals of that much coordinate time
            0     As often as possible
            -1    Disable output
            -2    Default to IO::out_dt

out3d_every (private INT, default: -2)
    How often to do 3D HDF5 slice output, overrides out_every
    Range:  1:*   Output every so many time steps
            -1:0  No output
            -2    Use IO::out_every

out3d_ghosts (private BOOLEAN, default: yes)
    Output ghost zones (DEPRECATED)

out3d_outer_ghosts (private BOOLEAN, default: yes)
    Output outer boundary zones (assuming that there are nghostzones boundary points) (DEPRECATED)

out3d_vars (private STRING, default: (none))
    Variables to output in 3D HDF5 file format
    Range:  List of group or variable names
out_criterion (private KEYWORD, default: default)
    Criterion to select CarpetIOHDF5 output intervals, overrides IO::out_criterion
    Range:  default    Use IO::out_criterion
            never      Never output
            iteration  Output every so many iterations
            divisor    Output if (iteration % out_every) == 0
            time       Output every that much coordinate time

out_dir (private STRING, default: (none))
    Name of CarpetIOHDF5 output directory, overrides IO::out_dir
    Range:  ""    Empty: use IO::out_dir
            .+    Not empty: directory name

out_dt (private REAL, default: -2)
    How often to do CarpetIOHDF5 output, overrides IO::out_dt
    Range:  (0:*  In intervals of that much coordinate time
            0     As often as possible
            -1    Disable output
            -2    Default to IO::out_dt

out_every (private INT, default: -2)
    How often to do CarpetIOHDF5 output, overrides IO::out_every
    Range:  1:*   Output every so many time steps
            -1:0  No output
            -2    Use IO::out_every

out_extension (private STRING, default: .h5)
    File extension to use for CarpetIOHDF5 output
    Range:  File extension (including a leading dot, if desired)

out_vars (private STRING, default: (none))
    Variables to output in CarpetIOHDF5 file format
    Range:  List of group or variable names
output_all_timelevels (private BOOLEAN, default: no)
    Output all timelevels instead of only the current one

output_boundary_points (private BOOLEAN, default: yes)
    Output outer boundary points (assuming that there are nghostzones boundary points)

output_buffer_points (private BOOLEAN, default: yes)
    Output refinement buffer points

output_ghost_points (private BOOLEAN, default: yes)
    Output ghost points

output_index (private BOOLEAN, default: no)
    Output an index file for each output file

output_symmetry_points (private BOOLEAN, default: yes)
    Output symmetry points (assuming that there are nghostzones symmetry points)

skip_recover_variables (private STRING, default: (none))
    Skip these variables while recovering

use_checksums (private BOOLEAN, default: no)
    Use checksums for the HDF5 data

use_grid_structure_from_checkpoint (private BOOLEAN, default: yes)
    Use the grid structure stored in the checkpoint file

use_reflevels_from_checkpoint (private BOOLEAN, default: no)
    Use 'CarpetRegrid::refinement_levels' from the checkpoint file rather than from the parameter file?
abort_on_io_errors | Scope: shared from IO | BOOLEAN |
checkpoint_dir | Scope: shared from IO | STRING |
checkpoint_every | Scope: shared from IO | INT |
checkpoint_every_walltime_hours | Scope: shared from IO | REAL |
checkpoint_file | Scope: shared from IO | STRING |
checkpoint_id | Scope: shared from IO | BOOLEAN |
checkpoint_id_file | Scope: shared from IO | STRING |
checkpoint_keep | Scope: shared from IO | INT |
checkpoint_on_terminate | Scope: shared from IO | BOOLEAN |
filereader_id_dir | Scope: shared from IO | STRING |
io_out_criterion | Scope: shared from IO | KEYWORD |
io_out_dir | Scope: shared from IO | STRING |
io_out_dt | Scope: shared from IO | REAL |
io_out_every | Scope: shared from IO | INT |
io_out_unchunked | Scope: shared from IO | BOOLEAN |
out_group_separator | Scope: shared from IO | STRING |
out_mode | Scope: shared from IO | KEYWORD |
out_save_parameters | Scope: shared from IO | KEYWORD |
out_single_precision | Scope: shared from IO | BOOLEAN |
out_timesteps_per_file | Scope: shared from IO | INT |
out_xline_y | Scope: shared from IO | REAL |
out_xline_yi | Scope: shared from IO | INT |
out_xline_z | Scope: shared from IO | REAL |
out_xline_zi | Scope: shared from IO | INT |
out_xyplane_z | Scope: shared from IO | REAL |
out_xyplane_zi | Scope: shared from IO | INT |
out_xzplane_y | Scope: shared from IO | REAL |
out_xzplane_yi | Scope: shared from IO | INT |
out_yline_x | Scope: shared from IO | REAL |
out_yline_xi | Scope: shared from IO | INT |
out_yline_z | Scope: shared from IO | REAL |
out_yline_zi | Scope: shared from IO | INT |
out_yzplane_x | Scope: shared from IO | REAL |
out_yzplane_xi | Scope: shared from IO | INT |
out_zline_x | Scope: shared from IO | REAL |
out_zline_xi | Scope: shared from IO | INT |
out_zline_y | Scope: shared from IO | REAL |
out_zline_yi | Scope: shared from IO | INT |
recover | Scope: shared from IO | KEYWORD |
recover_and_remove | Scope: shared from IO | BOOLEAN |
recover_dir | Scope: shared from IO | STRING |
recover_file | Scope: shared from IO | STRING |
strict_io_parameter_check | Scope: shared from IO | BOOLEAN |
verbose | Scope: shared from IO | KEYWORD |
Implements:
iohdf5
next_output_iteration (variable: next_output_iteration)
    compact 0, dimensions 0, distribution CONSTANT, group type SCALAR, timelevels 1, variable type INT

next_output_time (variable: next_output_time)
    compact 0, dimensions 0, distribution CONSTANT, group type SCALAR, timelevels 1, variable type REAL

this_iteration (variable: this_iteration)
    compact 0, dimensions 0, distribution CONSTANT, group type SCALAR, timelevels 1, variable type INT

last_output_iteration_slice (variable: last_output_iteration_slice)
    compact 0, dimensions 0, distribution CONSTANT, group type SCALAR, timelevels 1, vararray_size 4, variable type INT

last_output_time_slice (variable: last_output_time_slice)
    compact 0, dimensions 0, distribution CONSTANT, group type SCALAR, timelevels 1, vararray_size 4, variable type REAL

this_iteration_slice (variable: this_iteration_slice)
    compact 0, dimensions 0, distribution CONSTANT, group type SCALAR, timelevels 1, vararray_size 4, variable type INT
Adds header:
CarpetIOHDF5.hh
Uses header:
Timer.hh
carpet.hh
defs.hh
bbox.hh
vect.hh
data.hh
gdata.hh
ggf.hh
gh.hh
typecase.hh
typeprops.hh
mpi_string.hh
Provides:
IO_SetCheckpointGroups
This section lists all the variables which are assigned storage by thorn Carpet/CarpetIOHDF5. Storage can either last for the duration of the run (Always means that if this thorn is activated storage will be assigned, Conditional means that if this thorn is activated storage will be assigned for the duration of the run if some condition is met), or can be turned on for the duration of a schedule function.
Always:
    next_output_iteration, next_output_time, this_iteration,
    last_output_iteration_slice, last_output_time_slice, this_iteration_slice
CCTK_STARTUP
  carpetiohdf5_startup: startup routine
    After:    ioutil_startup
    Language: c
    Type:     function

CCTK_INITIAL
  carpetiohdf5_init: initialisation routine
    Language: c
    Options:  global
    Type:     function
    Writes:   carpetiohdf5::this_iteration_slice, last_output_iteration_slice,
              last_output_time_slice, carpetiohdf5::this_iteration,
              next_output_iteration, next_output_time

CCTK_POST_RECOVER_VARIABLES
  carpetiohdf5_initcheckpointingintervals: initialisation of checkpointing intervals after recovery
    Language: c
    Options:  global
    Type:     function

CCTK_CPINITIAL
  carpetiohdf5_initialdatacheckpoint: initial data checkpoint routine
    Language: c
    Options:  meta
    Type:     function

CCTK_CHECKPOINT
  carpetiohdf5_evolutioncheckpoint: evolution checkpoint routine
    Language: c
    Options:  meta
    Type:     function

CCTK_TERMINATE
  carpetiohdf5_terminationcheckpoint: termination checkpoint routine
    Language: c
    Options:  meta
    Type:     function

CCTK_POSTINITIAL
  carpetiohdf5_closefiles: close all filereader input files
    Language: c
    Options:  global
    Type:     function

CCTK_RECOVER_PARAMETERS (conditional)
  carpetiohdf5_recoverparameters: parameter recovery routine
    Language: c
    Options:  meta
    Type:     function

CCTK_STARTUP (conditional)
  carpetiohdf5_setnumrefinementlevels: overwrite 'carpetregrid::refinement_levels' with the number of levels found in the checkpoint file
    Before:   carpetiohdf5_startup
    Language: c
    Options:  meta
    Type:     function

CCTK_POST_RECOVER_VARIABLES (conditional)
  carpetiohdf5_closefiles: close all initial data checkpoint files after recovery
    Language: c
    Options:  meta
    Type:     function