Thomas Radke <>


Thorn CarpetIOStreamedHDF5 provides an I/O method to stream Cactus grid variables in the HDF5 file format via a socket to any connected client. In combination with client programs that can receive and postprocess streamed HDF5 data, this thorn can be used for online remote visualisation of live data from running Carpet FMR/AMR simulations.

1 Introduction

Thorn CarpetIOStreamedHDF5 uses the standard I/O library HDF5 (Hierarchical Data Format version 5) to output any type of Cactus grid variable (grid scalars, grid functions, and grid arrays of arbitrary dimension) in the HDF5 file format.

Streamed output is enabled by activating thorn CarpetIOStreamedHDF5 in your parameter file. At simulation startup it registers its own I/O method with the flesh's I/O interface and opens a server port on the root processor (processor ID 0) to which clients can connect. Like any Cactus I/O method, it then checks after each iteration whether output should be done for the grid variables chosen in the parameter file.

Data is streamed as a serialized HDF5 file to all clients connected to the server port at the time of output. If multiple variables are to be output at the same time, they are all sent in a single streamed HDF5 file.
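A minimal receiving client can be sketched as follows, assuming (as described above) that the server sends one complete serialized HDF5 file per connection and then closes the socket. The host name, port, and output file name are placeholders, not part of the thorn's interface:

```python
import socket

HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"  # first 8 bytes of every HDF5 file


def looks_like_hdf5(data):
    """Check that a received byte stream starts with the HDF5 signature."""
    return data[:8] == HDF5_SIGNATURE


def receive_timestep(host, port, filename):
    """Connect to the streaming service and save one serialized HDF5 file."""
    chunks = []
    with socket.create_connection((host, port)) as sock:
        while True:
            chunk = sock.recv(65536)
            if not chunk:  # server closes the socket after sending one file
                break
            chunks.append(chunk)
    data = b"".join(chunks)
    if not looks_like_hdf5(data):
        raise ValueError("received data does not look like an HDF5 file")
    with open(filename, "wb") as f:
        f.write(data)
    return len(data)


# Example (placeholder names): receive_timestep("ic0010", 10000, "timestep.h5")
```

The saved file can then be opened with any HDF5-capable tool, exactly as if it had been written to disk by the simulation itself.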

Note that, due to the way data streaming is implemented in the HDF5 library, streaming many variables (or a single variable with many refinement levels and/or a large global grid size) can be costly in terms of memory: the resulting HDF5 file must be kept entirely in main memory before it is sent to a client. You should therefore enable streamed HDF5 output only for those variables and refinement levels you are currently interested in visualising; the corresponding I/O parameter is steerable, so you can select other variables and/or levels at any time.

2 CarpetIOStreamedHDF5 Parameters

Parameters to control the CarpetIOStreamedHDF5 I/O method are:
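The full parameter table is generated from the thorn's param.ccl file. As an illustration only, a parameter-file fragment enabling streamed output might look as follows; the parameter names below are assumptions for illustration and should be checked against the actual param.ccl (the ActiveThorns syntax itself is standard Cactus):

```
# Illustrative fragment only -- the IOStreamedHDF5 parameter names
# are assumptions; consult this thorn's param.ccl for the real list.
ActiveThorns = "CarpetIOStreamedHDF5 ..."

# hypothetical: stream the selected variables every 4 iterations
IOStreamedHDF5::out_every = 4
IOStreamedHDF5::out_vars  = "wavetoy::phi"
```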

3 A Practical Session of Remote Online Visualisation

The following steps illustrate what a practical session visualising live data from a Carpet simulation with the DataVault (DV) visualisation tool might look like. It is assumed that Cactus runs as a PBS job on a parallel cluster, with the compute nodes behind a cluster firewall (only the cluster head node can be accessed directly from outside).

  1. Your job has been submitted to PBS and shortly after begins its execution.
    Let’s assume that you want to run the HDF5 data streaming demo parameter file CarpetIOStreamedHDF5.par contained in the par/ subdirectory of thorn CarpetIOStreamedHDF5.
  2. Grep the stdout of your job’s startup messages for a line containing ’CarpetIOStreamedHDF5’ and ’data streaming service started on’. This line tells you the hostname and port number to connect to (e.g. ic0010:10000).
  3. On the head node, set the DVHOST shell environment variable to the hostname of the machine which runs the DV server (ideally your laptop or local workstation).
    Then run the hdf5todv client with the URL of your Cactus simulation’s data streaming service, e.g. ’hdf5todv ic0010:10000’. This receives one timestep of the chosen CarpetIOStreamedHDF5 variables from the simulation and sends it on to DV, where the new timestep appears as a new register.
  4. Repeat the previous step as often as you like to obtain a sequence of timesteps which you can then animate.
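The fetch-and-repeat loop of steps 3 and 4 can be automated with a small wrapper. The hdf5todv invocation is taken from the session above; the polling interval, host names, and fetch count are placeholders:

```python
import os
import subprocess
import time


def build_command(stream_url):
    """Command line for fetching one timestep, as in step 3 above."""
    return ["hdf5todv", stream_url]


def poll_timesteps(stream_url, dv_host, interval=30, count=10):
    """Fetch `count` timesteps, one every `interval` seconds.

    Assumes hdf5todv is in $PATH and that it forwards each received
    HDF5 file to the DV server named by the DVHOST environment variable.
    """
    os.environ["DVHOST"] = dv_host
    for _ in range(count):
        subprocess.run(build_command(stream_url), check=True)
        time.sleep(interval)


# Example (placeholder names):
# poll_timesteps("ic0010:10000", "mylaptop.example.org")
```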

4 Further Information

More information on HDF5 can be found on the HDF5 home page.

The list of tools for visualising Cactus and Carpet output data can be found on the Cactus VisTools page.

The OpenDXutils package, which provides the ImportCarpetHDF5 import module to read Carpet HDF5 data, also contains an OpenDX network (in its net/ subdirectory) to visualise live streamed HDF5 data produced by a Carpet simulation running the CarpetIOStreamedHDF5.par parameter file. The package is publicly available under the GPL via anonymous CVS: