Stores simulation data such as fields and particles, along with domain information, conversion units, etc., as HDF5 files [Huebl2017]. It uses libSplash for writing HDF5 data. It is used for post-simulation analysis and for restarts of the simulation after a crash or an intended stop.
What is the format of the created HDF5 files?
We write our fields and particles in an open markup called openPMD. You can investigate your files via a large collection of tools and frameworks or use the native HDF5 bindings of your favorite programming language.
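As a quick illustration of the second route, the sketch below uses the Python h5py bindings to open a file and list its contents. The file name `simData_128.h5` and the minimal openPMD-like group layout are assumptions made only so the example is self-contained; a tiny stand-in file is created first so the snippet runs without an actual PIConGPU output at hand.

```python
# Sketch only: inspecting an HDF5 output file with the Python h5py bindings.
# The file name "simData_128.h5" and the openPMD-like layout below are
# assumptions for illustration, not the exact layout PIConGPU produces.
import h5py

# Create a small stand-in file (a real file would come from the simulation).
with h5py.File("simData_128.h5", "w") as f:
    f.attrs["openPMD"] = "1.0.0"
    f.create_group("data/128/fields")
    f.create_group("data/128/particles/e")

# Open the file read-only and walk its hierarchy.
with h5py.File("simData_128.h5", "r") as f:
    print(f.attrs["openPMD"])      # standard version announced by the file
    print(sorted(f["data/128"]))   # record groups at iteration 128
```

Any other HDF5 toolchain (h5ls/h5dump, or the C/Fortran bindings) works the same way on these files.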
The plugin is available as soon as the libSplash and HDF5 libraries are compiled in.
The corresponding .param file is fileOutput.param.
One can e.g. disable the output of particles by setting:
/* output all species */
using FileOutputParticles = VectorAllSpecies;

/* disable */
using FileOutputParticles = MakeSeq_t< >;
You can use --hdf5.period and --hdf5.file to specify the output period and the path and name of the created fileset. For example,
--hdf5.period 128 --hdf5.file simData --hdf5.source 'species_all' will write only the particle species data to files of the form
simData_128.h5 in the default simulation output directory every 128 steps.
Note that this plugin will only be available if libSplash and HDF5 are found during compile configuration.
PIConGPU command line options:

--hdf5.period
    Period after which simulation data should be stored on disk.
--hdf5.file
    Relative or absolute fileset prefix for simulation data. If relative, files are stored under the default simulation output directory.
--hdf5.source
    Select data sources to dump. The default is to dump all fields and species data.
This plugin is a multi plugin: its command line parameters can be used multiple times, e.g. to create dumps with different dumping periods. If an optional parameter with a default value is defined explicitly, that value will always be passed to the instances of the multi plugin where the parameter is not set. For example,
--hdf5.period 128 --hdf5.file simData1 --hdf5.period 1000 --hdf5.file simData2 --hdf5.source 'species_all'
creates two plugins:

1. dump all species data every 128th time step.
2. dump all fields and species data (this is the default) every 1000th time step.
Memory complexity: no extra allocations on the accelerator; on the host, during I/O, each complete particle species is allocated one after another.
A. Huebl, R. Widera, F. Schmitt, A. Matthes, N. Podhorszki, J.Y. Choi, S. Klasky, and M. Bussmann. On the Scalability of Data Reduction Techniques in Current and Upcoming HPC Systems from an Application Perspective. ISC High Performance Workshops 2017, LNCS 10524, pp. 15-29 (2017), arXiv:1706.00522, DOI:10.1007/978-3-319-67630-2_2