Section author: Axel Huebl
First, decide where to store input files; a good place might be $HOME (~) because it is usually backed up.
Second, decide where to store your simulation output, which needs to be placed on a high-bandwidth, large-storage file system; we will refer to it as $SCRATCH.
For a first test you can also use your home directory:
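# for a first test only: use your home directory as the output location
export SCRATCH=$HOME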
We need a few directories to structure our workflow:
# PIConGPU input files
mkdir $HOME/picInputs

# PIConGPU simulation output
mkdir $SCRATCH/runs
1. Create an Input (Parameter) Set
# clone the LWFA example to $HOME/picInputs/myLWFA
pic-create $PIC_EXAMPLES/LaserWakefield $HOME/picInputs/myLWFA

# switch to your input directory
cd $HOME/picInputs/myLWFA
PIConGPU is controlled via two kinds of textual input sets: compile-time options and runtime options.
Compile-time .param files reside in include/picongpu/param/ and define the physics case and deployed numerics.
After creation and whenever options are changed, PIConGPU requires a re-compile.
Feel free to take a look now; we will come back to how to edit those files later.
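For a quick look, you can list the compile-time files of the set you just created (path as given above):

# list the compile-time input of this set
ls include/picongpu/param/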
Runtime (command line) arguments are set in etc/picongpu/*.cfg files. These options do not require a re-compile when changed (e.g. simulation size, number of devices, plugins, …).
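For orientation, a .cfg file is a bash-syntax file of TBG_-prefixed variables that tbg substitutes into a template. A minimal sketch; the variable names follow the usual TBG_ convention, but the values here are illustrative rather than the exact contents of the LWFA example:

# sketch of a runtime configuration (etc/picongpu/1.cfg style, illustrative values)
TBG_devices_x=1            # devices (GPUs/CPUs) in x
TBG_devices_y=1            # devices in y
TBG_devices_z=1            # devices in z
TBG_gridSize="128 256 128" # global grid size in cells
TBG_steps="2000"           # number of simulation time steps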
2. Compile Simulation
In our input set, .param files are built directly into the PIConGPU binary for performance reasons.
A compile is required after changing or initially adding those files.
In this step you can optimize the simulation for the specific hardware you want to run on. By default, we compile for Nvidia GPUs with the CUDA backend, targeting the oldest compatible architecture.
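To compile, call pic-build from inside the input directory (per its help text below, the command must be run inside an input set):

pic-build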
This step will take a few minutes. Time for a coffee or a sword fight!
We explain in the details section below how to set further options, e.g. CPU targets or tuning for newer GPU architectures.
3. Run Simulation
While you are still in $HOME/picInputs/myLWFA, start your simulation on one CUDA capable GPU:
# example run for an interactive simulation on the same machine
tbg -s bash -c etc/picongpu/1.cfg -t etc/picongpu/bash/mpiexec.tpl $SCRATCH/runs/lwfa_001
This will create the directory $SCRATCH/runs/lwfa_001, to which all simulation output will be written.
tbg will further create a subfolder input/ in the directory of the run with the same structure as myLWFA to archive your input files.
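After the run starts you can take a quick look at the resulting directory layout (a sketch assuming the default templates; the exact subfolders depend on the chosen .tpl file):

ls $SCRATCH/runs/lwfa_001
# input/      archived copy of your input set
# simOutput/  data and log files written by the simulation
# tbg/        the generated submit/start script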
Details on the Commands Above
tbg

The tbg tool is explained in detail in its own section.
Its primary purpose is to abstract the options in runtime .cfg files from the technical details on how to run on various supercomputers.
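Schematically, every tbg call in this section follows the same shape (generalized from the examples here; the bracketed names are placeholders, not literal arguments):

# general pattern of a tbg call
tbg -s [submitCommand] -c [runtimeOptions.cfg] -t [template.tpl] [destinationDir]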
For example, if you want to run on the HPC System “Hypnos” at HZDR, your tbg submit command would just change to:
# request 1 GPU from the PBS batch system and run on the queue "k20"
tbg -s qsub -c etc/picongpu/1.cfg -t etc/picongpu/hypnos-hzdr/k20.tpl $SCRATCH/runs/lwfa_002

# run again, this time on 16 GPUs
tbg -s qsub -c etc/picongpu/16.cfg -t etc/picongpu/hypnos-hzdr/k20.tpl $SCRATCH/runs/lwfa_003
Note that we can use the same 1.cfg file; your input set is portable.
pic-create

This tool is just a short-hand to create a new set of input files. It copies from an already existing set of input files (e.g. our examples or a previous simulation) and adds additional helper files.
See pic-create --help for more options during input set creation:
pic-create
  create a new parameter set for simulation input
  merge default picongpu parameters and a given example's input

usage: pic-create [OPTION] [src_dir] dest_dir
  If no src_dir is set, a default case is cloned

-f | --force - merge data if destination already exists
-h | --help  - show this help message

Dependencies: rsync
An already run simulation can also be reused to create derived input sets via:

pic-create $SCRATCH/runs/lwfa_001/input $HOME/picInputs/mySecondLWFA
pic-build

This tool is actually a short-hand for an out-of-source build with CMake.
In detail, it does:
# go to an empty build directory
mkdir -p .build
cd .build

# configure with CMake
pic-configure $OPTIONS ..

# compile PIConGPU with the current input set (e.g. myLWFA)
# - "make -j install" runs implicitly "make -j" and then "make install"
# - make install copies resulting binaries to input set
make -j install
pic-build accepts the same command line flags as pic-configure.
For example, if you want to build for running on CPUs instead of GPUs, call:
# example for building for CPUs (OpenMP 2.0+ backend)
pic-build -b "omp2b"
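According to the help text below, the backend can also be set once via the PIC_BACKEND environment variable instead of passing -b on every call:

# set the default backend for this shell session
export PIC_BACKEND="omp2b:native"
pic-build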
Its full documentation from pic-build --help reads:
Build new binaries for a PIConGPU input set

Creates or updates the binaries in an input set. This step needs
to be performed every time a .param file is changed.

This tool creates a temporary build directory, configures and
compiles the current input set in it and installs the resulting
binaries. This is just a short-hand tool for switching to a
temporary build directory and running 'pic-configure ..' and
'make install' manually.

You must run this command inside an input directory.

usage: pic-build [OPTIONS]

-b | --backend    - set compute backend and optionally the architecture
                    syntax: backend[:architecture]
                    supported backends: cuda, omp2b, serial, tbb
                    (e.g.: "cuda:20;35;37;52;60" or "omp2b:native" or "omp2b")
                    default: "cuda" if not set via environment variable PIC_BACKEND
                    note: architecture names are compiler dependent
-c | --cmake      - overwrite options for cmake
                    (e.g.: "-DPIC_VERBOSE=21 -DCMAKE_BUILD_TYPE=Debug")
-t <presetNumber> - configure this preset from cmakeFlags
-h | --help       - show this help message
pic-configure

You will likely not use this tool directly. Instead, pic-build from above calls pic-configure for you, forwarding its arguments.
We strongly recommend setting the appropriate target compute backend via -b for optimal performance.
For Nvidia CUDA GPUs, set the compute capability of your GPU:
# example for running efficiently on a K80 GPU with compute capability 3.7
pic-configure -b "cuda:37" $HOME/picInputs/myLWFA
For running on a CPU instead of a GPU, set this:
# example for running efficiently on the CPU you are currently compiling on
pic-configure -b "omp2b:native" $HOME/picInputs/myLWFA
If you are compiling on a cluster, the CPU architecture of the head/login nodes likely differs from that of the compute nodes! Compiling a backend for the wrong architecture will in the best case dramatically reduce your performance and in the worst case not run at all!
During configure, the backend's architecture is forwarded to the compiler's tuning flags (for GCC, -mtune and -march). For example, if you are compiling with GCC for running on AMD Opteron 6276 CPUs set -b omp2b:bdver1, or for Intel Xeon Phi Knights Landing CPUs set -b omp2b:knl.
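If you are unsure which architecture name matches a given machine, GCC can report what it would pick for "native" (GCC-specific; run this on a compute node, not the login node, to see the compute nodes' target):

# print the -march value GCC resolves for this machine
gcc -march=native -Q --help=target | grep -- '-march='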
See pic-configure --help for more options during input set configuration:
Configure PIConGPU with CMake

Generates a call to CMake and provides short-hand access to
selected PIConGPU CMake options.
Advanced users can always run 'ccmake .' after this call for
further compilation options.

usage: pic-configure [OPTIONS] <inputDirectory>

-i | --install    - path where picongpu shall be installed
                    (default is <inputDirectory>)
-b | --backend    - set compute backend and optionally the architecture
                    syntax: backend[:architecture]
                    supported backends: cuda, omp2b, serial, tbb
                    (e.g.: "cuda:20;35;37;52;60" or "omp2b:native" or "omp2b")
                    default: "cuda" if not set via environment variable PIC_BACKEND
                    note: architecture names are compiler dependent
-c | --cmake      - overwrite options for cmake
                    (e.g.: "-DPIC_VERBOSE=21 -DCMAKE_BUILD_TYPE=Debug")
-t <presetNumber> - configure this preset from cmakeFlags
-h | --help       - show this help message
After running configure you can run ccmake . to set additional compile options (optimizations, debug levels, hardware version, etc.). This will influence your build done via make install.
You can pass further options to configure PIConGPU directly instead of using ccmake ., by passing -c "-DOPTION1=VALUE1 -DOPTION2=VALUE2".