See also
You need to have an environment loaded (source $HOME/picongpu.profile
when installing from source) that provides all PIConGPU dependencies to complete this chapter.
Warning
PIConGPU source code is portable and can be compiled on all major operating systems.
However, helper tools like pic-create
and pic-build
described in this section rely on Linux utilities and thus are not expected to work on other platforms out-of-the-box.
Note that building and using PIConGPU on other operating systems is still possible but has to be done manually or with custom tools.
This case is not covered in the documentation, but we can assist users with it when needed.
Basics
Section author: Axel Huebl
Preparation
First, decide where to store your input files; a good place is $HOME
(~
) because it is usually backed up.
Second, decide where to store your simulation output, which needs to be placed on a high-bandwidth, large-storage file system that we will refer to as $SCRATCH
.
For a first test you can also use your home directory:
export SCRATCH=$HOME
We need a few directories to structure our workflow:
# PIConGPU input files
mkdir $HOME/picInputs
# PIConGPU simulation output
mkdir $SCRATCH/runs
Step-by-Step
1. Create an Input (Parameter) Set
# clone the LWFA example to $HOME/picInputs/myLWFA
pic-create $PIC_EXAMPLES/LaserWakefield $HOME/picInputs/myLWFA
# switch to your input directory
cd $HOME/picInputs/myLWFA
PIConGPU is controlled via two kinds of textual input sets: compile-time options and runtime options.
Compile-time .param files reside in include/picongpu/param/
and define the physics case and the numerical methods used.
After creation and whenever these options are changed, PIConGPU requires a re-compile.
Feel free to take a look now; we will come back later to how to edit those files.
Runtime (command line) arguments are set in etc/picongpu/*.cfg
files.
These options do not require a re-compile when changed (e.g. simulation size, number of devices, plugins, …).
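Runtime options are plain bash variables following a TBG_... naming convention. A hypothetical excerpt in that style (the variable names and values below are illustrative; consult your own etc/picongpu/1.cfg for the authoritative set):

```shell
# hypothetical excerpt in the style of etc/picongpu/*.cfg files;
# the TBG_* names and values below are illustrative only
TBG_devices_x=1      # number of devices in x direction
TBG_devices_y=1      # number of devices in y direction
TBG_devices_z=1      # number of devices in z direction
TBG_steps="1000"     # number of simulation time steps
```

Since these are ordinary shell variable assignments, changing them only affects the next run, not the compiled binary.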
2. Compile Simulation
In our input, .param
files are built directly into the PIConGPU binary for performance reasons.
A re-compile is required after changing or initially adding those files.
In this step you can optimize the simulation for the specific hardware you want to run on. By default, we compile for Nvidia GPUs with the CUDA backend, targeting the oldest compatible architecture.
pic-build
This step will take a few minutes. Time for a coffee or a sword fight!
We explain in the details section below how to set further options, e.g. CPU targets or tuning for newer GPU architectures.
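One way to pre-select the backend for all subsequent builds is the PIC_BACKEND environment variable, which pic-build and pic-configure use as their default (see the --help output quoted further below). The cuda:60 value here is just an example; pick the architecture matching your hardware:

```shell
# pre-select the compute backend for pic-build / pic-configure;
# "cuda:60" (compute capability 6.0) is an example value only
export PIC_BACKEND="cuda:60"
```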
3. Run Simulation
While you are still in $HOME/picInputs/myLWFA
, start your simulation on one CUDA capable GPU:
# example run for an interactive simulation on the same machine
tbg -s bash -c etc/picongpu/1.cfg -t etc/picongpu/bash/mpiexec.tpl $SCRATCH/runs/lwfa_001
This will create the directory $SCRATCH/runs/lwfa_001
where all simulation output will be written to.
tbg
will further create a subfolder input/
in the directory of the run with the same structure as myLWFA
to archive your input files.
Subfolder simOutput/
has all the simulation results.
In particular, the simulation progress log is in simOutput/output
.
4. Creating Your Own Simulation
To create your own simulation, we recommend starting from the most fitting example and modifying the compile-time options in .param files and the run-time options in .cfg files.
Changing contents of .param
files requires recompilation of the code, modifying .cfg
files does not.
Note that available run-time options generally depend on the environment used for the build, the chosen compute backend, and the contents of .param
files.
To get the list of all available options for the current configuration, after a successful pic-build
run
.build/picongpu --help
Details on the Commands Above
tbg
The tbg
tool is explained in detail in its own section.
Its primary purpose is to abstract the options in runtime .cfg
files from the technical details on how to run on various supercomputers.
For example, if you want to run on the HPC System “Hemera” at HZDR, your tbg
submit command would just change to:
# request 1 GPU from the SLURM batch system and run on the queue "k20"
tbg -s sbatch -c etc/picongpu/1.cfg -t etc/picongpu/hemera-hzdr/k20.tpl $SCRATCH/runs/lwfa_002
# run again, this time on 16 GPUs
tbg -s sbatch -c etc/picongpu/16.cfg -t etc/picongpu/hemera-hzdr/k20.tpl $SCRATCH/runs/lwfa_003
Note that we can use the same 1.cfg
file; your input set is portable.
pic-create
This tool is just a short-hand to create a new set of input files. It copies from an already existing set of input files (e.g. our examples or a previous simulation) and adds additional helper files.
See pic-create --help
for more options during input set creation:
pic-create create a new parameter set for simulation input
merge default picongpu parameters and a given example's input
usage: pic-create [OPTION] [src_dir] dest_dir
If no src_dir is set, a default case is cloned
If src_dir is not in the current directory, pic-create will
look for it in $PIC_EXAMPLES
-f | --force - merge data if destination already exists
-h | --help - show this help message
Dependencies: rsync
A previously run simulation can also be reused to create derived input sets via pic-create
:
pic-create $SCRATCH/runs/lwfa_001/input $HOME/picInputs/mySecondLWFA
pic-build
This tool is actually a short-hand for an out-of-source build with CMake.
In detail, it does:
# go to an empty build directory
mkdir -p .build
cd .build
# configure with CMake
pic-configure $OPTIONS ..
# compile PIConGPU with the current input set (e.g. myLWFA)
# - "make -j install" runs implicitly "make -j" and then "make install"
# - make install copies resulting binaries to input set
make -j install
pic-build
accepts the same command line flags as pic-configure.
For example, if you want to build for running on CPUs instead of GPUs, call:
# example for running efficiently on the CPU you are currently compiling on
pic-build -b "omp2b"
Its full documentation from pic-build --help
reads:
Build new binaries for a PIConGPU input set
Creates or updates the binaries in an input set. This step needs to
be performed every time a .param file is changed.
This tool creates a temporary build directory, configures and
compiles the current input set in it, and installs the resulting
binaries.
This is just a short-hand tool for switching to a temporary build
directory and running 'pic-configure ..' and 'make install'
manually.
You must run this command inside an input directory.
usage: pic-build [OPTIONS]
-j [N] - allow N jobs at once; infinite jobs with no arg
-b | --backend - set compute backend and optionally the architecture
syntax: backend[:architecture]
supported backends: cuda, hip, omp2b, serial, tbb, threads
(e.g.: "cuda:35;37;52;60" or "omp2b:native" or "omp2b")
default: "cuda" if not set via environment variable PIC_BACKEND
note: architecture names are compiler dependent
-c | --cmake - overwrite options for cmake
(e.g.: "-DPIC_VERBOSE=21 -DCMAKE_BUILD_TYPE=Debug")
-t <presetNumber> - configure this preset from cmakeFlags
-f | --force - clear the cmake file cache and scan for new param files
-G <cmakeBuildSystem> - select the build system used by CMake, e.g. Ninja, ...
-h | --help - show this help message
pic-configure
This tool is just a convenient wrapper for a call to CMake. It is executed from an empty build directory.
You will likely not use this tool directly.
Instead, pic-build from above calls pic-configure
for you, forwarding its arguments.
We strongly recommend setting the appropriate target compute backend via -b
for optimal performance.
For Nvidia CUDA GPUs, set the compute capability of your GPU:
# example for running efficiently on a K80 GPU with compute capability 3.7
pic-configure -b "cuda:37" $HOME/picInputs/myLWFA
For running on a CPU instead of a GPU, set this:
# example for running efficiently on the CPU you are currently compiling on
pic-configure -b "omp2b:native" $HOME/picInputs/myLWFA
Note
If you are compiling on a cluster, the CPU architecture of the head/login nodes likely differs from that of the actual compute nodes! Compiling a backend for the wrong architecture will in the best case dramatically reduce your performance and in the worst case not run at all!
During configure, the backend’s architecture is forwarded to the compiler’s -mtune
and -march
flags.
For example, if you are compiling with GCC for running on AMD Opteron 6276 CPUs, set -b omp2b:bdver1
; for Intel Xeon Phi Knights Landing CPUs, set -b omp2b:knl
.
See pic-configure --help
for more options during input set configuration:
Configure PIConGPU with CMake
Generates a call to CMake and provides short-hand access to selected
PIConGPU CMake options.
Advanced users can always run 'ccmake .' after this call for further
compilation options.
usage: pic-configure [OPTIONS] <inputDirectory>
-i | --install - path where picongpu shall be installed
(default is <inputDirectory>)
-b | --backend - set compute backend and optionally the architecture
syntax: backend[:architecture]
supported backends: cuda, hip, omp2b, serial, tbb, threads
(e.g.: "cuda:35;37;52;60" or "omp2b:native" or "omp2b")
default: "cuda" if not set via environment variable PIC_BACKEND
note: architecture names are compiler dependent
-c | --cmake - overwrite options for cmake
(e.g.: "-DPIC_VERBOSE=21 -DCMAKE_BUILD_TYPE=Debug")
-t <presetNumber> - configure this preset from cmakeFlags
-f | --force - clear the cmake file cache and scan for new param files
-G <cmakeBuildSystem> - select the build system used by CMake, e.g. Ninja, ...
-h | --help - show this help message
After running configure you can run ccmake .
to set additional compile options (optimizations, debug levels, hardware version, etc.).
This will influence your build done via make install
.
You can also pass further options to configure PIConGPU directly, instead of using ccmake .
, via -c "-DOPTION1=VALUE1 -DOPTION2=VALUE2"
.