See also

You need to have an environment loaded (. $HOME/picongpu.profile) that provides all PIConGPU dependencies to complete this chapter.
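For example, source the profile once per shell session before working through the steps below (a minimal sketch; the exact profile name and path depend on how your machine was set up):

# load the PIConGPU environment (path is an example, adjust to your setup)
. $HOME/picongpu.profile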

Basics

Section author: Axel Huebl

Preparation

First, decide where to store input files; a good place might be $HOME (~) because it is usually backed up. Second, decide where to store your simulation output; it needs to be placed on a high-bandwidth, large-storage file system, which we will refer to as $SCRATCH.

For a first test you can also use your home directory:

export SCRATCH=$HOME

We need a few directories to structure our workflow:

# PIConGPU input files
mkdir $HOME/picInputs

# PIConGPU simulation output
mkdir $SCRATCH/runs

Step-by-Step

1. Create an Input (Parameter) Set

# clone the LWFA example to $HOME/picInputs/myLWFA
pic-create $PICSRC/examples/LaserWakefield/ $HOME/picInputs/myLWFA

# switch to your input directory
cd $HOME/picInputs/myLWFA

PIConGPU is controlled via two kinds of input sets: compile-time options and runtime options.

Edit the .param files inside include/picongpu/simulation_defines/param/. PIConGPU requires a re-compile when these options are set initially or changed later.

Now edit the runtime (command line) arguments in etc/picongpu/*.cfg. These options (e.g. simulation size, number of GPUs, enabled plugins, ...) do not require a re-compile when changed.
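For example, you could inspect and edit one file of each kind (a sketch; $EDITOR stands for your preferred text editor, and 0001gpus.cfg is the single-GPU configuration used in the run step below):

# list the compile-time .param files of this input set
ls include/picongpu/simulation_defines/param/

# edit the runtime (command line) options of the single-GPU configuration
$EDITOR etc/picongpu/0001gpus.cfg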

2. Compile Simulation

In our input, .param files are built directly into the PIConGPU binary for performance reasons. Adding them initially or changing them later requires a compile.

In this step you can optimize the simulation for the specific hardware you want to run on. By default, we compile for Nvidia GPUs with CUDA targeting the oldest compatible architecture.

pic-build

This step will take a few minutes. Time for a coffee or a sword fight!

3. Run Simulation

While you are still in $HOME/picInputs/myLWFA, start your simulation on one CUDA-capable GPU:

# example run for an interactive simulation on the same machine
tbg -s bash -c etc/picongpu/0001gpus.cfg -t etc/picongpu/bash/mpiexec.tpl $SCRATCH/runs/lwfa_001

This will create the directory $SCRATCH/runs/lwfa_001, to which all simulation output will be written. tbg will further create a subfolder input/ inside the run directory, with the same structure as myLWFA, to archive your input files.
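Once the run has started, you can verify this layout (a sketch; which further subfolders appear depends on the template and the enabled plugins):

# inspect the run directory and the archived input set
ls $SCRATCH/runs/lwfa_001/
ls $SCRATCH/runs/lwfa_001/input/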

Further Reading

Individual input files, their syntax and usage are explained in the following sections.

See tbg --help for more information about the tbg tool.

For example, if you want to run on the HPC System “Hypnos” at HZDR, your tbg submit command would just change to:

# request 16 GPUs from the PBS batch system and run on the queue k20
tbg -s qsub -c etc/picongpu/0016gpus.cfg -t etc/picongpu/hypnos-hzdr/k20_profile.tpl $SCRATCH/runs/lwfa_002

pic-create

This tool is just a short-hand to create a new set of input files. It copies an already existing set of input files (e.g. one of our examples or a previous simulation) and adds additional default files.

See pic-create --help for more options during input set creation:

pic-create create a new parameter set for simulation input
merge default picongpu parameters and a given example's input

usage: pic-create [OPTION] [src_dir] dest_dir
If no src_dir is set, a default case is cloned

-f | --force         - merge data if destination already exists
-h | --help          - show this help message

Dependencies: rsync

An already run simulation can also be reused to create derived input sets via pic-create:

pic-create $SCRATCH/runs/lwfa_001/input $HOME/picInputs/mySecondLWFA

pic-build

This tool is actually a short-hand for an out-of-source build with CMake.

In detail, it does:

# go to an empty build directory
mkdir -p .build
cd .build

# configure with CMake
pic-configure $OPTIONS ..

# compile PIConGPU with the current input set (e.g. myLWFA)
# - "make -j install" runs implicitly "make -j" and then "make install"
# - make install copies resulting binaries to input set
make -j install

pic-build accepts the same command line flags as pic-configure. For example, if you want to build for running on CPUs instead of GPUs, call:

# example for running efficiently on the CPU you are currently compiling on
pic-build -a "omp2b"

Its full documentation from pic-build --help reads:

Build new binaries for a PIConGPU input set

Creates or updates the binaries in an input set. This step needs to
be performed every time a .param file is changed.

This tool creates a temporary build directory, configures and
compiles the current input set in it and installs the resulting
binaries.
This is just a short-hand tool for switching to a temporary build
directory and running 'pic-configure ..' and 'make install'
manually.

You must run this command inside an input directory.

usage: pic-build [OPTIONS]

-a | --arch          - set compute backend and optionally the architecture
                       syntax: backend[:architecture]
                       supported backends: cuda, omp2b
                       (e.g.: "cuda:20;35;37;52;60" or "omp2b:native" or "omp2b")
-c | --cmake         - overwrite options for cmake
                       (e.g.: "-DPIC_VERBOSE=21 -DCMAKE_BUILD_TYPE=Debug")
-t <presetNumber>    - configure this preset from cmakeFlags
-h | --help          - show this help message

pic-configure

This tool is just a convenient wrapper for a call to CMake. It is executed from an empty build directory. You will likely not use this tool directly when using pic-build from above.

We strongly recommend setting the appropriate target compute architecture via -a for optimal performance. For Nvidia CUDA GPUs, set the compute capability of your GPU:

# example for running efficiently on a K80 GPU with compute capability 3.7
pic-configure -a "cuda:37" $HOME/picInputs/myLWFA

For running on a CPU instead of a GPU, set this:

# example for running efficiently on the CPU you are currently compiling on
pic-configure -a "omp2b:native" $HOME/picInputs/myLWFA

Note

If you are compiling on a cluster, the CPU architecture of the head/login nodes likely differs from that of the compute nodes! Compiling for the wrong architecture will in the best case dramatically reduce your performance and in the worst case produce a binary that does not run at all!

During configure, the architecture is forwarded to the compiler’s -mtune and -march flags. For example, if you are compiling for running on AMD Opteron 6276 CPUs, set -a omp2b:bdver1.
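Put together, a configure call for such compute nodes could look like this (a sketch, reusing the myLWFA input set from above):

# example: compile on a login node for AMD Opteron 6276 (bdver1) compute nodes
pic-configure -a "omp2b:bdver1" $HOME/picInputs/myLWFA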

See pic-configure --help for more options during input set configuration:

Configure PIConGPU with CMake

Generates a call to CMake and provides short-hand access to selected
PIConGPU CMake options.
Advanced users can always run 'ccmake .' after this call for further
compilation options.

usage: pic-configure [OPTIONS] <inputDirectory>

-i | --install       - path where picongpu shall be installed
                       (default is <inputDirectory>)
-a | --arch          - set compute backend and optionally the architecture
                       syntax: backend[:architecture]
                       supported backends: cuda, omp2b
                       (e.g.: "cuda:20;35;37;52;60" or "omp2b:native" or "omp2b")
-c | --cmake         - overwrite options for cmake
                       (e.g.: "-DPIC_VERBOSE=21 -DCMAKE_BUILD_TYPE=Debug")
-t <presetNumber>    - configure this preset from cmakeFlags
-h | --help          - show this help message

After running configure, you can run ccmake . to set additional compile options (optimizations, debug levels, hardware version, etc.). These will influence your subsequent build via make.

Instead of using ccmake ., you can also pass further CMake options directly to pic-configure via -c "-DOPTION1=VALUE1 -DOPTION2=VALUE2".
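For example (a sketch; the CMake options shown are the ones given as examples in the help text above):

# pass CMake options directly at configure time instead of via ccmake .
pic-configure -c "-DPIC_VERBOSE=21 -DCMAKE_BUILD_TYPE=Debug" $HOME/picInputs/myLWFA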