...
For building an application with GPU acceleration, users can use either PrgEnv-nvhpc, cudatoolkit/<version>, or nvhpc-mixed. For simplicity, we recommend using PrgEnv-nvhpc.
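As a minimal sketch, assuming a Cray Programming Environment where PrgEnv-nvhpc provides the cc/CC/ftn compiler wrappers (the source and output file names, and the GPU target flag, are placeholders to adjust for your code and hardware):

module purge
module load PrgEnv-nvhpc

# The CC wrapper invokes the NVIDIA C++ compiler (nvc++) under PrgEnv-nvhpc.
# -acc enables OpenACC offload; -gpu=cc80 targets an NVIDIA A100-class GPU.
CC -acc -gpu=cc80 -o my_app main.cpp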
...
...
#!/bin/bash
#SBATCH -p gpu # Partition
#SBATCH -N 1 # Number of nodes
#SBATCH --gpus=4 # Number of GPU cards
#SBATCH --ntasks=4 # Number of MPI processes
#SBATCH --cpus-per-task=16 # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00 # Job runtime limit
#SBATCH -A ltXXXXXX # Billing account
# #SBATCH -J <JobName> # Job name
module purge
# --- Load necessary modules ---
module load <...>
module load <...>
# --- Add software to Linux search paths ---
export PATH=<software-bin-path>:${PATH}
export LD_LIBRARY_PATH=<software-lib/lib64-path>:${LD_LIBRARY_PATH}
# export PYTHONPATH=<software-python-site-packages>:${PYTHONPATH}
# source <your-software-specific-script>
# --- (Optional) Set related environment variables ---
# export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK} # MUST specify --cpus-per-task above
# --- Run the software ---
# srun <srun-options> ./<software>
# or
# ./<software>
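Assuming the script above is saved as submit.sh (an arbitrary file name), it is submitted to the scheduler with sbatch, and the job status can then be checked with squeue:

sbatch submit.sh          # prints "Submitted batch job <jobid>" on success
squeue -u $USER           # check the status of your queued/running jobs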
...
4. Setting environment variables
Some software requires additional environment variables to be set at runtime, for example, the path to a temporary directory. Output environment variables set by Slurm sbatch (see Slurm sbatch - output environment variables) can be used to define software-specific environment variables.
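For instance, a job-specific temporary directory can be built from the SLURM_JOB_ID output variable; the base scratch path below is a hypothetical placeholder:

# Hypothetical base path; replace with a real scratch directory on your system
export TMPDIR=/scratch/${USER}/tmp/${SLURM_JOB_ID}
mkdir -p ${TMPDIR}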
In addition, for applications with OpenMP threading, OMP_NUM_THREADS, OMP_STACKSIZE, and ulimit -s unlimited are commonly set in a job script. An example is shown below.
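A sketch of what such settings might look like in a job script (the stack-size value is illustrative, not a site recommendation):

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # one OpenMP thread per allocated CPU
export OMP_STACKSIZE=32M                        # per-thread stack size (illustrative value)
ulimit -s unlimited                             # lift the shell's main stack limit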
...
Command | Total MPI processes | CPUs per MPI process | MPI processes per node |
---|---|---|---|
srun | -n, --ntasks | -c, --cpus-per-task | --ntasks-per-node |
mpirun/mpiexec | -n, -np | --map-by socket:PE=N | --map-by ppr:N:node |
aprun | -n, --pes | -d, --cpus-per-pe | -N, --pes-per-node |
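For example, launching a placeholder binary ./my_app with 8 MPI processes, 4 CPUs per process, and 4 processes per node would look roughly as follows (exact mpirun syntax varies by MPI implementation; the Open MPI form is shown):

srun -n 8 -c 4 --ntasks-per-node=4 ./my_app
mpirun -np 8 --map-by ppr:4:node:PE=4 ./my_app
aprun -n 8 -d 4 -N 4 ./my_app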
There is usually no need to add options to srun since, by default, Slurm automatically derives them from sbatch. However, we recommend explicitly adding GPU binding options, such as --gpus-per-task or --ntasks-per-gpu, to srun according to your software's specification. Please visit Slurm srun for more details.
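As an illustration, binding one GPU to each of the four MPI tasks requested in the script above could look like this (./my_gpu_app is a placeholder binary name):

srun -n 4 --gpus-per-task=1 ./my_gpu_app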
...
Installation guide
Reference
...