...
...
Command | Total MPI processes | CPUs per MPI process | MPI processes per node | GPU card |
---|---|---|---|---|
srun | -n, --ntasks | -c, --cpus-per-task | --ntasks-per-node | -G, --gpus |
mpirun/mpiexec | -n, -np | --map-by socket:PE=N | --map-by ppr:N:node | - |
aprun | -n, --pes | -d, --cpus-per-pe | -N, --pes-per-node | - |
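As a rough illustration of how these options map across launchers, the following commands request an equivalent layout of 8 MPI processes, 4 cores per process, and 4 processes per node. The executable name ./my_app and the numbers are placeholders, and the exact syntax accepted may depend on the Slurm, Open MPI, or Cray versions installed on your system.

```bash
# Slurm
srun -n 8 -c 4 --ntasks-per-node=4 ./my_app

# Open MPI (PE = number of cores bound to each process)
mpirun -np 8 --map-by ppr:4:node:PE=4 ./my_app

# Cray ALPS
aprun -n 8 -d 4 -N 4 ./my_app
```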
There is usually no need to explicitly add these options to srun, since, by default, Slurm automatically derives most srun options from sbatch, except for --cpus-per-task and a few others. However, we recommend explicitly adding a GPU binding option such as --gpus-per-task or --ntasks-per-gpu to srun, according to each software's specification. Please visit Slurm srun for more details.
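The following minimal batch script sketches this recommendation. The node counts, core counts, GPU counts, the choice of GPU binding option, and the executable ./my_gpu_app are placeholders to adapt to your application and system.

```bash
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=8
#SBATCH --gpus-per-node=4

# --cpus-per-task is not automatically derived from sbatch, so pass it to srun
# explicitly; also add a GPU binding option suited to the application.
srun --cpus-per-task=${SLURM_CPUS_PER_TASK} --gpus-per-task=1 ./my_gpu_app
```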
Note |
---|
For multi-threaded applications, it is essential to specify |
...