...
Job script
CPU partition
```bash
#!/bin/bash
#SBATCH -p compute                              # Partition
#SBATCH -N 1                                    # Number of nodes
#SBATCH --ntasks-per-node=16                    # Number of MPI processes per node
#SBATCH --cpus-per-task=8                       # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00                           # Job runtime limit
#SBATCH -A ltXXXXXX                             # Billing account *** USER EDIT ***
#SBATCH -J MiniWeather                          # Job name

module purge
module load cpeCray/23.03
module load cray-parallel-netcdf/1.12.3.3
module load cudatoolkit/23.3_11.8
module load craype-accel-nvidia80

export PATH=/--path-to-collision-dir--:${PATH}  # *** USER EDIT ***

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun --cpus-per-task=${SLURM_CPUS_PER_TASK} openmp
```
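After editing the billing account and the path, submit the script with `sbatch` and monitor it in the queue. A minimal sketch, assuming the script above is saved as `submit_cpu.sh` (the filename is only an example):

```bash
# Submit the CPU job script; sbatch prints the assigned job ID
sbatch submit_cpu.sh

# Check the job state (pending/running) for your user
squeue -u ${USER}

# Follow the job output once it starts (Slurm writes slurm-<jobid>.out by default)
tail -f slurm-<jobid>.out
```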
GPU partition
```bash
#!/bin/bash
#SBATCH -p gpu                                  # Partition
#SBATCH -N 1                                    # Number of nodes
#SBATCH --gpus-per-node=1                       # Number of GPU cards per node
#SBATCH --ntasks-per-node=1                     # Number of MPI processes per node
#SBATCH --cpus-per-task=16                      # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00                           # Job runtime limit
#SBATCH -A ltXXXXXX                             # Billing account *** USER EDIT ***
#SBATCH -J MiniWeather                          # Job name

module purge
module load cpeCray/23.03
module load cray-parallel-netcdf/1.12.3.3
module load cudatoolkit/23.3_11.8
module load craype-accel-nvidia80

export PATH=/--path-to-collision-dir--:${PATH}  # *** USER EDIT ***

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun --cpus-per-task=${SLURM_CPUS_PER_TASK} --gpus-per-task=1 openmp45
```
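Before launching the full simulation, it can be useful to confirm that a GPU is actually allocated to the job. A minimal sketch, assuming `nvidia-smi` is available on the GPU nodes and using the same placeholder account as above:

```bash
# Request one GPU for a short interactive test and list the visible devices
srun -p gpu -N 1 --ntasks-per-node=1 --gpus-per-node=1 -t 00:05:00 -A ltXXXXXX nvidia-smi
```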
(Optional) Post-processing
```bash
module purge
module load Miniconda3/22.11.1-1

conda activate netcdf-py39

ncview ./output.nc

conda deactivate
```
Note: You must be logged in with X11 forwarding enabled; that is, the output of
echo ${DISPLAY}
should not be empty.
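For reference, X11 forwarding is typically enabled by passing `-X` (or `-Y`) to `ssh` when connecting. A minimal sketch, where the username and login-node hostname are placeholders:

```bash
# Log in with X11 forwarding enabled (replace with the actual login node)
ssh -X username@<login-node-hostname>

# Verify that a display has been forwarded; an empty result means X11 is not available
echo ${DISPLAY}
```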
...