This article describes how to execute a Python script on the LANTA HPC system using Apptainer. The following table of contents provides a summary of the article's material so that the reader can quickly identify the most important sections.
Table of Contents
Example of a Slurm script for launching the Python script
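Both scripts below assume that a container image named file.sif, holding your Python environment, already exists in the working directory. As a minimal sketch of obtaining such an image, it can be pulled from a public registry with apptainer pull; the TensorFlow image name and tag here are illustrative assumptions, not a LANTA requirement:

Code Block
# Pull a TensorFlow image from Docker Hub and save it locally as file.sif
# (image name and tag are examples only)
apptainer pull file.sif docker://tensorflow/tensorflow:2.6.0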
Running on Compute node
Code Block
#!/bin/bash
#SBATCH -p compute # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 128 # Specify number of nodes and processors per task
#SBATCH --ntasks-per-node=1 # Specify tasks per node
#SBATCH -t 120:00:00 # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx # Specify project name
#SBATCH -J JOBNAME # Specify job name
module purge # Unload all modules
module load Apptainer/1.1.6 # Load the Apptainer module

apptainer exec -B $PWD:$PWD file.sif python3 file.py # Run your program or executable code
Info
Full node: -c 128, Half node: -c 64, ¼ node: -c 32
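The -B $PWD:$PWD option bind-mounts the current working directory into the container so that file.py and its input data are visible inside it. Additional host directories can be bound by appending comma-separated source:destination pairs; the extra data path below is hypothetical:

Code Block
# Bind the working directory plus an additional (hypothetical) data directory
apptainer exec -B $PWD:$PWD,/path/to/data:/data file.sif python3 file.py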
Running on GPU node
Code Block
#!/bin/bash
#SBATCH -p gpu # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 16 # Specify number of nodes and processors per task
#SBATCH --gpus-per-task=1 # Specify number of GPU per task
#SBATCH --ntasks-per-node=4 # Specify tasks per node
#SBATCH -t 120:00:00 # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx # Specify project name
#SBATCH -J JOBNAME # Specify job name
module purge # Unload all modules
module load Apptainer/1.1.6 # Load the Apptainer module

apptainer exec --nv -B $PWD:$PWD file.sif python3 file.py # Run your program or executable code
Info
1 GPU card: --ntasks-per-node=1, 2 GPU cards: --ntasks-per-node=2, 4 GPU cards: --ntasks-per-node=4
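The --nv flag makes the host's NVIDIA driver and GPUs available inside the container. As a quick sanity check, assuming the image contains TensorFlow as in the script above, the GPUs visible inside the container can be listed with a one-liner:

Code Block
# List the GPUs that TensorFlow can see inside the container
apptainer exec --nv file.sif python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"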
Submit a job
Use the sbatch script.sh command to submit your job to the Slurm system.
Code Block
username@lanta:~> sbatch script.sh
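sbatch prints the ID assigned to the submitted job. The job's progress can then be followed with standard Slurm commands; the job ID below is hypothetical:

Code Block
squeue -u $USER # List your pending and running jobs
scancel 123456 # Cancel a job by its ID if needed (hypothetical ID)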
Related articles
Filter by label (Content by label)
cql: label in ( "python-script" , "python-vir-env" , "jupyter-apptainer" ) and space = currentSpace ( )