This article describes how to execute a Python script on the LANTA HPC system through Slurm using Apptainer. The table of contents below summarizes the article's material so that readers can quickly find the sections that interest them.

Table of Contents

Slurm script example for running the Python script

Running on Compute node

Code Block
#!/bin/bash
#SBATCH -p compute                       # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 128   			         # Specify number of nodes and processors per task
#SBATCH --ntasks-per-node=1		         # Specify tasks per node
#SBATCH -t 120:00:00                     # Specify maximum time limit (hour: minute: second)
#SBATCH -A ltxxxxxx                      # Specify project name
#SBATCH -J JOBNAME                       # Specify job name

module load Apptainer/1.1.6              # Load the Apptainer module

apptainer exec -B $PWD:$PWD file.sif python3 file.py      # Run your program
Info

Full node: -c 128, Half node: -c 64, ¼ node: -c 32
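As a sketch of what the file.py run by the script above might contain, the program can read Slurm's environment to size its worker pool. SLURM_CPUS_PER_TASK is set by Slurm inside the job to the value requested with -c; the default of 1 below is only for running the script outside a job.

```python
import os

# SLURM_CPUS_PER_TASK is exported by Slurm inside the allocation;
# fall back to 1 when the script is run outside a Slurm job.
n_cpus = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))

print(f"Running with {n_cpus} CPU core(s)")
```

A multiprocessing pool or thread pool can then be created with n_cpus workers instead of a hard-coded count, so the same file.py works for full-, half-, and quarter-node submissions.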

Running on GPU node

Code Block
#!/bin/bash
#SBATCH -p gpu                           # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 16   			         # Specify number of nodes and processors per task
#SBATCH --gpus-per-task=1		         # Specify number of GPU per task
#SBATCH --ntasks-per-node=4		         # Specify tasks per node
#SBATCH -t 120:00:00                     # Specify maximum time limit (hour: minute: second)
#SBATCH -A ltxxxxxx               	     # Specify project name
#SBATCH -J JOBNAME               	     # Specify job name

module load Apptainer/1.1.6              # Load the Apptainer module

apptainer exec --nv -B $PWD:$PWD file.sif python3 file.py       # Run your program
Info

1 GPU card: --ntasks-per-node=1, 2 GPU cards: --ntasks-per-node=2, 4 GPU cards: --ntasks-per-node=4
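Inside a GPU job, Slurm restricts each task to its assigned cards via CUDA_VISIBLE_DEVICES. A hedged sketch of how file.py could confirm how many GPUs it can actually see (the variable is empty or unset outside a GPU allocation):

```python
import os

# CUDA_VISIBLE_DEVICES holds a comma-separated list of GPU indices
# assigned to this task (e.g. "0" or "0,1"); empty outside a GPU job.
visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
gpu_ids = [g for g in visible.split(",") if g]

print(f"{len(gpu_ids)} GPU(s) visible to this task")
```

Printing this at the start of the job is a quick sanity check that --gpus-per-task and --ntasks-per-node were set as intended before the main workload begins.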

Submit a job

Use the sbatch script.sh command to submit your job to the Slurm system.

Code Block
username@lanta:~> sbatch script.sh
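Jobs are normally submitted from the shell as shown above, but submission can also be scripted. The helper below is a hypothetical sketch (the name submit is not part of any LANTA tooling) that wraps the same sbatch command and returns its confirmation line:

```python
import shutil
import subprocess


def submit(script):
    """Submit a Slurm batch script with sbatch.

    Returns sbatch's confirmation line (e.g. "Submitted batch job 123456"),
    or None when sbatch is not available on the current machine.
    """
    if shutil.which("sbatch") is None:
        return None
    result = subprocess.run(
        ["sbatch", script], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()


print(submit("script.sh"))
```

This can be useful when submitting many parameter-sweep jobs from a driver script rather than typing sbatch by hand.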
