
This article guides you through running a Python script with a Slurm batch script on the LANTA HPC system.

Slurm script examples for running a Python script

Running on a compute node

#!/bin/bash
#SBATCH -p compute                  # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 128                 # Specify number of nodes and processors per task
#SBATCH --ntasks-per-node=1         # Specify tasks per node
#SBATCH -t 120:00:00                # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx                 # Specify project name
#SBATCH -J JOBNAME                  # Specify job name

module purge                        # Unload all modules
module load Miniconda3/22.11.1-1    # Load the Miniconda3 module
conda activate tensorflow-2.6.0     # Activate your environment

python3 file.py                     # Run your program or executable code

Note: Full node: -c 128, Half node: -c 64, ¼ node: -c 32
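
For example, a half-node job keeps the script above unchanged except for the core request; a minimal sketch of the changed directive:

#SBATCH -N 1 -c 64                  # Half node: 64 cores per task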

Running on a GPU node

#!/bin/bash
#SBATCH -p gpu                      # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 16                  # Specify number of nodes and processors per task
#SBATCH --gpus-per-task=1           # Specify number of GPUs per task
#SBATCH --ntasks-per-node=4         # Specify tasks per node
#SBATCH -t 120:00:00                # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx                 # Specify project name
#SBATCH -J JOBNAME                  # Specify job name

module purge                        # Unload all modules
module load Miniconda3/22.11.1-1    # Load the Miniconda3 module
conda activate tensorflow-2.6.0     # Activate your environment

python3 file.py                     # Run your program or executable code

Note: 1 GPU card: --ntasks-per-node=1, 2 GPU cards: --ntasks-per-node=2, 4 GPU cards: --ntasks-per-node=4
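
For example, to run on 2 GPU cards, keep the script above unchanged except for the task count; a minimal sketch of the changed directives:

#SBATCH --gpus-per-task=1           # Still one GPU per task
#SBATCH --ntasks-per-node=2         # 2 GPU cards: two tasks on the node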

Submit a job

Use the sbatch script.sh command to submit your job to the Slurm system.

username@lanta:~> sbatch script.sh
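Submitted batch job 1234567

If the job is accepted, sbatch replies with the assigned job ID, as in the line above; the number is only a placeholder and will differ on your system.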
