This article guides you through running a Python script with a Slurm batch script on the LANTA HPC system. Use the table of contents below to jump directly to the section you are interested in.
Table of Contents
- Slurm script example for running a Python script
  - Running on Compute node
  - Running on GPU node
- Submit a job
Slurm script example for running a Python script
Running on Compute node
Code Block
#!/bin/bash
#SBATCH -p compute                   # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 128                  # Specify number of nodes and processors per task
#SBATCH --ntasks-per-node=1          # Specify tasks per node
#SBATCH -t 120:00:00                 # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx                  # Specify project name
#SBATCH -J JOBNAME                   # Specify job name

module purge                         # Unload all modules
module load Miniconda3/22.11.1-1     # Load the Miniconda3 module
conda activate tensorflow-2.6.0      # Activate your environment

python3 file.py                      # Run your program or executable code
Info
Full node: -c 128, Half node: -c 64, ¼ node: -c 32
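The script above assumes that a conda environment named tensorflow-2.6.0 already exists (the GPU script in the next section uses the same environment). If you have not created it yet, a minimal setup sketch is shown below; the environment name comes from the example script, while the Python version and package list are assumptions you should adjust to your own project.

Code Block
# One-time setup, typically run on a login node before submitting the job.
# The environment name "tensorflow-2.6.0" matches the example job scripts;
# the Python version and package versions below are placeholders.
module purge
module load Miniconda3/22.11.1-1                 # Same Miniconda3 module as in the job script
conda create -n tensorflow-2.6.0 python=3.9 -y   # Create the environment
conda activate tensorflow-2.6.0
pip install tensorflow==2.6.0                    # Install the packages your script needs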
Running on GPU node
Code Block
#!/bin/bash
#SBATCH -p gpu                       # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 16                   # Specify number of nodes and processors per task
#SBATCH --gpus-per-task=1            # Specify number of GPUs per task
#SBATCH --ntasks-per-node=4          # Specify tasks per node
#SBATCH -t 120:00:00                 # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx                  # Specify project name
#SBATCH -J JOBNAME                   # Specify job name

module purge                         # Unload all modules
module load Miniconda3/22.11.1-1     # Load the Miniconda3 module
conda activate tensorflow-2.6.0      # Activate your environment

python3 file.py                      # Run your program or executable code
Info
1 GPU card: --ntasks-per-node=1, 2 GPU cards: --ntasks-per-node=2, 4 GPU cards: --ntasks-per-node=4
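The example above requests four tasks per node with one GPU each. If you want each task to actually run its own copy of the program on its own GPU, the usual approach is to launch the program through srun instead of calling python3 directly, so that Slurm starts one process per task. A minimal sketch of the launch line is shown below, assuming the same placeholder script file.py.

Code Block
# Replace the final line of the job script with an srun launch so that
# Slurm starts one Python process per task, each bound to its own GPU
# (4 processes in this example, matching --ntasks-per-node=4).
srun python3 file.py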
Submit a job
Use the sbatch script.sh command to submit your job to the Slurm system.
Code Block
username@lanta:~> sbatch script.sh
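After submission, Slurm prints the ID of the new job and, by default, writes the program's output to a file named slurm-<jobid>.out in the submission directory. The commands below are a short sketch of how to follow up on the job; the job ID 1234567 is only a placeholder.

Code Block
username@lanta:~> squeue -u $USER          # List your queued and running jobs
username@lanta:~> scancel 1234567          # Cancel the job if needed (placeholder job ID)
username@lanta:~> cat slurm-1234567.out    # View the job's default output file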