...
...
...
...
...
...
...
...
...
...
This article will guide you through running a Python script with a Slurm script on the LANTA HPC system. The table of contents below gives an overview so you can jump straight to the parts you are interested in.
Table of Contents
How to check the version of Apptainer on LANTA-HPC
1. On the LANTA frontend node, you can list all the software installed on the cluster with the following command.
$ module avail
------------------------- /lantafs/utils/modules/modules/all --------------------------
ANSYS/2020.1
ARAGORN/1.2.38-foss-2019b
AUGUSTUS/3.3.3-foss-2019b
Advisor/2019_update5
Autoconf/2.69-GCCcore-8.3.0
Autoconf/2.69-GCCcore-10.2.0
Autoconf/2.69 (D)
…
2. To check the available versions of a specific software package, add its name to the command. In this case, I would like to know the version of Singularity installed on LANTA.
$ module avail Singularity
------------------------- /lantafs/utils/modules/modules/all --------------------------
Singularity/3.3.0 Singularity/3.4.2 (D)
Where:
D: Default Module
Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of
the "keys".
...
Slurm script examples for running a Python script
Running on Compute node
#!/bin/bash
#SBATCH -p compute                       # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 128                      # Specify number of nodes and processors per task
#SBATCH --ntasks-per-node=1              # Specify tasks per node
#SBATCH -t 120:00:00                     # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx                      # Specify project name
#SBATCH -J JOBNAME                       # Specify job name

module purge                             # Unload all modules
module load Miniconda3/22.11.1-1         # Load the Miniconda3 module
conda activate tensorflow-2.6.0          # Activate your environment

python3 file.py                          # Run your program or executable code
Info: Full node: -c 128, Half node: -c 64, ¼ node: -c 32
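The script above activates a conda environment named tensorflow-2.6.0, so that environment must already exist. A minimal sketch of creating it once on the frontend node, assuming the environment name and TensorFlow version from the script (adjust both to your own software):
$ module load Miniconda3/22.11.1-1                   # same Miniconda3 module as in the Slurm script
$ conda create -n tensorflow-2.6.0 python=3.9 -y     # the Python version here is an assumption
$ conda activate tensorflow-2.6.0
$ pip install tensorflow==2.6.0                      # install whatever packages file.py needs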
Running on GPU node
#!/bin/bash
#SBATCH -p gpu                           # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 16                       # Specify number of nodes and processors per task
#SBATCH --gpus-per-task=1                # Specify number of GPUs per task
#SBATCH --ntasks-per-node=4              # Specify tasks per node
#SBATCH -t 120:00:00                     # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx                      # Specify project name
#SBATCH -J JOBNAME                       # Specify job name

module purge                             # Unload all modules
module load Miniconda3/22.11.1-1         # Load the Miniconda3 module
conda activate tensorflow-2.6.0          # Activate your environment

python3 file.py                          # Run your program or executable code
Info: 1 GPU card: --ntasks-per-node=1, 2 GPU cards: --ntasks-per-node=2, 4 GPU cards: --ntasks-per-node=4
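For example, requesting two GPU cards on a single node only changes the task count; a minimal sketch of the resource lines, based on the script above:
#SBATCH -p gpu                    # GPU partition
#SBATCH -N 1 -c 16                # 1 node, 16 processors per task
#SBATCH --gpus-per-task=1         # 1 GPU per task
#SBATCH --ntasks-per-node=2       # 2 tasks per node, giving 2 GPU cards in total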
Submit a job
Use the sbatch script.sh command to submit your job to the Slurm system.
username@lanta:~> sbatch script.sh
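After submission, you can monitor or cancel the job with standard Slurm commands (123456 stands in for the job ID that sbatch prints):
username@lanta:~> squeue -u $USER        # list your pending and running jobs
username@lanta:~> sacct -j 123456        # show accounting and state information for the job
username@lanta:~> scancel 123456         # cancel the job if needed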