...
...
...
...
...
...
...
...
...
...
This article describes how to run a Python script on the LANTA HPC system using Apptainer. It covers checking the available container runtime module, preparing a Slurm batch script for compute and GPU nodes, and submitting the job.
How to check the version of Apptainer on LANTA-HPC
1. On the LANTA frontend node, you can list the software installed on the cluster with the following command:
$ module avail
------------------------- /lantafs/utils/modules/modules/all --------------------------
ANSYS/2020.1
ARAGORN/1.2.38-foss-2019b
AUGUSTUS/3.3.3-foss-2019b
Advisor/2019_update5
Autoconf/2.69-GCCcore-8.3.0
Autoconf/2.69-GCCcore-10.2.0
Autoconf/2.69 (D)
…
2. If you need the version of a specific package, pass its name to module avail. In this example, we check the version of Singularity on LANTA:
$ module avail Singularity
------------------------- /lantafs/utils/modules/modules/all --------------------------
Singularity/3.3.0 Singularity/3.4.2 (D)
Where:
D: Default Module
Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
As of 23/07/2021, LANTA provides two Singularity versions, 3.3.0 and 3.4.2, where (D) marks the default version.
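Once you have identified the module you need, you can load it and confirm the version of the runtime itself. A minimal sketch, assuming the Apptainer/1.1.6 module used in the job scripts below (adjust the version to whatever module avail reports):
$ module load Apptainer/1.1.6   # load the Apptainer module found above (version is an example)
$ apptainer --version           # print the version of the apptainer command now on your PATH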
Example Slurm scripts for launching the Python script
Running on a compute node
#!/bin/bash
#SBATCH -p compute                   # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 128                  # Specify number of nodes and CPU cores per task
#SBATCH --ntasks-per-node=1          # Specify tasks per node
#SBATCH -t 120:00:00                 # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx                  # Specify project name
#SBATCH -J JOBNAME                   # Specify job name
module load Apptainer/1.1.6                            # Load the Apptainer module
apptainer exec -B $PWD:$PWD file.sif python3 file.py   # Run your program
Info: full node: -c 128; half node: -c 64; quarter node: -c 32.
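The script above assumes that a container image named file.sif and a script named file.py already exist in your working directory. If you do not have an image yet, one common way to obtain one is to pull it from a public registry; the sketch below is only an illustration, and docker://python:3.10 is an example image, not one provided by LANTA:
$ module load Apptainer/1.1.6                               # load Apptainer on the frontend node
$ apptainer pull file.sif docker://python:3.10              # build file.sif from a Docker Hub image (example tag)
$ apptainer exec -B $PWD:$PWD file.sif python3 --version    # quick check that Python runs inside the image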
Running on a GPU node
#!/bin/bash
#SBATCH -p gpu                       # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 16                   # Specify number of nodes and CPU cores per task
#SBATCH --gpus-per-task=1            # Specify number of GPUs per task
#SBATCH --ntasks-per-node=4          # Specify tasks per node
#SBATCH -t 120:00:00                 # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx                  # Specify project name
#SBATCH -J JOBNAME                   # Specify job name
module load Apptainer/1.1.6                                  # Load the Apptainer module
apptainer exec --nv -B $PWD:$PWD file.sif python3 file.py   # Run your program (--nv enables GPU access)
Info: 1 GPU card: --ntasks-per-node=1; 2 GPU cards: --ntasks-per-node=2; 4 GPU cards: --ntasks-per-node=4.
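To verify that the GPUs requested from Slurm are visible inside the container, you can temporarily replace the Python command in the GPU job script with a diagnostic call; nvidia-smi is used here only as an example check and assumes the NVIDIA driver is available on the GPU node:
module load Apptainer/1.1.6                            # Load the Apptainer module
apptainer exec --nv -B $PWD:$PWD file.sif nvidia-smi   # --nv maps the host GPU driver and devices into the container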
Submit a job
Use the sbatch script.sh command to submit your job to the Slurm scheduler.
username@lanta:~> sbatch script.sh
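After submission, Slurm prints the ID of the new job. The following standard Slurm commands (not specific to LANTA) can be used to follow up on it; replace <jobid> with the ID reported by sbatch:
username@lanta:~> squeue -u $USER          # list your pending and running jobs
username@lanta:~> scancel <jobid>          # cancel the job if needed
username@lanta:~> cat slurm-<jobid>.out    # default output file written by Slurm for the job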
...