This article will guide you through running a Python script with a Slurm script on the LANTA HPC system, using Miniconda to manage your Python environment.
Using Miniconda via EasyBuild
Load Miniconda module
Use the ml av Miniconda
command to see which version of Miniconda is available on the LANTA HPC system.
Use the ml Miniconda3/xx.xx.x
command to load the Miniconda version that you want to use. If you don't specify a version, the default version (D) is loaded, which is Miniconda3/22.11.1-1.
Code Block |
---|
username@lanta:~> ml av Miniconda
---------------------- /lustrefs/disk/modules/easybuild/modules/all -----------------------
Miniconda3/22.11.1-1
Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
username@lanta:~> ml Miniconda3/22.11.1-1 |
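To double-check which modules are loaded in your session, you can list them with module list (or ml with no arguments); the output will vary with whatever else you have loaded, so none is shown here.
Code Block |
---|
username@lanta:~> module list |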
Activate your environment
Use the conda env list
command to view a list of your environments.
If you want to activate one of your environments, such as tensorflow-2.6.0, use the conda activate tensorflow-2.6.0
command.
Code Block |
---|
username@lanta:~> conda env list
# conda environments:
#
base              *  /lustrefs/disk/modules/easybuild/software/Miniconda3/22.11.1-1
netcdf-py39          /lustrefs/disk/modules/easybuild/software/Miniconda3/22.11.1-1/envs/netcdf-py39
pytorch-1.11.0       /lustrefs/disk/modules/easybuild/software/Miniconda3/22.11.1-1/envs/pytorch-1.11.0
tensorflow-2.6.0     /lustrefs/disk/modules/easybuild/software/Miniconda3/22.11.1-1/envs/tensorflow-2.6.0
username@lanta:~> conda activate tensorflow-2.6.0
(tensorflow-2.6.0) username@lanta:~> |
Slurm script example for running the Python script
Running on Compute node
Code Block |
---|
#!/bin/bash
#SBATCH -p compute # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 128 # Specify number of nodes and processors per task
#SBATCH --ntasks-per-node=1 # Specify tasks per node
#SBATCH -t 120:00:00 # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx # Specify project name
#SBATCH -J JOBNAME # Specify job name
module load Mamba/23.11.0-0 # Load the conda module
conda activate tensorflow-2.12.1 # Activate your environment
python3 file.py # Run your program or executable code |
Info |
---|
Full node: -c 128, Half node: -c 64, ¼ node: -c 32 |
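As an illustration, requesting half a node changes only the -c directive in the script above; the other #SBATCH lines stay the same.
Code Block |
---|
#SBATCH -N 1 -c 64 # Half node: 64 processors per task |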
Running on GPU node
Code Block |
---|
#!/bin/bash
#SBATCH -p gpu # Specify partition [Compute/Memory/GPU]
#SBATCH -N 1 -c 16 # Specify number of nodes and processors per task
#SBATCH --gpus-per-task=1 # Specify number of GPUs per task
#SBATCH --ntasks-per-node=4 # Specify tasks per node
#SBATCH -t 120:00:00 # Specify maximum time limit (hour:minute:second)
#SBATCH -A ltxxxxxx # Specify project name
#SBATCH -J JOBNAME # Specify job name
module load Mamba/23.11.0-0 # Load the conda module
conda activate tensorflow-2.12.1 # Activate your environment
python3 file.py # Run your program or executable code |
Info |
---|
1 GPU card: --ntasks-per-node=1, 2 GPU cards: --ntasks-per-node=2, 4 GPU cards: --ntasks-per-node=4 |
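For example, a job that needs only one GPU card would follow the note above and reduce the task count, keeping one GPU per task; the remaining directives match the script shown earlier.
Code Block |
---|
#SBATCH -p gpu # Specify partition
#SBATCH -N 1 -c 16 # Specify number of nodes and processors per task
#SBATCH --gpus-per-task=1 # Specify number of GPUs per task
#SBATCH --ntasks-per-node=1 # 1 GPU card in total |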
Creating an environment in the user’s home
Create an environment
Use the conda create -n myenv
commands to create the conda environment with myenv name.
Code Block |
---|
username@lanta:~> conda create -n myenv
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /your directory/envs/myenv
Proceed ([y]/n)? y
...
username@lanta:~> |
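The new environment can then be activated and deactivated in the same way as the EasyBuild-provided ones; this short session reuses the myenv name from above.
Code Block |
---|
username@lanta:~> conda activate myenv
(myenv) username@lanta:~> conda deactivate |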
Create an environment with a specific version of the packages
Code Block |
---|
username@lanta:~> conda create -n myenv python=3.9
username@lanta:~> conda create -n myenv python=3.9 scipy=0.17.3 |
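To confirm that the requested versions were installed, activate the environment and query it with conda list; the session below is illustrative, and the exact build strings in the output will differ.
Code Block |
---|
username@lanta:~> conda activate myenv
(myenv) username@lanta:~> conda list scipy |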
Creating an environment in the project’s home
Specify a location for an environment
Code Block |
---|
username@lanta:~> conda create --prefix /your project directory/envs |
Specify a location for an environment with a specific version of the packages
Code Block |
---|
username@lanta:~> conda create --prefix /your project directory/envs python=3.9 |
Activate your environment in the project’s home
Code Block |
---|
username@lanta:~> conda activate /your project directory/envs |
Creating an environment from an environment.yml file
A simple environment.yml file
Code Block |
---|
name: test
dependencies:
- python=3.9
- numpy=1.23.1
- pandas |
Create the environment from the environment.yml file in the user’s home
Code Block |
---|
username@lanta:~> conda env create -f environment.yml |
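Conversely, if you already have a working environment, conda env export can generate an environment.yml file from it; myenv here is the example environment created earlier.
Code Block |
---|
username@lanta:~> conda activate myenv
(myenv) username@lanta:~> conda env export > environment.yml |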
Create the environment from the environment.yml file in the project’s home
Code Block |
---|
username@lanta:~> conda env create -f environment.yml --prefix /your project directory/envs |
Submit a job
Use the sbatch script.sh
command to submit your job to the Slurm system.
Code Block |
---|
username@lanta:~> sbatch script.sh |
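After submission, you can monitor or cancel the job with standard Slurm commands; replace username and the job ID below with your own values.
Code Block |
---|
username@lanta:~> squeue -u username # Show your queued and running jobs
username@lanta:~> scancel 123456 # Cancel a job by its job ID |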
Note |
---|
Before you use the sbatch script.sh command, make sure that your conda environment is deactivated; the batch script loads the module and activates the environment itself. |
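For example, assuming the tensorflow-2.6.0 environment is still active from earlier, deactivate it first and then submit:
Code Block |
---|
(tensorflow-2.6.0) username@lanta:~> conda deactivate
username@lanta:~> sbatch script.sh |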