The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.

Official website : https://www.vasp.at/

updated : 6 June 2023

...

...

Anchor
VASP_version
VASP_version
Available version

VASP version 6.3.2 is available on Lanta. For running VASP on CPU nodes, please use the VASP modules that have the text cpu in their names. On the other hand, please use the VASP modules that have the text gpu in their names to run on GPU nodes. All VASP modules have the VTST code implemented (https://theory.cm.utexas.edu/vtsttools/index.html).

Version | Processing unit | Module name
6.3.2   | CPU             | VASP/6.3.2-GNU-cpu_vtst, VASP/6.3.2-Intel-cpu_vtst
6.3.2   | GPU             | VASP/6.3.2-NVHPC-gpu_vtst
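
To see which of these modules are installed and to load one of them, the standard environment-module commands can be used, for example (the module name below is the GNU CPU build taken from the table above):

Code Block
# List the VASP modules available on Lanta
module avail VASP

# Load, for example, the GNU CPU build with VTST support
module load VASP/6.3.2-GNU-cpu_vtst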

Anchor
1
1
1. Input file

The basic input files for running VASP on Lanta are the standard VASP input files (INCAR, POSCAR, KPOINTS, and POTCAR) and a job submission script.
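
As a rough illustration, a working directory that is ready for submission could look like the listing below; submit.sh is the job script described in the next section, and the listing is only an example.

Code Block
$ ls
INCAR  KPOINTS  POSCAR  POTCAR  submit.sh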

Anchor
2
2
2. Job submission script

Create a script using the vi submit.sh command and specify the following details depending on the computational resources you want to use.

Anchor
2.1
2.1
2.1 Run VASP on Compute node

VASP on Lanta has OpenMP support, so users can combine OpenMP threading with parallelization over MPI ranks. However, only some cases benefit from using multiple OpenMP threads per MPI rank. For further information, please visit Combining OpenMP + MPI in VASP. Here, job submission scripts for both pure MPI and hybrid OpenMP+MPI are shown.

Anchor
2.1.1
2.1.1
2.1.1 Pure MPI

Code Block
#!/bin/bash -l
#SBATCH -p compute               	#specify partition
#SBATCH -N 1                     	#specify number of nodes
#SBATCH --ntasks-per-node=64   	    #specify number of tasks per node
#SBATCH -t 2:00:00               	#job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                	#project name
#SBATCH -J VASP-run              	#job name

##Module Load##
#module purge
module load VASP/6.3.2-GNU-cpu_vtst

##Extra Modules load due to MPI issue
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

#set the maximum stacksize to unlimited
ulimit -s unlimited

# Extra setting due to MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

#disable OpenMP
export OMP_NUM_THREADS=1

##Run VASP###
srun vasp_std

The script above uses the compute partition (-p compute) and 1 node (-N 1) with 64 tasks per node (--ntasks-per-node=64), so the total number of CPU cores for this job is 64 (number of tasks) x 1 (default CPUs per task) = 64 cores. The account is set to ltXXXXXX (-A ltXXXXXX), which must be changed to your own project account.

Anchor
2.1.2
2.1.2
2.1.2 Hybrid MPI + OpenMP

Code Block
#!/bin/bash -l
#SBATCH -p compute               	#specify partition
#SBATCH -N 1                     	#specify number of nodes
#SBATCH --ntasks-per-node=16   	    #specify number of tasks per node
#SBATCH --cpus-per-task=4	        #specify number of openmp thread per task
#SBATCH -t 2:00:00               	#job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                	#project name
#SBATCH -J VASP-run              	#job name

##Module Load##
#module purge
module load VASP/6.3.2-GNU-cpu_vtst

##Extra Modules load due to MPI issue
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

#set the maximum stacksize to unlimited
ulimit -s unlimited

# Extra setting due to MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

# Set OpenMP variables
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=close
export OMP_STACKSIZE=512m

##Run VASP###
srun --cpus-per-task=${SLURM_CPUS_PER_TASK} vasp_std

The script above uses the compute partition (-p compute) and 1 node (-N 1) with 16 tasks per node (--ntasks-per-node=16) and 4 CPU cores per task (--cpus-per-task=4), so the total number of CPU cores for this job is 16 (number of tasks) x 4 (CPUs per task) = 64 cores.

Note

The total number of MPI ranks (ntasks) × OMP_NUM_THREADS must not exceed the total number of physical cores (128 cores per Compute node on Lanta)
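
As a quick worked check of this rule on a 128-core compute node, the task/thread combinations below are purely illustrative:

Code Block
# 16 MPI ranks x 8 OpenMP threads = 128 cores  -> fits within one 128-core node
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8

# 32 MPI ranks x 8 OpenMP threads = 256 cores  -> exceeds 128 cores, do not use on one node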

Info

Please note that more CPU cores does not always mean better performance. It is a good idea to run a test with your own system to find the optimum number of CPU cores.
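
One simple way to run such a test is to submit the same input with several task counts and compare the elapsed time reported near the end of each OUTCAR. The sketch below assumes the pure MPI script submit.sh from section 2.1.1; it relies on the fact that sbatch options given on the command line override the matching #SBATCH lines, and the chosen task counts are only examples.

Code Block
# Run the same VASP input with different MPI task counts (sketch)
for n in 16 32 64 128; do
    mkdir -p test_${n}
    cp INCAR KPOINTS POSCAR POTCAR submit.sh test_${n}/
    # command-line sbatch options override the #SBATCH directives in submit.sh
    (cd test_${n} && sbatch --ntasks-per-node=${n} --job-name=VASP-${n} submit.sh)
done

# After the jobs finish, compare the timings of the runs
grep "Elapsed time" test_*/OUTCAR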

Anchor
2.2
2.2
2.2 Run VASP on GPU node

Code Block
#!/bin/bash -l
#SBATCH -p gpu               	#specify partition
#SBATCH -N 1                    #specify number of nodes
#SBATCH --ntasks-per-node=4   	#specify number of tasks per node
#SBATCH --gpus-per-node=4	    #specify number of gpus per node
#SBATCH --cpus-per-task=16 	    #specify number of openmp thread per task
#SBATCH -t 2:00:00              #job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX               #project name
#SBATCH -J VASP-run             #job name

##Module Load##
#module purge
module load VASP/6.3.2-NVHPC-gpu_vtst

##Extra Modules load due to MPI issue
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

#set the maximum stacksize to unlimited
ulimit -s unlimited

# Extra setting due to MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

# Set OpenMP variables
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=close
export OMP_STACKSIZE=512m

##Run VASP###
srun --cpus-per-task=${SLURM_CPUS_PER_TASK} vasp_std

The script above uses the gpu partition (-p gpu) and 1 node (-N 1) with 4 tasks per node (--ntasks-per-node=4), 4 GPU cards per node (--gpus-per-node=4), and 16 CPU cores per task (--cpus-per-task=16), so the total number of CPU cores for this job is 4 (number of tasks) x 16 (CPUs per task) = 64 cores. The total number of GPUs used in this job is 4 (one gpu node on Lanta has 4 A100 GPUs).

Info

Total CPU cores per Lanta GPU node is 64.

Note

The number of MPI ranks (ntasks) should be less than or equal to the number of GPUs (--ntasks-per-node should not exceed 4 for a single gpu node).
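
To confirm which GPUs Slurm has actually assigned to the job, an optional sanity check such as the one below can be placed in the GPU script just before the srun vasp_std line; it only prints information and is not part of the official script.

Code Block
# Optional sanity check: show the GPUs visible to this job
echo "GPUs on node: ${SLURM_GPUS_ON_NODE}"
nvidia-smi -L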

Anchor
3
3
3. Job submission

Use the sbatch submit.sh command to submit the job to the queuing system.
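
For example, from the directory that contains the VASP input files and submit.sh, the job can be submitted and then monitored with the standard Slurm commands below.

Code Block
# Submit the job script to the queuing system
sbatch submit.sh

# Check the status of your own jobs in the queue
squeue -u $USER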

...