The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.
...
VASP version 6.3.2 is available on Lanta. There are two types of VASP modules: one for running VASP on CPU nodes and another for running on GPU nodes. For CPU nodes, please use the VASP modules that have the text cpu in their names; for GPU nodes, please use the modules that have the text gpu in their names. For VASP built with the VTST code (https://theory.cm.utexas.edu/vtsttools/index.html), the module names carry the suffix vtst.
Version | Processing unit | Module name
---|---|---
6.3.2 | CPU | VASP/6.3.2-GNU-cpu, VASP/6.3.2-GNU-cpu_vtst
6.3.2 | GPU | VASP/6.3.2-NVHPC-gpu, VASP/6.3.2-NVHPC-gpu_vtst
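To use one of these builds, load the corresponding module in your shell or in the job script before running VASP. A minimal sketch, assuming the standard module command is available on Lanta:

module avail VASP                 # list the VASP modules installed on the system
module load VASP/6.3.2-GNU-cpu    # load the CPU build (use the _vtst module for the VTST-patched build)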
The basic input files for running VASP on Lanta are the standard VASP input files (INCAR, KPOINTS, POSCAR, and POTCAR) and a job submission script.
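A typical working directory before submission might therefore look like the sketch below, where submit.sh is the job script created in the next step:

$ ls
INCAR  KPOINTS  POSCAR  POTCAR  submit.sh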
Create a submission script using the vi submit.sh command and specify the following details, depending on the computational resources you want to use.
VASP on Lanta is built with OpenMP support, so users can combine OpenMP threading with parallelization over MPI ranks. However, only some cases benefit from using multiple OpenMP threads per MPI rank; for further information, please visit Combining OpenMP + MPI in VASP. Job submission scripts for both pure MPI and hybrid OpenMP+MPI are shown below.
Pure MPI:

#!/bin/bash -l
#SBATCH -p compute                 #specify partition
#SBATCH -N 1                       #specify number of nodes
#SBATCH --ntasks-per-node=64       #specify number of tasks per node
#SBATCH -t 2:00:00                 #job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                #project name
#SBATCH -J VASP-run                #job name

##Module load##
module load VASP/6.3.2-GNU-cpu

##Extra modules loaded due to an MPI issue##
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

#set the maximum stack size to unlimited
ulimit -s unlimited

#extra settings due to an MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

#disable OpenMP
export OMP_NUM_THREADS=1

##Run VASP##
srun vasp_std
The script above uses the compute partition (-p compute) and 1 node (-N 1) with 64 tasks per node (--ntasks-per-node=64), so the total number of CPU cores for this job is 64 (the number of tasks) x 1 (the default CPUs per task) = 64 cores. The account is set to ltXXXXXX (-A ltXXXXXX); change it to your own project account.
Hybrid OpenMP+MPI:

#!/bin/bash -l
#SBATCH -p compute                 #specify partition
#SBATCH -N 1                       #specify number of nodes
#SBATCH --ntasks-per-node=16       #specify number of tasks per node
#SBATCH --cpus-per-task=4          #specify number of OpenMP threads per task
#SBATCH -t 2:00:00                 #job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                #project name
#SBATCH -J VASP-run                #job name

##Module load##
module load VASP/6.3.2-GNU-cpu

##Extra modules loaded due to an MPI issue##
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

#set the maximum stack size to unlimited
ulimit -s unlimited

#extra settings due to an MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

#set OpenMP variables
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=close
export OMP_STACKSIZE=512m

##Run VASP##
srun vasp_std
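In this hybrid example, 16 MPI ranks x 4 OpenMP threads per rank = 64 cores per node, the same total as the pure MPI script above. If you want to verify the layout at run time, the optional (hypothetical) lines below can be added just before srun vasp_std to print the values Slurm and OpenMP will use:

echo "MPI ranks per node   : ${SLURM_NTASKS_PER_NODE}"
echo "OpenMP threads / rank: ${OMP_NUM_THREADS}"
echo "Cores used per node  : $(( SLURM_NTASKS_PER_NODE * OMP_NUM_THREADS ))"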
...
Info: Please note that more CPU cores does not always mean better performance. It is a good idea to run a test with your own system to find the optimal number of CPU cores.
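One simple way to carry out such a test is to copy the inputs into separate directories and submit the same script with different task counts; options passed to sbatch on the command line override the corresponding #SBATCH lines in the script. A minimal sketch (the task counts below are placeholders, not recommendations):

for n in 16 32 64; do                                          # placeholder task counts
    mkdir -p run_$n && cp INCAR KPOINTS POSCAR POTCAR submit.sh run_$n/
    (cd run_$n && sbatch --ntasks-per-node=$n submit.sh)       # override the value in submit.sh
done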
GPU:

#!/bin/bash -l
#SBATCH -p gpu                     #specify partition
#SBATCH -N 1                       #specify number of nodes
#SBATCH --ntasks-per-node=4        #specify number of tasks per node
#SBATCH --gpus-per-task=1          #specify number of GPUs per task
#SBATCH --cpus-per-task=16         #specify number of OpenMP threads per task
#SBATCH -t 2:00:00                 #job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                #project name
#SBATCH -J VASP-run                #job name

##Module load##
module load VASP/6.3.2-NVHPC-gpu

##Extra modules loaded due to an MPI issue##
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

#set the maximum stack size to unlimited
ulimit -s unlimited

#extra settings due to an MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

#set OpenMP variables
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export OMP_PLACES=cores
export OMP_PROC_BIND=close
export OMP_STACKSIZE=512m

##Run VASP##
srun vasp_std
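Here, 4 MPI ranks are started per node, each with one GPU (--gpus-per-task=1) and 16 CPU cores for OpenMP threads. If you want to confirm that every rank can see a GPU before the production run, an optional (hypothetical) check such as the line below can be placed just before srun vasp_std:

srun bash -c 'echo "rank ${SLURM_PROCID}: $(nvidia-smi -L)"'   # each rank should report one GPU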
...
Info: The number of MPI ranks (ntasks) should be less than or equal to the number of GPUs.
Use the sbatch submit.sh command to submit the job to the queuing system.
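After submission, you can monitor the job and follow its output once it starts. A short sketch; the job ID and output file name will differ on your system:

sbatch submit.sh                  # prints: Submitted batch job <jobid>
squeue -u $USER                   # check the job state (PD = pending, R = running)
tail -f slurm-<jobid>.out         # follow the output; <jobid> is the number printed by sbatch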
...