Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.

Official website: https://www.quantum-espresso.org

Updated: 5 August 2024


Available Version

Quantum ESPRESSO version 7.2 is available on Lanta. The module QuantumESPRESSO/7.2-libxc-6.1.0-cpu is used to run on CPU nodes, while QuantumESPRESSO/7.2-libxc-6.1.0-gpu is used to run on GPU nodes. Both modules are linked against libxc 6.1.0 (https://tddft.org/programs/libxc/).

Version   Processing unit   Module name
7.2       CPU               QuantumESPRESSO/7.2-libxc-6.1.0-cpu
7.2       GPU               QuantumESPRESSO/7.2-libxc-6.1.0-gpu
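
To check which versions are installed and load one, standard Lmod commands can be used (a minimal sketch; the module names are taken from the table above):

module avail QuantumESPRESSO                       # list available Quantum ESPRESSO modules
module load QuantumESPRESSO/7.2-libxc-6.1.0-cpu    # load the CPU build
module list                                        # confirm the loaded modules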

1. Input file

The basic input files for running Quantum ESPRESSO on Lanta are the Quantum ESPRESSO input file(s) and a job submission script.
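
For illustration, below is a minimal pw.x SCF input for bulk silicon (a sketch only, not a Lanta-specific input; the prefix, pseudopotential file name, and numerical values are assumptions). Note that outdir and pseudo_dir may be omitted here because the submission scripts in Section 2 set ESPRESSO_TMPDIR and ESPRESSO_PSEUDO, which Quantum ESPRESSO uses as their defaults.

&CONTROL
  calculation = 'scf'        ! self-consistent field calculation
  prefix = 'si'              ! prefix for the output files (example name)
/
&SYSTEM
  ibrav = 2                  ! fcc Bravais lattice
  celldm(1) = 10.2           ! lattice parameter in bohr
  nat = 2                    ! number of atoms in the cell
  ntyp = 1                   ! number of atomic species
  ecutwfc = 30.0             ! plane-wave cutoff in Ry
/
&ELECTRONS
  conv_thr = 1.0d-8          ! SCF convergence threshold
/
ATOMIC_SPECIES
  Si 28.086 Si.pz-vbc.UPF
ATOMIC_POSITIONS alat
  Si 0.00 0.00 0.00
  Si 0.25 0.25 0.25
K_POINTS automatic
  4 4 4 0 0 0

Such a file would take the place of ausurf.in in the srun line of the scripts below.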

2. Job submission script

Create a script using the vi submit.sh command and specify the following details, depending on the computational resources you want to use.

2.1 Run Quantum ESPRESSO on a CPU node

#!/bin/bash
#SBATCH -p compute                # select the partition
#SBATCH --nodes=1                 # define number of node
#SBATCH --ntasks-per-node=64      # define number of tasks per node
#SBATCH --cpus-per-task=1         # OMP threads
#SBATCH -t 2:00:00                # define reserve time
#SBATCH -J test_QE                # define the job name
#SBATCH -A ltXXXXXX               # define your project account

module purge
module load QuantumESPRESSO/7.2-libxc-6.1.0-cpu

mkdir -p /scratch/ltXXXXXX-YYYY/$USER/$SLURM_JOB_ID                    # change ltXXXXXX-YYYY to yours
export ESPRESSO_TMPDIR=/scratch/ltXXXXXX-YYYY/$USER/${SLURM_JOB_ID}    # the location of the outdir
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export ESPRESSO_PSEUDO=./                                              # the location of pseudo potential
ulimit -s unlimited

srun --cpus-per-task=${SLURM_CPUS_PER_TASK} pw.x -inp ausurf.in > ausurf.out

The script above uses the compute partition (-p compute) and 1 node (--nodes=1) with 64 tasks per node (--ntasks-per-node=64), so the total number of CPU cores for this job is 64 (the number of tasks) × 1 (CPU per task) = 64 cores. The account is set to ltXXXXXX (-A ltXXXXXX), which must be changed to your own project account.

Please check ESPRESSO_TMPDIR regularly and delete unwanted files, since they can consume a large amount of disk space.
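
For example, the size of each job directory under scratch can be inspected, and the files of a finished job removed, as follows (ltXXXXXX-YYYY and <job_id> are placeholders):

du -sh /scratch/ltXXXXXX-YYYY/$USER/*          # show the size of each job directory
rm -r /scratch/ltXXXXXX-YYYY/$USER/<job_id>    # remove the temporary files of a finished job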

2.2 Run Quantum ESPRESSO on a GPU node

#!/bin/bash
#SBATCH -p gpu                    # select the partition
#SBATCH --nodes=1                 # define number of node
#SBATCH --ntasks-per-node=1       # define number of tasks per node
#SBATCH --gpus-per-node=1         # define number of gpus
#SBATCH --cpus-per-task=4         # OMP threads
#SBATCH -t 2:00:00                # define reserve time
#SBATCH -J test_QE                # define the job name
#SBATCH -A ltXXXXXX               # define your project account

module purge
module load QuantumESPRESSO/7.2-libxc-6.1.0-gpu

mkdir -p /scratch/ltXXXXXX-YYYY/$USER/$SLURM_JOB_ID                    # change ltXXXXXX-YYYY to yours
export ESPRESSO_TMPDIR=/scratch/ltXXXXXX-YYYY/$USER/${SLURM_JOB_ID}    # the location of the outdir
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export ESPRESSO_PSEUDO=./                                              # the location of pseudo potential
ulimit -s unlimited

srun --cpus-per-task=${SLURM_CPUS_PER_TASK} pw.x -inp ausurf.in > ausurf.out

The script above uses the gpu partition (-p gpu) and 1 node (--nodes=1) with 1 task per node (--ntasks-per-node=1), 1 GPU card per node (--gpus-per-node=1), and 4 OpenMP threads per task (--cpus-per-task=4).

The total number of MPI ranks (ntasks) × OMP_NUM_THREADS must not exceed the total number of physical cores (128 cores per compute node and 64 cores per GPU node).

The number of MPI ranks should equal the number of GPUs (ntasks-per-node = gpus-per-node), as in the sketch below.
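
For instance, a job using several GPUs on one node could request the following (a sketch assuming 4 GPU cards per GPU node, which is not stated above; adjust to the actual node configuration):

#SBATCH -p gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4    # one MPI rank per GPU
#SBATCH --gpus-per-node=4      # assumption: 4 GPU cards per node
#SBATCH --cpus-per-task=16     # 4 ranks x 16 threads = 64 cores

This satisfies both rules above: ntasks-per-node equals gpus-per-node, and 4 × 16 = 64 does not exceed the 64 physical cores of a GPU node.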


3. Job submission

Use the sbatch submit.sh command to submit the job to the queuing system.
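
After submission, the job can be monitored with standard Slurm commands, for example (<job_id> is a placeholder):

sbatch submit.sh     # submit the job; Slurm prints the assigned job ID
squeue -u $USER      # check the status of your queued and running jobs
scancel <job_id>     # cancel a job if needed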