Amber
Amber is a suite of biomolecular simulation programs that allow users to carry out molecular dynamics simulations, particularly on biomolecules. The Amber software suite is divided into two parts: AmberTools22, a collection of freely available programs, and Amber22, which is centered around the pmemd simulation program.
Official website: The Amber Molecular Dynamics Package
Updated: 5 May 2023
Available version
Amber22 is available on Lanta and can be accessed with the command module load Amber/2022-cpeGNU-CUDA-11.7. Running MD simulations with Amber22 is also supported by GPU acceleration.
Version | Module name |
---|---|
22 | Amber/2022-cpeGNU-CUDA-11.7 |
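To check which Amber modules are installed before loading one, the standard module commands can be used, for example:
module avail Amber                        # list Amber modules available on the system
module load Amber/2022-cpeGNU-CUDA-11.7   # load the Amber22 module
module list                               # verify that the module is loaded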
1. Input files
The input files for running Amber22 can be studied from the Amber Tutorials. Example input files for Amber can be found at /project/common/Amber/Example/.
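For illustration, an MD control file (mdin) uses the Fortran &cntrl namelist format. The sketch below is only an assumed example with typical production-run settings; the actual parameter values should be taken from the Amber Tutorials or adapted to your own system.
Production MD: NPT at 300 K (illustrative values only)
 &cntrl
  imin=0, irest=1, ntx=5,            ! run MD, restart from previous coordinates/velocities
  nstlim=500000, dt=0.002,           ! 500,000 steps x 2 fs = 1 ns
  ntc=2, ntf=2, cut=8.0,             ! SHAKE on bonds to hydrogen, 8 Angstrom cutoff
  ntb=2, ntp=1, taup=2.0,            ! constant pressure (NPT)
  temp0=300.0, ntt=3, gamma_ln=2.0,  ! Langevin thermostat at 300 K
  ntpr=5000, ntwx=5000, ntwr=50000,  ! energy, trajectory, and restart output frequencies
 /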
2. Job submission script
Create a job submission script using the vi submit.sh command and specify the following details depending on the computational resources you want to use.
2.1 CPU
#!/bin/bash -l
#SBATCH -p compute # specify partition
#SBATCH -N 1 # specify number of nodes
#SBATCH --ntasks-per-node=128 # specify number of MPI tasks per node
#SBATCH --time=24:00:00 # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX # project account
#SBATCH -J Amber-run # job name
module purge # purge all loaded modules
module load Amber/2022-cpeGNU-CUDA-11.7
WORKDIR=$SLURM_SUBMIT_DIR
######################################################
cd $WORKDIR
srun pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd
The script above uses the compute partition (-p compute), 1 node (-N 1) with 128 tasks per node (--ntasks-per-node=128). The account is set to ltXXXXXX (-A ltXXXXXX), which must be changed to your own project account. The wall-time limit is set to 24 hours (--time=24:00:00); the maximum time limit is 5 days (-t 5-00:00:00).
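Note that the srun line above lists only the minimal flags. pmemd also accepts flags for the restart, trajectory, and progress files; a sketch with placeholder file names is shown below.
srun pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd \
     -r restrt -x mdcrd -inf mdinfo
# -O     : overwrite existing output files
# -i/-o  : MD control input / text output
# -p/-c  : topology (prmtop) / input coordinates (inpcrd)
# -r     : final restart file, -x : coordinate trajectory, -inf : progress summary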
2.2 GPU
2.2.1 Single GPU
#!/bin/bash -l
#SBATCH -p gpu # specify partition
#SBATCH -N 1 # specify number of nodes
#SBATCH --ntasks-per-node=1 # specify number of MPI tasks per node
#SBATCH -G 1 # specify number of GPUs
#SBATCH --time=24:00:00 # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX # project account
#SBATCH -J Amber-run # job name
module purge # purge all loaded modules
module load Amber/2022-cpeGNU-CUDA-11.7
WORKDIR=$SLURM_SUBMIT_DIR
######################################################
cd $WORKDIR
srun pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd
The script above requests the gpu partition (-p gpu), 1 node (-N 1), 1 task per node (--ntasks-per-node=1), and 1 GPU card (-G 1).
2.2.2 Multiple GPUs
#!/bin/bash -l
#SBATCH -p gpu # specify partition
#SBATCH -N 1 # specify number of nodes
#SBATCH --ntasks-per-node=2 # specify number of MPI tasks per node
#SBATCH -G 2 # specify number of GPUs
#SBATCH --time=24:00:00 # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX # project account
#SBATCH -J Amber-run # job name
module purge # purge all loaded modules
module load Amber/2022-cpeGNU-CUDA-11.7
WORKDIR=$SLURM_SUBMIT_DIR
######################################################
cd $WORKDIR
srun pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd
Please make sure that the number of tasks (--ntasks-per-node) matches the number of GPU cards (-G).
Note that running Amber on multiple GPUs gives no significant scaling over a single GPU.
3. Job submission
Use the sbatch submit.sh command to submit the job to the queuing system.
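For example, a typical submit-and-monitor sequence with standard Slurm commands looks like the following (the job ID is hypothetical):
sbatch submit.sh   # submit the job; Slurm prints "Submitted batch job <jobid>"
squeue -u $USER    # check the status of your pending/running jobs
scancel <jobid>    # cancel the job if necessary (replace <jobid> with the actual ID)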