Amber is a suite of biomolecular simulation programs that allows users to carry out molecular dynamics (MD) simulations, particularly on biomolecules. The Amber software suite is divided into two parts: AmberTools22, a collection of freely available programs, and Amber22, which is centered around the pmemd simulation program.

Official website: https://ambermd.org/index.php

Updated: 5 May 2023

...

...

Available version

Amber22 is available on Lanta and can be accessed with module load Amber/2022-cpeGNU-CUDA-11.7. Running MD simulations with Amber22 is also supported by GPU acceleration.

Version | Module name
22 | Amber/2022-cpeGNU-CUDA-11.7
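
To verify the installation before submitting jobs, a quick check such as the following should work (module avail and which are standard commands; the exact output depends on the Lanta environment):

Code Block
module avail Amber                          # list the Amber modules installed on the system
module load Amber/2022-cpeGNU-CUDA-11.7     # load Amber22
which pmemd.MPI pmemd.cuda                  # confirm the executables are on your PATH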

1. Input files

Input files for running Amber22 are described in the tutorials at https://ambermd.org/tutorials/. Example input files for Amber can be found at /project/common/Amber/Example/. A minimal minimisation input is sketched below.
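
For illustration only (the file name min1.in and all parameter values are hypothetical; consult the tutorials for settings appropriate to your system), a minimal energy-minimisation input could be created like this:

Code Block
# write a minimal Amber minimisation input (illustrative values only)
cat > min1.in << 'EOF'
Minimisation of the whole system
 &cntrl
  imin=1, maxcyc=1000, ncyc=500,
  ntb=1, cut=8.0,
 /
EOF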

2. Job submission script

Create a submission script, e.g. with the vi submit.sh command, and specify the following details depending on the computational resources you want to use.

2.1 Run Amber on CPU node

Code Block
#!/bin/bash -l
#SBATCH -p compute                 # specific partition
#SBATCH -N 1                       # specific no. of nodes
#SBATCH --ntasks-per-node=128      # specific MPI tasks per node
#SBATCH --time=24:00:00            # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                # project account
#SBATCH -J Amber-run               # job name

module purge                       # purge all modules
module load Amber/2022-cpeGNU-CUDA-11.7

# Extra modules loaded due to MPI issue
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

# Extra settings due to MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

WORKDIR=$SLURM_SUBMIT_DIR
######################################################

cd $WORKDIR

# two-stage energy minimisation
srun pmemd.MPI -O -i min1.in -o min1.out -p *.top -c *.crd -ref *.crd -r min1.restrt
srun pmemd.MPI -O -i min2.in -o min2.out -p *.top -c min1.restrt -ref min1.restrt -r min2.restrt

# MD stages, each restarting from the previous one
srun pmemd.MPI -O -i md1.in -o md1.out -p *.top -c min2.restrt -ref min2.restrt -r md1.restrt -x md1.nc -v mdvel
srun pmemd.MPI -O -i md2.in -o md2.out -p *.top -c md1.restrt -ref md1.restrt -r md2.restrt -x md2.nc -v mdvel
srun pmemd.MPI -O -i md3.in -o md3.out -p *.top -c md2.restrt -ref md2.restrt -r md3.restrt -x md3.nc -v mdvel

# continue with md4 and md5
for ((i=4;i<6;i++)); do
    j=$((i-1))
    srun pmemd.MPI -O -i md${i}.in -o md${i}.out -p *.top -c md${j}.restrt -ref md${j}.restrt -r md${i}.restrt -x md${i}.nc -v mdvel
done

The script above uses the compute partition (-p compute) and 1 node (-N 1) with 128 tasks per node (--ntasks-per-node=128). The account is set to ltXXXXXX (-A ltXXXXXX) and must be changed to your own project account. The wall-time limit is set to 24 hours (24:00:00); the maximum time limit is 5 days (-t 5-00:00:00).

2.2 Run Amber on GPU node

2.2.1 Single GPU

Code Block
#!/bin/bash -l
#SBATCH -p gpu                     # specific partition
#SBATCH -N 1                       # specific no. of nodes
#SBATCH --ntasks-per-node=1        # specific MPI tasks
#SBATCH -G 1                       # specific no. of GPUs
#SBATCH --time=24:00:00            # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                # project account
#SBATCH -J Amber-run               # job name

module purge                       # purge all modules
module load Amber/2022-cpeGNU-CUDA-11.7

WORKDIR=$SLURM_SUBMIT_DIR
######################################################

cd $WORKDIR

srun pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd

The script above requests the gpu partition (-p gpu), 1 node (-N 1), 1 task (--ntasks-per-node=1), and 1 GPU card (-G 1).
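
If you want to confirm that a GPU has actually been allocated, a quick interactive check like the following should work (assuming NVIDIA GPUs on the gpu partition, with ltXXXXXX replaced by your own project account):

Code Block
srun -p gpu -N 1 --ntasks-per-node=1 -G 1 -A ltXXXXXX -t 00:05:00 nvidia-smi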

2.2.2 Multiple GPUs

Code Block
#!/bin/bash -l
#SBATCH -p gpu                 # specific partition
#SBATCH -N 1                       # specific no. of nodes
#SBATCH --ntasks-per-node=2        # specific MPI tasks
#SBATCH -G 2                       # specific no. of GPUs
#SBATCH --time=24:00:00            # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                # project account
#SBATCH -J Amber-run               # job name

module purge                       # purge all modules
module load Amber/2022-cpeGNU-CUDA-11.7

WORKDIR=$SLURM_SUBMIT_DIR
######################################################

cd $WORKDIR

srun pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd

Note

Please make sure that the number of tasks (--ntasks-per-node) matches the number of GPU cards (-G).

Info

There is no significant performance scaling when running Amber on multiple GPUs compared to a single GPU.

3. Job submission

Use the sbatch submit.sh command to submit the job to the queuing system.
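
Once submitted, standard Slurm commands can be used to monitor and, if necessary, cancel the job (a typical sequence; the job ID below is illustrative):

Code Block
sbatch submit.sh          # submit the job; Slurm prints the job ID
squeue -u $USER           # check the status of your queued and running jobs
scancel <jobid>           # cancel a job, using the ID printed by sbatch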

...