
Amber is a suite of biomolecular simulation programs that allows users to carry out molecular dynamics simulations, particularly on biomolecules. The Amber software suite is divided into two parts: AmberTools22, a collection of freely available programs, and Amber22, which is centered around the pmemd simulation program.

Official website: https://ambermd.org/index.php

Updated: 3 Mar 2023



Available version

Amber22 is available on LANTA and can be accessed by loading the module Amber/2022-CrayGNU-CUDA-11.4 (see the example below the table). Running MD simulations with Amber22 is also supported by GPU acceleration.

Version | Module name
22      | Amber/2022-CrayGNU-CUDA-11.4
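The module can be checked and loaded with the standard module commands; the exact output depends on the site configuration:

# list the Amber modules installed on the system
module avail Amber

# load Amber22 and verify that it appears in the environment
module load Amber/2022-CrayGNU-CUDA-11.4
module list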

1. Input files

The input files required for running Amber22 are described in the official tutorials at https://ambermd.org/tutorials/. A minimal example is sketched below.
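As a rough illustration, the minimisation and MD input files referenced in the job scripts below (min1.in, md1.in, etc.) each contain a title line followed by an &cntrl namelist. The following is only a minimal sketch with example values; adapt the settings to your own system following the Amber manual and tutorials.

# A minimal sketch of a minimisation input (min1.in); example values only
cat > min1.in << 'EOF'
Minimisation with positional restraints on the solute (example values)
 &cntrl
  imin=1, maxcyc=2000, ncyc=1000,
  cut=8.0, ntb=1,
  ntr=1, restraint_wt=10.0,
  restraintmask='!:WAT & !@H=',
 /
EOF

# A minimal sketch of a first MD stage (md1.in); example values only
cat > md1.in << 'EOF'
MD with weak positional restraints at constant volume (example values)
 &cntrl
  imin=0, irest=0, ntx=1,
  nstlim=50000, dt=0.002,
  ntc=2, ntf=2, cut=8.0, ntb=1,
  tempi=0.0, temp0=300.0, ntt=3, gamma_ln=2.0,
  ntr=1, restraint_wt=5.0, restraintmask='!:WAT & !@H=',
  ntpr=1000, ntwx=1000, ntwr=10000,
 /
EOF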

2. Job submission script

Create a job script, e.g. using the vi submit.sh command, and specify the following details depending on the computational resources you want to use.

2.1 Run Amber on CPU node

#!/bin/bash -l
#SBATCH -p compute               #specify partition
#SBATCH -N 1                     #specify number of nodes
#SBATCH --ntasks-per-node=32     #specify number of tasks per node
#SBATCH --time=24:00:00          #job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX              #account name
#SBATCH -J AMBER-job             #job_name

module purge                                     #purge all modules
module load Amber/2022-CrayGNU-CUDA-11.4

##Extra modules loaded due to MPI issue
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

# Extra setting due to MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

WORKDIR=$SLURM_SUBMIT_DIR
######################################################

cd $WORKDIR

srun pmemd.MPI -O -i min1.in -o min1.out -p *.top -c *.crd -ref *.crd -r min1.restrt

srun pmemd.MPI -O -i min2.in -o min2.out -p *.top -c min1.restrt -ref min1.restrt -r min2.restrt

srun pmemd.MPI -O -i md1.in -o md1.out -p *.top -c min2.restrt -ref min2.restrt -r md1.restrt -x md1.nc -v mdvel

srun pmemd.MPI -O -i md2.in -o md2.out -p *.top -c md1.restrt -ref md1.restrt -r md2.restrt -x md2.nc -v mdvel

srun pmemd.MPI -O -i md3.in -o md3.out -p *.top -c md2.restrt -ref md2.restrt -r md3.restrt -x md3.nc -v mdvel

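# md4 and md5 continue the run, each restarting from the previous stage's restart file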
for ((i=4;i<6;i++)); do
     j=$((i-1))
     srun pmemd.MPI -O -i md${i}.in -o md${i}.out -p *.top -c md${j}.restrt -ref md${j}.restrt -r md${i}.restrt -x md${i}.nc -v mdvel
done

The script above uses the compute partition (-p compute) with 1 node (-N 1) and 32 tasks per node (--ntasks-per-node=32). The account is set to ltXXXXXX (-A ltXXXXXX), which must be changed to your own project account. The wall-time limit is set to 24 hours (--time=24:00:00); the maximum time limit is 5 days (-t 5-00:00:00).
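The wall-time limit and size of a partition can be checked with standard Slurm commands before adjusting --time, for example:

# show the partition name, wall-time limit and number of nodes of the compute partition
sinfo -p compute -o "%P %l %D"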

2.2 Run Amber on GPU node

#!/bin/bash -l
#SBATCH -p gpu                 #specify partition
#SBATCH -N 1                   #specify number of nodes
#SBATCH --ntasks-per-node=1    #specify number of tasks per node
#SBATCH --gpus-per-task=1      #specify number of gpus per task
#SBATCH --time=5:00:00         #job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX            #account name
#SBATCH -J AMBER-gpu           #job name

module purge                                 #purge all modules
module load Amber/2022-CrayGNU-CUDA-11.4

##Extra modules loaded due to MPI issue
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

# Extra setting due to MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

WORKDIR=$SLURM_SUBMIT_DIR

######################################################
# nothing below this line should need to be changed to run the job

cd $WORKDIR

srun pmemd.cuda -O -i min1.in -o min1.out -p *.top -c *.crd -ref *.crd -r min1.restrt

srun pmemd.cuda -O -i min2.in -o min2.out -p *.top -c min1.restrt -ref min1.restrt -r min2.restrt

srun pmemd.cuda.MPI -O -i md1.in -o md1.out -p *.top -c min2.restrt -ref min2.restrt -r md1.restrt -x md1.nc -v mdvel

srun pmemd.cuda.MPI -O -i md2.in -o md2.out -p *.top -c md1.restrt -ref md1.restrt -r md2.restrt -x md2.nc -v mdvel

srun pmemd.cuda.MPI -O -i md3.in -o md3.out -p *.top -c md2.restrt -ref md2.restrt -r md3.restrt -x md3.nc -v mdvel

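# md4 and md5 continue the run, each restarting from the previous stage's restart file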
for ((i=4;i<6;i++)); do
     j=$((i-1))
     srun pmemd.cuda.MPI -O -i md${i}.in -o md${i}.out -p *.top -c md${j}.restrt -ref md${j}.restrt -r md${i}.restrt -x md${i}.nc -v mdvel
done

The script above requests the gpu partition (-p gpu), 1 node (-N 1), 1 task per node (--ntasks-per-node=1), and 1 GPU card per task (--gpus-per-task=1).

Currently, only 1 task and 1 GPU card are supported for running Amber on a GPU node. Requesting more than 1 task results in the error cudaIpcOpenMemHandle failed on gpu->pbPeerAccumulator Handle invalid argument in the Slurm output file. Although more than 1 GPU can be requested with --gpus-per-task, only 1 GPU card is actually used by the job. We are working on these issues.
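To confirm which GPU the job actually sees, a quick check can be added to the script before the pmemd.cuda steps (assuming nvidia-smi is available on the GPU nodes):

# list the GPU(s) visible to the job step
srun nvidia-smi -L
echo "CUDA_VISIBLE_DEVICES = $CUDA_VISIBLE_DEVICES"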

3. Job submission

Use the sbatch submit.sh command to submit the job to the queuing system.
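For example, the job can be submitted and monitored with standard Slurm commands:

# submit the job; sbatch prints the job ID
sbatch submit.sh

# check the status of your jobs in the queue
squeue -u $USER

# cancel a job if necessary (replace <jobid> with the ID reported by sbatch)
scancel <jobid>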
