Amber is a suite of biomolecular simulation programs that allows users to carry out molecular dynamics simulations, particularly on biomolecules. The Amber software suite is divided into two parts: AmberTools22, a collection of freely available programs, and Amber22, which is centered around the pmemd simulation program.


Available version

Amber22 is available on Lanta and can be accessed with module load Amber/2022-cpeGNU-CUDA-11.7. Running MD simulations with Amber22 is also supported with GPU acceleration.

Version    Module name
22         Amber/2022-cpeGNU-CUDA-11.7
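
For reference, the module can be loaded interactively on a login node as sketched below (standard module commands; module avail and module list only inspect the environment and are optional).

Code Block
module avail Amber                        # list Amber modules available on the system
module load Amber/2022-cpeGNU-CUDA-11.7   # load Amber22
module list                               # confirm the loaded modules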

1. Input files

The input files for running Amber22 can be studied at https://ambermd.org/tutorials/. Example input files for Amber can be found at /project/common/Amber/Example/
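
For instance, the shared examples can be copied into your own working directory before editing them; the destination name amber-test below is only an illustration.

Code Block
cp -r /project/common/Amber/Example/ amber-test   # copy the shared examples
cd amber-test                                     # work on your own copy
ls                                                # inspect the copied input files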

2. Job submission script

Create a job script using the vi submit.sh command and specify the following details depending on the computational resources you want to use.

2.1 Run Amber on CPU node

Code Block
#!/bin/bash -l
#SBATCH -p compute                 # specify partition
#SBATCH -N 1                       # specify number of nodes
#SBATCH --ntasks-per-node=128      # specify number of MPI tasks per node
#SBATCH --time=24:00:00            # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                # project account
#SBATCH -J Amber-run               # job name

module purge                            # purge all modules
module load Amber/2022-cpeGNU-CUDA-11.7

WORKDIR=$SLURM_SUBMIT_DIR
######################################################

cd $WORKDIR

srun pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd

The script above uses the compute partition (-p compute) with 1 node (-N 1) and 128 tasks per node (--ntasks-per-node=128). The account is set to ltXXXXXX (-A ltXXXXXX), which must be changed to your own project account. The wall-time limit is set to 24 hours (24:00:00); the maximum time limit is 5 days (-t 5-00:00:00).
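
To check the actual limits of a partition before adjusting --time, standard Slurm commands can be used as sketched below (the output format string is just an example).

Code Block
sinfo -p compute -o "%P %l %D"    # show partition name, time limit, and node count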

2.2 Run Amber on GPU node

2.2.1 Single GPU

Code Block
#!/bin/bash -l
#SBATCH -p gpu                     # specify partition
#SBATCH -N 1                       # specify number of nodes
#SBATCH --ntasks-per-node=1        # specify number of MPI tasks per node
#SBATCH -G 1                       # specify number of GPUs
#SBATCH --time=24:00:00            # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                # project account
#SBATCH -J Amber-run               # job name

module purge                            # purge all modules
module load Amber/2022-cpeGNU-CUDA-11.7

WORKDIR=$SLURM_SUBMIT_DIR
######################################################

cd $WORKDIR

srun pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd

The script above requests the GPU partition (-p gpu), 1 node (-N 1), 1 task per node (--ntasks-per-node=1), and 1 GPU card (-G 1).
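
To confirm that the job really sees the allocated GPU, a short check can be added to the script after the module load line, assuming nvidia-smi is available on the GPU nodes.

Code Block
srun nvidia-smi -L                                   # list the GPU(s) visible to the job
echo "CUDA_VISIBLE_DEVICES = $CUDA_VISIBLE_DEVICES"  # devices assigned by Slurm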

2.2.2 Multiple GPUs

Code Block
#!/bin/bash -l
#SBATCH -p gpu                     # specify partition
#SBATCH -N 1                       # specify number of nodes
#SBATCH --ntasks-per-node=2        # specify number of MPI tasks per node
#SBATCH -G 2                       # specify number of GPUs
#SBATCH --time=24:00:00            # job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                # project account
#SBATCH -J Amber-run               # job name

module purge                            # purge all modules
module load Amber/2022-cpeGNU-CUDA-11.7

WORKDIR=$SLURM_SUBMIT_DIR
######################################################

cd $WORKDIR

srun pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd


Info

There is no significant performance scaling when running Amber on multiple GPUs compared to a single GPU.

3. Job submission

Use the sbatch submit.sh command to submit the job to the queuing system.
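
A typical submit-and-monitor sequence with standard Slurm commands is sketched below; the job ID 123456 is only a placeholder.

Code Block
sbatch submit.sh       # submit the job; Slurm prints the assigned job ID
squeue -u $USER        # check the status of your jobs in the queue
scancel 123456         # cancel a job if needed (replace with your own job ID)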
