Amber is a suite of biomolecular simulation programs that allows users to carry out molecular dynamics simulations, particularly on biomolecules. The Amber software suite is divided into two parts: AmberTools22, a collection of freely available programs, and Amber22, which is centered around the pmemd simulation program.
...
...
Amber22 is available on LANTA and can be accessed with `module load Amber/2022-CrayGNU-CUDA-11.4`. Running MD simulations with Amber22 is also supported through GPU acceleration.
Version | Module name |
---|---|
22 | Amber/2022-CrayGNU-CUDA-11.4 |
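For example, the module can be loaded and the Amber executables checked on a login node as follows (a minimal sketch; `which` simply confirms that the binaries are on your PATH):

```bash
module purge
module load Amber/2022-CrayGNU-CUDA-11.4
which pmemd.MPI pmemd.cuda   # confirm the CPU and GPU executables are available
```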
The input files for running Amber22 can be prepared by following the official tutorials at https://ambermd.org/tutorials/
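Purely as an illustration (not taken from any specific tutorial), a first restrained-minimization input such as the min1.in referenced in the job scripts below might be created as follows; every parameter value here is an assumption and should be adapted to your system:

```bash
# Hypothetical example only -- follow the Amber tutorials for real parameter choices.
cat > min1.in << 'EOF'
Restrained minimization of the solvated system
 &cntrl
  imin=1, maxcyc=1000, ncyc=500,
  cut=10.0, ntb=1,
  ntr=1, restraint_wt=10.0,
  restraintmask='!:WAT',
 /
EOF
```

Subsequent inputs (min2.in, md1.in, and so on) follow the same &cntrl namelist format.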
Create a job script using the `vi submit.sh` command and specify the following details depending on the computational resources you want to use.
```bash
#!/bin/bash -l
#SBATCH -p compute                 #specify partition
#SBATCH -N 1                       #specify number of nodes
#SBATCH --ntasks-per-node=32       #specify number of tasks per node
#SBATCH --time=24:00:00            #job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                #account name
#SBATCH -J AMBER-job               #job name

module purge                       #purge all modules
module load Amber/2022-CrayGNU-CUDA-11.4

##Extra modules loaded due to MPI issue
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

# Extra settings due to MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

WORKDIR=$SLURM_SUBMIT_DIR
######################################################
cd $WORKDIR

srun pmemd.MPI -O -i min1.in -o min1.out -p *.top -c *.crd -ref *.crd -r min1.restrt
srun pmemd.MPI -O -i min2.in -o min2.out -p *.top -c min1.restrt -ref min1.restrt -r min2.restrt
srun pmemd.MPI -O -i md1.in -o md1.out -p *.top -c min2.restrt -ref min2.restrt -r md1.restrt -x md1.nc -v mdvel
srun pmemd.MPI -O -i md2.in -o md2.out -p *.top -c md1.restrt -ref md1.restrt -r md2.restrt -x md2.nc -v mdvel
srun pmemd.MPI -O -i md3.in -o md3.out -p *.top -c md2.restrt -ref md2.restrt -r md3.restrt -x md3.nc -v mdvel

for ((i=4;i<6;i++)); do
  j=$((i-1))
  srun pmemd.MPI -O -i md${i}.in -o md${i}.out -p *.top -c md${j}.restrt -ref md${j}.restrt -r md${i}.restrt -x md${i}.nc -v mdvel
done
```
The script above uses the compute partition (-p compute) and 1 node (-N 1) with 32 tasks per node (--ntasks-per-node=32). The account is set to ltXXXXXX (-A ltXXXXXX), which should be changed to your own project account. The wall-time limit is set to 24 hours (--time=24:00:00); the maximum time limit is 5 days (-t 5-00:00:00).
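For example, to request the maximum allowed wall time instead, the time directive in the script would become:

```bash
#SBATCH -t 5-00:00:00              #job time limit <day>-<hr>:<min>:<sec>, maximum 5 days
```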
```bash
#!/bin/bash -l
#SBATCH -p gpu                     #specify partition
#SBATCH -N 1                       #specify number of nodes
#SBATCH -n 2                       #specify number of tasks
#SBATCH -G 2                       #specify number of GPU cards (must match number of tasks)
#SBATCH --time=5:00:00             #job time limit <hr:min:sec>
#SBATCH -A ltXXXXXX                #account name
#SBATCH -J AMBER-gpu               #job name

module purge                       #purge all modules
module load Amber/2022-CrayGNU-CUDA-11.4

##Extra modules loaded due to MPI issue
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0

# Extra settings due to MPI issue
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

WORKDIR=$SLURM_SUBMIT_DIR
######################################################
# Nothing below here should need changing to run
cd $WORKDIR

srun pmemd.cuda -O -i min1.in -o min1.out -p *.top -c *.crd -ref *.crd -r min1.restrt
srun pmemd.cuda -O -i min2.in -o min2.out -p *.top -c min1.restrt -ref min1.restrt -r min2.restrt
srun pmemd.cuda.MPI -O -i md1.in -o md1.out -p *.top -c min2.restrt -ref min2.restrt -r md1.restrt -x md1.nc -v mdvel
srun pmemd.cuda.MPI -O -i md2.in -o md2.out -p *.top -c md1.restrt -ref md1.restrt -r md2.restrt -x md2.nc -v mdvel
srun pmemd.cuda.MPI -O -i md3.in -o md3.out -p *.top -c md2.restrt -ref md2.restrt -r md3.restrt -x md3.nc -v mdvel

for ((i=4;i<6;i++)); do
  j=$((i-1))
  srun pmemd.cuda.MPI -O -i md${i}.in -o md${i}.out -p *.top -c md${j}.restrt -ref md${j}.restrt -r md${i}.restrt -x md${i}.nc -v mdvel
done
```
...
The script above requests the GPU partition (-p gpu), 1 node (-N 1), 2 tasks (-n 2), and 2 GPU cards (-G 2).
...
Note: Please make sure that the number of tasks (-n) matches the number of GPU cards (-G).
Info: Running Amber on multiple GPUs gives no significant scaling over a single GPU.
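Given the limited multi-GPU scaling, a single-GPU job is usually sufficient. A sketch of the corresponding resource request, replacing the -n and -G lines above, is:

```bash
#SBATCH -p gpu                     #specify partition
#SBATCH -N 1                       #specify number of nodes
#SBATCH -n 1                       #one task
#SBATCH -G 1                       #one GPU card (matches the number of tasks)
```

With a single GPU, pmemd.cuda can be used for every step in place of pmemd.cuda.MPI.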
Use the `sbatch submit.sh` command to submit the job to the queuing system.
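For example (a minimal sketch; squeue and scancel are standard Slurm commands, and <jobid> is a placeholder for the ID printed by sbatch):

```bash
sbatch submit.sh      # submit the job; Slurm prints the assigned job ID
squeue -u $USER       # check the status of your queued and running jobs
scancel <jobid>       # cancel the job if needed
```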
...