
Software website: http://ambermd.org/Installation.php

Amber20 for CPU Installation

This build is for running Amber20 on the compute and memory nodes.

  1. Obtain Amber20 from http://ambermd.org and download the source tarballs (AmberTools20.tar.bz2 and Amber20.tar.bz2)

  2. Upload the tarballs to the cluster and extract them in your project home directory

    Code Block
    cd /tarafs/data/project/projxxxx  #change projxxxx to your project ID  
    tar xvfj AmberTools20.tar.bz2  
    tar xvfj Amber20.tar.bz2  
  3. Set the AMBERHOME environment variable to the Amber source directory

    Code Block
    cd amber20_src
    export AMBERHOME=$PWD
  4. Load the modules required for the Amber20 installation

    Code Block
    module purge
    module load bzip2/1.0.8-GCCcore-8.3.0
    module load GCC/8.3.0
    module load XZ/5.2.4-GCCcore-8.3.0
  5. Go to the Amber directory and run the configure script

    Code Block
    cd $AMBERHOME
    ./configure -noX11 gnu
  6. Install and test

    Code Block
    source ./amber.sh
    make install
    make test
  7. After the installation is successful, you can compile the parallel (MPI) version of Amber20 by loading the OpenMPI module

    Code Block
    module load OpenMPI/3.1.4-GCC-8.3.0
  8. Then, reconfigure and recompile with MPI support

    Code Block
    ./configure -mpi -noX11 gnu
    make install
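
Once both the serial and MPI builds are installed, jobs can be submitted through the batch scheduler. The script below is a minimal sketch of a Slurm batch job for the MPI build of pmemd, assuming the cluster uses Slurm; the partition name, task count, wall time, and input file names are assumptions and should be adjusted to your project and workload.

    Code Block
    #!/bin/bash
    #SBATCH --job-name=amber_cpu               # example job name
    #SBATCH --partition=compute                # assumed partition name; change to match your cluster
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=40               # assumed number of MPI ranks per node
    #SBATCH --time=24:00:00

    module purge
    module load GCC/8.3.0
    module load OpenMPI/3.1.4-GCC-8.3.0

    # Point AMBERHOME at the build and set up the Amber environment
    export AMBERHOME=/tarafs/data/project/projxxxx/amber20_src   # change projxxxx to your project ID
    source $AMBERHOME/amber.sh

    # md.in, system.prmtop and system.inpcrd are placeholder input file names
    srun pmemd.MPI -O -i md.in -p system.prmtop -c system.inpcrd -o md.out -r md.rst -x md.nc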


Amber20 for GPU Installation

This build is for running Amber20 on the GPU nodes. After finishing the Amber20 for CPU installation, follow the steps below to install the GPU version of Amber20.

  1. Go to the Amber source directory and set the AMBERHOME environment variable

    Code Block
    cd /tarafs/data/project/projxxxx  #change projxxxx to your project ID  
    cd amber20_src
    export AMBERHOME=$PWD
  2. Load the modules required for the CUDA (GPU) installation

    Code Block
    module purge
    module load bzip2/1.0.8-GCCcore-8.3.0
    module load XZ/5.2.4-GCCcore-8.3.0
    module load gcccuda/2019b
    module load CUDA/10.1.243
  3. Go to the Amber directory, run the configure script with CUDA support, and install

    Code Block
    cd $AMBERHOME
    ./configure -noX11 -cuda gnu
    make install
  4. Run the serial GPU tests

    Code Block
    make test.cuda_serial
  5. After the installation is successful, you can compile the parallel (MPI) CUDA version by loading the OpenMPI and NCCL modules and setting NCCL_HOME

    Code Block
    module load OpenMPI/3.1.4-gcccuda-2019b
    module load NCCL/2.4.8-gcccuda-2019b
    export NCCL_HOME=/tarafs/utils/modules/software/NCCL/2.4.8-gcccuda-2019b/
  6. Then, reconfigure and recompile with MPI and NCCL support

    Code Block
    ./configure -noX11 -cuda -mpi -nccl gnu
    make install
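
With the CUDA build in place, a GPU job can be submitted in much the same way. The sketch below runs the single-GPU engine pmemd.cuda; the partition name, GPU request syntax, and input file names are assumptions and should be adjusted to your cluster. For multi-GPU runs, the pmemd.cuda.MPI binary built with -nccl can be launched through srun or mpirun instead.

    Code Block
    #!/bin/bash
    #SBATCH --job-name=amber_gpu               # example job name
    #SBATCH --partition=gpu                    # assumed GPU partition name; change to match your cluster
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --gres=gpu:1                       # request one GPU (syntax may differ on your system)
    #SBATCH --time=24:00:00

    module purge
    module load gcccuda/2019b
    module load CUDA/10.1.243

    # Point AMBERHOME at the build and set up the Amber environment
    export AMBERHOME=/tarafs/data/project/projxxxx/amber20_src   # change projxxxx to your project ID
    source $AMBERHOME/amber.sh

    # md.in, system.prmtop and system.inpcrd are placeholder input file names
    pmemd.cuda -O -i md.in -p system.prmtop -c system.inpcrd -o md.out -r md.rst -x md.nc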
