Gaussian 16 is the latest version of the Gaussian series of electronic structure programs, used by chemists, chemical engineers, biochemists, physicists and other scientists worldwide. Gaussian 16 provides a wide-ranging suite of the most advanced modeling capabilities available. You can use it to investigate the real-world chemical problems that interest you, in all of their complexity, even on modest computer hardware.

Official website : https://gaussian.com/g16main/

update : 12 Dec 2024


Available version

Currently, only Gaussian version 16.C.02 is available on LANTA. This module can be used for running on both CPU and GPU nodes. Gaussian on LANTA does not support parallelization across nodes, so you can request only a single node per job.

Version  | Processing unit | Module name
16.C.02  | CPU             | Gaussian/16.C.02-AVX2
16.C.02  | GPU             | Gaussian/16.C.02-AVX2
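The module above can be loaded with the standard environment-module commands (a usage sketch for an interactive LANTA session; module avail confirms the exact name installed on the system):

```
$ module avail Gaussian                # list Gaussian modules installed on LANTA
$ module load Gaussian/16.C.02-AVX2    # load Gaussian 16.C.02
$ module list                          # verify that the module is loaded
```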

1. Input files

The basic input files for running Gaussian on LANTA are a Gaussian input file (.gjf or .com) and a job submission script. The Gaussian input files for CPU and GPU nodes differ slightly in the Link 0 command section.

Anchor
1.1
1.1
1.1 CPU

Example of a Gaussian input file for running on a CPU node:

Code Block
%Chk=e3_06_react                      
%Mem = 10GB
%CPU = 0-63
# opt freq APFD/6-311+g(2d,p) geom=connectivity Int=(UltraFine,Acc2E=12)

(CH3)2CH-N=N=N Reactant Opt Freq

0 1
 C                 -1.83177831   -0.66472886    2.11461257
 C                 -0.34820496   -0.85131460    1.74610765
 H                 -0.27347155   -1.22969857    0.74803927
 H                 -2.28349687    0.02809460    1.43575618
 H                 -2.33612081   -1.60629555    2.05143972
 H                 -1.90651173   -0.28634489    3.11268095
 C                  0.30193203   -1.84846238    2.72315330
 H                  1.33272650   -1.97810312    2.46711416
 H                  0.22719861   -1.47007841    3.72122168
 H                 -0.20241047   -2.79002907    2.65998044
 N                  0.34467679    0.44223964    1.83289652
 N                  0.37818431    1.05254766    2.90258123
 N                  0.41169183    1.66285567    3.97226594

 1 2 1.0 4 1.0 5 1.0 6 1.0
 2 7 1.0 3 1.0 11 1.0
 3
 4
 5
 6
 7 8 1.0 9 1.0 10 1.0
 8
 9
 10
 11 12 2.0
 12 13 2.0
 13

Here, %Chk locates and names the checkpoint file, %Mem sets the amount of dynamic memory to use, and %CPU gives the list of processor/core IDs for shared-memory parallel processing. In this example, the total number of CPU cores is 64 (core IDs 0 to 63). For further information, please visit https://gaussian.com/link0/.

Note 1.1 : %CPU = 0-63 can be replaced by %Nproc = 64 to avoid pinning the same CPU core IDs as other jobs. If you keep %CPU, you need to specify #SBATCH --exclusive in the job submission script.
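Following the note above, the Link 0 section of the CPU example could equivalently be written with %Nproc (a sketch; the route section and geometry are unchanged):

```
%Chk=e3_06_react
%Mem = 10GB
%Nproc = 64
# opt freq APFD/6-311+g(2d,p) geom=connectivity Int=(UltraFine,Acc2E=12)
```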

1.2 GPU

Example of a Gaussian input file for running on a GPU node. The extra Link 0 directive is %GPUCPU=gpu-list=core-list. In this example, %GPUCPU=0-3=0-3 means the job uses 4 GPUs (IDs 0-3) and assigns CPU cores 0-3 to control these GPUs, respectively.

Code Block
%Chk=e3_06_react
%Mem = 10GB
%CPU = 0-63
%GPUCPU=0-3=0-3
# opt freq APFD/6-311+g(2d,p) geom=connectivity Int=(UltraFine,Acc2E=12)

(CH3)2CH-N=N=N Reactant Opt Freq

0 1
 C                 -1.83177831   -0.66472886    2.11461257
 C                 -0.34820496   -0.85131460    1.74610765
 H                 -0.27347155   -1.22969857    0.74803927
 H                 -2.28349687    0.02809460    1.43575618
 H                 -2.33612081   -1.60629555    2.05143972
 H                 -1.90651173   -0.28634489    3.11268095
 C                  0.30193203   -1.84846238    2.72315330
 H                  1.33272650   -1.97810312    2.46711416
 H                  0.22719861   -1.47007841    3.72122168
 H                 -0.20241047   -2.79002907    2.65998044
 N                  0.34467679    0.44223964    1.83289652
 N                  0.37818431    1.05254766    2.90258123
 N                  0.41169183    1.66285567    3.97226594

 1 2 1.0 4 1.0 5 1.0 6 1.0
 2 7 1.0 3 1.0 11 1.0
 3
 4
 5
 6
 7 8 1.0 9 1.0 10 1.0
 8
 9
 10
 11 12 2.0
 12 13 2.0
 13
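Following the same gpu-list=core-list syntax, a job using only 2 GPUs would change the Link 0 section as follows (a sketch; remember to match --gpus-per-node in the submission script):

```
%Chk=e3_06_react
%Mem = 10GB
%CPU = 0-63
%GPUCPU=0-1=0-1
```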

2. Job submission script

Create a script using the vi submit.sh command and specify the following details depending on the computational resources you want to use.

2.1 Run Gaussian on CPU node

Code Block
#!/bin/bash -l
#SBATCH -p compute                 	#specify partition
#SBATCH -N 1                        #specify number of nodes
#SBATCH --cpus-per-task=64   	    #specify number of cpus
#SBATCH -t 2:00:00                  #job time limit <hr:min:sec>
#SBATCH -J e3_06_react            	    #job name
#SBATCH -A ltXXXXXX                 #project account
#SBATCH --exclusive		            #reserve entire node for the job

module purge                            #purge all modules
module load Gaussian/16.C.02-AVX2      	#load gaussian version16

FILENAME=e3_06_react                    # please change the FILENAME
WORKDIR=$SLURM_SUBMIT_DIR
#################################

#create temporary scratch directory
mkdir -p /scratch/lantaXXXXXX-Y/$USER/$SLURM_JOB_ID

#export Gaussian scratch directory to this one
export GAUSS_SCRDIR=/scratch/lantaXXXXXX-Y/$USER/$SLURM_JOB_ID

g16 < $FILENAME.gjf > $WORKDIR/$FILENAME.log

The script above requests the compute partition (-p compute) and 1 node (-N 1) with 64 CPU cores per task (--cpus-per-task=64). The wall-time limit is set to 2 hours 2:00:00 (the maximum time limit is 5 days, -t 5-00:00:00). The account is set to ltXXXXXX (-A ltXXXXXX), which should be changed to your own account. The job name is set to e3_06_react (-J e3_06_react). For Gaussian, it is recommended to reserve an entire node for the job with #SBATCH --exclusive, since Gaussian is resource intensive and running it on a shared compute node could affect other users; --exclusive is required if %CPU is used in the Gaussian input file instead of %Nproc (see the note in section 1.1).

Info

The total number of cores per LANTA compute node is 128.

Note

Please change FILENAME to your own input file and change GAUSS_SCRDIR to your own project scratch directory (the path format is /scratch/[projID-shortname]).

Note

The resources specified in the job submission script must be consistent with the resources specified in the Gaussian input file.
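One way to catch a mismatch early is to compare the core count implied by the Link 0 section against the SBATCH request before submitting. A minimal sketch (gjf_cores is a hypothetical helper, not part of Gaussian or LANTA's tooling; it assumes a single %CPU=A-B or %Nproc=N line in the input file):

```shell
#!/bin/bash
# gjf_cores: print the number of CPU cores requested in a Gaussian input file,
# from either %Nproc = N or %CPU = A-B in the Link 0 section.
gjf_cores() {
    local gjf=$1
    # %Nproc = N  -> N cores
    local nproc
    nproc=$(grep -i '^%Nproc' "$gjf" | tr -d ' ' | cut -d= -f2)
    if [ -n "$nproc" ]; then
        echo "$nproc"
        return
    fi
    # %CPU = A-B  -> B - A + 1 cores
    local range first last
    range=$(grep -i '^%CPU' "$gjf" | tr -d ' ' | cut -d= -f2)
    first=${range%-*}
    last=${range#*-}
    echo $(( last - first + 1 ))
}
```

In the submission script, one could then add a guard such as [ "$(gjf_cores $FILENAME.gjf)" = "$SLURM_CPUS_PER_TASK" ] || exit 1 before launching g16.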

2.2 Run Gaussian on GPU node

Code Block
#!/bin/bash -l
#SBATCH -p gpu                  	#specify partition
#SBATCH -N 1                        #specify number of nodes
#SBATCH --gpus-per-node=4		    #specify number of gpu
#SBATCH --cpus-per-task=64   	    #specify number of cpus
#SBATCH -t 2:00:00                  #job time limit <hr:min:sec>
#SBATCH -J e3_06_react            	    #job name
#SBATCH -A ltXXXXXX                 #project account
#SBATCH --exclusive		            #reserving entire node for the job

module purge                            #purge all modules
module load Gaussian/16.C.02-AVX2      	#load gaussian version16

FILENAME=e3_06_react                    # please change the FILENAME
WORKDIR=$SLURM_SUBMIT_DIR
################################

#create temporary scratch directory
mkdir -p /scratch/lantaXXXXXX-Y/$USER/$SLURM_JOB_ID

#export Gaussian scratch directory to this one
export GAUSS_SCRDIR=/scratch/lantaXXXXXX-Y/$USER/$SLURM_JOB_ID

g16 < $FILENAME.gjf > $WORKDIR/$FILENAME.log


Info

One GPU node on LANTA has 4 NVIDIA A100 GPU cards, so --gpus-per-node=4 is the maximum value.

3. Job submission

Use the sbatch submit.sh command to submit the job to the queuing system.
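After submission, the job can be monitored with standard Slurm commands (a usage sketch; the job ID shown is illustrative):

```
$ sbatch submit.sh
Submitted batch job 1234567
$ squeue -u $USER             # check the status of your jobs in the queue
$ scancel 1234567             # cancel the job if needed
```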