
OpenFOAM (Open-source Field Operation And Manipulation) is well-known software for computational fluid dynamics (CFD). It provides standard solvers for simulating a broad range of continuum flows, and users can build their own solvers on top of the OpenFOAM toolbox to tackle specific problems. OpenFOAM is open-source software released under the GNU General Public License Version 3.

Official website: https://www.openfoam.com/ and https://openfoam.org/

Updated: July 2023



Modules

Module name                     Description                       Note
OpenFOAM/v2212-cpeCray-23.03    From https://www.openfoam.com/    MPI only, more tools
OpenFOAM/10-cpeCray-23.03       From https://openfoam.org/        MPI only, slightly faster
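
To use one of these modules, load it with the module command, for example:

module avail OpenFOAM                       # list the OpenFOAM modules installed on LANTA
module load OpenFOAM/v2212-cpeCray-23.03    # load the openfoam.com variant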

1. Case directory

A case directory is needed to run OpenFOAM. For new users, visit https://www.openfoam.com/documentation/tutorial-guide or https://doc.cfd.direct/openfoam/user-guide-v10/index.

After loading an OpenFOAM module, example cases can be found under $FOAM_TUTORIALS. Before running, copy the cases you need to your local path, e.g., cp -r $FOAM_TUTORIALS . to copy the whole set.
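
Alternatively, a single tutorial case can be copied and used as a starting point, for instance (the pitzDaily path below assumes the standard tutorial layout of the loaded module):

module load OpenFOAM/v2212-cpeCray-23.03
mkdir -p ~/openfoam-cases && cd ~/openfoam-cases
cp -r $FOAM_TUTORIALS/incompressible/simpleFoam/pitzDaily .
cd pitzDaily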

2. Job submission script

Two approaches for submitting OpenFOAM jobs are presented below. These approaches can be adapted to any OpenFOAM variant; however, some commands may have slightly different names.

I. RunFunction commands

Below is an example of an OpenFOAM submission script that uses the helper functions from $WM_PROJECT_DIR/bin/tools/RunFunctions. It can be created with, for example, vi submitFoam.sh.

  • The total number of tasks (nodes x ntasks-per-node) must be equal to or larger than the number of subdomains specified in system/decomposeParDict (or reported by the getNumberOfProcessors command); a quick way to check this is shown after this list.

  • LANTA compute partition has 128 CPU cores per node. Therefore, ntasks-per-node <= 128.
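
For example, the subdomain count can be checked from the case directory before submitting (numberOfSubdomains is the standard keyword in system/decomposeParDict):

grep numberOfSubdomains system/decomposeParDict
# numberOfSubdomains 32;    <- must not exceed nodes x ntasks-per-node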

#!/bin/bash
#SBATCH -p compute             # Partition
#SBATCH --nodes=1              # Number of nodes
#SBATCH --ntasks-per-node=32   # Number of MPI processes per node 
#SBATCH -t 5-00:00:00          # Job runtime limit
#SBATCH -J OpenFOAM            # Job name
#SBATCH -A ltxxxxxx            # Account *** {USER EDIT} *** 

module purge
module load OpenFOAM/v2212-cpeCray-23.03

ulimit -s unlimited

# *** OpenFOAM steps *** {USER EDIT} ***
source $WM_PROJECT_DIR/bin/tools/RunFunctions   # Import RunFunctions

runApplication surfaceFeatureExtract
runApplication blockMesh
runApplication snappyHexMesh -overwrite
runApplication decomposePar
runParallel renumberMesh -overwrite
runParallel checkMesh
restore0Dir -processor
runParallel $(getApplication)
runApplication reconstructPar

This approach generates several log files, one for each step.
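
RunFunctions names each log file after the application it runs, so the individual steps can be reviewed after the job finishes, for example:

ls log.*                  # log.blockMesh, log.decomposePar, log.checkMesh, ...
tail log.snappyHexMesh    # inspect the meshing step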

II. Explicit commands

Another example of OpenFOAM submission script is shown below.

  • The total number of tasks (nodes x ntasks-per-node) must be equal to or larger than the number of subdomains specified in system/decomposeParDict.

  • LANTA compute partition has 128 CPU cores per node. Therefore, ntasks-per-node <= 128.

#!/bin/bash
#SBATCH -p compute             # Partition
#SBATCH --nodes=1              # Number of nodes
#SBATCH --ntasks-per-node=32   # Number of MPI processes per node 
#SBATCH -t 5-00:00:00          # Job runtime limit
#SBATCH -J OpenFOAM            # Job name
#SBATCH -A ltxxxxxx            # Account *** {USER EDIT} *** 

module purge
module load OpenFOAM/v2212-cpeCray-23.03

ulimit -s unlimited

# *** OpenFOAM steps *** {USER EDIT} ***
cp -r ./0.orig ./0
srun -N1 -n1 surfaceFeatureExtract 2>&1
srun -N1 -n1 blockMesh 2>&1
srun -N1 -n1 snappyHexMesh -overwrite 2>&1
srun -N1 -n1 decomposePar -copyZero 2>&1
srun renumberMesh -parallel -overwrite 2>&1
srun checkMesh -parallel 2>&1
srun xxxFoam -parallel 2>&1            # replace xxxFoam with the solver of your case
srun -N1 -n1 reconstructPar 2>&1

The manual of an OpenFOAM command can be accessed by executing <command> -help or <command> -help-full.
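
For example:

blockMesh -help            # brief usage and common options
snappyHexMesh -help-full   # full list of options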

3. Job submission

To submit jobs to the SLURM queuing system, execute

sbatch submitFoam.sh

The main log will be recorded in ‘slurm-xxxxxx.out’.
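
The job can then be monitored with standard Slurm commands, for instance:

squeue -u $USER             # check whether the job is pending or running
tail -f slurm-xxxxxx.out    # follow the main log while the job runs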

4. Additional note

(As of July 2023)

Due to some issues that degrade HPE Slingshot interconnect performance, we advise users not to request whole nodes when running a large case across two or more nodes. In other words, we recommend using ntasks-per-node < 128 when nodes >= 2.

For instance, #SBATCH --nodes=4 --ntasks-per-node=64 --mem-per-cpu=3800M for 256 subdomains.

(As of June 2023)

If users encounter srun: error: task xxx launch failed: Error configuring interconnect, try requesting whole nodes using sbatch --exclusive submitFoam.sh or only idle nodes using sbatch --nodelist=xxx submitFoam.sh (see https://slurm.schedmd.com/sbatch.html ).

  1. Out of memory
    If users encounter
    slurmstepd: error: Detected ... oom-kill event(s) ... killed by the cgroup out-of-memory handler
    in any log file, try increasing RAM per CPU core by adding

    • #SBATCH --mem-per-cpu=3800M (while having ntasks-per-node <= 64), or

    • #SBATCH --mem-per-cpu=7600M (while having ntasks-per-node <= 32)

    to your submitFoam.sh.
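
    To see how much memory a completed job actually used before adjusting these values, Slurm accounting can help, for example (xxxxxx is the job ID):

    sacct -j xxxxxx --format=JobID,MaxRSS,State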

  2. Floating-point exception trapping
    By default, floating-point exception trapping is disabled. To enable it, add export FOAM_SIGFPE=true to your script.
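
    For example, near the top of submitFoam.sh:

    module load OpenFOAM/v2212-cpeCray-23.03
    export FOAM_SIGFPE=true   # enable trapping of floating-point exceptions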

  3. Building custom executable
    Please use

    • FOAM_USER_APPBIN in place of FOAM_APPBIN and FOAM_SITE_APPBIN

    • FOAM_USER_LIBBIN in place of FOAM_LIBBIN and FOAM_SITE_LIBBIN

    to avoid permission denied errors.
    [v2212] To manually specify those paths, see $WM_PROJECT_DIR/bin/tools/change-userdir.sh
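
    As an illustration, a user application could be compiled into FOAM_USER_APPBIN roughly as follows (a sketch; myFoam and its location are hypothetical, and its Make/files must set EXE = $(FOAM_USER_APPBIN)/myFoam):

    module load OpenFOAM/v2212-cpeCray-23.03
    cd $HOME/myFoam    # hypothetical application source directory (contains Make/files and Make/options)
    wmake              # builds and installs the executable into $FOAM_USER_APPBIN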


Contact Us
ThaiSC support service : thaisc-support@nstda.or.th
