The Weather Research and Forecasting (WRF) model is a well-known atmospheric modeling system developed by NCAR. It is suitable for both meteorological research and operational weather prediction.

Official website: https://www2.mmm.ucar.edu/wrf/users/

Updated: Mar 2024


Table of Contents

Modules

Module name | Description | Note
WRF/4.4.2-DMSM-cpeCray-23.03 | Standard WRF model | Aggressive optimization
WRFchem/4.5.2-DMSM-cpeIntel-23.09 | WRF model with chemistry, including WRF-Chem tools | Standard optimization
WRFchem/4.5.2-DMSM-cpeCray-23.03 | WRF model with chemistry, including WRF-Chem tools | (Experimental) Aggressive optimization
WPS/4.4-DM-cpeCray-23.03 | WRF pre-processing system for WRF 4.4.X |
WPS/4.5-DM-cpeIntel-23.09 | WRF pre-processing system for WRF 4.5.X |
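
For example, the standard WRF build and its matching WPS could be loaded as follows (a minimal sketch using the module names listed above):

Code Block
languagebash
module purge
module load WPS/4.4-DM-cpeCray-23.03        # WRF pre-processing system for WRF 4.4.X
module load WRF/4.4.2-DMSM-cpeCray-23.03    # Standard WRF model (hybrid MPI+OpenMP)
module help WRF                             # may print notes provided with the module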

More details

DM indicates that the module only supports MPI (Slurm tasks); therefore, --cpus-per-task=1 and export OMP_NUM_THREADS=1 should be used.

DMSM indicates that the module supports both MPI and OpenMP. Users can set the number of OpenMP threads per MPI process through --cpus-per-task.

For WRF on LANTA, we recommend setting --cpus-per-task to 2, 4 or 8. Note that the -c${SLURM_CPUS_PER_TASK} option for srun is essential.
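
For instance, a hybrid (DMSM) run on one full node could be requested with settings like the following sketch (the values mirror the example script in Section 2):

Code Block
languagebash
#SBATCH -N 1                    # one compute node (128 cores)
#SBATCH --ntasks-per-node=32    # MPI processes per node
#SBATCH --cpus-per-task=4       # OpenMP threads per MPI process (32 x 4 = 128)

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe   # -c is essential for hybrid runs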

1. Input file

1.1 To run the WRF model, time-dependent meteorological data (global model output/background state) is required. It can be downloaded from, for example, WRF - Free Data and NCEP GFS / GDAS.

1.2 To configure the domain and simulation time, namelist.wps is needed. A brief description of it can be found here. It is recommended to use WRF Domain Wizard or the GIS4WRF plug-in for QGIS to define WRF domains.

Info

Some static datasets, such as geog, Global_emissions_v3 and EDGAR, are readily available on LANTA at /project/common/WPS_Static/. They can be utilized by, for example, specifying geog_data_path = '/project/common/WPS_Static/geog' in namelist.wps.
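
For reference, a minimal &geogrid excerpt of namelist.wps pointing to this shared dataset might look like the following (other &geogrid entries omitted):

Code Block
&geogrid
 geog_data_path = '/project/common/WPS_Static/geog',
/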

1.3 To run the WRF model, namelist.input is required. A concise description of the essential parameters can be found here, while the full description is available in Chapter 5 of the WRF User's Guide.

Info

Two complete examples are available at /project/common/WRF/. To run one of them within a working directory, use

  • cp /project/common/WRF/Example1/* . (WRF) or

  • cp /project/common/WRF/Example2/* . (WRF-Chem)

then follow the instructions inside the README file. (The Data directory is not needed.)
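
For instance, to try the standard WRF example in a fresh working directory (the directory name here is arbitrary):

Code Block
languagebash
mkdir -p wrf_example && cd wrf_example
cp /project/common/WRF/Example1/* .    # or Example2 for WRF-Chem
# follow the README, then submit the provided job script, e.g.
sbatch submitWRF.sh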

2. Job submission script

Below is an example of a WRF submission script (submitWRF.sh). It can be created using vi submitWRF.sh. For whole-node allocation, please confirm that 128 x (Number of nodes) = (Number of MPI processes) x (Number of OpenMP threads per MPI process).

Code Block
languagebash
#!/bin/bash
#SBATCH -p compute             # Partition
#SBATCH -N 1                   # Number of nodes
#SBATCH --ntasks-per-node=32   # Number of MPI processes per node
#SBATCH --cpus-per-task=4      # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00          # Job runtime limit
#SBATCH -J WRF                 # Job name
#SBATCH -A ltxxxxxx            # Account *** {USER EDIT} ***

module purge
module load WPS/4.4-DM-cpeCray-23.03
module load WRF/4.4.2-DMSM-cpeCray-23.03

### A fix for CrayMPICH, until further notice ###
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

# -- (Recommended) -- #
export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ulimit -s unlimited

# *** {USER EDIT} *** #
# Please check that namelist.wps and namelist.input exist where this script is submitted.
link_grib /--Path-to-your-meteorological-data--/
link_vtable /--Name-of-Vtable-to-parse-the-above-met-data--/

# -- WPS -- #
link_wps
srun -n${SLURM_NTASKS} ./geogrid.exe
srun -N1 -n1 ./ungrib.exe
srun -n${SLURM_NTASKS} ./metgrid.exe
unlink_wps

# -- WRF -- #
link_emreal
srun -n${SLURM_NTASKS} ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
unlink_emreal


Info

  • Additional information regarding ThaiSC support commands for WPS (link_wps, unlink_wps) and WRF (link_emreal, unlink_emreal) can be found by using link_xxx --help, link_xxx --description or man link_xxx, after loading the modules.

  • You could run those WRF/WPS executables (.exe) separately by commenting out unrelated lines, using #, and adjusting your resource requests (#SBATCH) appropriately.

Some physics/dynamics options (and WRF-Chem) DO NOT support hybrid (DM+SM) run. If it is stuck at the beginning, try the following (consolidated in the sketch after this list):

  1. Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1
  2. Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128
  3. Specify the number of tasks for each executable explicitly; for instance, use
     srun -n16 ./real.exe
     srun -n128 ./wrf.exe or just srun ./wrf.exe
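
Putting these together, a pure-MPI (DM-only) fallback could look like this sketch:

Code Block
languagebash
#SBATCH --ntasks-per-node=128   # more MPI tasks, no OpenMP threading
#SBATCH --cpus-per-task=1

export OMP_NUM_THREADS=1
srun -n16 ./real.exe            # explicit task count for real.exe (step 3 above)
srun ./wrf.exe                  # uses all allocated tasks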

3. Job submission

To submit jobs to the SLURM queuing system on LANTA, execute sbatch submitWRF.sh.

Info

If WPS and WRF jobs are going to be submitted separately, users can use the --dependency option of the sbatch command to ensure that WRF starts only after WPS has completed (see the example below).


Code Block
languagebash
sbatch submitWRF.sh
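
For instance, assuming the WPS steps are placed in a separate script (hypothetical name submitWPS.sh), the WRF job can be held until the WPS job finishes successfully:

Code Block
languagebash
# submit WPS first and note the job ID that sbatch prints, e.g. "Submitted batch job 123456"
sbatch submitWPS.sh

# submit WRF so that it starts only after the WPS job (ID 123456 here) completes successfully
sbatch --dependency=afterok:123456 submitWRF.sh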

4. Post-processing

Several tools for processing NetCDF files, including NCL, NCO, CDO, Ncview, ecCodes, netcdf4-python, wrf-python, pyngl, pynio, and cartopy, are installed in the netcdf-py39 environment of Mamba/23.11.0-0 (previously Miniconda3).

To use NCL, for instance, add

Code Block
languagebash
module load Mamba/23.11.0-0
conda activate netcdf-py39

# For NCL only
export NCARG_ROOT=${CONDA_PREFIX}
export NCARG_RANGS=/project/common/WPS_Static/rangs
export NCARG_SHAPEFILE=/project/common/WPS_Static/shapefile  # (If used)

# Commands such as 'srun -n1 ncl xxx' or 'srun -n1 python xxx' for serial runs

Note

Please refrain from running heavy post-processing tasks on the LANTA frontend/login nodes.
For more information, visit LANTA Frontend Usage Policy.

5. Advanced topics

See the child pages of this guide for advanced topics.


Contact Us
ThaiSC support service: thaisc-support@nstda.or.th