The Weather Research and Forecasting (WRF) model is a widely used atmospheric modeling system developed by NCAR. It is suitable for both meteorological research and operational weather prediction.

Official website: https://www2.mmm.ucar.edu/wrf/users/

Updated: Mar 2024

Available version

Modules

Module name                        Description                                          Note
WRF/4.4.2-DMSM-cpeCray-23.03       Standard WRF model                                   Aggressive optimization
WRFchem/4.5.1-DM-cpeIntel-23.09    WRF model with chemistry, including WRF-Chem tools   Standard optimization
WRFchem/4.5.2-DMSM-cpeCray-23.03   WRF model with chemistry, including WRF-Chem tools   (Experimental) Aggressive optimization

WPS

Module name                 Description                                 Optimization
WPS/4.4-DM-cpeCray-23.03    WRF pre-processing system for WRF 4.4.X     Aggressive optimization
WPS/4.5-DM-cpeIntel-23.09   WRF pre-processing system for WRF 4.5.X     Aggressive optimization

More details

DM indicates that the module supports MPI only (Slurm tasks); therefore, --cpus-per-task=1 and export OMP_NUM_THREADS=1 should be used.

DMSM indicates that the module supports both MPI and OpenMP. Users can set the number of OpenMP threads per MPI process through --cpus-per-task.

For WRF on LANTA, we recommend setting --cpus-per-task equal to 2, 4, or 8. Note that passing the -c${SLURM_CPUS_PER_TASK} option to srun is essential.
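As a sketch of the difference, the two environment setups can be written as below. The value 4 for SLURM_CPUS_PER_TASK is illustrative only; inside a real job, Slurm sets this variable from --cpus-per-task.

```shell
#!/bin/bash
# DM-only module: pin each MPI rank to a single OpenMP thread.
export OMP_NUM_THREADS=1

# DMSM module: match the OpenMP thread count to the allocated CPUs per task.
# Here we simulate an allocation of 4 CPUs per task for illustration.
SLURM_CPUS_PER_TASK=4
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```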

1. Input file

1.1 To run the WRF model, time-dependent meteorological data (global model output/background state) is required. It can be downloaded from, for example, WRF - Free Data and NCEP GFS / GDAS.

1.2 To configure the domain and simulation time, the namelist.wps is needed. A brief description of it can be found here. It is recommended to use the WRF Domain Wizard or the GIS4WRF plug-in for QGIS to define WRF domains.

Info

Some static datasets, such as geog, Global_emissions_v3, and EDGAR, are readily available on LANTA at /project/common/WPS_Static/. They can be utilized by, for example, specifying geog_data_path = '/project/common/WPS_Static/geog' in the namelist.wps.
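For orientation, a minimal single-domain namelist.wps might look as follows. All dates, grid dimensions, and projection values are illustrative placeholders, not recommendations; only geog_data_path matches the LANTA path above.

```
&share
 wrf_core = 'ARW',
 max_dom  = 1,
 start_date = '2024-03-01_00:00:00',
 end_date   = '2024-03-02_00:00:00',
 interval_seconds = 21600,
/

&geogrid
 e_we = 100,
 e_sn = 100,
 dx = 27000,
 dy = 27000,
 map_proj  = 'mercator',
 ref_lat   = 13.75,
 ref_lon   = 100.50,
 truelat1  = 13.75,
 stand_lon = 100.50,
 geog_data_path = '/project/common/WPS_Static/geog',
/

&metgrid
 fg_name = 'FILE',
/
```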

1.3 To run the WRF model, the namelist.input is required. A concise description of the essential parameters can be found here, while the full description is available in chapter 5 of the WRF user's guide.

Info

Two complete examples are available at /project/common/WRF/. To run one of them within a directory, use

  • cp /project/common/WRF/Example1/* . (WRF) or

  • cp /project/common/WRF/Example2/* . (WRF-Chem)

then follow the instructions inside the README file. (The Data directory is not needed.)
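As a point of reference, a minimal single-domain namelist.input sketch is shown below. All values are illustrative; the time, domain, and physics settings must be chosen to match your own case and your namelist.wps.

```
&time_control
 run_days   = 1,
 start_year = 2024, start_month = 03, start_day = 01, start_hour = 00,
 end_year   = 2024, end_month   = 03, end_day   = 02, end_hour   = 00,
 interval_seconds = 21600,
 history_interval = 60,
/

&domains
 time_step = 150,
 max_dom   = 1,
 e_we   = 100,
 e_sn   = 100,
 e_vert = 45,
 dx = 27000,
 dy = 27000,
/

&physics
 mp_physics = 8,
 cu_physics = 1,
/
```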

2. Job submission script

Below is an example of a WRF submission script (submitWRF.sh). It can be created using vi submitWRF.sh. To ensure whole-node allocation, please verify that 128 x (Number of nodes) = (Number of MPI processes) x (Number of OpenMP threads per MPI process).

Code Block
languagebash
#!/bin/bash
#SBATCH -p compute             # Partition
#SBATCH -N 1                   # Number of nodes
#SBATCH --ntasks-per-node=32            # Number of MPI processes per node
#SBATCH --cpus-per-task=4      # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00          # Job runtime limit (D-HH:MM:SS)
#SBATCH -J WRF                 # Job name
#SBATCH -A ltxxxxxx            # Account *** {USER EDIT} *** 

module purge
module load WPS/4.4-DM-cpeCray-23.03
module load WRF/4.4.2-DMSM-cpeCray-23.03

export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ulimit -s unlimited

# *** {USER EDIT} *** #
# Please check that namelist.wps and namelist.input exist where this script is submitted.
link_grib /--Path-to-your-meteorological-data--/
link_vtable /--Name-of-Vtable-to-parse-the-above-met-data--/

# -- WPS -- #
link_wps
srun -n${SLURM_NTASKS} ./geogrid.exe
srun -N1 -n1 ./ungrib.exe
srun -n${SLURM_NTASKS} ./metgrid.exe
unlink_wps

# -- WRF -- #
link_emreal
srun -n${SLURM_NTASKS} ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
unlink_emreal
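The whole-node condition above can be checked with plain shell arithmetic before submitting. The variable values below mirror the example script's settings and are illustrative; 128 cores per compute node is taken from the text above.

```shell
#!/bin/bash
# Verify: 128 * (nodes) == (MPI processes per node) * (OpenMP threads per process)
NODES=1
NTASKS_PER_NODE=32
CPUS_PER_TASK=4
CORES_PER_NODE=128   # LANTA compute node size, per the guideline above

if [ $((CORES_PER_NODE * NODES)) -eq $((NODES * NTASKS_PER_NODE * CPUS_PER_TASK)) ]; then
    echo "OK: whole node(s) fully used"
else
    echo "WARNING: allocation does not fill whole nodes"
fi
```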
Info

Additional information regarding the ThaiSC custom support commands for WPS (link_wps, unlink_wps) and WRF (link_emreal, unlink_emreal) can be found by using link_xxx --help, link_xxx --description, man link_emreal, or module help WRF.
Note

You can run the WRF/WPS executables (.exe) separately by commenting unrelated lines out with # and adjusting your resource requests (#SBATCH) accordingly.

Some physics/dynamics options (and WRF-Chem) DO NOT support hybrid (DM+SM) runs. If the model is stuck at the beginning, try the following:

  1. Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1.
  2. Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128.
  3. Specify the number of tasks for each executable explicitly; for instance, use srun -n16 ./real.exe and srun -n128 ./wrf.exe, or just srun ./wrf.exe.

3. Job submission

To submit jobs to the SLURM queuing system on LANTA, execute sbatch submitWRF.sh.

Info

If WPS and WRF jobs need to be submitted separately, users can use the --dependency option of the sbatch command to ensure that WRF starts running only after WPS has completed.


Code Block
languagebash
sbatch submitWRF.sh
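The --dependency workflow can be sketched as below. Splitting the job into submitWPS.sh and submitWRF.sh is a hypothetical arrangement for illustration, and the captured output line stands in for a real sbatch call, which prints "Submitted batch job <id>" on success.

```shell
#!/bin/bash
# Submit the WPS job first and capture sbatch's output.
# In a real session:  wps_out=$(sbatch submitWPS.sh)
wps_out="Submitted batch job 123456"   # illustrative stand-in for sbatch output

# Extract the trailing job id.
wps_id=${wps_out##* }

# Submit the WRF job so that it starts only if the WPS job finishes successfully.
# In a real session:  sbatch --dependency=afterok:${wps_id} submitWRF.sh
echo "sbatch --dependency=afterok:${wps_id} submitWRF.sh"
```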

4. Post-processing

Several tools for processing NetCDF files, such as NCL, NCO, CDO, Ncview, ecCodes, netcdf4-python, wrf-python, pyngl, pynio, and cartopy, are available in Conda environments, for example, the netcdf-py39 environment installed in Mamba/23.11.0-0 (previously Miniconda3).

To use NCL, for instance, add the following to your script:

Code Block
languagebash
module load Mamba/23.11.0-0
conda activate netcdf-py39

# For NCL only
export NCARG_ROOT=${CONDA_PREFIX}
export NCARG_RANGS=/project/common/WPS_Static/rangs
export NCARG_SHAPEFILE=/project/common/WPS_Static/shapefile  # (If used)

# Commands such as 'srun -n1 ncl xxx' or 'srun -n1 python xxx' for serial runs
Note

Please refrain from running heavy post-processing tasks on the LANTA frontend/login nodes.
For more information, visit LANTA Frontend Usage Policy.

5. Advanced topics

See the child pages of this guide for advanced topics.

Contact Us
ThaiSC support service: thaisc-support@nstda.or.th