The Weather Research and Forecasting (WRF) model is a well-known atmospheric modeling system developed by NCAR. It is suitable for both meteorological research and operational weather prediction.

Official website: https://www2.mmm.ucar.edu/wrf/users/

Updated: Mar 2024

...


...

Module name | Description | Note

WRF/4.4.2-DMSM-cpeCray-23.03 | Standard WRF model | Aggressive optimization

WRFchem/4.5.1-DM-cpeIntel-23.09 | WRF model with chemistry, including WRF-Chem tools | Standard optimization

WRFchem/4.5.2-DM-cpeCray-23.03 | WRF model with chemistry, including WRF-Chem tools | (Experimental) Aggressive optimization

WPS/4.4-DM-cpeCray-23.03 | WRF pre-processing system | for WRF 4.4.X

WPS/4.5-DM-cpeIntel-23.09 | WRF pre-processing system | for WRF 4.5.X

More details

DM indicates that the module only supports MPI (Slurm tasks); therefore, --cpus-per-task=1 and export OMP_NUM_THREADS=1 should be used.

DMSM indicates that the module supports both MPI and OpenMP. Users can set the number of OpenMP threads per MPI process through --cpus-per-task.

For WRF on LANTA, we recommend setting --cpus-per-task equal to 2, 4, or 8. Note that the -c${SLURM_CPUS_PER_TASK} option for srun is essential.
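A minimal sketch of the two cases (the resource numbers below are illustrative, not prescriptive):

Code Block
languagebash
#!/bin/bash
# (a) DMSM module, e.g. WRF/4.4.2-DMSM-cpeCray-23.03: hybrid MPI + OpenMP
#SBATCH --ntasks-per-node=32   # 32 MPI processes per node
#SBATCH --cpus-per-task=4      # 4 OpenMP threads per MPI process (32 x 4 = 128)
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# (b) DM-only module, e.g. WRFchem/4.5.1-DM-cpeIntel-23.09: MPI only
#     use --ntasks-per-node=128, --cpus-per-task=1, and export OMP_NUM_THREADS=1 instead.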

1. Input file

1.1 To run the WRF model, time-dependent meteorological data (global model output/background state) is required. It can be downloaded from, for example, WRF - Free Data and NCEP GFS / GDAS.
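As a rough illustration only (the server path below is a placeholder, not a verified URL; the GFS file naming is shown as an example), GRIB files can be collected in a local data directory before running WPS:

Code Block
languagebash
# Hypothetical sketch: fetch a few GFS GRIB2 files into ./Data
mkdir -p Data && cd Data
for f in f000 f003 f006; do
    wget "https://<data-server>/<path-to-gfs-files>/gfs.t00z.pgrb2.0p25.${f}"
done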

...

1.3 To run the WRF model, a namelist.input file is also required. A concise description of the essential parameters can be found here, while the full description is available in Chapter 5 of the WRF user's guide.

Info

Two complete examples are available at /project/common/WRF/. To run one of them within a directory, use

  • cp /project/common/WRF/Example1/* . (WRF) or

  • cp /project/common/WRF/Example2/* . (WRF-Chem)

then follow the instructions inside the README file. (The Data directory is not needed.)
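A minimal sketch for preparing and sanity-checking a run directory (the grep patterns assume the standard namelist variable names):

Code Block
languagebash
# Copy the WRF example into the current (empty) directory
cp /project/common/WRF/Example1/* .

# Quick check: the simulation period in namelist.wps and namelist.input should match
grep -E "start_date|end_date" namelist.wps
grep -E "start_|end_|run_"    namelist.input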

2. Job submission script

Below is an example of a WRF submission script (submitWRF.sh); it can be created with vi submitWRF.sh. To ensure whole-node allocation, please verify that (Number of MPI processes per node) x (Number of OpenMP threads per MPI process) = 128; this is recommended to avoid potential issues.

Code Block
languagebash
#!/bin/bash
#SBATCH -p compute             # Partition
#SBATCH -N 1                   # Number of nodes
#SBATCH --ntasks-per-node=32   # Number of MPI processes per node
#SBATCH --cpus-per-task=4      # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00          # Job runtime limit
#SBATCH -J WRF                 # Job name
#SBATCH -A ltxxxxxx            # Account *** {USER EDIT} *** 

module purge
module load WPS/4.4-DM-cpeCray-23.03
module load WRF/4.4.2-DMSM-cpeCray-23.03

export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ulimit -s unlimited

# *** {USER EDIT} *** #
# Please check that namelist.wps and namelist.input exist where this script is submitted.
link_grib /--Path-to-your-meteorological-data--/
link_vtable /--Name-of-Vtable-to-parse-the-above-met-data--/
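# Hypothetical illustration only (the path and Vtable name are placeholders):
#   link_grib /project/ltxxxxxx/gfs_data/
#   link_vtable Vtable.GFS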

# -- WPS -- #
link_wps
srun -n${SLURM_NTASKS} ./geogrid.exe
srun -N1 -n1 ./ungrib.exe
srun -n${SLURM_NTASKS} ./metgrid.exe
unlink_wps

# -- WRF -- #
link_emreal
srun -n${SLURM_NTASKS} ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
unlink_emreal
Info
Additional information regarding ThaiSC support commands for WPS (link_wps, unlink_wps) and WRF (link_emreal, unlink_emreal) can be found by using link_xxx --help, link_xxx --description, man link_wps, man link_emreal, or module help WRF.
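For example, after loading the modules (a short sketch; the exact output depends on the module versions):

Code Block
languagebash
link_wps --help                              # usage of the WPS linking helper
link_emreal --description                    # short description of the WRF linking helper
module help WRF/4.4.2-DMSM-cpeCray-23.03     # module-level help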
Note

Some physics/dynamics options (and WRF-Chem) DO NOT support hybrid (DM+SM) runs. If the model gets stuck at the beginning, try the following (a sketch follows this list):

  1. Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1.

  2. Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128.

  3. Specify the number of tasks for each executable explicitly; for instance, use
     srun -n16 ./real.exe
     srun -n128 ./wrf.exe or just srun ./wrf.exe

You can also run the WRF/WPS executables (.exe) separately by commenting unrelated lines out, using #, and adjusting your resource requests (#SBATCH) accordingly.
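A minimal sketch of the pure-MPI (non-hybrid) fallback described above; the task counts are illustrative, and the module loads and WPS steps are the same as in submitWRF.sh:

Code Block
languagebash
#SBATCH --ntasks-per-node=128  # more MPI processes, no OpenMP
#SBATCH --cpus-per-task=1
export OMP_NUM_THREADS=1

srun -n16  ./real.exe          # real.exe typically needs fewer tasks
srun -n128 ./wrf.exe           # or simply: srun ./wrf.exe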

3. Job submission

To submit jobs to the SLURM queuing system on LANTA, execute

Code Block
languagebash
sbatch submitWRF.sh

Two complete examples are available at /project/common/WRF/. To run one of them within a directory, use cp /project/common/WRF/Example1/* . or cp /project/common/WRF/Example2/* . and follow the instructions in the README file.

Note
Users should check the slurm-xxxx.out file regularly because an abnormal exit from an MPI task/process may NOT cause the entire job to terminate. Setting an appropriate job runtime limit -t is also helpful.
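A few standard Slurm commands for monitoring (the job ID is a placeholder):

Code Block
languagebash
squeue -u $USER            # check the state of your jobs
tail -f slurm-<jobid>.out  # follow the job output while it runs
scancel <jobid>            # cancel the job if something is clearly wrong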

4. Post-processing

NCL, NCO, CDO, Ncview, ecCodes, netcdf4-python, wrf-python, pyngl, pynio, and cartopy are available for processing NetCDF files. They are installed in the netcdf-py39 Conda environment of Mamba/23.11.0-0 (previously Miniconda3).

To use NCL, for instance,

Code Block
languagebash
module load Mamba/23.11.0-0
conda activate netcdf-py39

# For NCL only
export NCARG_ROOT=${CONDA_PREFIX}
export NCARG_RANGS=/project/common/WPS_Static/rangs
export NCARG_SHAPEFILE=/project/common/WPS_Static/shapefile  # (If used)

# Commands such as 'ncl xxx' or 'python xxx' for serial run
Note

Please refrain from running heavy post-processing tasks on the LANTA frontend/login nodes.
For more information, visit the LANTA Frontend Usage Policy.
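If a post-processing task is heavy, one option (a sketch only; the time limit, core count, and account are placeholders, and the script names are hypothetical) is to run it on a compute node through an interactive srun session:

Code Block
languagebash
# Request an interactive shell on a compute node
srun -p compute -N1 -n1 -c8 -t 02:00:00 -A ltxxxxxx --pty bash

# Then, inside the interactive session:
module load Mamba/23.11.0-0
conda activate netcdf-py39
ncl plot_wrfout.ncl        # or: python process_wrfout.py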

5. Advanced topics

...

Install your own WRF-Chem

...

Install your own WRF-Chem preprocessing tools

...


...

Contact Us
ThaiSC support service: thaisc-support@nstda.or.th