
The Weather Research and Forecasting (WRF) model is a well-known atmospheric modeling system developed by NCAR. It is suitable for both meteorological research and operational weather prediction.

Official website: https://www2.mmm.ucar.edu/wrf/users/

Updated: Mar 2024



Modules

Module name                       | Description                                         | Note
WRF/4.4.2-DMSM-cpeCray-23.03      | Standard WRF model                                  | Aggressive optimization
WRFchem/4.5.1-DM-cpeIntel-23.09   | WRF model with chemistry, including WRF-Chem tools  | Standard optimization
WPS/4.4-DM-cpeCray-23.03          | WRF pre-processing system for WRF 4.4.X             | -
WPS/4.5-DM-cpeIntel-23.09         | WRF pre-processing system for WRF 4.5.X             | -

More details

DM indicates that the module only supports MPI (Slurm tasks); therefore, --cpus-per-task=1 and export OMP_NUM_THREADS=1 should be used.

DMSM indicates that the module supports both MPI and OpenMP. Users can set the number of OpenMP threads per MPI process through --cpus-per-task.

For WRF on LANTA, we recommend setting --cpus-per-task equal to 2, 4 or 8. Note that the -c${SLURM_CPUS_PER_TASK} option for srun is essential.
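
As an illustration, the relevant settings for the two module types are sketched below (node and task counts are examples only; adjust them to your domain size and decomposition).

# DM modules (MPI only), e.g. WRFchem/4.5.1-DM-cpeIntel-23.09
#SBATCH --ntasks-per-node=128  # Number of MPI processes per node
#SBATCH --cpus-per-task=1      # No OpenMP threading
export OMP_NUM_THREADS=1

# DMSM modules (MPI + OpenMP), e.g. WRF/4.4.2-DMSM-cpeCray-23.03
#SBATCH --ntasks-per-node=32   # Number of MPI processes per node
#SBATCH --cpus-per-task=4      # 2, 4 or 8 OpenMP threads per MPI process
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe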

1. Input file

1.1 To run the WRF model, time-dependent meteorological data (global model output/background state) is required. It can be downloaded from, for example, WRF - Free Data and NCEP GFS / GDAS.
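
As a rough sketch, the GRIB files can be downloaded into a dedicated directory before submitting the job; the directory name and URL below are placeholders for the dataset chosen from the sources above.

mkdir -p /project/ltxxxxxx/GFS_DATA    # hypothetical directory for meteorological data
cd /project/ltxxxxxx/GFS_DATA
wget <URL-of-a-GRIB-file>              # repeat (or loop) for every analysis/forecast time needed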

1.2 To configure the domain and simulation time, the namelist.wps file is needed. A brief description of it can be found here. It is recommended to use the WRF Domain Wizard or the GIS4WRF plug-in for QGIS to define WRF domains.

Some static datasets, such as geog, Global_emissions_v3 and EDGAR, are readily available on LANTA at /project/common/WPS_Static/. They can be utilized by, for example, specifying geog_data_path = '/project/common/WPS_Static/geog' in namelist.wps.
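
For example, a short namelist.wps fragment (not a complete file; dates and domain settings are placeholders) that points to the shared static data could look like this.

&share
 max_dom          = 1,
 start_date       = '2024-03-01_00:00:00',
 end_date         = '2024-03-02_00:00:00',
 interval_seconds = 21600,
/

&geogrid
 geog_data_path   = '/project/common/WPS_Static/geog',
/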

1.3 To run the WRF model, the namelist.input is required. A concise description of the essential parameters can be found here, while the full description is available in chapter 5 of the WRF user's guide.

Two complete examples are available at /project/common/WRF/. To try one of them in your working directory, use cp /project/common/WRF/Example1/* . or cp /project/common/WRF/Example2/* . and follow the instructions in the README file.
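
Note that the simulation period in namelist.input must be consistent with namelist.wps; a minimal &time_control fragment (dates are placeholders, other required entries omitted) is sketched below.

&time_control
 start_year  = 2024, start_month = 03, start_day = 01, start_hour = 00,
 end_year    = 2024, end_month   = 03, end_day   = 02, end_hour   = 00,
 interval_seconds = 21600,    ! must match the interval of the input meteorological data
/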

2. Job submission script

Below is an example of a WRF submission script (submitWRF.sh). It can be created using vi submitWRF.sh.

#!/bin/bash
#SBATCH -p compute             # Partition
#SBATCH -N 1                   # Number of nodes
#SBATCH --ntasks-per-node=32   # Number of MPI processes per node
#SBATCH --cpus-per-task=4      # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00          # Job runtime limit
#SBATCH -J WRF                 # Job name
#SBATCH -A ltxxxxxx            # Account *** {USER EDIT} *** 

module purge
module load WPS/4.4-DM-cpeCray-23.03
module load WRF/4.4.2-DMSM-cpeCray-23.03

export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ulimit -s unlimited

# *** {USER EDIT} *** #
# Please check that namelist.wps and namelist.input exist where this script is submitted.
link_grib /--Path-to-your-meteorological-data--/
link_vtable /--Name-of-Vtable-to-parse-the-above-met-data--/

# -- WPS -- #
link_wps
srun -n${SLURM_NTASKS} ./geogrid.exe
srun -N1 -n1 ./ungrib.exe
srun -n${SLURM_NTASKS} ./metgrid.exe
unlink_wps

# -- WRF -- #
link_emreal
srun -n${SLURM_NTASKS} ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
unlink_emreal

Additional information regarding ThaiSC support commands for WPS (link_wps, unlink_wps) and WRF (link_emreal, unlink_emreal) can be found by using link_xxx --help, link_xxx --description or man link_xxx, after loading the modules.
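
For instance, after loading the modules used in the script above, the help texts can be viewed on a login node as follows.

module load WPS/4.4-DM-cpeCray-23.03
module load WRF/4.4.2-DMSM-cpeCray-23.03

link_wps --help       # usage of the WPS linking command
link_emreal --help    # usage of the WRF (em_real) linking command
man link_wps          # full manual page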

Some physics/dynamics options DO NOT support hybrid (DM+SM) runs. If the simulation gets stuck at the very beginning, try the following (a combined sketch is shown after this list):

  1. Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1,

  2. Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128,

  3. Specify the number of tasks for each executable explicitly; for instance, use
    srun -n16 ./real.exe
    srun -n128 -c1 ./wrf.exe
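
Combining these workarounds, the relevant part of submitWRF.sh for a pure-MPI run would look like the sketch below (task counts are examples).

#SBATCH --ntasks-per-node=128  # Number of MPI processes per node
#SBATCH --cpus-per-task=1      # No OpenMP threading

export OMP_NUM_THREADS=1

srun -n16 ./real.exe
srun -n128 -c1 ./wrf.exe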

3. Job submission

To submit jobs to the SLURM queuing system on LANTA, execute

sbatch submitWRF.sh
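
The job can then be monitored with standard Slurm commands, and the log written by MPI rank 0 of WRF can be followed from the run directory.

squeue -u $USER        # check the status of your jobs
scancel <jobid>        # cancel a job if necessary
tail -f rsl.out.0000   # follow the WRF/real log of MPI rank 0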

4. Post-processing

NCL, NCO, CDO, Ncview, ecCodes, netcdf4-python, wrf-python, pyngl, pynio, cartopy and others are available for processing NetCDF files. They are installed in the Conda environment netcdf-py39 of the Miniconda3/22.11.1-1 module.

To use NCL, for instance,

module load Miniconda3/22.11.1-1
conda activate netcdf-py39

# For NCL only
export NCARG_ROOT=${CONDA_PREFIX}
export NCARG_RANGS=/project/common/WPS_Static/rangs
export NCARG_SHAPEFILE=/project/common/WPS_Static/shapefile  # (If used)

# Commands such as 'ncl xxx' or 'python xxx' for serial run

Please refrain from running heavy post-processing tasks on the LANTA frontend/login nodes.
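
Instead, heavy post-processing can be wrapped in its own batch job; a minimal sketch (partition, resources, and the Python script name are placeholders) is given below.

#!/bin/bash
#SBATCH -p compute             # Partition
#SBATCH -N 1 -n 1 -c 16        # Single task for serial post-processing
#SBATCH -t 02:00:00            # Job runtime limit
#SBATCH -J POST                # Job name
#SBATCH -A ltxxxxxx            # Account *** {USER EDIT} ***

module load Miniconda3/22.11.1-1
conda activate netcdf-py39

python plot_wrfout.py          # hypothetical post-processing script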

5. Advanced topics


Contact Us
ThaiSC support service : thaisc-support@nstda.or.th
