The Weather Research and Forecasting (WRF) model is a widely used atmospheric modeling system developed by NCAR. It is suitable for both meteorological research and operational weather prediction.
Official website: https://www2.mmm.ucar.edu/wrf/users/
Updated: Aug 2023
Modules
| Module name | Description | Note |
|---|---|---|
| WRF/4.4.2-DMSM-cpeCray-23.03 | Standard WRF model | Aggressive optimization |
| WRFchem/4.5-DM-cpeIntel-23.03 | WRF model with chemistry (WRF-Chem) | Standard optimization |
| WPS/4.4-DM-cpeCray-23.03 | WRF pre-processing system | For WRF 4.4.x |
| WPS/4.5-DM-cpeIntel-23.03 | WRF pre-processing system | For WRF 4.5.x |
1. Input files
1.1 To run the WRF model, time-dependent meteorological data (global model output used as the background state) are required. They can be downloaded from, for example, WRF - Free Data and NCEP GFS / GDAS.
1.2 To configure the domain and simulation period, a namelist.wps file is needed. A brief description of it can be found here. It is recommended to use the WRF Domain Wizard or the GIS4WRF plug-in for QGIS to define WRF domains; a minimal sketch is also shown below.
Users can utilize open-source datasets, such as geog, Global_emissions_v3 and EDGAR, already available on LANTA, by specifying, for example, geog_data_path = '/project/common/WPS_Static/geog'
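For orientation, here is a minimal single-domain namelist.wps sketch. The dates, grid size, and projection settings are placeholders that must be adapted to your own case; only the geog_data_path value is the LANTA path noted above.

```
&share
 wrf_core         = 'ARW',
 max_dom          = 1,
 start_date       = '2023-08-01_00:00:00',
 end_date         = '2023-08-03_00:00:00',
 interval_seconds = 21600,
/

&geogrid
 e_we           = 100,
 e_sn           = 100,
 dx             = 9000,
 dy             = 9000,
 map_proj       = 'mercator',
 ref_lat        = 13.7,
 ref_lon        = 100.5,
 truelat1       = 13.7,
 stand_lon      = 100.5,
 geog_data_res  = 'default',
 geog_data_path = '/project/common/WPS_Static/geog',
/

&ungrib
 out_format = 'WPS',
 prefix     = 'FILE',
/

&metgrid
 fg_name = 'FILE',
/
```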
1.3 To run the WRF model, a namelist.input file is required. A concise description of the essential parameters can be found here, while the full description is available in Chapter 5 of the WRF user's guide. A skeletal example is sketched below.
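Below is a skeletal namelist.input sketch matching the hypothetical case above. The dates and grid settings must mirror namelist.wps; the time step follows the common rule of thumb of roughly 6 × dx (in km). The &physics, &dynamics, and other records are omitted for brevity.

```
&time_control
 run_days           = 2,
 start_year         = 2023, start_month = 08, start_day = 01, start_hour = 00,
 end_year           = 2023, end_month   = 08, end_day   = 03, end_hour   = 00,
 interval_seconds   = 21600,
 history_interval   = 60,
 frames_per_outfile = 24,
 restart            = .false.,
/

&domains
 time_step = 54,
 max_dom   = 1,
 e_we      = 100,
 e_sn      = 100,
 e_vert    = 45,
 dx        = 9000,
 dy        = 9000,
/
```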
2. Job submission script
Below is an example of a WRF submission script. It can be created using vi submitWRF.sh.
To ensure whole-node allocation, please verify that 128 = (number of MPI processes per node) × (number of OpenMP threads per MPI process), matching the 128 cores of a compute node; in the script below, 32 × 4 = 128. This check is recommended to avoid potential issues.
```bash
#!/bin/bash
#SBATCH -p compute                  # Partition
#SBATCH -N 1                        # Number of nodes
#SBATCH --ntasks-per-node=32        # Number of MPI processes per node
#SBATCH --cpus-per-task=4           # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00               # Job runtime limit
#SBATCH -J WRF                      # Job name
#SBATCH -A ltxxxxxx                 # Account *** {USER EDIT} ***

module purge
module load WPS/4.4-DM-cpeCray-23.03
module load WRF/4.4.2-DMSM-cpeCray-23.03

export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
ulimit -s unlimited

# *** {USER EDIT} *** #
# Please check that namelist.wps and namelist.input are in this same directory
link_grib /--Path-to-your-meteorological-data--/
link_vtable /--Name-of-Vtable-to-parse-the-above-met-data--/

# -- WPS -- #
link_wps
srun -n${SLURM_NTASKS} ./geogrid.exe
srun -N1 -n1 ./ungrib.exe
srun -n${SLURM_NTASKS} ./metgrid.exe
unlink_wps

# -- WRF -- #
link_emreal
srun -n${SLURM_NTASKS} ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
unlink_emreal
```
Additional information about ThaiSC support commands for WPS and WRF can be accessed using commands such as link_wps --help, man link_emreal, or module help WRF.
Some options do not support hybrid (MPI+OpenMP) runs (such as running WRF-Chem). If this is the case, do the following (a combined sketch follows this list):
- Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1.
- Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128.
- Specify the number of tasks for each executable explicitly; for instance, use srun -n16 ./real.exe and srun -n128 -c1 ./wrf.exe.
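Putting these items together, the relevant fragment of a pure-MPI (non-hybrid) submission script might look as follows; the WRF-Chem and WPS modules from the table above are used for illustration.

```bash
#SBATCH --ntasks-per-node=128       # one MPI process per core
#SBATCH --cpus-per-task=1           # no OpenMP threads

module purge
module load WPS/4.5-DM-cpeIntel-23.03
module load WRFchem/4.5-DM-cpeIntel-23.03
export OMP_NUM_THREADS=1

srun -n16 ./real.exe                # real.exe typically needs fewer tasks
srun -n128 -c1 ./wrf.exe
```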
3. Job submission
To submit jobs to the SLURM queuing system on LANTA, execute sbatch submitWRF.sh
Two complete examples are available at /project/common/WRF/. To run one inside your working directory, use cp /project/common/WRF/Example1/* . or cp /project/common/WRF/Example2/* . and follow the instructions in the README file.
Users should check the slurm-xxxx.out file regularly, because an abnormal exit from an MPI task/process may not cause the entire job to terminate. Setting an appropriate job runtime limit (-t) is also helpful.
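For instance, a typical monitoring workflow uses standard SLURM commands (the job ID 123456 below is illustrative):

```bash
sbatch submitWRF.sh            # prints e.g. "Submitted batch job 123456"
squeue -u $USER                # check whether the job is queued or running
tail -f slurm-123456.out       # follow the job log; watch for MPI errors
scancel 123456                 # cancel the job if it appears to be hanging
```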
4. Post-processing
NCL, NCO, CDO, netcdf4-python, Ncview, wrf-python, pyngl, pynio, and cartopy are available for processing NetCDF files. They are installed in the Conda environment netcdf-py39 of Miniconda3/22.11.1-1.
To use NCL, for instance:

```bash
module load Miniconda3/22.11.1-1
conda activate netcdf-py39

# For NCL only
export NCARG_ROOT=${CONDA_PREFIX}
export NCARG_RANGS=/project/common/WPS_Static/rangs
export NCARG_SHAPEFILE=/project/common/WPS_Static/shapefile   # (If used)

# Commands such as 'ncl xxx' or 'python xxx' for serial run
```
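As a small illustration of serial post-processing with the tools listed above (the wrfout file name is hypothetical; adjust it to your run):

```bash
module load Miniconda3/22.11.1-1
conda activate netcdf-py39

ncks -m wrfout_d01_2023-08-01_00:00:00       # inspect file metadata with NCO
cdo sinfon wrfout_d01_2023-08-01_00:00:00    # summarize variables with CDO
ncview wrfout_d01_2023-08-01_00:00:00        # quick interactive view
```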
Please refrain from running heavy post-processing tasks on the LANTA frontend/login nodes.
Contact Us
ThaiSC support service : thaisc-support@nstda.or.th