The Weather Research and Forecasting (WRF) model is a widely used atmospheric modeling system developed by NCAR. It is suitable for both meteorological research and operational weather prediction.
Official website: https://www2.mmm.ucar.edu/wrf/users/
Updated: Feb 2023
Available versions
| WRF Version | Module name | Optimization |
|---|---|---|
| 4.4.2 | WRF/4.4.2-DMSM-CrayCCE-22.06 | Aggressive |
| 4.4.2 | WRF/4.4.2-DMSM-CrayGNU-22.06 | Standard |

| WPS Version | Module name | Optimization |
|---|---|---|
| 4.4 | WPS/4.4-DM-CrayCCE-22.06 | Aggressive |
| 4.4 | WPS/4.4-DM-CrayGNU-22.06 | Standard |
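For example, the CrayCCE pair can be loaded in a job or interactive session as follows (these are the same module lines used in the job script below):

```bash
module purge
module load WPS/4.4-DM-CrayCCE-22.06
module load WRF/4.4.2-DMSM-CrayCCE-22.06
```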
1. Input files
1.1 To drive the WRF model, time-dependent meteorological data (global model output / background state) is required. It can be downloaded from, for example, WRF - Free Data or NCEP GFS/GDAS (see the download sketch after this list).
1.2 To set the domain configuration and simulation period, namelist.wps is needed. A brief description can be found here. It is advisable to define WRF domains using the WRF Domain Wizard or the GIS4WRF plug-in for QGIS.
Users can use the static geographical data already available on LANTA by specifying `geog_data_path = '/project/common/WPS_Static/geog'` in the &geogrid section of namelist.wps.
1.3 To run the WRF model, namelist.input is needed. Concise information on the essential parameters can be found here, while the full description is available in Chapter 5 of the WRF User's Guide.
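As a sketch for item 1.1, the commands below fetch a single 0.25-degree GFS GRIB2 file. The NOMADS directory layout, date, cycle, and forecast hour are assumptions shown for illustration only; check the NCEP data services for the current structure and data availability.

```bash
#!/bin/bash
# Illustrative only: download one GFS forecast file to feed ungrib.exe.
# The URL layout, date, cycle, and forecast hour below are placeholders.
CYCLE_DATE=20230201   # YYYYMMDD
CYCLE_HOUR=00         # 00 / 06 / 12 / 18 UTC
FHR=000               # forecast hour
BASE=https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod
wget "${BASE}/gfs.${CYCLE_DATE}/${CYCLE_HOUR}/atmos/gfs.t${CYCLE_HOUR}z.pgrb2.0p25.f${FHR}"
```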
2. Job submission script
An example of a WRF submission script is shown below; it can be created with `vi submitWRF.sh`. For whole-node allocation, please confirm that 128 x (number of nodes) = (number of MPI processes) x (number of OpenMP threads per MPI process).
```bash
#!/bin/bash
#SBATCH -p compute                  # Partition
#SBATCH -N 1                        # Number of nodes
#SBATCH --ntasks=16                 # Number of MPI processes
#SBATCH --cpus-per-task=8           # Number of OpenMP threads per MPI process
#SBATCH -t 02:00:00                 # Job runtime limit
#SBATCH -J WRF                      # Job name
#SBATCH -A ltXXXXXX                 # Account *** {USER EDIT} ***

module purge
module load WPS/4.4-DM-CrayCCE-22.06
module load WRF/4.4.2-DMSM-CrayCCE-22.06

### A fix for CrayMPICH, until further notice ###
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

# -- (Recommended) -- #
export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
ulimit -s unlimited

# *** {USER EDIT} *** #
# Please check that namelist.wps and namelist.input are in this same directory
link_grib /--Path-to-your-meteorological-data--/
link_vtable /--Name-of-Vtable-to-parse-the-above-met-data--/

# -- WPS -- #
link_wps
srun -n${SLURM_NTASKS} ./geogrid.exe
srun -N1 -n1 ./ungrib.exe
srun -n${SLURM_NTASKS} ./metgrid.exe
unlink_wps

# -- WRF -- #
link_emreal
srun -n${SLURM_NTASKS} ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
unlink_emreal
```
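As an optional check of the whole-node rule above, the following sketch (using standard SLURM environment variables) can be placed near the top of the script to warn when the requested layout does not fill the allocated nodes:

```bash
# Optional sanity check: warn if MPI tasks x OpenMP threads != 128 x nodes.
TOTAL_CORES=$(( SLURM_NTASKS * SLURM_CPUS_PER_TASK ))
FULL_NODE_CORES=$(( 128 * SLURM_JOB_NUM_NODES ))
if [ "${TOTAL_CORES}" -ne "${FULL_NODE_CORES}" ]; then
    echo "Warning: requested ${TOTAL_CORES} cores, but ${SLURM_JOB_NUM_NODES} full node(s) provide ${FULL_NODE_CORES} cores" >&2
fi
```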
More information about the ThaiSC custom commands for the WPS and WRF modules can be found by using commands such as `link_wps --help`, `man link_emreal` or `module help WRF`.
Some options do not support hybrid (MPI + OpenMP) runs; in these cases, try
- using `#SBATCH --cpus-per-task=1` and `export OMP_NUM_THREADS=1`,
- increasing the total number of tasks, e.g., `#SBATCH --ntasks=64`, and
- specifying the number of tasks for each executable explicitly, for example, `srun -n16 ./real.exe` and `srun -n64 -c1 ./wrf.exe`.
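For instance, a pure-MPI variant of the relevant lines in submitWRF.sh could look like the sketch below; the task counts are illustrative and should be adjusted to your domain.

```bash
#SBATCH --ntasks=64          # more MPI processes instead of OpenMP threads
#SBATCH --cpus-per-task=1    # one CPU core per MPI process

export OMP_NUM_THREADS=1     # disable OpenMP threading

srun -n16 ./real.exe         # fewer tasks for real.exe, as in the list above
srun -n64 -c1 ./wrf.exe      # wrf.exe uses all 64 MPI tasks
```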
3. Job submission
To submit jobs to the SLURM queuing system on LANTA, execute `sbatch submitWRF.sh`.
If WPS and WRF jobs are to be submitted separately, users can use the `--dependency` option of the `sbatch` command to ensure that WRF starts running only after WPS has completed.
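A minimal sketch of such a chained submission, assuming the WPS steps live in a separate script named submitWPS.sh (a hypothetical name), is:

```bash
# Submit the WPS job, capture its job ID, and make the WRF job wait for it.
WPS_JOBID=$(sbatch --parsable submitWPS.sh)
sbatch --dependency=afterok:${WPS_JOBID} submitWRF.sh
```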
A complete example is available at `/project/common/WRF/Example1`. To run it, copy its contents into your working directory with `cp /project/common/WRF/Example1/* .`, edit the account in `submitWRF.sh` (e.g., with `vi submitWRF.sh`), and then execute `sbatch submitWRF.sh`.
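Put together, running the provided example looks like this (the working-directory path is a placeholder):

```bash
cd /your/working/directory            # any directory visible to the compute nodes
cp /project/common/WRF/Example1/* .   # copy the example inputs and submitWRF.sh
vi submitWRF.sh                       # set '#SBATCH -A ltXXXXXX' to your project account
sbatch submitWRF.sh                   # submit the job
```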
4. Post-processing
Several tools for processing NetCDF files are available in Conda environments such as `netcdf-py39`. To use NCL, for instance, add:

```bash
module load Miniconda3
conda activate netcdf-py39

# For NCL only
export NCARG_ROOT=${CONDA_PREFIX}
export NCARG_RANGS=/project/common/WPS_Static/rangs
export NCARG_SHAPEFILE=/project/common/WPS_Static/shapefile   # (If used)

# Commands such as 'srun -n1 ncl xxx' or 'srun -n1 python xxx'
```
Other main packages installed in `netcdf-py39` are NCO, CDO, netcdf4-python, wrf-python, pyngl, pynio, and cartopy.
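For example, a quick inspection of a WRF output file inside the activated environment might look like this (the wrfout filename is a placeholder):

```bash
module load Miniconda3
conda activate netcdf-py39

# Inspect the header (dimensions, variables, attributes) of a WRF output file
srun -n1 ncdump -h wrfout_d01_2023-02-01_00:00:00 | head -n 40

# Quick variable/grid summary with CDO
srun -n1 cdo sinfon wrfout_d01_2023-02-01_00:00:00
```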
Contact Us
ThaiSC support service: thaisc-support@nstda.or.th