The Weather Research and Forecasting (WRF) model is a widely used atmospheric modeling system developed by NCAR. It is suitable for both meteorological research and operational weather prediction.

Official website: https://www2.mmm.ucar.edu/wrf/users/

Updated: May 2023

...

Table of Contents

...

Available versions

WRF Version   Module name                     Optimization
4.4.2         WRF/4.4.2-DMSM-cpeCray-23.03    Aggressive
              WRF/4.4.2-DMSM-CrayGNU-22.06    Standard

WPS Version   Module name                     Optimization
4.4           WPS/4.4-DM-cpeCray-23.03        Aggressive
              WPS/4.4-DM-CrayGNU-22.06        Standard

1. Input file

1.1 To run the WRF model, time-dependent meteorological data (global model output/background state) is required. It can be downloaded from, for example, WRF - Free Data and NCEP GFS / GDAS.

1.2 To configure the domain and simulation time, the namelist.wps file is needed. A brief description of it can be found here. It is recommended to use the WRF Domain Wizard or the GIS4WRF plug-in for QGIS to define WRF domains.
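For illustration only, a minimal namelist.wps might look like the sketch below; the dates, grid dimensions, and projection are placeholder values, not recommendations, while geog_data_path uses the LANTA static-data path mentioned on this page.

```
&share
 wrf_core = 'ARW',
 max_dom = 1,
 start_date = '2023-05-01_00:00:00',
 end_date   = '2023-05-02_00:00:00',
 interval_seconds = 21600,
/

&geogrid
 e_we = 100,
 e_sn = 100,
 dx = 9000,
 dy = 9000,
 map_proj = 'mercator',
 ref_lat  = 13.7,
 ref_lon  = 100.5,
 geog_data_path = '/project/common/WPS_Static/geog',
/

&ungrib
 out_format = 'WPS',
 prefix = 'FILE',
/

&metgrid
 fg_name = 'FILE',
/
```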

Info

Users can utilize static geographical data already available on LANTA by specifying
geog_data_path = '/project/common/WPS_Static/geog'

1.3 To run the WRF model, the namelist.input file is required. A concise description of the essential parameters can be found here, while the full description is available in Chapter 5 of the WRF User's Guide.
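As a sketch under placeholder settings (dates and grid sizes are illustrative; consult Chapter 5 of the WRF User's Guide for real configurations), a minimal namelist.input has this general shape:

```
&time_control
 run_days = 1,
 start_year = 2023, start_month = 05, start_day = 01, start_hour = 00,
 end_year   = 2023, end_month   = 05, end_day   = 02, end_hour   = 00,
 interval_seconds = 21600,
 history_interval = 60,
/

&domains
 time_step = 54,
 max_dom = 1,
 e_we = 100,
 e_sn = 100,
 e_vert = 45,
 dx = 9000,
 dy = 9000,
/

&physics
/

&dynamics
/
```

As a common rule of thumb, time_step (in seconds) is about 6 times dx in kilometres, and the domain settings here must be consistent with those in namelist.wps.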

2. Job submission script

Below is an example of a WRF submission script. It can be created using vi submitWRF.sh. To ensure whole-node allocation, please verify that 128 x (Number of nodes) = (Number of MPI processes) x (Number of OpenMP threads per MPI process).

Code Block (bash)
#!/bin/bash
#SBATCH -p compute             # Partition
#SBATCH -N 1                   # Number of nodes
#SBATCH --ntasks=32            # Number of MPI processes
#SBATCH --cpus-per-task=4      # Number of OpenMP threads per MPI process
#SBATCH -t 02:00:00            # Job runtime limit
#SBATCH -J WRF                 # Job name
#SBATCH -A ltxxxxxx            # Account *** {USER EDIT} ***

module purge
module load WPS/4.4-DM-cpeCray-23.03
module load WRF/4.4.2-DMSM-cpeCray-23.03

### A fix for CrayMPICH, until further notice ###
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

# -- (Recommended) -- #
export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ulimit -s unlimited

# *** {USER EDIT} *** #
# Please check that namelist.wps and namelist.input are in this same directory
link_grib /--Path-to-your-meteorological-data--/
link_vtable /--Name-of-Vtable-to-parse-the-above-met-data--/

# -- WPS -- #
link_wps
srun -n${SLURM_NTASKS} ./geogrid.exe
srun -N1 -n1 ./ungrib.exe
srun -n${SLURM_NTASKS} ./metgrid.exe
unlink_wps

# -- WRF -- #
link_emreal
srun -n${SLURM_NTASKS} ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
unlink_emreal
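The whole-node condition above can be checked with simple shell arithmetic before submitting; the numbers below match the example script (1 node, 32 MPI processes, 4 OpenMP threads each, on 128-core LANTA compute nodes):

```shell
#!/bin/sh
# Whole-node allocation check: 128 x (nodes) must equal (ntasks) x (cpus-per-task)
nodes=1
ntasks=32
cpus_per_task=4
total=$(( ntasks * cpus_per_task ))
if [ "$total" -eq $(( 128 * nodes )) ]; then
    echo "whole-node allocation: OK"
else
    echo "mismatch: ${total} cores requested vs $(( 128 * nodes )) available"
fi
```

Adjust nodes, ntasks, and cpus_per_task to match your #SBATCH directives before running the check.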
Info

Additional information about ThaiSC custom commands for the WPS and WRF modules can be accessed using commands such as link_wps --help, man link_emreal, or module help WRF.

Note

Some options do not support hybrid (MPI + OpenMP) runs. If that is the case, do the following:

  • use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1,
  • increase the total number of tasks, for example, #SBATCH --ntasks=64,
  • specify the number of tasks for each executable explicitly; for instance, use
    srun -n16 ./real.exe
    srun -n64 -c1 ./wrf.exe

3. Job submission

To submit jobs to the SLURM queuing system on LANTA, execute sbatch submitWRF.sh.

Info

If WPS and WRF jobs need to be submitted separately, users can use the --dependency option of the sbatch command to ensure that WRF starts running only after WPS has completed.
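For instance, assuming the WPS steps live in a separate script (submitWPS.sh is a hypothetical name here), the WPS job ID can be captured with sbatch --parsable and passed to --dependency:

```
jobid=$(sbatch --parsable submitWPS.sh)             # submit WPS first; print only the job ID
sbatch --dependency=afterok:${jobid} submitWRF.sh   # WRF starts only if the WPS job finished successfully
```

The afterok dependency type releases the WRF job only when the WPS job exits with status zero; see man sbatch for other dependency types.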

A complete example is available at /project/common/WRF/Example1. To run it within a directory, use cp /project/common/WRF/Example1/* ., edit the account using vi submitWRF.sh, and then issue the command sbatch submitWRF.sh.

4. Post-processing

...

Other main packages installed in netcdf-py39 include NCO, CDO, netcdf4-python, wrf-python, pyngl, pynio, and cartopy.
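As a quick sanity check of WRF output with these tools (assuming netcdf-py39 is loadable as a module; the wrfout filename below is illustrative), one might run:

```
module load netcdf-py39
ncdump -h wrfout_d01_2023-05-01_00:00:00     # list dimensions, variables, and attributes
cdo sinfon wrfout_d01_2023-05-01_00:00:00    # per-field summary from CDO
```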

...

Contact Us
ThaiSC support service : thaisc-support@nstda.or.th