The Weather Research and Forecasting (WRF) model is a well-known atmospheric modeling system developed by NCAR. It is suitable for both meteorological research and operational weather prediction.
Official website: https://www2.mmm.ucar.edu/wrf/users/
Updated: Mar 2024
...
Available versions
Modules

Module name | Description | Note |
---|---|---|
WRF/4.4.2-DMSM-cpeCray-23.03 | Standard WRF model | Aggressive optimization |
WRFchem/4.5.1-DM-cpeIntel-23.09 | WRF model with chemistry | Standard optimization |
WRFchem/4.5.2-DM-cpeCray-23.03 | WRF model with chemistry | (Experimental) |

WPS Version

Module name | Description | Note |
---|---|---|
WPS/4.4-DM-cpeCray-23.03 | WRF pre-processing system | for WRF 4.4.X |
WPS/4.5-DM-cpeIntel-23.09 | WRF pre-processing system | for WRF 4.5.X |
DM indicates that the module supports MPI only (Slurm tasks), while DMSM indicates that the module supports both MPI and OpenMP. Users can set the number of OpenMP threads per MPI process through --cpus-per-task and OMP_NUM_THREADS; for WRF on LANTA, we recommend setting them as in the example submission script. |
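As a minimal sketch of the DMSM (MPI + OpenMP) thread setup described above — the 4-thread fallback value is only an illustration, not a LANTA recommendation:

```shell
#!/bin/bash
# Hybrid MPI+OpenMP setup for a DMSM module (illustrative values).
# Under Slurm, '#SBATCH --cpus-per-task=<n>' makes SLURM_CPUS_PER_TASK
# available; here we fall back to 4 threads outside a Slurm allocation.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}
echo "OpenMP threads per MPI process: ${OMP_NUM_THREADS}"
```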
1. Input file
1.1 To run the WRF model, time-dependent meteorological data (global model output / background state) is required. It can be downloaded from, for example, WRF - Free Data or NCEP GFS/GDAS.
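For instance, a download script for 0.25-degree NCEP GFS GRIB2 files might look like the sketch below. The NOMADS URL pattern, date, and cycle are assumptions for illustration only — verify the actual archive layout before downloading:

```shell
#!/bin/bash
# Build download URLs for 0.25-degree GFS GRIB2 files (hypothetical example).
# Adjust DATE/CYCLE and verify the NOMADS directory layout before use.
BASE="https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod"
DATE="20240301"   # assumed initialization date (YYYYMMDD)
CYCLE="00"        # 00/06/12/18 UTC cycle
for fhr in 000 006 012; do
    url="${BASE}/gfs.${DATE}/${CYCLE}/atmos/gfs.t${CYCLE}z.pgrb2.0p25.f${fhr}"
    echo "${url}"          # pipe this list to 'wget -i -' to actually download
done
```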
1.2 To configure the domain and simulation time, the namelist.wps file is needed. A brief description of it can be found here. It is recommended to use the WRF Domain Wizard or the GIS4WRF plug-in for QGIS to define WRF domains.
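A minimal single-domain namelist.wps might look like the sketch below; every value (dates, grid size, projection, and the geog_data_path placeholder) is illustrative, not a LANTA default:

```fortran
&share
 wrf_core = 'ARW',
 max_dom = 1,
 start_date = '2024-03-01_00:00:00',
 end_date   = '2024-03-02_00:00:00',
 interval_seconds = 21600,
/

&geogrid
 e_we = 100,
 e_sn = 100,
 dx = 9000,
 dy = 9000,
 map_proj = 'mercator',
 ref_lat  = 13.7,
 ref_lon  = 100.5,
 truelat1 = 13.7,
 stand_lon = 100.5,
 geog_data_res = 'default',
 geog_data_path = '/--Path-to-static-geographical-data--/',
/

&ungrib
 out_format = 'WPS',
 prefix = 'FILE',
/

&metgrid
 fg_name = 'FILE',
/
```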
Info |
---|
Users can use the static geographical data already available on LANTA by setting geog_data_path in namelist.wps accordingly. |
1.3 To run the WRF model, the namelist.input file is required. A concise description of the essential parameters can be found here, while the full description is available in Chapter 5 of the WRF User's Guide.
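As a sketch of the essential parameters, a minimal single-domain namelist.input could look as follows; all values (dates, grid dimensions, time step) are illustrative and must match your own namelist.wps settings:

```fortran
&time_control
 run_days = 1,
 start_year = 2024, start_month = 03, start_day = 01, start_hour = 00,
 end_year   = 2024, end_month   = 03, end_day   = 02, end_hour   = 00,
 interval_seconds = 21600,
 history_interval = 60,
 frames_per_outfile = 24,
/

&domains
 time_step = 54,
 max_dom = 1,
 e_we = 100,
 e_sn = 100,
 e_vert = 45,
 dx = 9000,
 dy = 9000,
/
```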
Info |
---|
Two complete examples are available on LANTA; copy one of them, then follow the instructions inside its README file. |
2. Job submission script
Below is an example of a WRF submission script (submitWRF.sh). It can be created using vi submitWRF.sh. For whole-node allocation, please confirm that 128 x (number of nodes) = (number of MPI processes) x (number of OpenMP threads per MPI process).
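This whole-node condition can be checked with a few lines of shell before submitting; the values below are illustrative for a 128-core LANTA compute node:

```shell
#!/bin/bash
# Verify: cores_per_node * nodes == nodes * ntasks_per_node * cpus_per_task
cores_per_node=128        # LANTA compute node core count
nodes=1
ntasks_per_node=32        # MPI processes per node
cpus_per_task=4           # OpenMP threads per MPI process
total_cores=$(( cores_per_node * nodes ))
used_cores=$(( nodes * ntasks_per_node * cpus_per_task ))
if [ "${total_cores}" -eq "${used_cores}" ]; then
    echo "OK: whole-node allocation (${used_cores}/${total_cores} cores)"
else
    echo "WARNING: ${used_cores} of ${total_cores} cores used"
fi
```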
Code Block | ||
---|---|---|
#!/bin/bash
#SBATCH -p compute                    # Partition
#SBATCH -N 1                          # Number of nodes
#SBATCH --ntasks-per-node=32          # Number of MPI processes per node
#SBATCH --cpus-per-task=4             # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00                 # Job runtime limit
#SBATCH -J WRF                        # Job name
#SBATCH -A ltxxxxxx                   # Account *** {USER EDIT} ***

module purge
module load WPS/4.4-DM-cpeCray-23.03
module load WRF/4.4.2-DMSM-cpeCray-23.03

### A fix for Cray MPICH, until further notice ###
module load craype-network-ucx
module swap cray-mpich cray-mpich-ucx
module load libfabric/1.15.0.0
export UCX_TLS=all
export UCX_WARN_UNUSED_ENV_VARS=n

# -- (Recommended) -- #
export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
ulimit -s unlimited

# *** {USER EDIT} *** #
# Please check that namelist.wps and namelist.input exist where this script is submitted.
link_grib /--Path-to-your-meteorological-data--/
link_vtable /--Name-of-Vtable-to-parse-the-above-met-data--/

# -- WPS -- #
link_wps
srun -n${SLURM_NTASKS} ./geogrid.exe
srun -N1 -n1 ./ungrib.exe
srun -n${SLURM_NTASKS} ./metgrid.exe
unlink_wps

# -- WRF -- #
link_emreal
srun -n${SLURM_NTASKS} ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
unlink_emreal |
...
Info |
---|
More details can be found via module help WRF. |
Some physics/dynamics options (and WRF-Chem) DO NOT support hybrid (DM+SM) runs. If the run gets stuck at the beginning, try the following:
1. Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1
2. Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128
3. Specify the number of tasks for each executable explicitly; for instance, use srun -n16 ./real.exe and srun -n128 ./wrf.exe, or just srun ./wrf.exe
3. Job submission
To submit jobs to the SLURM queuing system on LANTA, execute sbatch submitWRF.sh.
Info |
---|
If WPS and WRF jobs are going to be submitted separately, users could use Slurm job dependencies. |
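For example, a WPS-then-WRF chain could be expressed with Slurm job dependencies as sketched below; the script name submitWPS.sh is hypothetical, and the stub merely lets the sketch be dry-run on systems without Slurm:

```shell
#!/bin/bash
# Chain a WPS job and a WRF job with a Slurm dependency.
# 'submitWPS.sh' is a hypothetical WPS-only script; adjust names as needed.
if ! command -v sbatch >/dev/null 2>&1; then
    sbatch() { echo 100001; }   # dry-run stub: prints a fake job ID
fi
wps_jobid=$(sbatch --parsable submitWPS.sh)   # --parsable prints the job ID only
sbatch --dependency=afterok:"${wps_jobid}" submitWRF.sh
echo "WRF will start after job ${wps_jobid} completes successfully"
```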
...
Code Block | ||
---|---|---|
sbatch submitWRF.sh |
4. Post-processing
Several tools for processing NetCDF files, such as NCL, NCO, CDO, Ncview, ecCodes, netcdf4-python, wrf-python, pyngl, pynio, and cartopy, are installed in the netcdf-py39 environment of Mamba/23.11.0-0 (previously Miniconda3).
To use NCL, for instance, add the following to your job script:
Code Block | ||
---|---|---|
module load Mamba/23.11.0-0
conda activate netcdf-py39

# For NCL only
export NCARG_ROOT=${CONDA_PREFIX}
export NCARG_RANGS=/project/common/WPS_Static/rangs
export NCARG_SHAPEFILE=/project/common/WPS_Static/shapefile   # (If used)

# Commands such as 'srun -n1 ncl xxx' or 'srun -n1 python xxx' for serial run |
Note |
---|
Please refrain from running heavy post-processing tasks on the LANTA frontend/login nodes. |
5. Advanced topics
See the child pages of this section for advanced topics.
...
Contact Us
ThaiSC support service: thaisc-support@nstda.or.th