Updated: 28 February 2024
1. Load required modules and set environment variables
Paste the lines below into your terminal. (Putting them in ~/.bashrc is not recommended.)
module purge
module load cpeIntel/23.09
module load cray-hdf5-parallel/1.12.2.7
module load cray-netcdf-hdf5parallel/4.9.0.7
module load cray-parallel-netcdf/1.12.3.7
module load libpng/1.6.39-cpeIntel-23.09
module load JasPer/1.900.1-cpeIntel-23.09
module load ADIOS2/2.9.1-cpeIntel-23.09

export WRF_EM_CORE=1     # Explicitly select the ARW core developed by NCAR
export WRF_NMM_CORE=0    # Not the NMM core
export WRF_DA_CORE=0     # Not WRFDA
export WRF_CHEM=0        # Not WRF-Chem
export NETCDF=${CRAY_NETCDF_HDF5PARALLEL_PREFIX}
export NETCDFPAR=${CRAY_NETCDF_HDF5PARALLEL_PREFIX}
export PNETCDF=${CRAY_PARALLEL_NETCDF_PREFIX}
export HDF5=${CRAY_HDF5_PARALLEL_PREFIX}
export JASPERINC=${EBROOTJASPER}/include    # (Optional)
export JASPERLIB=${EBROOTJASPER}/lib        # (Optional)
export ADIOS2=${EBROOTADIOS2}
export WRFIO_NCD_LARGE_FILE_SUPPORT=1       # (Optional)
export PNETCDF_QUILT=1                      # (Optional)

### Common mistake: inserting spaces before or after =
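If you build WRF often, one alternative to retyping these is to keep them in a standalone script and source it on demand, which avoids ~/.bashrc as recommended above (a sketch; the filename wrf_env.sh is hypothetical):

# wrf_env.sh -- hypothetical helper; load the WRF build environment with: source wrf_env.sh
module purge
module load cpeIntel/23.09 cray-hdf5-parallel/1.12.2.7
module load cray-netcdf-hdf5parallel/4.9.0.7 cray-parallel-netcdf/1.12.3.7
module load libpng/1.6.39-cpeIntel-23.09 JasPer/1.900.1-cpeIntel-23.09 ADIOS2/2.9.1-cpeIntel-23.09
export WRF_EM_CORE=1 WRF_NMM_CORE=0 WRF_DA_CORE=0 WRF_CHEM=0
export NETCDF=${CRAY_NETCDF_HDF5PARALLEL_PREFIX} NETCDFPAR=${CRAY_NETCDF_HDF5PARALLEL_PREFIX}
export PNETCDF=${CRAY_PARALLEL_NETCDF_PREFIX} HDF5=${CRAY_HDF5_PARALLEL_PREFIX}
export JASPERINC=${EBROOTJASPER}/include JASPERLIB=${EBROOTJASPER}/lib
export ADIOS2=${EBROOTADIOS2} WRFIO_NCD_LARGE_FILE_SUPPORT=1 PNETCDF_QUILT=1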
2. Set install location directory (DIR)
Let's set an install path, DIR.
# While in your desired install location directory, execute
export DIR=$(pwd)
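For example, to build under a project directory (the path below is hypothetical; substitute your own project ID and location):

mkdir -p /project/ltxxxxxx/$USER/wrf-build    # hypothetical install location
cd /project/ltxxxxxx/$USER/wrf-build
export DIR=$(pwd)
echo $DIR                                     # verify the install path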
3. Download WRF source code
An example for WRF 4.5.2 is shown below.
cd $DIR
wget https://github.com/wrf-model/WRF/releases/download/v4.5.2/v4.5.2.tar.gz
tar xzf v4.5.2.tar.gz
mv WRFV4.5.2 WRF
For other versions, check the release tags on the WRF GitHub repository or the official WRF website.
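Alternatively, a specific release tag can be fetched with git (a sketch; --recurse-submodules pulls in the NoahMP submodule that recent WRF versions require):

cd $DIR
# Shallow-clone only the v4.5.2 tag from the official repository
git clone --branch v4.5.2 --depth 1 --recurse-submodules https://github.com/wrf-model/WRF.git WRF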
4. Enable GRIB2 IO (Optional)
JASPERINC and JASPERLIB must be set in Step 1.
Open ${DIR}/WRF/arch/Config.pl (or Config_new.pl) using a text editor such as vi, then change the parameter $I_really_want_to_output_grib2_from_WRF from "FALSE" to "TRUE":
vi ${DIR}/WRF/arch/Config.pl    # or Config_new.pl
# Change
#   $I_really_want_to_output_grib2_from_WRF = "FALSE" ;
# to
#   $I_really_want_to_output_grib2_from_WRF = "TRUE" ;
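The same change can be made non-interactively with sed (a sketch, assuming the line appears exactly as in the stock source; a .bak backup of the original is kept):

# Flip the GRIB2 output switch in place
sed -i.bak 's/$I_really_want_to_output_grib2_from_WRF = "FALSE"/$I_really_want_to_output_grib2_from_WRF = "TRUE"/' ${DIR}/WRF/arch/Config.pl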
5. Configure
5.1 Run ./configure
cd ${DIR}/WRF
./configure
# Type 51, then press Enter    <-- (dm+sm) INTEL (ftn/icc): Cray XC
# Type 1, then press Enter     <-- Basic nesting
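If you want to script this step, the two menu answers can be piped to ./configure (a sketch; it assumes the prompts appear in the order shown above):

cd ${DIR}/WRF
# Answer 51 (dm+sm INTEL, Cray XC) and 1 (basic nesting) non-interactively
printf '51\n1\n' | ./configure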
5.2 Edit configure.wrf
Open ${DIR}/WRF/configure.wrf using a text editor such as vi
- Remove -fpp -auto from OMPCC.
- Change all icc to cc -Wno-implicit-function-declaration -Wno-implicit-int.
- Append -Wno-implicit-function-declaration -Wno-implicit-int to the existing cc (DM_CC).
- Prepend -fp-model precise to FCBASEOPTS_NO_G.
vi configure.wrf

# Change
#   OMPCC = -qopenmp -fpp -auto
# to
#   OMPCC = -qopenmp

# Change
#   SCC = icc
# to
#   SCC = cc -Wno-implicit-function-declaration -Wno-implicit-int

# Change
#   CCOMP = icc
# to
#   CCOMP = cc -Wno-implicit-function-declaration -Wno-implicit-int

# Change
#   DM_CC = cc
# to
#   DM_CC = cc -Wno-implicit-function-declaration -Wno-implicit-int

# Change
#   FCBASEOPTS_NO_G = -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO)
# to
#   FCBASEOPTS_NO_G = -fp-model precise -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO)
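The four edits can also be applied with sed instead of vi (a sketch, assuming configure.wrf matches the stock output of ./configure above; a .bak backup is kept):

cd ${DIR}/WRF
sed -i.bak 's/ -fpp -auto//' configure.wrf    # OMPCC
sed -i 's/\bicc\b/cc -Wno-implicit-function-declaration -Wno-implicit-int/g' configure.wrf    # SCC, CCOMP
sed -i 's/^\(DM_CC[^=]*= *cc\)/\1 -Wno-implicit-function-declaration -Wno-implicit-int/' configure.wrf
sed -i 's/^\(FCBASEOPTS_NO_G[^=]*= *\)/\1-fp-model precise /' configure.wrf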
6. Compile WRF
./compile em_real 2>&1 | tee compile.wrf.log # This step takes around 1 hour on LANTA.
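A quick check that the build succeeded is to confirm that all four em_real executables were produced:

ls -l ${DIR}/WRF/main/*.exe
# Expected: ndown.exe, real.exe, tc.exe, wrf.exe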
7. Example: Job submission script
#!/bin/bash
#SBATCH -p compute                  # Partition
#SBATCH -N 1                        # Number of nodes
#SBATCH --ntasks-per-node=32        # Number of MPI processes per node
#SBATCH --cpus-per-task=4           # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00               # Job runtime limit
#SBATCH -J WRF                      # Job name
#SBATCH -A ltxxxxxx                 # Account *** {USER EDIT} ***

module purge
module load cpeIntel/23.09
module load cray-hdf5-parallel/1.12.2.7
module load cray-netcdf-hdf5parallel/4.9.0.7
module load cray-parallel-netcdf/1.12.3.7
module load libpng/1.6.39-cpeIntel-23.09
module load JasPer/1.900.1-cpeIntel-23.09
module load ADIOS2/2.9.1-cpeIntel-23.09

export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
ulimit -s unlimited

srun -n16 ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
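Save the script (the filename submit_wrf.sh below is hypothetical) and submit it with the standard Slurm commands:

sbatch submit_wrf.sh    # submit the job; prints the assigned job ID
squeue -u $USER         # monitor the job's state in the queue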
For a DM+SM run, it is essential to specify the -c or --cpus-per-task option for srun to prevent a potential decrease in performance due to improper CPU binding.
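For tighter control over placement, srun's --cpu-bind option and the standard OpenMP affinity variables can be combined (a sketch; tune the values to your node layout):

export OMP_PROC_BIND=close    # keep each thread near its parent MPI task
export OMP_PLACES=cores       # pin one OpenMP thread per physical core
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} --cpu-bind=cores ./wrf.exe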
Some options DO NOT support DM+SM runs. If your job hangs at the beginning, try the following (a combined sketch follows the list):
- Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1.
- Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128.
- Specify the number of tasks for each executable explicitly; for instance, use
  srun -n16 ./real.exe
  srun -n128 -c1 ./wrf.exe
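Putting these fallbacks together, the relevant lines of a pure-MPI (DM-only) job script would look like this (same modules as in the example above; only the task layout changes):

#SBATCH --ntasks-per-node=128    # one MPI task per core, no OpenMP threading
#SBATCH --cpus-per-task=1

export OMP_NUM_THREADS=1
ulimit -s unlimited

srun -n16 ./real.exe
srun -n128 -c1 ./wrf.exe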
Contact Us
ThaiSC support service: thaisc-support@nstda.or.th