
Updated: 28 February 2024


1. Load required modules and set environment variables

Paste the lines below into your terminal (adding them to ~/.bashrc is not recommended).

module purge
module load cpeIntel/23.09
module load cray-hdf5-parallel/1.12.2.7
module load cray-netcdf-hdf5parallel/4.9.0.7
module load cray-parallel-netcdf/1.12.3.7
module load libpng/1.6.39-cpeIntel-23.09
module load JasPer/1.900.1-cpeIntel-23.09
module load ADIOS2/2.9.1-cpeIntel-23.09

export WRF_EM_CORE=1    # Explicitly select ARW core developed by NCAR
export WRF_NMM_CORE=0   # Not NMM core
export WRF_DA_CORE=0    # Not WRFDA
export WRF_CHEM=0       # Not WRF-Chem

export NETCDF=${CRAY_NETCDF_HDF5PARALLEL_PREFIX}
export NETCDFPAR=${CRAY_NETCDF_HDF5PARALLEL_PREFIX}    
export PNETCDF=${CRAY_PARALLEL_NETCDF_PREFIX}           
export HDF5=${CRAY_HDF5_PARALLEL_PREFIX}
export JASPERINC=${EBROOTJASPER}/include              # (Optional)
export JASPERLIB=${EBROOTJASPER}/lib                  # (Optional)
export ADIOS2=${EBROOTADIOS2}

export WRFIO_NCD_LARGE_FILE_SUPPORT=1                 # (Optional)
export PNETCDF_QUILT=1                                # (Optional)

Common mistake: inserting spaces before or after = (shell assignments must not have spaces around =).
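For instance, a space on either side of = makes the shell parse the variable name as a command instead of an assignment:

```shell
# Wrong: spaces around '=' -- the shell tries to run WRF_EM_CORE as a command
#   WRF_EM_CORE = 1      # bash: WRF_EM_CORE: command not found
# Correct: no spaces around '='
export WRF_EM_CORE=1
echo "$WRF_EM_CORE"      # prints 1
```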

2. Set install location directory (DIR)

Let's set an install path, DIR.

# While being in your desired install location directory, execute
export DIR=$(pwd)

3. Download WRF source code

An example for WRF 4.5.2 is shown below.

cd $DIR
wget https://github.com/wrf-model/WRF/releases/download/v4.5.2/v4.5.2.tar.gz
tar xzf v4.5.2.tar.gz
mv WRFV4.5.2 WRF

4. Enable GRIB2 IO (Optional)

JASPERINC and JASPERLIB must be set in Step 1.

  • Open ${DIR}/WRF/arch/Config.pl (or Config_new.pl) using a text editor such as vi

    • Change the parameter $I_really_want_to_output_grib2_from_WRF from "FALSE" to "TRUE"

vi ${DIR}/WRF/arch/Config.pl           # or Config_new.pl
# Change $I_really_want_to_output_grib2_from_WRF = "FALSE" ; 
#   to   $I_really_want_to_output_grib2_from_WRF = "TRUE" ;
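Alternatively, the same edit can be scripted rather than done by hand; the following is a sketch assuming the Step 3 install layout and standard sed (-i.bak keeps a backup of the original file):

```shell
# Flip the GRIB2 flag in Config.pl in place, keeping a .bak copy of the original.
CONFIG="${DIR}/WRF/arch/Config.pl"
sed -i.bak 's/\(\$I_really_want_to_output_grib2_from_WRF *= *\)"FALSE"/\1"TRUE"/' "$CONFIG"
grep 'I_really_want_to_output_grib2_from_WRF' "$CONFIG"   # should now show "TRUE"
```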

5. Configure

5.1 Run ./configure

cd ${DIR}/WRF
./configure
# Type 51, then press Enter <-- (dm+sm) INTEL (ftn/icc): Cray XC
# Type 1, then press Enter  <-- Basic nesting 

5.2 Edit configure.wrf

  • Open ${DIR}/WRF/configure.wrf using a text editor such as vi

    • Change all icc to cc -Wno-implicit-function-declaration -Wno-implicit-int

    • Append -Wno-implicit-function-declaration -Wno-implicit-int to the existing cc.

    • Prepend -fp-model precise to FCBASEOPTS_NO_G.

vi configure.wrf
# Change  SFC = icc
#   to    SFC = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  SCC = icc
#   to    SCC = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  DM_CC = cc
#   to    DM_CC = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  FCBASEOPTS_NO_G = -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO)
#   to    FCBASEOPTS_NO_G = -fp-model precise -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO) 
Note: -Wno-implicit-function-declaration -Wno-implicit-int can be omitted when compiling WRF 4.5.2 or later.
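The four edits above can also be scripted; this is a sketch (the exact right-hand sides of these variables vary between WRF versions, so review the resulting configure.wrf by eye afterwards):

```shell
cd ${DIR}/WRF
# Keep a .bak copy, then rewrite the compiler lines and prepend -fp-model precise.
sed -i.bak \
  -e 's/^\(SFC *=\) *icc/\1 cc -Wno-implicit-function-declaration -Wno-implicit-int/' \
  -e 's/^\(SCC *=\) *icc/\1 cc -Wno-implicit-function-declaration -Wno-implicit-int/' \
  -e 's/^\(DM_CC *=\) *cc/\1 cc -Wno-implicit-function-declaration -Wno-implicit-int/' \
  -e 's/^\(FCBASEOPTS_NO_G *=\) */\1 -fp-model precise /' \
  configure.wrf
grep -E '^(SFC|SCC|DM_CC|FCBASEOPTS_NO_G) *=' configure.wrf   # review the result
```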

6. Compile WRF

./compile em_real 2>&1 | tee compile.wrf.log   # This step takes around 1 hour on LANTA.
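If the build succeeded, the em_real executables should appear under main/; the success banner and executable names below come from the stock WRF compile script, so treat this as a sanity check rather than a guarantee:

```shell
tail -n 20 compile.wrf.log          # look for "Executables successfully built"
ls -l ${DIR}/WRF/main/*.exe         # expect ndown.exe, real.exe, tc.exe, wrf.exe
```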

7. Example: Job submission script

#!/bin/bash
#SBATCH -p compute              # Partition
#SBATCH -N 1                    # Number of nodes
#SBATCH --ntasks-per-node=32    # Number of MPI processes per node
#SBATCH --cpus-per-task=4       # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00           # Job runtime limit
#SBATCH -J WRF                  # Job name
#SBATCH -A ltxxxxxx             # Account *** {USER EDIT} ***

module purge
module load cpeIntel/23.09
module load cray-hdf5-parallel/1.12.2.7
module load cray-netcdf-hdf5parallel/4.9.0.7
module load cray-parallel-netcdf/1.12.3.7
module load libpng/1.6.39-cpeIntel-23.09
module load JasPer/1.900.1-cpeIntel-23.09
module load ADIOS2/2.9.1-cpeIntel-23.09

export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ulimit -s unlimited

#srun -n16 -c1 ./geogrid.exe
#srun -N1 -n1 -c1 ./ungrib.exe
#srun -n16 -c1 ./metgrid.exe

srun -n16 ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
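
After the job completes, each MPI rank leaves rsl.out.NNNN and rsl.error.NNNN logs in the run directory; a run that finished cleanly ends with a success message printed by wrf.exe itself:

```shell
tail -n 1 rsl.out.0000     # a clean run ends with "SUCCESS COMPLETE WRF"
```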

Contact Us
ThaiSC support service : thaisc-support@nstda.or.th
