Updated: 28 February 2024

...

Code Block (bash)
cd $DIR
wget https://github.com/wrf-model/WRF/releases/download/v4.5.2/v4.5.2.tar.gz
tar xzf v4.5.2.tar.gz
mv WRFV4.5.2 WRF
Info

For other versions, check the WRF stable branches on the GitHub repository or the WRF official website.
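
If you need a different version, a shallow git checkout of the matching tag is an alternative to the release tarball; a minimal sketch (the tag name here is just an example, pick the tag or branch you need):

Code Block (bash)
cd $DIR
git clone --branch v4.5.2 --depth 1 https://github.com/wrf-model/WRF.git WRF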

4. Enable GRIB2 IO (Optional)

...
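
As a sketch of what this step involves: WRF's configure script reads the JASPERLIB and JASPERINC environment variables to enable GRIB2 I/O. The EBROOTJASPER variable below is an assumption based on EasyBuild-style modules; verify the actual paths with module show JasPer.

Code Block (bash)
module load libpng/1.6.39-cpeIntel-23.09 JasPer/1.900.1-cpeIntel-23.09
export JASPERLIB=${EBROOTJASPER}/lib      # Assumption: EasyBuild sets EBROOTJASPER
export JASPERINC=${EBROOTJASPER}/include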

Code Block (bash)
vi configure.wrf
# Change  SCC   = icc
#   to    SCC   = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  CCOMP = icc
#   to    CCOMP = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  DM_CC = cc
#   to    DM_CC = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  FCBASEOPTS_NO_G = -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO)
#   to    FCBASEOPTS_NO_G = -fp-model precise -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO) 
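
The same edits can be applied non-interactively; a minimal sed sketch, assuming each variable assignment starts at the beginning of its line in configure.wrf:

Code Block (bash)
# Sketch only: keep the .bak backup and verify configure.wrf afterwards
sed -i.bak \
    -e '/^SCC[[:space:]]/s/icc/cc -Wno-implicit-function-declaration -Wno-implicit-int/' \
    -e '/^CCOMP[[:space:]]/s/icc/cc -Wno-implicit-function-declaration -Wno-implicit-int/' \
    -e '/^DM_CC[[:space:]]/s/$/ -Wno-implicit-function-declaration -Wno-implicit-int/' \
    -e '/^FCBASEOPTS_NO_G[[:space:]]/s/=/= -fp-model precise/' \
    configure.wrf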
Note: -Wno-implicit-function-declaration -Wno-implicit-int

-Wno-implicit-function-declaration and -Wno-implicit-int can be omitted when compiling WRF 4.5.2 or later.

...

Code Block (bash)
#!/bin/bash
#SBATCH -p compute              # Partition
#SBATCH -N 1                    # Number of nodes
#SBATCH --ntasks-per-node=32    # Number of MPI processes per node
#SBATCH --cpus-per-task=4       # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00           # Job runtime limit
#SBATCH -J WRF                  # Job name
#SBATCH -A ltxxxxxx             # Account *** {USER EDIT} ***

module purge
module load cpeIntel/23.09
module load cray-hdf5-parallel/1.12.2.7
module load cray-netcdf-hdf5parallel/4.9.0.7
module load cray-parallel-netcdf/1.12.3.7
module load libpng/1.6.39-cpeIntel-23.09
module load JasPer/1.900.1-cpeIntel-23.09
module load ADIOS2/2.9.1-cpeIntel-23.09

export OMP_STACKSIZE="32M"                      # Per-thread OpenMP stack size
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # Match OpenMP threads to --cpus-per-task

ulimit -s unlimited                             # Avoid stack-size crashes in WRF

#srun -n16 -c1 ./geogrid.exe
#srun -N1 -n1 -c1 ./ungrib.exe
#srun -n16 -c1 ./metgrid.exe
#srun -n16 -c1 ./real.exe

srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
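
Assuming the script above is saved as submit_wrf.sh (a hypothetical filename), a typical submit-and-monitor sequence looks like this:

Code Block (bash)
sbatch submit_wrf.sh      # Submit the job; filename is an example
squeue -u $USER           # Check queue status
tail -f rsl.out.0000      # Follow the rank-0 WRF log once the job starts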
Note

For a DM+SM run, it is essential to specify the -c or --cpus-per-task option for srun to prevent a potential decrease in performance due to improper CPU binding.

Some options DO NOT support DM+SM runs. If your job hangs at the beginning, try the following (a pure-MPI sketch follows the list):

  1. Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1,

  2. Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128

  3. Specify the number of tasks for each executable explicitly; for instance, use
    srun -n16 ./real.exe
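
For reference, here is a minimal pure-MPI (DM-only) sketch combining points 1 and 2, showing only the lines that differ from the script above:

Code Block (bash)
#SBATCH --ntasks-per-node=128   # One MPI rank per core, no OpenMP
#SBATCH --cpus-per-task=1

export OMP_NUM_THREADS=1

srun -n${SLURM_NTASKS} -c1 ./wrf.exe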

...


    srun -n128 -c1 ./wrf.exe

...

...

Contact Us
ThaiSC support service: thaisc-support@nstda.or.th

...