Updated: 28 February 2024

...

Code Block
languagebash
cd $DIR
wget https://github.com/wrf-model/WRF/releases/download/v4.5.2/v4.5.2.tar.gz
tar xzf v4.5.2.tar.gz
mv WRFV4.5.2 WRF
Info

For other versions, check the WRF stable branches on the GitHub repository or the WRF official website.
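
For example, the same download pattern can be reused for another release. The commands below are only an illustration and assume the release tarballs follow the vX.Y.Z naming shown above (v4.5.1 is used purely as an example; verify the asset name on the releases page first).

Code Block
languagebash
cd $DIR
# Illustrative only: substitute the release you actually need
wget https://github.com/wrf-model/WRF/releases/download/v4.5.1/v4.5.1.tar.gz
tar xzf v4.5.1.tar.gz
# the extracted directory name may differ; check with 'ls' before renaming
mv WRFV4.5.1 WRF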

4. Enable GRIB2 IO (Optional)

...

  • Open ${DIR}/WRF/configure.wrf using a text editor such as vi

    • Remove -fpp -auto from OMPCC

    • Change all icc to cc -Wno-implicit-function-declaration -Wno-implicit-int

    • Append -Wno-implicit-function-declaration -Wno-implicit-int to the existing cc (DM_CC).

    • Prepend -fp-model precise to FCBASEOPTS_NO_G.

Code Block
languagebash
vi configure.wrf
# Change  OMPCC = -qopenmp -fpp -auto
#   to    OMPCC = -qopenmp
# Change  SCC = icc
#   to    SCC = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  CCOMP = icc
#   to    CCOMP = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  DM_CC = cc
#   to    DM_CC = cc -Wno-implicit-function-declaration -Wno-implicit-int
# Change  FCBASEOPTS_NO_G = -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO)
#   to    FCBASEOPTS_NO_G = -fp-model precise -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO) 
Note

-Wno-implicit-function-declaration -Wno-implicit-int could be omitted when compiling WRF 4.5.2 or later.
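
If you prefer to apply these edits non-interactively, the commands below are a minimal sed sketch based on the default lines quoted in the comments above; the exact spacing in your configure.wrf may differ, so review the result (for example with diff against the backup) before compiling.

Code Block
languagebash
cd ${DIR}/WRF
cp configure.wrf configure.wrf.bak    # keep a backup before editing

# Remove -fpp -auto from OMPCC
sed -i 's/^\(OMPCC[[:space:]]*=.*\) -fpp -auto/\1/' configure.wrf

# Replace the remaining icc entries (SCC, CCOMP) with cc plus the warning suppressions
sed -i 's/=[[:space:]]*icc[[:space:]]*$/= cc -Wno-implicit-function-declaration -Wno-implicit-int/' configure.wrf

# Append the same flags to DM_CC (assumes the line reads exactly "DM_CC = cc")
sed -i 's/^\(DM_CC[[:space:]]*=[[:space:]]*cc\)[[:space:]]*$/\1 -Wno-implicit-function-declaration -Wno-implicit-int/' configure.wrf

# Prepend -fp-model precise to FCBASEOPTS_NO_G
sed -i 's/^\(FCBASEOPTS_NO_G[[:space:]]*=[[:space:]]*\)/\1-fp-model precise /' configure.wrf

diff configure.wrf.bak configure.wrf  # verify the changes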

...

Code Block
languagebash
#!/bin/bash
#SBATCH -p compute              # Partition
#SBATCH -N 1                    # Number of nodes
#SBATCH --ntasks-per-node=32    # Number of MPI processes per node
#SBATCH --cpus-per-task=4       # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00           # Job runtime limit
#SBATCH -J WRF                  # Job name
#SBATCH -A ltxxxxxx             # Account *** {USER EDIT} ***

module purge
module load cpeIntel/23.09
module load cray-hdf5-parallel/1.12.2.7
module load cray-netcdf-hdf5parallel/4.9.0.7
module load cray-parallel-netcdf/1.12.3.7
module load libpng/1.6.39-cpeIntel-23.09
module load JasPer/1.900.1-cpeIntel-23.09
module load ADIOS2/2.9.1-cpeIntel-23.09

export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ulimit -s unlimited

#srun -n16 -c1 ./geogrid.exe
#srun -N1 -n1 -c1 ./ungrib.exe
#srun -n16 -c1 ./metgrid.exe
#srun -n16 -c1 ./real.exe

srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
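
After saving the script (the filename submit_wrf.sh below is only an assumed example), submit it with sbatch and follow the run through the standard WRF log files written by MPI rank 0:

Code Block
languagebash
sbatch submit_wrf.sh     # submit the job script (assumed filename)
squeue -u $USER          # check the job state in the queue
tail -f rsl.out.0000     # follow WRF standard output from MPI rank 0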
Note

For a DM+SM run, it is essential to specify the -c or --cpus-per-task option for srun to prevent a potential performance decrease due to improper CPU binding.

Some WRF options DO NOT support DM+SM runs. If the run hangs at the beginning, try the following (a pure-DM sketch combining these adjustments is shown after the list):

  1. Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1

  2. Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128

  3. Specify the number of tasks for each executable explicitly; for instance, use
    srun -n16 ./real.exe
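
Putting these adjustments together, a pure-DM (MPI-only) fallback could look like the sketch below; the task counts are only illustrative and should be matched to your domain decomposition.

Code Block
languagebash
#SBATCH --ntasks-per-node=128    # more MPI processes per node
#SBATCH --cpus-per-task=1        # no OpenMP threads

export OMP_NUM_THREADS=1

srun -n16 -c1 ./real.exe
srun -n128 -c1 ./wrf.exe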

...


    srun -n128 -c1 ./wrf.exe

...

Contact Us
ThaiSC support service: thaisc-support@nstda.or.th

...