Updated: 28 February 2024
...
```bash
cd $DIR
wget https://github.com/wrf-model/WRF/releases/download/v4.5.2/v4.5.2.tar.gz
tar xzf v4.5.2.tar.gz
mv WRFV4.5.2 WRF
```
Info: For other versions, check the WRF stable branches on the GitHub repository or the WRF official website.
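If you build a different release, the tarball name and the unpacked directory name both follow the version tag. A small parameterised sketch of the download step above (the tag value is only an example; pick a real one from the releases page):

```bash
#!/bin/bash
# Sketch: derive the release URL and unpacked directory name from a version tag.
# The tag below is an example; choose one from the WRF releases page.
WRF_VERSION="v4.5.2"
TARBALL_URL="https://github.com/wrf-model/WRF/releases/download/${WRF_VERSION}/${WRF_VERSION}.tar.gz"
SRC_DIR="WRF${WRF_VERSION/v/V}"   # the tarball unpacks to e.g. WRFV4.5.2
echo "$TARBALL_URL"
echo "$SRC_DIR"
# Then, exactly as in the step above:
# wget "$TARBALL_URL" && tar xzf "${WRF_VERSION}.tar.gz" && mv "$SRC_DIR" WRF
```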
4. Enable GRIB2 IO (Optional)
...
Open ${DIR}/WRF/configure.wrf using a text editor such as vi, then:

- Remove -fpp -auto from OMPCC.
- Change all icc to cc -Wno-implicit-function-declaration -Wno-implicit-int.
- Append -Wno-implicit-function-declaration -Wno-implicit-int to the existing cc (DM_CC).
- Prepend -fp-model precise to FCBASEOPTS_NO_G.
```bash
vi configure.wrf

# Change  OMPCC = -qopenmp -fpp -auto
# to      OMPCC = -qopenmp

# Change  SCC   = icc
# to      SCC   = cc -Wno-implicit-function-declaration -Wno-implicit-int

# Change  CCOMP = icc
# to      CCOMP = cc -Wno-implicit-function-declaration -Wno-implicit-int

# Change  DM_CC = cc
# to      DM_CC = cc -Wno-implicit-function-declaration -Wno-implicit-int

# Change  FCBASEOPTS_NO_G = -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO)
# to      FCBASEOPTS_NO_G = -fp-model precise -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO)
```
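The same edits can also be scripted instead of done by hand in vi. A hedged sed sketch, assuming the stock values shown above appear verbatim in configure.wrf (verify the result manually afterwards):

```bash
#!/bin/bash
# Sketch: apply the configure.wrf edits with sed instead of vi.
# Assumes the default lines match the patterns below; check the file afterwards.
CFG=configure.wrf
if [ -f "$CFG" ]; then
    sed -i \
        -e 's/^\(OMPCC *=.*\) -fpp -auto/\1/' \
        -e 's/= *icc *$/= cc -Wno-implicit-function-declaration -Wno-implicit-int/' \
        -e 's/^\(DM_CC *= *cc\) *$/\1 -Wno-implicit-function-declaration -Wno-implicit-int/' \
        -e 's/^\(FCBASEOPTS_NO_G *= *\)/\1-fp-model precise /' \
        "$CFG"
fi
```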
...
```bash
#!/bin/bash
#SBATCH -p compute                 # Partition
#SBATCH -N 1                       # Number of nodes
#SBATCH --ntasks-per-node=32       # Number of MPI processes per node
#SBATCH --cpus-per-task=4          # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00              # Job runtime limit
#SBATCH -J WRF                     # Job name
#SBATCH -A ltxxxxxx                # Account *** {USER EDIT} ***

module purge
module load cpeIntel/23.09
module load cray-hdf5-parallel/1.12.2.7
module load cray-netcdf-hdf5parallel/4.9.0.7
module load cray-parallel-netcdf/1.12.3.7
module load libpng/1.6.39-cpeIntel-23.09
module load JasPer/1.900.1-cpeIntel-23.09
module load ADIOS2/2.9.1-cpeIntel-23.09

export OMP_STACKSIZE="32M"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
ulimit -s unlimited

#srun -n16 -c1 ./geogrid.exe
#srun -N1 -n1 -c1 ./ungrib.exe
#srun -n16 -c1 ./metgrid.exe
#srun -n16 -c1 ./real.exe
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe
```
Note: For a DM+SM run, it is essential to specify the number of CPUs per task both to Slurm (--cpus-per-task) and to srun (-c), and to set OMP_NUM_THREADS to match, as in the script above.
Some options DO NOT support a DM+SM run. If the job is stuck at the beginning, try the following:

- Use #SBATCH --cpus-per-task=1 and export OMP_NUM_THREADS=1.
- Increase the total number of tasks, for example, #SBATCH --ntasks-per-node=128.
- Specify the number of tasks for each executable explicitly; for instance, use srun -n16 ./real.exe and srun -n128 -c1 ./wrf.exe.
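As a sanity check when sizing a hybrid (DM+SM) job, the MPI tasks per node multiplied by the OpenMP threads per task should not exceed the physical cores of a node. A small sketch, assuming 128 cores per compute node (consistent with 32 tasks x 4 CPUs in the script above; confirm the node type of your partition):

```bash
#!/bin/bash
# Sketch: check that ranks-per-node x threads-per-rank fits in a node.
CORES_PER_NODE=128     # assumption; check your partition's node hardware
NTASKS_PER_NODE=32
CPUS_PER_TASK=4
USED=$((NTASKS_PER_NODE * CPUS_PER_TASK))
if [ "$USED" -le "$CORES_PER_NODE" ]; then
    echo "OK: using $USED of $CORES_PER_NODE cores per node"
else
    echo "Oversubscribed: $USED > $CORES_PER_NODE cores per node" >&2
fi
```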
...
Contact Us
ThaiSC support service: thaisc-support@nstda.or.th
...