Example 1: Run WRF

A basic example of using WRF

Basic use of the Weather Research and Forecasting (WRF) model consists of the following steps.

Table of Contents

...

Tip
  1. WRF must already be installed.

  2. WPS must already be installed.

  3. The geography data for WPS geogrid.exe must already be downloaded.

  4. NCL must already be installed (for section 2.2).

A quick check of these prerequisites is sketched right after this list.
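
You can verify the prerequisites from the login node before submitting any jobs. A minimal sketch, assuming WRF and WPS were built under $HOME/WRF and $HOME/WPS and the geography data was extracted to $HOME/WPS_GEOG (all three paths are placeholders; adjust them to your installation):

Code Block (bash)
# Placeholder paths -- edit to match where you installed WRF/WPS.
WRF_DIR=$HOME/WRF
WPS_DIR=$HOME/WPS
GEOG_DIR=$HOME/WPS_GEOG

for exe in "$WPS_DIR"/geogrid.exe "$WPS_DIR"/ungrib.exe "$WPS_DIR"/metgrid.exe \
           "$WRF_DIR"/main/real.exe "$WRF_DIR"/main/wrf.exe; do
    [ -x "$exe" ] && echo "OK:      $exe" || echo "MISSING: $exe"
done
[ -d "$GEOG_DIR" ] && echo "OK:      geography data" || echo "MISSING: $GEOG_DIR"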

1. Prepare global model output

...
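The details of this step are elided above. As a rough illustration only: the usual WPS workflow is to place the GRIB output of a global model (e.g. GFS) where ungrib.exe can find it, link it with the link_grib.csh script shipped with WPS, and select the Vtable matching the input model. A sketch, assuming GFS GRIB2 files were already downloaded to $HOME/gfs_data (the path and file pattern are placeholders):

Code Block (bash)
cd $HOME/WPS                                             # your WPS run directory (placeholder)
./link_grib.csh $HOME/gfs_data/gfs.t00z.pgrb2.0p25.f*    # creates GRIBFILE.AAA, GRIBFILE.AAB, ...
ln -sf ungrib/Variable_Tables/Vtable.GFS Vtable          # Vtable for GFS input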

Code Block (bash)
#!/bin/bash
#SBATCH -p devel                    # Partition or machine type [devel/compute/memory]
#SBATCH -N 1 --ntasks-per-node=4    # Number of nodes and Number of core per node
#SBATCH -t 02:00:00                 # Total run time limit (hour:minute:second)
#SBATCH -J WPS_Ex1                  # Job name (short)
#SBATCH -A projxxxx                 # Your project account *** {USER EDIT} ***

module purge
module load netCDF-Fortran/4.5.2-iimpi-2019b
module load libpng/1.6.37-GCCcore-8.3.0
module load JasPer/1.900.1-intel-2019b

export JASPERINC=$EBROOTJASPER/include
export JASPERLIB=$EBROOTJASPER/lib

ulimit -s unlimited         # MUST have otherwise ERROR

srun -n4 ./geogrid.exe      # Run geogrid.exe using 4 CPU cores (-n4) is suggested for geogrid.exe 
srun -n1 ./ungrib.exe       # ungrib.exe MUST run in serial (-n1)
srun -n4 ./metgrid.exe      # Run metgrid.exe using 4 CPU cores (-n4)

# It is more efficient to run geogrid.exe, metgrid.exe and real.exe using a few CPU cores (but enough memory).
Note

ungrib.exe must be run as a single process (serial execution, -n1) only.
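
After the WPS job completes, it is worth checking that all three programs finished cleanly before starting WRF. A small sketch of such a check (these are the WPS default log names; parallel runs add a rank suffix such as .0000):

Code Block (bash)
grep "Successful completion" geogrid.log.0000 ungrib.log metgrid.log.0000
ls -l met_em.d01.*            # metgrid output files, the input for real.exe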

...

Code Block (bash)
#!/bin/bash
#SBATCH -p compute                  # Partition or machine type [devel/compute/memory]
#SBATCH -N 1                        # Number of nodes
#SBATCH --ntasks-per-node=40        # Number of MPI processes per node
#SBATCH --cpus-per-task=1           # Number of OpenMP threads per MPI process
#SBATCH -t 24:00:00                 # Total run time limit (hour:minute:second)
#SBATCH -J WRF_Ex1                  # Job name (short)
#SBATCH -A projxxxx                 # Your project account *** {USER EDIT} ***

module purge
module load netCDF-Fortran/4.5.2-iimpi-2019b
module load libpng/1.6.37-GCCcore-8.3.0
module load JasPer/1.900.1-intel-2019b

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export JASPERINC=$EBROOTJASPER/include
export JASPERLIB=$EBROOTJASPER/lib

ulimit -s unlimited                 # MUST have otherwise ERROR

srun -n8 ./real.exe                 # Run real.exe using 8 CPU cores (-n8)
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe

# It is more efficient to run geogrid.exe, metgrid.exe and real.exe using a few CPU cores (but enough memory).
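The script above runs wrf.exe as 40 MPI processes with one OpenMP thread each. If your wrf.exe was built with hybrid MPI+OpenMP support (dm+sm), you can trade MPI ranks for threads by changing only two SBATCH lines; a sketch for the same 40-core node (assuming a dm+sm build):

Code Block (bash)
#SBATCH --ntasks-per-node=20        # 20 MPI processes per node
#SBATCH --cpus-per-task=2           # 2 OpenMP threads per MPI process
# OMP_NUM_THREADS is then picked up from SLURM_CPUS_PER_TASK as in the script above.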

You can also adapt this to submit WPS and WRF together (see Single WRF Job Submission).
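
One way to chain the two steps without merging the scripts is a SLURM job dependency: submit the WPS job first, then submit the WRF job so it starts only if the WPS job succeeds. A minimal sketch (submitWPS.sh is an assumed name for the WPS script above):

Code Block (bash)
wps_id=$(sbatch --parsable submitWPS.sh)             # submit WPS, capture its job ID
sbatch --dependency=afterok:${wps_id} submitWRF.sh   # WRF starts only after WPS succeeds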

...

Code Block (bash)
sbatch submitWRF.sh
tail -F rsl.out.0000        # (Optional) see the progress
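
When the job ends, the last line of rsl.out.0000 tells you whether the run completed; a successful WRF run finishes with a SUCCESS message. For example:

Code Block (bash)
tail -n 1 rsl.out.0000        # should end with: wrf: SUCCESS COMPLETE WRF
ls -l wrfout_d01_*            # WRF output files (netCDF)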

RETURN: WRF model

...

Contact Us
ThaiSC support service: thaisc-support@nstda.or.th