Example 1: Run WRF
A basic example of using WRF.
submitWRF.sh:
#!/bin/bash
#SBATCH -p compute # Partition or machine type [devel/compute/memory]
#SBATCH -N 1                        # Number of nodes
#SBATCH --ntasks-per-node=40        # Number of MPI processes per node
#SBATCH --cpus-per-task=1           # Number of OpenMP threads per MPI process
#SBATCH -t 24:00:00                 # Total run time limit (hour:minute:second)
#SBATCH -J WRF_Ex1 # Job name (short)
#SBATCH -A projxxxx # Your project account *** {USER EDIT} ***
module purge
module load netCDF-Fortran/4.5.2-iimpi-2019b
module load libpng/1.6.37-GCCcore-8.3.0
module load JasPer/1.900.1-intel-2019b
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # OpenMP threads per MPI process
export JASPERINC=$EBROOTJASPER/include
export JASPERLIB=$EBROOTJASPER/lib
ulimit -s unlimited                 # Required; WRF will fail without an unlimited stack
srun -n8 ./real.exe # Run real.exe using 8 CPU cores (-n8)
srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} ./wrf.exe   # Run wrf.exe on all allocated MPI tasks
# It is more efficient to run geogrid.exe, metgrid.exe, and real.exe on a few CPU cores (but with enough memory).
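As a quick sanity check, the total MPI task count that `${SLURM_NTASKS}` expands to is simply nodes × tasks per node. The sketch below mirrors this script's `-N 1 --ntasks-per-node=40` request; the variable names are illustrative stand-ins, not values set by Slurm:

```shell
#!/bin/bash
# Illustrative arithmetic only: NODES and NTASKS_PER_NODE are stand-ins
# for the -N and --ntasks-per-node values; inside a real job, Slurm
# exports SLURM_NTASKS with this product already computed.
NODES=1
NTASKS_PER_NODE=40
TOTAL_TASKS=$((NODES * NTASKS_PER_NODE))
echo "$TOTAL_TASKS"   # prints 40, the task count srun -n${SLURM_NTASKS} would use
```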
Submit the job:
sbatch submitWRF.sh
tail -F rsl.out.0000 # (Optional) see the progress
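Once submitted, the job can also be tracked by its ID. `sbatch` prints a single line of the form `Submitted batch job <id>`, so the ID can be captured for later use with `squeue`. The sketch below parses a hard-coded stand-in string rather than performing a live submission:

```shell
#!/bin/bash
# Hypothetical sketch: parse the job ID from sbatch's one-line output.
# In a real session you would capture it with: out=$(sbatch submitWRF.sh)
out="Submitted batch job 123456"   # stand-in for actual sbatch output
jobid=${out##* }                   # keep everything after the last space
echo "$jobid"                      # prints 123456
# The job's state can then be checked with: squeue -j "$jobid"
```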
Contact Us
ThaiSC support service : thaisc-support@nstda.or.th