
...

Expand: More information
  • Execute cc --version, CC --version, or ftn --version to check which compiler is being used.

  • With PrgEnv-intel loaded, ${MKLROOT} is set to the corresponding Intel Math Kernel Library (MKL).

  • By default with PrgEnv-intel, the C/C++ compilers are ICX/ICPX, while the Fortran compiler is IFORT.

  • To use only Intel Classic, execute module swap intel intel-classic after loading PrgEnv-intel.

  • To use only Intel oneAPI, execute module swap intel intel-oneapi after loading PrgEnv-intel.

  • With PrgEnv-nvhpc loaded, ${NVIDIA_PATH} is set to the corresponding NVIDIA SDK location.

  • PrgEnv-nvidia also exists, but it will soon be deprecated and is therefore not recommended.

  • With PrgEnv-aocc loaded, ${AOCC_PATH} is set to the corresponding AOCC location.
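
As a quick illustration of the compiler checks and swaps described above (the exact output depends on the loaded PrgEnv), something like the following sketch can be run in a login shell:

Code Block (bash)
module load PrgEnv-intel          # Load the Intel programming environment
cc --version                      # Check which C compiler the cc wrapper currently points to
echo ${MKLROOT}                   # Location of the matching Intel Math Kernel Library
module swap intel intel-classic   # Switch to the Intel Classic compilers only
cc --version                      # The wrapper should now report the classic compiler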


Code Block (bash)
--------------------------------- /opt/cray/pe/lmod/modulefiles/core ---------------------------------
PrgEnv-aocc   (2)   cce          (3)   cray-libpals (3)   craypkg-gen   (2)   nvhpc          (2)
PrgEnv-cray   (2)   cpe-cuda     (3)   cray-libsci  (3)   cudatoolkit   (6)   nvidia         (2)
PrgEnv-gnu    (2)   cpe          (3)   cray-mrnet   (2)   gcc           (3)   papi           (3)
PrgEnv-intel  (2)   cray-R       (2)   cray-pals    (3)   gdb4hpc       (3)   perftools-base (3)
PrgEnv-nvhpc  (2)   cray-ccdb    (2)   cray-pmi     (3)   intel-classic (2)   sanitizers4hpc (2)
PrgEnv-nvidia (2)   cray-cti     (5)   cray-python  (2)   intel-oneapi  (2)   valgrind4hpc   (3)
aocc          (2)   cray-dsmml   (1)   cray-stat    (2)   intel         (2)
atp           (3)   cray-dyninst (2)   craype       (3)   iobuf         (1)

--------------------------------- /opt/cray/pe/lmod/modulefiles/craype-targets/default ----------------------------------
craype-x86-milan        (1)     craype-accel-nvidia80   (1)      ... other modules ...

...

Expand: [Feb 2024] Current CPE toolchains

CPE toolchain      Note
cpeGNU/23.03       GCC 11.2.0
cpeCray/23.03      CCE 15.0.1
cpeIntel/23.03     Deprecated and hidden. It will be removed in the future.
cpeIntel/23.09     Intel Compiler 2023.1.0

...

All ThaiSC modules are located at the same module path, so there is no module hierarchy. Executing module avail on LANTA will display all available ThaiSC modules. For a more concise list, you can use module overview; then, use module whatis <name>, module spider <name>, or module help <name> to learn more about each specific module.

Users can readily use ThaiSC modules and CPE toolchains to build their applications. Some popular application software is pre-installed as well; for more information, refer to Applications usage.

Code Block (bash)
username@lanta-xname:~> module overview
------------------------------------- /lustrefs/disk/modules/easybuild/modules/all --------------------------------------
ADIOS2        (2)   ATK           (2)   Amber         (1)   Apptainer     (1)   Armadillo     (2)
AutoDock-vina (1)   Autoconf      (1)   Automake      (1)   Autotools     (1)   BCFtools      (1)
BEDTools      (1)   BLAST+        (1)   BLASTDB       (1)   BWA           (1)   BamTools      (1)
Beast         (1)   Bison         (1)   Blosc         (2)   Boost         (4)   Bowtie        (1)
Bowtie2       (1)   Brotli        (2)   C-Blosc2      (2)   CFITSIO       (2)   CGAL          (2)
CMake         (2)   DB            (2)   DBus          (1)   ESMF          (2)   EasyBuild     (1)
Eigen         (1)   FDS           (1)   FFmpeg        (2)   FastQC        (1)   FortranGIS    (2)
FreeXL        (2)   FriBidi       (2)   ... other modules ...
Expand: Example: Boost/1.81.0-cpeGNU-23.03

Code Block (bash)
module purge
module load Boost/1.81.0-cpeGNU-23.03
echo ${CPATH}
echo ${LIBRARY_PATH}
echo ${LD_LIBRARY_PATH}

...

1. Slurm sbatch header


The #SBATCH directives can be used to specify sbatch options that mostly remain unchanged, such as the partition, time limit, billing account, and so on. Optional options, such as the job name, can instead be specified when submitting the script (see Submitting a job). For more details regarding sbatch options, please visit Slurm sbatch.

Mostly, Slurm sbatch options only define and request the computing resources that can be used inside a job script. The actual resources used by a software/executable can differ depending on how it is invoked (see Stage 5), although these sbatch options are passed on and become its default options. For GPU jobs, we recommend using either --gpus or --gpus-per-node to request GPUs at this stage, as this will provide the most flexibility for the next stage; please also see GPU binding.

If your application software only supports parallelization through multi-threading, it cannot utilize resources across nodes; in this case, -N (--nodes), -n (--ntasks) and --ntasks-per-node should be set to 1.
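
For illustration only, a minimal sbatch header combining these options might look like the sketch below; the partition, account string, and resource numbers are placeholders that must be adjusted to your project and workload.

Code Block (bash)
#!/bin/bash
#SBATCH -p gpu-devel               # Partition (placeholder; e.g. a devel partition for testing)
#SBATCH -N 1                       # Number of nodes
#SBATCH --ntasks-per-node=4        # MPI processes per node
#SBATCH --cpus-per-task=16         # CPU cores per MPI process
#SBATCH --gpus-per-node=4          # GPUs per node (GPU jobs only)
#SBATCH -t 02:00:00                # Time limit
#SBATCH -A lt999999                # Billing account (hypothetical placeholder)
#SBATCH -J myjob                   # Job name (optional)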

2. Loading modules
In the job script, it is advised to load every module that was used when installing the software, although build dependencies such as CMake, Autotools, and binutils can be omitted. Additionally, those modules should be of the same versions as the ones used to compile the program.
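
For instance, if the software was built with the cpeGNU/23.03 toolchain and Boost (as in the example earlier on this page), the corresponding part of the job script could be a sketch like the following; the module names and versions are illustrative and must match your own build.

Code Block (bash)
module purge
module load cpeGNU/23.03                  # Same toolchain version used at build time
module load Boost/1.81.0-cpeGNU-23.03     # Same dependency modules, same versions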

...

Expand: More information
  • If some software dependencies were installed locally, their search paths should also be added.

  • We do NOT recommend specifying these search paths directly in ~/.bashrc, as it could lead to library conflicts when you work with more than one main software package.

  • Some software provides a script to be sourced before use. In that case, sourcing it in your job script should be equivalent to adding its search paths manually yourself.


When executing your program, if you encounter

  • If 'xxx' is not a typo you can use command-not-found to lookup ..., then, your current PATH variable may be incorrect.

  • xxx: error while loading shared libraries: libXXX.so: cannot open shared object file, then,

    • If libXXX.so seems to be related to your software, then you may have set the LD_LIBRARY_PATH variable in Step 3 incorrectly.

    • If libXXX.so seems to be from a module you used to build your software, then loading that module should fix the problem.

  • ModuleNotFoundError: No module named 'xxx', then, your current PYTHONPATH may be incorrect.


A preliminary check could be performed on a frontend node by doing something like

Code Block (bash)
bash   # You should check them in another bash shell

module purge
module load <...>
module load <...>

export PATH=<software-bin-path>:${PATH}
export LD_LIBRARY_PATH=<software-lib/lib64-path>:${LD_LIBRARY_PATH}
export PYTHONPATH=<software-python-site-packages>:${PYTHONPATH}

<executable> --help
<executable> --version

exit

4. Setting environment variables
Some software requires additional environment variables to be set at runtime, for example, the path to a temporary directory. Output environment variables set by Slurm sbatch (see Slurm sbatch - output environment variables) could be utilized to set these software-specific parameters.
For applications with OpenMP threading, OMP_NUM_THREADS, OMP_STACKSIZE and ulimit -s unlimited are commonly set in a job script. An example is shown below.
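
A minimal sketch of such settings (the thread count is taken from Slurm, while the stack size value is only an illustrative choice):

Code Block (bash)
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # One OpenMP thread per CPU core allocated to each task
export OMP_STACKSIZE="32M"                      # Per-thread stack size (illustrative value; adjust as needed)
ulimit -s unlimited                             # Remove the shell stack size limit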

...

Usually, either srun, mpirun, mpiexec or aprun is required to run MPI programs. On LANTA, the srun command MUST be used to launch MPI processes. The table below compares a few options of those commands.

Command          Total MPI processes    CPU per MPI process      MPI processes per node
srun             -n, --ntasks           -c, --cpus-per-task      --ntasks-per-node
mpirun/mpiexec   -n, -np                --map-by socket:PE=N     --map-by ppr:N:node
aprun            -n, --pes              -d, --cpus-per-pe        -N, --pes-per-node

There is usually no need to explicitly add options to srun since, by default, Slurm will automatically derive them from sbatch, with the exception of --cpus-per-task.
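
For example, a hybrid MPI+OpenMP program could be launched inside the job script as sketched below; the executable name is a placeholder.

Code Block (bash)
srun -n ${SLURM_NTASKS} -c ${SLURM_CPUS_PER_TASK} ./my_program    # ./my_program is a placeholder executable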


Expand: GPU Binding
  1. When using --gpus-per-node, or when using srun without any additional options, all tasks on the same node will see the same set of GPU IDs available on the node, starting from 0. Try

    Code Block (bash)
    salloc -p gpu-devel -N2 --gpus-per-node=4 -t 00:05:00 -J "GPU-ID"     # Note: using default --ntasks-per-node=1
    srun nvidia-smi -L
    srun --ntasks-per-node=4 nvidia-smi -L
    srun --ntasks-per-node=2 --gpus-per-node=3 nvidia-smi -L
    exit              # Release salloc
    myqueue           # Check that no "GPU-ID" job still running

    In this case, you can create a wrapper script that uses SLURM_LOCALID (or other variables) to set CUDA_VISIBLE_DEVICES for each task, as sketched after this list. For example, you could use a wrapper script as mentioned in HPE intro_mpi (Section 1), or you could devise an algorithm and use torch.cuda.set_device in PyTorch as demonstrated here.

  2. On the other hand, when using --gpus-per-task or --ntasks-per-gpu to bind resources, the GPU IDs seen by each task will start from 0 (CUDA_VISIBLE_DEVICES) but will be bound to different GPUs/UUIDs. Try

    Code Block (bash)
    salloc -p gpu-devel -N1 --gpus=4 -t 00:05:00 -J "GPU-ID"    # Note: using default --ntasks-per-node=1
    srun --ntasks=4 --gpus-per-task=1 nvidia-smi -L
    srun --ntasks-per-gpu=4 nvidia-smi -L
    exit              # Release salloc
    myqueue           # Check that no "GPU-ID" job still running

    However, it is stated in HPE intro_mpi (Section 1) that using these options with CrayMPICH could introduce an intra-node MPI performance drawback.
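
A minimal sketch of such a wrapper script for case 1, assuming one GPU per task on each node and using SLURM_LOCALID to identify tasks (the file name and mapping are illustrative):

Code Block (bash)
#!/bin/bash
# gpu_bind.sh -- illustrative wrapper: bind each task on a node to one GPU
export CUDA_VISIBLE_DEVICES=${SLURM_LOCALID}    # Local task ID 0..N-1 selects GPU 0..N-1 on that node
exec "$@"                                       # Run the actual command with this binding

It could then be launched as, for example, srun ./gpu_bind.sh ./my_gpu_program.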

Note

For multi-threaded hybrid (MPI + multi-threading) applications, it is essential to specify the -c or --cpus-per-task option for srun to prevent a potential decrease in performance (>10%) due to improper CPU binding.

...

Info

You can test your initial script on compute-devel or gpu-devel partitions, using #SBATCH -t 02:00:00, since they normally have a shorter queuing time.

Your entire job script will run only on the first requested node (${SLURMD_NODENAME}). Only the lines starting with srun can initiate processes and run on the other nodes.
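
As a small illustration of this behaviour in a multi-node job:

Code Block (bash)
echo "Job script runs on ${SLURMD_NODENAME}"    # Executed once, on the first requested node only
srun hostname                                   # Launched by Slurm; runs on every allocated task/node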

...

Example

Installation guide

...