...
...
...
All ThaiSC modules are located at the same module path, so there is no module hierarchy. Executing `module avail` on LANTA will display all available ThaiSC modules. For a more concise list, you can use `module overview`; then, use `module spider <name>` or `module help <name>` to learn more about each specific module.
Users can readily use ThaiSC modules and CPE toolchains to build their applications. Some popular application software is pre-installed as well; for more information, refer to Applications usage.
```
username@lanta-xname:~> module overview
------------- /lustrefs/disk/modules/easybuild/modules/all -------------
[module list truncated: ADIOS2, Amber, Apptainer, ..., cpeCray, cpeGNU, cpeIntel, ..., zstd]
```
...
1. Slurm sbatch header
The `#SBATCH` macro directives can be used to specify `sbatch` options that mostly remain unchanged, such as partition, time limit, billing account, and so on. Options like job name can instead be specified when submitting the script (see Submitting a job). For more details regarding `sbatch` options, please visit Slurm sbatch. Mostly, Slurm `sbatch` options only define and request computing resources that can be used inside a job script. The actual resources used by a software/executable can differ depending on how it is invoked (see Stage 5), although these `sbatch` options are passed to it and become its default options. For GPU jobs, we recommend using either `--gpus` or `--gpus-per-node` to request GPUs at this stage, as this provides the most flexibility for the next stage, GPU binding.
If your application software only supports parallelization by multi-threading, then your software cannot utilize resources across nodes; in this case, `-N`, `-n`/`--ntasks`, and `--ntasks-per-node` should be set to 1.
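As an illustration, a hybrid MPI+OpenMP job header might look like the following minimal sketch. The partition name, account string, and resource numbers are placeholders, not LANTA-specific values; adjust them to your allocation.

```shell
#!/bin/bash
#SBATCH -p compute              # partition (placeholder name)
#SBATCH -N 2                    # number of nodes
#SBATCH --ntasks-per-node=4     # MPI processes per node
#SBATCH --cpus-per-task=16      # CPUs (threads) per MPI process
#SBATCH -t 02:00:00             # time limit
#SBATCH -A projXXXX             # billing account (placeholder)
#SBATCH -J myjob                # job name (optional; can also be set at submission)
```

Options like `-J` that change from run to run can be left out of the header and supplied on the `sbatch` command line instead.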
2. Loading modules
It is advised to load, in the job script, every module that was used when installing the software, although build-only dependencies such as CMake, Autotools, and binutils can be omitted. Additionally, those modules should be of the same version as when they were used to compile the program.
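For example, a job script for a program built with the GNU toolchain could load the same modules again before running it. The module names below are illustrative only; use the exact names and versions from your build.

```shell
# Re-load the modules used at build time (names/versions are examples, not prescriptions)
module load cpeGNU        # CPE toolchain used to compile the program
module load cray-fftw     # runtime library dependency (example)
```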
...
A preliminary check can be performed on the frontend node before submitting the job.
4. Setting environment variables
Some software requires additional environment variables to be set at runtime; for example, the path to the temporary directory. Output environment variables set by Slurm sbatch (see Slurm sbatch - output environment variables) can be used to set software-specific parameters.
For applications with OpenMP threading, `OMP_NUM_THREADS`, `OMP_STACKSIZE`, and `ulimit -s unlimited` are commonly set in a job script. An example is shown below.
...
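A minimal sketch of such settings, deriving the thread count from Slurm's output environment variable `SLURM_CPUS_PER_TASK`; the fallback value and the stack size are illustrative, not recommended values:

```shell
# Derive OpenMP settings from Slurm's output environment variables.
# The fallback value 4 is for illustration only (used if SLURM_CPUS_PER_TASK is unset).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}   # one OpenMP thread per allocated CPU
export OMP_STACKSIZE="32M"                         # per-thread stack size; adjust as needed
ulimit -s unlimited                                # remove the shell's stack size limit
```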
Usually, either `srun`, `mpirun`, `mpiexec`, or `aprun` is required to run MPI programs. On LANTA, the `srun` command MUST be used to launch MPI processes. The table below compares a few options of these commands.
Command | Total MPI processes | CPU per MPI process | MPI processes per node |
---|---|---|---|
srun | -n, --ntasks | -c, --cpus-per-task | --ntasks-per-node |
mpirun/mpiexec | -n, -np | --map-by socket:PE=N | --map-by ppr:N:node |
aprun | -n, --pes | -d, --cpus-per-pe | -N, --pes-per-node |
There is usually no need to explicitly add options to `srun` since, by default, Slurm will automatically derive them from `sbatch`, with the exception of `--cpus-per-task`.
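For instance, a hybrid launch line could pass `-c` explicitly while letting the other options default to the `sbatch` values. The executable name `./myapp` is a placeholder:

```shell
# Pass the per-task CPU count explicitly; -n and node placement default to the sbatch request
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun -c ${SLURM_CPUS_PER_TASK} ./myapp
```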
Note |
---|
For hybrid (MPI + multi-threading) applications, it is essential to specify `-c, --cpus-per-task` for `srun`. |
...
Info |
---|
You can test your initial script on the compute-devel or gpu-devel partitions. |
Your entire job script will run only on the first requested node (${SLURMD_NODENAME}). Only the lines starting with `srun` can launch processes on the other nodes.
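To illustrate, assuming a multi-node allocation:

```shell
hostname         # executes once, on the first node of the allocation only
srun hostname    # srun launches one copy per task, across all allocated nodes
```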
...
Example
Installation guide
...