...
There are mainly three approaches to preparing an environment on LANTA:
- Module system / Manual installation (this guide)
- Conda environment (please visit Mamba / Miniconda3)
- Container (please visit Container (Apptainer / Singularity))

Users should select only one of these approaches. They should not be mixed, since library conflicts may occur.
...
...
...
All ThaiSC modules are located at the same module path, so there is no module hierarchy. Executing module avail on LANTA will display all available ThaiSC modules. For a more concise list, you can use module overview; then, use module whatis <name> or module spider <name> to learn more about each specific module.
Users can readily use ThaiSC modules and CPE toolchains to build their applications. Some popular application software is pre-installed as well; for more information, refer to Applications usage.
```
username@lanta-xname:~> module overview
------------------- /lustrefs/disk/modules/easybuild/modules/all --------------------
ADIOS2 (2)    Amber (1)    Apptainer (1)    Armadillo (2)    AutoDock-vina (1)    ...
...
```
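For example, to learn more about a specific module (Apptainer is used here only for illustration; the output depends on what is installed):

```bash
module whatis Apptainer    # one-line description of the module
module spider Apptainer    # available versions and how to load them
```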
...
2.3 Related topics
A separate page is dedicated to explaining how to manage and install local modules in the user’s home/project paths using EasyBuild → Local module & EasyBuild (In progress)
Useful compiler flags
...
3. Running the software
Every main application software must run on the compute/gpu/memory nodes. The recommended approach is to write a job script and submit it to the Slurm scheduler through the sbatch command.
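For example, assuming the job script is named job.sh (an arbitrary name used here for illustration):

```bash
sbatch job.sh          # submit the job script to the Slurm scheduler
squeue -u $USER        # check the status of your queued/running jobs
```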
...
1. Slurm sbatch header
The #SBATCH macro directives can be used to specify sbatch options that mostly remain unchanged, such as partition, time limit, billing account, and so on. Optional options, like job name, can be specified when submitting the script (see Submitting a job). For more details regarding sbatch options, please visit Slurm sbatch. It should be noted that Slurm sbatch options mostly only define and request the computing resources that can be used inside a job script. The actual resources used by a software/executable can be different depending on how it is invoked (see Stage 5).
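A minimal header might look like the sketch below; the partition, account, resource numbers, and job name are placeholders to be adapted to your own project and software:

```bash
#!/bin/bash
#SBATCH -p compute                 # partition (placeholder; choose one suited to your job)
#SBATCH -N 1                       # number of nodes
#SBATCH --ntasks-per-node=16       # MPI processes per node (placeholder)
#SBATCH --cpus-per-task=1          # CPU cores per MPI process (placeholder)
#SBATCH -t 01:00:00                # time limit (HH:MM:SS)
#SBATCH -A projxxxx                # billing account (placeholder)
#SBATCH -J myjob                   # job name (optional)
```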
...
, although these sbatch options are passed and become the default options for it. For GPU jobs, using either --gpus or --gpus-per-node to request GPUs at this stage will provide the most flexibility for the next stage, GPU binding.
If your application software only supports parallelization by multi-threading, then it cannot utilize resources across nodes; in this case, -N, -n/--ntasks, and --ntasks-per-node should be set to 1, as sketched below.
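The following illustrative directives (placeholders, not a complete header) show a single-node multi-threaded request and a per-node GPU request:

```bash
# Multi-threaded-only software: it cannot span nodes, so keep a single task
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=32         # CPU cores available for threads (placeholder)

# GPU job: requesting GPUs per node keeps the later GPU-binding stage flexible
#SBATCH --gpus-per-node=4          # number of GPUs per node (placeholder)
```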
2. Loading modules
It is advised to load, in the job script, every module that was used when installing the software, although build dependencies such as CMake, Autotools, and binutils can be omitted. Additionally, those modules should be of the same versions as when they were used to compile the program.
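For example, assuming the software was built with the cpeCray toolchain and cudatoolkit (placeholders; load the modules you actually used at build time, with matching versions):

```bash
# Load the same toolchain/modules (and versions) used when the software was built
module load cpeCray
module load cudatoolkit
```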
3. Adding software paths
The Linux OS will not be able to find your program if it is not in its search paths. The commonly used ones are PATH (for executables/binaries), LD_LIBRARY_PATH (for shared libraries), and PYTHONPATH (for Python packages). Users MUST append or prepend to them using syntax such as export PATH=<software-bin-path>:${PATH}; otherwise, prior search paths added by module load and others will disappear.
If <your-install-location> is where your software is installed, then putting the commands below in your job script should be sufficient in most cases.
```bash
export PATH=<your-install-location>/bin:${PATH}
export LD_LIBRARY_PATH=<your-install-location>/lib:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH=<your-install-location>/lib64:${LD_LIBRARY_PATH}
```
...
When executing your program, if you encounter errors such as the executable or its shared libraries not being found, a preliminary check could be performed on the frontend node.
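As a generic illustration (not necessarily the exact commands intended in this guide), two common checks are whether the executable is found through PATH and whether its shared libraries resolve:

```bash
# Is the executable found through PATH?
which <your-executable>

# Are all shared libraries resolved? Look for any "not found" entries.
ldd <your-install-location>/bin/<your-executable>
```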
4. Setting environment variables
Some software requires additional environment variables to be set at runtime, for example, the path to a temporary directory. Output environment variables set by Slurm sbatch (see Slurm sbatch - output environment variables) can be used to set such software-specific parameters.
For applications with OpenMP threading, OMP_NUM_THREADS, OMP_STACKSIZE, and ulimit -s unlimited are commonly set in a job script. An example is shown below.
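A minimal sketch of such settings, assuming --cpus-per-task was requested in the sbatch header and that the stack-size value suits your application:

```bash
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # one thread per allocated CPU core
export OMP_STACKSIZE="32M"                      # per-thread stack size (placeholder value)
ulimit -s unlimited                             # remove the shell stack-size limit
```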
...
Usually, either srun, mpirun, mpiexec, or aprun is required to run MPI programs. On LANTA, the srun command MUST be used to launch MPI processes. The table below compares a few options of these commands.
Command | Total MPI processes | CPU per MPI process | MPI processes per node |
---|---|---|---|
srun | -n, --ntasks | -c, --cpus-per-task | --ntasks-per-node |
mpirun/mpiexec | -n, -np | --map-by socket:PE=N | --map-by ppr:N:node |
aprun | -n, --pes | -d, --cpus-per-pe | -N, --pes-per-node |
There is usually no need to explicitly add options to srun since, by default, Slurm will automatically derive them from sbatch. However, we recommend explicitly adding GPU binding options, such as --gpus-per-task or --ntasks-per-gpu, to srun according to your software specification. Please visit Slurm srun for more details.
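For illustration (executable names and counts are placeholders; choose the binding option that matches how your software uses GPUs):

```bash
# Pure MPI run: srun derives -n, -N, etc. from the sbatch header
srun ./my_mpi_app

# GPU run: add an explicit binding option matching how the software uses GPUs
srun --ntasks=4 --gpus-per-task=1 ./my_gpu_app
```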
...
, with the exception of --cpus-per-task.
Note: For hybrid (MPI + multi-threading) applications, it is essential to specify --cpus-per-task (-c) for srun, since it is not inherited from the sbatch options.
...
Info: You can test your initial script on the compute-devel or gpu-devel partitions, using the -p/--partition sbatch option.
Your entire job script will only run on the first requested node (${SLURMD_NODENAME}). Only the lines starting with srun can launch processes on the other nodes.
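For example, a quick test submission might look like the following (the time limit is an arbitrary placeholder):

```bash
sbatch -p compute-devel -t 00:10:00 job.sh
```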
...
Example
MiniWeather (cpeCray + cudatoolkit)
Installation guide
...