Local modules are a convenient way to organize your self-installed libraries, and EasyBuild is a framework for building and installing software, particularly software with many dependencies. The following sections provide general guidelines on how to use local modules and EasyBuild to install and run software on LANTA.

Updated: May 2024


0. Define local module command (only once)

To utilize local modules, we suggest creating a bash command such as localmod that contains the essential setup in your $HOME/.bashrc. The set of commands below creates a local module directory at LOCALMOD_DIR and then defines the localmod command in your $HOME/.bashrc.

The localmod command must be executed to activate local modules: before installing software in Step 1, and before using local modules in your job script (see Step 5).

LOCALMOD_DIR=$HOME/localmod           # *** EDITABLE, set to an empty path ***

mkdir -p ${LOCALMOD_DIR}              # Create the directory if it does not already exist
cat << Eof >> $HOME/.bashrc           # Append the "localmod" function below to your .bashrc
function localmod(){
  export EASYBUILD_PREFIX=${LOCALMOD_DIR}
  module use \${EASYBUILD_PREFIX}/modules/all
  export EASYBUILD_OPTARCH=x86-milan
  echo "*** Import modules at \${EASYBUILD_PREFIX} on LANTA ***"
}
Eof
  • module use imports extra modules into your Lmod system, i.e., prepends the path to MODULEPATH.

  • The EASYBUILD_PREFIX variable sets the default paths (installation, build, and source directories) used by EasyBuild.

  • The EASYBUILD_OPTARCH variable must be set to x86-milan to be consistent with LANTA hardware (AMD EPYC Milan CPUs).

  • The \$ escapes ensure that ${EASYBUILD_PREFIX} is expanded when localmod runs, not when the function is appended to your .bashrc.

LOCALMOD_DIR can be set to any preferred directory. To share your local modules with your project members, you can use a directory under your project path, e.g., LOCALMOD_DIR=/project/ltxxxxxx-yyyyyy/localmod.

By default, only you, the person who created the directory, have permission to install modules into it. However, your project members can access and use your modules, either by executing module use /project/ltxxxxxx-yyyyyy/localmod/modules/all directly or by defining the localmod command with the same LOCALMOD_DIR as yours.

  • You need to re-login or execute source $HOME/.bashrc for the changes to take effect.

  • The above step needs to be done only once.
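
To verify the setup, you can run a quick check in a new shell (the local module tree will be empty until Step 4 installs something):

source $HOME/.bashrc   # or re-login
localmod               # activate local modules
module avail           # local modules appear here once installed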

1. Prepare environment

note

To install new software, it is strongly recommended to clean your environment by re-logging in or opening a new bash session, then executing module purge.

localmod                 # Activate local modules (defined in Step 0)
module load EasyBuild
eb --show-config         # Verify the EasyBuild configuration

2. Get EasyConfig file

  • Employing the CPE toolchain with the same compiler as utilized in the original EasyConfig file should minimize potential issues and changes in Step 3. That is,

    • GCCcore, GCC, foss, gmpich → cpeGNU

    • intel, iccifort, iimpi → cpeIntel
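
To obtain a starting EasyConfig file, one common approach is to search the EasyConfig repositories bundled with EasyBuild and copy a candidate into your working directory. A minimal sketch (the nano file name is only an example; use whatever --search reports):

module load EasyBuild
eb --search nano                        # list known EasyConfig files matching "nano"
eb --copy-ec nano-7.2-GCC-12.2.0.eb .   # copy the chosen file to the current directory

If your EasyBuild version does not support --copy-ec, you can simply cp the full path printed by --search.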

3. Edit EasyConfig file

Open the .eb file using a text editor such as vi, then edit the following:

  1. toolchain
    Correct the toolchain information, e.g., toolchain = {'name': 'cpeCray', 'version': '23.03'}. This must be consistent with the CPE toolchain and file name chosen in Step 2.

  2. builddependencies and/or dependencies
    Check whether the software dependencies are available as ThaiSC modules by using module spider <dep-name>, e.g., module spider binutils. If available, you can use them by editing each dependency entry ('<dep-name>', '<dep-version>', '<dep-suffix>', <dep-toolchain>), where

  • Use ('<name>', '<version>', '[suffix]', SYSTEM), such as ('binutils', '2.40', '', SYSTEM) and ('Mako', '1.2.4', '-cray-python-3.10.10', SYSTEM), if the module name does not contain cpeXXX.

  • For cray-* modules, use ('cray-<name>/<version>', EXTERNAL_MODULE), where <version> should be the default version used by the CPE toolchain. This ensures that they are consistent across your module chain. A summary table is provided below.
    Additionally, you have to order cray-* modules according to their hierarchy as well.

cray-* module               cpe-cuda/23.03    cpe-cuda/23.09
cray-hdf5-parallel          1.12.2.3          1.12.2.7
cray-netcdf-hdf5parallel    4.9.0.3           4.9.0.7
cray-parallel-netcdf        1.12.3.3          1.12.3.7
cray-python                 3.9.13.1          3.10.10
cray-R                      4.2.1.1           4.2.1.2

  • If you want to add extra options/flags at the configure, build, or install stages, you can use preconfigopts, configopts, prebuildopts, buildopts, preinstallopts, and installopts, as in the sketch below.
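
Combining the edits above, a minimal sketch of the modified portions of a hypothetical .eb file (names, versions, and flags are placeholders, not a definitive recipe):

toolchain = {'name': 'cpeCray', 'version': '23.03'}   # consistent with the file chosen in Step 2

builddependencies = [
    ('binutils', '2.40', '', SYSTEM),                 # ThaiSC module without cpeXXX in its name
]
dependencies = [
    ('cray-python/3.9.13.1', EXTERNAL_MODULE),        # cray-* module; version taken from the table above
]

configopts = '--enable-shared'                        # placeholder extra configure flag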

note

As a demonstration, the .eb files for installing GNU nano (text editor) and feh (image viewer) are available at /project/common/EasyBuild.

4. Install the software

Commonly used eb options (an example invocation is sketched after the notes below):

  • -D:
    Do a short dry run to check the EasyConfig hierarchy. No modules will be installed.

  • --robot-paths=$(pwd):
    Add all EasyConfig files in the current directory to the EasyBuild search path.

  • -r:
    Robot. Recursively install dependencies prior to the target software.
    Note: you may use -r $(pwd) instead of -r --robot-paths=$(pwd).

  • --parallel=N:
    Build the software using N CPU cores, e.g., make -j N

  • --trace --tmpdir=$(pwd): (Optional)
    Print more progress information and save logs to the current path. It is recommended to use them together.

  • For more information, execute eb --help, eb -a, and the other listing options shown by eb --help | grep 'avail'.

Using --parallel=N with N < 8 is essential, since the default of 128 would put a heavy burden on the frontend node, prompting admins to kill your process.

Some software may not support parallel builds; for such software, --parallel=1 is recommended.
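
Putting these options together, a typical two-step invocation might look like the following (the .eb file name is a placeholder; see the demo files in /project/common/EasyBuild):

eb nano-7.2-cpeGNU-23.03.eb -D -r --robot-paths=$(pwd)                                      # dry run first
eb nano-7.2-cpeGNU-23.03.eb -r --robot-paths=$(pwd) --parallel=4 --trace --tmpdir=$(pwd)    # then install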

5. Job script template for using local modules

note

Use module --ignore-cache avail to check if the module is available.

Don't forget to enable local modules by executing the localmod command defined in Step 0, as done in the template below.

#!/bin/bash
#SBATCH -p compute              # Partition
#SBATCH -N 1                    # Number of nodes
#SBATCH --ntasks-per-node=128   # Number of MPI processes per node
#SBATCH --cpus-per-task=1       # Number of OpenMP threads per MPI process
#SBATCH -t 5-00:00:00           # Job runtime limit
#SBATCH -J MyJob                # Job name
#SBATCH -A ltxxxxxx             # Account *** {USER EDIT} *** 

source $HOME/.bashrc
localmod

module purge
module load <your-local-module>

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun -c${SLURM_CPUS_PER_TASK} <software-command>
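
Assuming the script above is saved as job.sh (any name works), submit it with:

sbatch job.sh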
