ParaView

ParaView is a well-known open-source post-processing tool designed for use on HPC clusters. It supports interactive remote visualization and Python batch processing. With ParaView, users can directly visualize large scientific datasets on LANTA, utilizing the powerful processors and the abundant memory on its compute nodes.

Official website: https://www.paraview.org/

Updated: Sep 2024



Modules

Module name : ParaView/5.12.1-cpeCray-23.03-NoGUI
Description : Server-side ParaView (pvserver, pvbatch, pvpython)
Note        : CPU rendering using OSMesa. Built with MPI and OpenMP.

To use the remote rendering approach, you need to install client-side ParaView of the same version (e.g., 5.12.1) on your local machine: https://www.paraview.org/download/.
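As a quick check before preparing a job script, you can load the module on LANTA and confirm that the server-side executables are available. This is a minimal sketch, assuming the standard module commands on LANTA and that the binaries accept the usual ParaView --version flag.

    module purge
    module load ParaView/5.12.1-cpeCray-23.03-NoGUI

    # Each command should report version 5.12.1
    pvserver --version
    pvbatch --version
    pvpython --version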

I. Remote visualization using pvserver

In this approach, you run pvserver on LANTA while running paraview on your local machine. Data processing and rendering are handled by pvserver, and the results are then sent to your paraview GUI.

  1. Start a pvserver job on LANTA

    #!/bin/bash
    #SBATCH -p compute-devel            # Partition
    #SBATCH -N 1                        # Number of nodes
    #SBATCH --ntasks-per-node=16        # Number of MPI processes per node
    #SBATCH --cpus-per-task=8           # Number of OpenMP threads per MPI process
    #SBATCH -t 02:00:00                 # Job runtime limit
    #SBATCH -J pvserver                 # Job name
    #SBATCH -A ltxxxxxx                 # SLURM account *** {USER EDIT} ***

    module purge
    module load ParaView/5.12.1-cpeCray-23.03-NoGUI

    PV_TOKEN=$(shuf -n 1 -i 1-9999)
    PV_PORT=$(shuf -n 1 -i 10000-49151)

    cat << EoF
    --- Starting ParaView server ---
    LANTA username : ${USER}
    Job head node  : ${SLURMD_NODENAME}
    Connect ID     : ${PV_TOKEN}
    Server port    : ${PV_PORT}
    Note:
    - Please wait for 'Waiting for client...' from pvserver before trying to connect.
    - Please wait up to 2-3 minutes for the connection to be fully established.
    - You can safely ignore 'omp_set_nested' and 'CRAYBLAS' warnings.
    --- pvserver output ---
    EoF

    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

    srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} --unbuffered \
        pvserver --mpi --force-offscreen-rendering \
                 --hostname=${SLURMD_NODENAME} \
                 --connect-id=${PV_TOKEN} \
                 --server-port=${PV_PORT} 2>&1
    1. Create a job script, e.g., submit.sh, using the above template with your project account substituted.

    2. Execute sbatch submit.sh to submit your job to the Slurm scheduler; the corresponding job ID is returned.

    3. Wait for the job to run, then print the job output, e.g., cat slurm-<your-job-id>.out. You should get output in the format below, containing the information needed for the next stage (see the shell sketch after the sample output for one way to submit and monitor the job).
      Warning: The job will keep running until it times out if no paraview client connects to it.

      --- Starting ParaView server ---
      LANTA username : <username>
      Job head node  : <lanta-hostname>
      Connect ID     : <4-digit-token-number>
      Server port    : <5-digit-port-number>
      Note:
      - Please wait for 'Waiting for client...' from pvserver before trying to connect.
      - Please wait up to 2-3 minutes for the connection to be fully established.
      - You can safely ignore 'omp_set_nested' and 'CRAYBLAS' warnings.
      --- pvserver output ---
      Waiting for client...
      Connection URL: cs://<lanta-hostname>:<5-digit-port-number>
      Accepting connection(s): <lanta-xname>:<5-digit-port-number>
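      As referenced in step 3, the following is a minimal sketch of one way to submit the job and watch its output from a LANTA login shell; it assumes the job script is named submit.sh and that the default slurm-<your-job-id>.out output file name is used.

        sbatch submit.sh                    # prints: Submitted batch job <your-job-id>
        squeue -u ${USER}                   # wait until the job state is R (running); myqueue also works
        tail -f slurm-<your-job-id>.out     # watch until 'Waiting for client...' appears (Ctrl-C to stop)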
  2. Connect paraview on your local machine to pvserver on LANTA

    1. [First time] Download the appropriate .pvsc server-configuration file below to your local machine.

      • For Linux or macOS (uses xterm and ssh).

      • For Windows (uses plink.exe from PuTTY).

    2. Run client-side ParaView (of the same version as used in the above job script) on your local machine, then click the Connect icon.

    3. Select the LANTA configuration, then click “Connect”.
      [First time] Click “Load Servers”, then select the .pvsc file that you previously downloaded to import the LANTA configuration.

    4. Insert the information output by your pvserver job (Step 1.3), then click “OK”.
      Note: Wait for the Waiting for client... line to appear in your pvserver job output before completing this step.

    5. A window should prompt you to enter your SSH login password and 2FA verification code (on Windows, press Enter once more afterwards); you can ignore the “Not Responding” and other messages. The connection is fully established when the pipeline name on the left panel changes to LANTA. Once it does, you are ready to use ParaView remotely.

    6. After connecting, the previous window should show your job status, including the remaining job runtime, which you should check regularly. This window represents your connection to LANTA and your job.

    7. When you are done using ParaView, click the Disconnect icon. The status window should close itself within 10 seconds (on Windows, press a key). You should also verify that your pvserver job has terminated, using the myqueue command in a LANTA terminal (see the sketch after the notes below).

      *** Additional note ***

      1. Depending on your internet speed, a “Not Responding” message window may appear when you open a large data file. Before clicking any button, monitor the progress bar at the bottom for 2-5 minutes.

      2. If the file is successfully opened but nothing appears, try adjusting the object size, scale, and camera.

      3. We recommend explicitly selecting the file format before opening it.
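      As mentioned in step 7, you should confirm on LANTA that the pvserver job has actually ended after you disconnect. Below is a minimal sketch using the myqueue helper mentioned above and standard Slurm commands; <your-job-id> is the ID returned by sbatch.

        myqueue                    # the pvserver job should no longer be listed (squeue -u ${USER} also works)
        scancel <your-job-id>      # if the job is still running, cancel it to free the allocation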

II. Parallel Python interface and scripting with pvbatch

Alternatively, users can run ParaView without a GUI through its Python interface and render a series of images in a single batch job.

  1. Create a Python trace

    1. The easiest way to start is to generate a baseline Python script from a short ParaView GUI session.

    2. Click ‘Tools’ → ‘Start Trace’; every action you perform afterwards is recorded as an equivalent Python instruction. After enabling the trace, proceed as normal and render an image.

    3. When finished, click ‘Tools’ → ‘Stop Trace’; your baseline Python script should be displayed. Most of the lines are commented or self-explanatory. Save the script.

    4. With some basic Python knowledge, you should be able to adjust the script to render several images at once. An example, both before and after adjustment, is provided at /project/common/ParaView/PVBatch/. Also see the documents in the Further Reading section below. A sketch for testing the adjusted script before batch submission follows this list.
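    Before submitting a full pvbatch job, it can help to test the adjusted script on a small case. The sketch below shows one way to do this in a short interactive Slurm session; the srun options and the script name my_trace.py are illustrative assumptions, not fixed LANTA conventions.

      # Request a short interactive shell on a compute node (adjust partition/account as needed)
      srun -p compute-devel -N 1 -n 1 -c 4 -t 00:30:00 -A ltxxxxxx --pty bash

      module purge
      module load ParaView/5.12.1-cpeCray-23.03-NoGUI

      # Run the traced script serially with off-screen rendering
      pvpython --force-offscreen-rendering my_trace.py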

  2. Run pvbatch

    1. Create a pvbatch job script (e.g., submitPVBatch.sh) using the below template.

      #!/bin/bash
      #SBATCH -p compute-devel            # Partition
      #SBATCH -N 1                        # Number of nodes
      #SBATCH --ntasks-per-node=4         # Number of MPI processes per node
      #SBATCH --cpus-per-task=2           # Number of OpenMP threads per MPI process
      #SBATCH -t 02:00:00                 # Job runtime limit
      #SBATCH -J pvbatch                  # Job name
      #SBATCH -A ltxxxxxx                 # SLURM account *** {USER EDIT} ***

      module purge
      module load ParaView/5.12.1-cpeCray-23.03-NoGUI

      export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

      srun -n${SLURM_NTASKS} -c${SLURM_CPUS_PER_TASK} --unbuffered \
          pvbatch --mpi --force-offscreen-rendering <your-python-script>
    2. Submit the script to the system by executing sbatch submitPVBatch.sh.
      *** Additional note ***

      1. If you encounter slurmstepd: error: Detected ... oom-kill event(s) ... killed by the cgroup out-of-memory handler, try increasing --cpus-per-task or --ntasks-per-node, adding --mem-per-cpu=3800M, or using the memory partition (see the sketch after these notes).

      2. If you render a large number of images, don’t forget to extend the job time limit in the job script.
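      For note 1, the lines below sketch the kinds of header adjustments that may relieve out-of-memory kills. The 3800M value comes from the note above; the partition name memory is an assumption and should be checked against LANTA's documentation, as should the exact task/CPU counts for your data.

        #SBATCH --ntasks-per-node=8        # more MPI ranks spread the data over more memory
        #SBATCH --cpus-per-task=4          # more CPUs per task also increase the job's memory share
        #SBATCH --mem-per-cpu=3800M        # or request memory per CPU explicitly
        ##SBATCH -p memory                 # or switch to the large-memory partition (remove one '#' to enable)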

Further reading


Contact Us
ThaiSC support service : thaisc-support@nstda.or.th