Relion can be used on the Prometheus supercomputer in three ways:
- inside a graphical interactive job using the pro-viz service (main documentation, in Polish: Obliczenia w trybie graficznym: pro-viz)
- in a SLURM batch job, through a SLURM script submitted from the command line
- in a SLURM batch job submitted from the Relion GUI, started via the pro-viz service in a dedicated partition
Interactive Relion job with Relion GUI
To start an interactive Relion job with access to the Relion GUI:
- Log into a Prometheus login node:
  ssh <login>@pro.cyfronet.pl
- Load the pro-viz module:
  module load tools/pro-viz
- Start a pro-viz job by submitting it to the queue.
  CPU-only job:
  pro-viz start -N <number-of-nodes> -P <cores-per-node> -p <partition/queue> -t <maximal-time> -m <memory>
  GPU job:
  pro-viz start -N <number-of-nodes> -P <cores-per-node> -g <number-of-gpus-per-node> -p <partition/queue> -t <maximal-time> -m <memory>
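  For illustration, hypothetical invocations with concrete values might look as follows. The numbers and the time format are placeholders chosen for the example; the partition names are taken from the batch scripts below, but whether they apply to pro-viz jobs depends on the site configuration, so check the pro-viz documentation for values valid for your grant:
  # one full node (24 cores) for 12 hours with 110GB of memory (illustrative values)
  pro-viz start -N 1 -P 24 -p plgrid -t 12:00:00 -m 110GB
  # the same, plus 2 GPUs per node on a GPU partition (illustrative values)
  pro-viz start -N 1 -P 24 -g 2 -p plgrid-gpu -t 12:00:00 -m 110GB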
- Check the status of the submitted job:
  pro-viz list
- Get the password to the pro-viz session (once the job is running):
  pro-viz password <JobID>
  Example output:
  Web Access link: https://viz.pro.cyfronet.pl/go?c=<hash>&token=<token>
  link is valid until: Sun Nov 14 02:04:02 CET 2021
  session password (for external client): <password>
  full commandline (for external client): vncviewer -SecurityTypes=VNC,UnixLogin,None -via <username>@pro.cyfronet.pl -password=<password> <worker-node>:<display>
- Connect to the graphical pro-viz session, either:
  - using the web access link obtained in the previous step, or
  - using a VNC client (e.g. TurboVNC); client configuration is described in Obliczenia w trybie graficznym: pro-viz (in Polish).
- Set up the Relion environment. When connected to the GUI, open a terminal and load the Relion module:
  module load plgrid/tools/relion
  Then start the Relion GUI in the background:
  relion &
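  If a specific version is needed, it can be requested explicitly; for example, the version used in the batch scripts below:
  module load plgrid/tools/relion/3.1.2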
- Use the Relion GUI for computations.
Relion in SLURM batch jobs
Most Relion jobs can be run as batch jobs using SLURM:
- Log into a Prometheus login node:
  ssh <login>@pro.cyfronet.pl
- Move to the Relion project directory:
  cd $SCRATCH/<relion-project>
  Usage of filesystems: during computations, the Relion project should be stored on the $SCRATCH filesystem on Prometheus (more info: https://kdm.cyfronet.pl/portal/Prometheus:Basics#Disk_storage). For longer-term storage, use the $PLG_GROUPS_STORAGE/<team_name> filesystem.
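  For example, once computations are finished, results can be copied from scratch to group storage (a sketch; <relion-project> and <team_name> are the same placeholders as above):
  cp -r $SCRATCH/<relion-project> $PLG_GROUPS_STORAGE/<team_name>/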
- Submit the job:
  sbatch script.slurm
Example CPU-only SLURM script:
#!/bin/bash
# Number of allocated nodes
#SBATCH --nodes=1
# Number of MPI processes per node
#SBATCH --ntasks-per-node=4
# Number of threads per MPI process
#SBATCH --cpus-per-task=6
# Partition
#SBATCH --partition=plgrid
# Requested maximal walltime
#SBATCH --time=0-1
# Requested memory per node
#SBATCH --mem=110GB
# Computational grant
#SBATCH --account=<name-of-grant>

export RELION_SCRATCH_DIR=$SCRATCHDIR

module load plgrid/tools/relion/3.1.2

mpirun <relion-command>
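For illustration only, a hypothetical 2D classification command in place of <relion-command> might look like this; the file names and numeric values are placeholders, and in practice the Relion GUI generates the full command line, which typically includes more options:
# hypothetical example: 2D classification into 50 classes, placeholder inputs,
# with the thread count taken from the SLURM allocation
mpirun relion_refine_mpi --o Class2D/job001/run --i particles.star \
  --K 50 --particle_diameter 200 --j $SLURM_CPUS_PER_TASK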
Example GPU SLURM script:
#!/bin/bash
# Number of allocated nodes
#SBATCH --nodes=1
# Number of MPI processes per node
#SBATCH --ntasks-per-node=4
# Number of threads per MPI process
#SBATCH --cpus-per-task=6
# Partition
#SBATCH --partition=plgrid-gpu
# Number of GPUs per node
#SBATCH --gres=gpu:2
# Requested maximal walltime
#SBATCH --time=0-1
# Requested memory per node
#SBATCH --mem=110GB
# Computational grant
#SBATCH --account=<name-of-grant>

export RELION_SCRATCH_DIR=$SCRATCHDIR

module load plgrid/tools/relion/3.1.2

mpirun <relion-command> --gpu $CUDA_VISIBLE_DEVICES
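After submission, the job can be monitored with standard SLURM commands; sbatch prints the job ID, and by default the job output is written to slurm-<JobID>.out in the submission directory:
# list your queued and running jobs
squeue -u $USER
# show detailed information about a single job
scontrol show job <JobID>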