
Configuration of cryoSPARC environment

License

Each user should obtain their own license from https://cryosparc.com/download and apply for membership in the plggcryospar team in the PLGrid Portal.

To get access to the cryoSPARC installation on the Prometheus cluster:

  1. Apply for membership in the plggcryospar team in the PLGrid Portal and, through Helpdesk PLGrid, ask for registration in Cyfronet's internal cryoSPARC users database and for a dedicated port for access to the cryoSPARC master.
  2. Log in to the Prometheus login node

    Log into Prometheus login node
    ssh <login>@pro.cyfronet.pl
  3. Load the cryoSPARC module using the command

    Set cryoSPARC environment
    module add plgrid/apps/cryosparc/3.1
  4. Run the cryoSPARC configuration script. It configures your cryoSPARC environment, creates your user in the cryoSPARC database, and sets up two lanes for external jobs: prometheus-gpu, which uses the plgrid-gpu partition for GPU jobs, and prometheus-gpu-v100, which uses the plgrid-gpu-v100 partition. Both lanes use the plgrid partition for CPU-only jobs. As arguments, pass your license ID, your e-mail and password (used to log in to the cryoSPARC web application), and your first and last name.

    Configure cryoSPARC
    cryosparc_configuration --license <XXXX> --email <your-email> --password <password> --firstname <Givenname> --lastname <Surname> 

    Access problems

    If you get a "cryosparc_configuration: command not found" error, run in the terminal

    newgrp plggcryospar

    to start a new subshell with the permissions of the plggcryospar team.
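
    You can also verify that your account already belongs to the group. A minimal sketch using standard Linux commands:

    Group membership check
    id -nG | tr ' ' '\n' | grep plggcryospar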

    Optional lanes/clusters

    You can create additional lanes/clusters with a different maximal SLURM job duration, as shown in the sketch after this list:

    • copy the cluster config cluster_info.json and the script template cluster_script.sh from the directory /net/software/local/cryosparc/3.1/cyfronet to your working directory
    • modify the files accordingly
      • in cluster_info.json change the name of the lane/cluster to avoid overwriting the default prometheus lane
      • in cluster_script.sh change --time, --partition or other parts of the script template accordingly
    • run cryosparcm cluster connect <name-of-cluster-from-cluster_info.json> to add the lane/cluster
    • repeat the above steps to create another lane if necessary
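
    A minimal sketch of this workflow, under the assumption that you want an extra GPU lane with a different walltime (the lane name prometheus-gpu-72h is only an example; paths and commands are those listed above):

    Example: creating an additional lane
    # copy the provided templates to your working directory
    cp /net/software/local/cryosparc/3.1/cyfronet/cluster_info.json .
    cp /net/software/local/cryosparc/3.1/cyfronet/cluster_script.sh .
    # edit cluster_info.json: set a new "name" (e.g. prometheus-gpu-72h) so the default prometheus lane is not overwritten
    # edit cluster_script.sh: adjust --time, --partition and other options as needed
    # register the new lane/cluster under the name from cluster_info.json
    cryosparcm cluster connect prometheus-gpu-72h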

    Access to GPU partitions

    To use GPUs on the Prometheus cluster you have to apply for GPU resources in the PLGrid Portal.

    To check whether you have access to a partition, run the commands below on a Prometheus login node and check whether your accounts are on the AllowAccounts list:

    • partition plgrid-gpu

      scontrol show partition plgrid-gpu | grep Accounts
    • partition plgrid-gpu-v100

      scontrol show partition plgrid-gpu-v100 | grep Accounts

    If you do not have access to one or both of the above partitions, please contact Helpdesk PLGrid. A quick way to run both checks at once is sketched below.
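
    A minimal sketch that runs both checks at once, using standard SLURM commands (sacctmgr lists your own accounts, scontrol the accounts allowed on each partition):

    Check GPU partition access
    # your SLURM accounts
    sacctmgr show user $USER withassoc format=Account%30
    # accounts allowed on each GPU partition
    scontrol show partition plgrid-gpu | grep AllowAccounts
    scontrol show partition plgrid-gpu-v100 | grep AllowAccounts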

  5. Your cryoSPARC master setup is now done. All subsequent cryoSPARC master instances should be run in batch jobs.
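
    Optionally, before submitting the master job you can inspect the resulting setup with cryosparcm status (a standard cryoSPARC command that prints the instance configuration and whether any master processes are running). Use it on the login node only for inspection, not to start the master there:

    Inspect the configuration (optional)
    module add plgrid/apps/cryosparc/3.1
    cryosparcm status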


cryoSPARC master job

The cryoSPARC master must not be run on the login nodes of the Prometheus cluster. It should be run in the plgrid-services partition through the SLURM job described below.

Automated cryoSPARC master in a batch job

The cryoSPARC master can be started through a batch job.

cryosparc-master.slurm
#!/bin/bash
#SBATCH --partition plgrid-services
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 6
#SBATCH --time 14-0
#SBATCH -C localfs
#SBATCH --dependency=singleton
#SBATCH --job-name cryosparc-master
#SBATCH --output cryosparc-master-log-%J.txt
 
## Load environment for cryoSPARC
module add plgrid/apps/cryosparc/3.1
 
## get tunneling info
ipnport=$CRYOSPARC_BASE_PORT
ipnip=$(hostname -i)
user=$USER
 
## print tunneling instructions to cryosparc-master-log-<JobID>.txt
echo -e "
    Copy/Paste this in your local terminal to ssh tunnel with remote
    -----------------------------------------------------------------
    ssh -o ServerAliveInterval=300 -N -L $ipnport:$ipnip:$ipnport ${user}@pro.cyfronet.pl
    -----------------------------------------------------------------
 
    Then open a browser on your local machine to the following address
    ------------------------------------------------------------------
    localhost:$ipnport
    ------------------------------------------------------------------
    "
 
## start a cryoSPARC master server
cryosparcm restart

## loop which keeps the job running until 'scancel <JobID>' by the user or automatic kill by SLURM at the end of the requested walltime
while true; do sleep 600; done

The above script is located at /net/software/local/cryosparc/3.1/cyfronet/cryosparc-master.slurm. You can copy it to your working directory:

SLURM script template copy command
cp /net/software/local/cryosparc/3.1/cyfronet/cryosparc-master.slurm .
  1. Submit job

    Job submission
    sbatch cryosparc-master.slurm

    cryoSPARC master job

    Each user should run only one cryoSPARC master job in the plgrid-services partition. A sketch for finding and stopping this job is given after these steps.

  2. Check whether the job has started

    Job status
    squeue -j <JobID>
  3. Common states of jobs

    • PD - PENDING - Job is awaiting resource allocation.
    • R - RUNNING - Job currently has an allocation and is running.
    • CF - CONFIGURING - Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting). On Prometheus the CF state can last up to 8 minutes when the allocated nodes have been in power-save mode.
    • CG - COMPLETING  - Job is in the process of completing. Some processes on some nodes may still be active.
  4. Make a tunnel

    In your working directory, display the job's log file:

    Listing of job's log
    cat cryosparc-master-log-<JobID>.txt

    where `<JobID>` is the job id displayed by sbatch when you submit the job, e.g. `cat cryosparc-master-log-49145683.txt`

    It will show you something like this:

    Example of job log
    Copy/Paste this in your local terminal to ssh tunnel with remote
    -----------------------------------------------------------------
    ssh -o ServerAliveInterval=300 -N -L 40100:172.20.68.193:40100 plgusername@pro.cyfronet.pl
    -----------------------------------------------------------------
    Then open a browser on your local machine to the following address
    ------------------------------------------------------------------
    localhost:40100
    ------------------------------------------------------------------
  5. Execute the given command in another shell on your local computer to make the tunnel:

    Tunneling
    ssh -o ServerAliveInterval=300 -N -L 40100:172.20.68.193:40100 plgusername@pro.cyfronet.pl
  6. Log in to the cryoSPARC web application - open `localhost:40100` in your browser.
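
A minimal sketch for finding the master job and stopping it when you no longer need it (standard SLURM commands; the job name matches --job-name in the script above and <JobID> is the id reported by sbatch):

Find and stop the cryoSPARC master job
# find your running master job (there should be at most one per user)
squeue -u $USER --name=cryosparc-master
# stop it explicitly; otherwise SLURM kills it at the end of the requested walltime
scancel <JobID>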




