
...

  1. Apply for membership in the plggcryospar team in the PLGrid Portal and, through the PLGrid Helpdesk, ask for registration in Cyfronet's internal cryoSPARC users database and for a dedicated port for access to the cryoSPARC master.
  2. Log in to the Prometheus login node

    Code Block
    languagebash
    titleLog into Prometheus login node
    ssh <login>@pro.cyfronet.pl


  3. Load the cryoSPARC module using the command

    Code Block
    languagebash
    titleSet cryoSPARC environment
    module add plgrid/apps/cryosparc/3.2


  4. Run the cryoSPARC configuration script. It will configure your cryoSPARC environment, create your user in the cryoSPARC database, and configure two lanes for external jobs: prometheus-gpu, which uses the plgrid-gpu partition for GPU jobs, and prometheus-gpu-v100, which uses the plgrid-gpu-v100 partition. Both lanes use the plgrid partition for CPU-only jobs. As arguments, pass your license ID, your e-mail and password (these will be used to log in to the cryoSPARC web app), and your first and last name.

    Code Block
    languagebash
    titleConfigure cryoSPARC
    cryosparc_configuration --license <XXXX> --email <your-email> --password <password> --firstname <Givenname> --lastname <Surname> 


    Info
    titleAccess problems

    If you get a "cryosparc_configuration: command not found" error, run the following in your terminal

    Code Block
    languagebash
    newgrp plggcryospar

    to start a new subshell with the permissions of the plggcryospar team.
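
    As a quick check, you can also list your Unix groups on the login node to see whether your account already belongs to the plggcryospar team:

    Code Block
    languagebash
    titleCheck group membership
    id -nG | grep plggcryospar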


    Info
    titleAccess to GPU partitions

    To use GPUs on the Prometheus cluster, you have to apply for GPU resources in the PLGrid Portal.

    To check whether you have access to a given partition, run the command below on the Prometheus login node and check whether your PLGrid computational grants appear on the AllowAccounts list:

    • partition plgrid-gpu

      Code Block
      languagebash
      scontrol show partition plgrid-gpu | grep Accounts | grep <PLGrid grant name>


    • partition plgrid-gpu-v100

      Code Block
      languagebash
      scontrol show partition plgrid-gpu-v100 | grep Accounts | grep <PLGrid grant name>


    If you do not have access to one or both of the above partitions, check your PLGrid computational grant details in the PLGrid Portal. If your grant lists GPU resources but access to the required queue or queues is still not possible, please contact the Helpdesk at https://helpdesk.plgrid.pl.


  5. Your cryoSPARC master setup is now complete. All subsequent cryoSPARC master instances should be run in batch jobs.

...

Code Block
languagebash
titlecryosparc-master.slurm
#!/bin/bash
#SBATCH --partition plgrid-services
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 4
#SBATCH --mem 10GB
#SBATCH --time 14-0
#SBATCH -C localfs
#SBATCH --dependency=singleton
#SBATCH --job-name cryosparc-master
#SBATCH --output cryosparc-master-log-%J.txt

## Load environment for cryoSPARC
module add plgrid/apps/cryosparc/3.2

## get tunneling info
ipnport=$CRYOSPARC_BASE_PORT
ipnip=$(hostname -i)
user=$USER

## print tunneling instructions to cryosparc-master-log-<JobID>.txt
echo -e "
    Copy/Paste this in your local terminal to ssh tunnel with remote
    -----------------------------------------------------------------
    ssh -o ServerAliveInterval=300 -N -L $ipnport:$ipnip:$ipnport ${user}@pro.cyfronet.pl
    -----------------------------------------------------------------

    Then open a browser on your local machine to the following address
    ------------------------------------------------------------------
    localhost:$ipnport
    ------------------------------------------------------------------
    "

## start a cryoSPARC master server
cryosparcm restart

## loop which keeps the job running until the user runs scancel <JobID> or SLURM kills the job at the end of the requested walltime
while true; do sleep 600; done
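
A typical way to use this script is to submit it as a batch job with sbatch and then read the ssh tunnelling instructions from the log file it writes. The file name cryosparc-master.slurm below matches the title above; adjust it if you saved the script under a different name, and replace <JobID> with the job ID reported by sbatch.

Code Block
languagebash
titleSubmit the cryoSPARC master job (example)
## submit the batch script
sbatch cryosparc-master.slurm
## check that the master job is running
squeue -u $USER --name=cryosparc-master
## print the ssh tunneling instructions written by the job
cat cryosparc-master-log-<JobID>.txt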

...

  1. Start an interactive job using the command

    Code Block
    languagebash
    titleInteractive job
    srun -p plgrid-services --nodes=1 --ntasks=1 --time=0-1 --pty bash


  2. Load the cryoSPARC environment using modules

    Code Block
    languagebash
    titleLoad cryoSPARC environment
    module add plgrid/apps/cryosparc/3.2


  3. Copy the cluster config cluster_info.json and the script template cluster_script.sh from the $CRYOSPARC_ADDITIONAL_FILES_DIR directory to your working directory

    Code Block
    languagebash
    titleCopy files
    cp $CRYOSPARC_ADDITIONAL_FILES_DIR/cluster_info.json .
    cp $CRYOSPARC_ADDITIONAL_FILES_DIR/cluster_script.sh .


  4. Modify the files as needed:
    1. in cluster_info.json, change the name of the lane/cluster to avoid overwriting the default prometheus* lanes (see the sketch below)
    2. in cluster_script.sh, change --time, --partition, or other parts of the script template as needed
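
    For illustration, one minimal way to rename the lane is to change the value of the "name" field in cluster_info.json, e.g. with sed. The lane name my-custom-lane below is only a placeholder; use any name that does not clash with the default prometheus* lanes.

    Code Block
    languagebash
    titleRename lane in cluster_info.json (example)
    ## replace the value of the "name" field with your own lane name
    sed -i 's/"name"[[:space:]]*:[[:space:]]*"[^"]*"/"name": "my-custom-lane"/' cluster_info.json
    ## verify the change
    grep '"name"' cluster_info.json
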
  5. Start the cryoSPARC master

    Warning
    titlecryoSPARC master job

    There should be only one job running the cryoSPARC master per user. Therefore, stop any job running the cryoSPARC master before this step.


    Code Block
    languagebash
    titlerun cryoSPARC master
    cryosparcm restart



  6. Run the command cryosparcm cluster connect <name-of-cluster-from-cluster_info.json> to add the lane/cluster

    Code Block
    languagebash
    titleadd lane
    cryosparcm cluster connect <name-of-cluster-from-cluster_info.json>
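
    Optionally, you can check from the command line which lanes the master currently knows about. The call below assumes the standard cryosparcm cli interface of cryoSPARC:

    Code Block
    languagebash
    titleList configured lanes (example)
    ## print the scheduler targets/lanes registered in the master
    cryosparcm cli "get_scheduler_targets()"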


  7. Repeat the above steps to create another lane if necessary
  8. Stop the cryoSPARC master

    Code Block
    languagebash
    titlestop cryoSPARC master
    cryosparcm stop


  9. End the interactive job

    Code Block
    languagebash
    titleend interactive job
    exit


...