...

  1. Apply for membership in the plggcryospar team in Portal PLGrid and, through Helpdesk PLGrid, ask for registration in Cyfronet's internal cryoSPARC users database and for a dedicated port for access to the cryoSPARC master.
  2. Log in to the Prometheus login node

    Code Block
    languagebash
    ssh <login>@pro.cyfronet.pl


  3. Load the cryoSPARC module using the command

    Code Block
    languagebash
    module add plgrid/apps/cryosparc/3.1


  4. Run the cryoSPARC configuration script. It will configure your cryoSPARC environment, create your user in the cryoSPARC database, and configure two lanes for external jobs: prometheus-gpu, which uses the plgrid-gpu partition for GPU jobs, and prometheus-gpu-v100, which uses the plgrid-gpu-v100 partition. Both lanes use the plgrid partition for CPU-only jobs. As arguments, pass your license ID, your e-mail and password (they will be used to log in to the cryoSPARC web app), and your first and last name.

    Code Block
    languagebash
    cryosparc_configuration --license <XXXX> --email <your-email> --password <password> --firstname <Givenname> --lastname <Surname> 


    Info
    titleCommand not found

    If you get a "cryosparc_configuration: command not found" error, run the following in a terminal

    Code Block
    languagebash
    newgrp plggcryospar

    to start a new subshell with the permissions of the plggcryospar team.


    Info
    titleOptional lanes/clusters

    You can create additional lanes/clusters with a different maximum SLURM job duration:

    • copy the cluster config cluster_info.json and the script template cluster_script.sh from the directory /net/software/local/cryosparc/3.1/cyfronet to your working directory
    • modify the files accordingly
      • in cluster_info.json, change the name of the lane/cluster to avoid overwriting the default prometheus lane
      • in cluster_script.sh, change --time, --partition, or other parts of the script template as needed
    • run cryosparcm cluster connect <name-of-cluster-from-cluster_info.json> to add the lane/cluster
    • repeat the above steps to create another lane if necessary (a sketch of the whole procedure is shown below)
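
    For example, a lane with a 72-hour time limit could be added as follows. This is only a sketch: the lane name prometheus-gpu-72h, the working directory, and the edited values are illustrative, and the exact contents of the copied template files may differ.

    Code Block
    languagebash
    # Copy the default templates into a fresh working directory
    mkdir -p ~/cryosparc-lanes/prometheus-gpu-72h
    cd ~/cryosparc-lanes/prometheus-gpu-72h
    cp /net/software/local/cryosparc/3.1/cyfronet/cluster_info.json .
    cp /net/software/local/cryosparc/3.1/cyfronet/cluster_script.sh .

    # Edit cluster_info.json and change the lane name, e.g.
    #   "name": "prometheus-gpu-72h"
    # Edit cluster_script.sh and adjust the SBATCH directives, e.g.
    #   #SBATCH --time=72:00:00

    # Register the new lane/cluster with cryoSPARC
    cryosparcm cluster connect prometheus-gpu-72h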


    Info
    titleAccess to GPU partitions

    To use GPUs on the Prometheus cluster you have to apply for GPU resources in Portal PLGrid.

    To check whether you have access to a partition, run the command below on a Prometheus login node and check whether your accounts are on the AllowAccounts list:

    • partition plgrid-gpu

      Code Block
      languagebash
      scontrol show partition plgrid-gpu | grep Accounts


    • partition plgrid-gpu-v100

      Code Block
      languagebash
      scontrol show partition plgrid-gpu-v100 | grep Accounts


    If you do not have access to the above partitions, please contact Helpdesk PLGrid.


  5. Your cryoSPARC master setup is now done. All subsequent cryoSPARC master instances should be run in batch jobs, e.g. with a script like the sketch below.
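
    A minimal sketch of such a batch script, assuming the module shown above; the job name, partition, resources, and time limit are illustrative and should be adjusted to your grant:

    Code Block
    languagebash
    #!/bin/bash
    #SBATCH --job-name=cryosparc-master
    #SBATCH --partition=plgrid
    #SBATCH --time=12:00:00
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G

    module add plgrid/apps/cryosparc/3.1

    # Start the master and stop it cleanly when the job ends
    cryosparcm start
    trap 'cryosparcm stop' EXIT TERM
    sleep infinity &
    wait

    Submit it with sbatch (the file name cryosparc_master.slurm is arbitrary):

    Code Block
    languagebash
    sbatch cryosparc_master.slurm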

...