...
- Apply for membership in the plggcryospar team in Portal PLGrid and, through Helpdesk PLGrid, ask for registration in Cyfronet's internal cryoSPARC users database and for a dedicated port for access to the cryoSPARC master. Log in to a Prometheus login node:

```bash
ssh <login>@pro.cyfronet.pl
```
Load the cryoSPARC module:

```bash
module add plgrid/apps/cryosparc/3.2
```
Run the cryoSPARC configuration script. It will set up your cryoSPARC environment, create your user in the cryoSPARC database, and configure two lanes for external jobs: prometheus-gpu, which uses the plgrid-gpu partition for GPU jobs, and prometheus-gpu-v100, which uses the plgrid-gpu-v100 partition. Both lanes use the plgrid partition for CPU-only jobs. As arguments, pass your license ID, your e-mail and a password (these will be used to log in to the cryoSPARC web app), and your first and last name:

```bash
cryosparc_configuration --license <XXXX> --email <your-email> --password <password> --firstname <Givenname> --lastname <Surname>
```
Info: Access problems
In case of a "cryosparc_configuration: command not found" error, run in the terminal:

```bash
newgrp plggcryospar
```

to start a new subshell with the permissions of the plggcryospar team.

Info: Access to GPU partitions
To use GPUs on the Prometheus cluster you have to apply for GPU resources at Portal PLGrid.
To check whether you have access to a partition, run the commands below on a Prometheus login node and check whether your PLGrid computational grants are on the AllowAccounts list:

```bash
# partition plgrid-gpu
scontrol show partition plgrid-gpu | grep Accounts | grep <PLGrid grant name>
# partition plgrid-gpu-v100
scontrol show partition plgrid-gpu-v100 | grep Accounts | grep <PLGrid grant name>
```
If you do not have access to one or both of the above partitions, check your PLGrid computational grant details at Portal PLGrid. If your grant lists GPU resources but access to the required queue or queues is not possible, please contact Helpdesk at https://helpdesk.plgrid.pl.
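The two partition checks above can be combined into one loop. This is only a sketch: GRANT is a hypothetical placeholder for your actual PLGrid grant name, and the snippet degrades gracefully when scontrol is not available (i.e. when not run on a login node).

```bash
# Hypothetical grant name -- substitute your own PLGrid grant
GRANT="plgxxxxxx"

for part in plgrid-gpu plgrid-gpu-v100; do
  if ! command -v scontrol >/dev/null 2>&1; then
    # Not on the cluster -- scontrol is unavailable
    echo "scontrol not found -- run this on a Prometheus login node"
  elif scontrol show partition "$part" | grep Accounts | grep -q "$GRANT"; then
    echo "access to $part: yes"
  else
    echo "access to $part: no"
  fi
done
```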
- Your cryoSPARC master setup is now complete. All subsequent cryoSPARC master instances should be run in batch jobs.
...
```bash
#!/bin/bash
#SBATCH --partition plgrid-services
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 4
#SBATCH --mem 10GB
#SBATCH --time 14-0
#SBATCH -C localfs
#SBATCH --dependency=singleton
#SBATCH --job-name cryosparc-master
#SBATCH --output cryosparc-master-log-%J.txt

## Load environment for cryoSPARC
module add plgrid/apps/cryosparc/3.2

## Get tunneling info
ipnport=$CRYOSPARC_BASE_PORT
ipnip=$(hostname -i)
user=$USER

## Print tunneling instructions to cryosparc-master-log-<JobID>.txt
echo -e "
Copy/Paste this in your local terminal to ssh tunnel with remote
-----------------------------------------------------------------
ssh -o ServerAliveInterval=300 -N -L $ipnport:$ipnip:$ipnport ${user}@pro.cyfronet.pl
-----------------------------------------------------------------
Then open a browser on your local machine to the following address
------------------------------------------------------------------
localhost:$ipnport
------------------------------------------------------------------
"

## Start the cryoSPARC master server
cryosparcm restart

## Loop that keeps the job running until 'scancel <JobID>' by the user
## or an automatic kill by SLURM at the end of the requested walltime
while true; do sleep 600; done
```
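Submitting the batch script above can be sketched as follows; the filename cryosparc-master.slurm is illustrative (save the script under any name you like), and the snippet checks for sbatch so it only submits when run on the cluster:

```bash
# Hypothetical filename for the batch script shown above
JOB_SCRIPT=cryosparc-master.slurm

if command -v sbatch >/dev/null 2>&1; then
  # Submit the master job, then note its JobID in the queue
  sbatch "$JOB_SCRIPT"
  squeue -u "$USER" --name=cryosparc-master
else
  echo "sbatch not found -- run this on a Prometheus login node"
fi
```

Once the job starts, the tunneling instructions appear in cryosparc-master-log-<JobID>.txt, per the --output directive in the script.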
...
Start an interactive job:

```bash
srun -p plgrid-services --nodes=1 --ntasks=1 --time=0-1 --pty bash
```
Load the cryoSPARC environment using modules:

```bash
module add plgrid/apps/cryosparc/3.2
```
Copy the cluster config cluster_info.json and the script template cluster_script.sh from the $CRYOSPARC_ADDITIONAL_FILES_DIR directory to your working directory:

```bash
cp $CRYOSPARC_ADDITIONAL_FILES_DIR/cluster_info.json .
cp $CRYOSPARC_ADDITIONAL_FILES_DIR/cluster_script.sh .
```
- Modify the files accordingly:
  - in the config cluster_info.json, change the name of the lane/cluster to avoid overwriting the default prometheus* lanes
  - in cluster_script.sh, change --time, --partition, or other parts of the script template as needed
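The lane rename can be sketched as below. The snippet fabricates a minimal cluster_info.json for illustration only (field names follow cryoSPARC's documented cluster integration format; the real file copied from $CRYOSPARC_ADDITIONAL_FILES_DIR is more complete), then renames the lane with sed:

```bash
# Illustrative, shortened cluster_info.json -- NOT the real Prometheus config
cat > cluster_info.json <<'EOF'
{
  "name" : "prometheus-gpu",
  "worker_bin_path" : "cryosparcw",
  "send_cmd_tpl" : "{{ command }}",
  "qsub_cmd_tpl" : "sbatch {{ script_path_abs }}"
}
EOF

# Rename the lane so it does not overwrite the default prometheus* lanes
sed 's/"name" : "prometheus-gpu"/"name" : "my-custom-lane"/' cluster_info.json \
  > cluster_info.json.tmp && mv cluster_info.json.tmp cluster_info.json

grep '"name"' cluster_info.json
```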
Start the cryoSPARC master:

Warning: cryoSPARC master job
There should be only one job running the cryoSPARC master per user. Therefore stop any job running the cryoSPARC master before this step.

```bash
cryosparcm restart
```
Run the command below to add the lane/cluster:

```bash
cryosparcm cluster connect <name-of-cluster-from-cluster_info.json>
```
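To verify the lane was added, you can list the scheduler targets known to the master; get_scheduler_targets() is a cryoSPARC cli call documented in the cryoSPARC guide, assumed available in this version:

```bash
# List configured lanes/targets if the master tools are on PATH
if command -v cryosparcm >/dev/null 2>&1; then
  LANES=$(cryosparcm cli "get_scheduler_targets()")
else
  LANES="cryosparcm not found -- load the cryoSPARC module first"
fi
echo "$LANES"
```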
- Repeat the above steps to create another lane if necessary.
Stop the cryoSPARC master:

```bash
cryosparcm stop
```
End the interactive job:

```bash
exit
```
...