
Support

For any difficulties in using the cluster, please contact the PLGrid Helpdesk: https://helpdesk.plgrid.pl/

For important information and announcements, please follow this page and the message of the day displayed at login.

Machine description

Available login nodes:

  • ssh <login>@ares.cyfronet.pl

Ares is built with an InfiniBand EDR interconnect and nodes of the following specifications:

Partition             Number of nodes   CPU                                                      RAM     Accelerator
plgrid and plgrid-*   532               48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz   192GB   -
plgrid-bigmem         256               48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz   384GB   -
plgrid-gpu-v100       9                 32 cores, Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz       384GB   Tesla V100-SXM2

Job submission

Ares uses the Slurm resource manager; jobs should be submitted to one of the following partitions (an example batch script follows the table):

Name              Timelimit   Remarks
plgrid            72h         Standard partition.
plgrid-long       168h        Used for jobs with extended runtime.
plgrid-testing    1h          High priority, for testing jobs, limited to 3 jobs.
plgrid-bigmem     72h         For jobs using an extended amount of memory.
plgrid-now        12h         The highest priority, for interactive jobs, limited to 1 running job.
plgrid-gpu-v100   72h         GPU partition.
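
As a sketch, a minimal batch script for the standard partition could look as follows (the job name, resource values, and application name are placeholders; the account name is explained in the next section):

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=plgrid
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=4GB
#SBATCH --account=grantname-cpu

# Load the required modules and run the application
module add openmpi/4.1.1-gcc-11.2.0
./my_application

Such a script can then be submitted with:

sbatch job.sh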

Accounts and computing grants

Ares uses a new account naming scheme for CPU and GPU computing grants. CPU-only grants use account names of the form grantname-cpu, while GPU grants use the grantname-gpu suffix. Please mind that sbatch -A grantname won't work on its own; you need to add the -cpu or -gpu suffix. Available computing grants, with their respective account names (allocations), can be viewed with the hpc-grants command.
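
For example, assuming a computing grant named examplegrant (a placeholder), a CPU job and a GPU job would be submitted as:

sbatch -A examplegrant-cpu -p plgrid job.sh
sbatch -A examplegrant-gpu -p plgrid-gpu-v100 --gres=gpu:1 gpu_job.sh

The --gres=gpu:1 option is standard Slurm syntax for requesting one GPU per node; check the output of hpc-grants for the exact account names available to you.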

Storage

Available storage spaces are described in the following table:

Name            Location in the filesystem               Purpose
$HOME           /net/people/<login>                      Storing the user's own applications and configuration files.
$SCRATCH        /net/pr2/scratch/people/<login>          High-speed storage for short-lived data heavily used in computations. Data present for more than 30 days can be deleted without notice.
group storage   /net/pr2/projects/plgrid/<group name>    Long-term storage for data kept for the duration of the computing grant.

Current usage, capacity and other storage attributes can be checked by issuing the hpc-fs command.
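
A common pattern (a sketch; file names and the group directory are placeholders) is to stage data on $SCRATCH for the duration of a job and copy the results to group storage afterwards:

# Inside a batch script: work in a job-specific scratch directory
cd $SCRATCH
mkdir -p job_$SLURM_JOB_ID && cd job_$SLURM_JOB_ID
cp $HOME/input.dat .
./my_application input.dat
# Copy results to long-term group storage before scratch cleanup
cp results.dat /net/pr2/projects/plgrid/<group name>/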

System utilities

Please use the following commands for interacting with the account and storage management system:

  • hpc-grants - shows available grants and resource allocations
  • hpc-fs - shows available storage
  • hpc-jobs - shows currently pending/running jobs
  • hpc-jobs-history - shows information about past jobs

Software

Applications and libraries are available through the modules system; the list of available modules can be obtained by issuing the command:

module avail

a module can be loaded with the add command:

module add openmpi/4.1.1-gcc-11.2.0

and the environment can be purged with:

module purge
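
In batch scripts it is good practice to start from a clean environment and load only the required modules; a sketch reusing the OpenMPI module above (the application name is a placeholder):

# Ensure a reproducible environment inside the job
module purge
module add openmpi/4.1.1-gcc-11.2.0
mpiexec ./my_mpi_application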

More information

Ares follows the configuration and usage patterns of the Prometheus cluster. The Prometheus documentation can be found here: https://kdm.cyfronet.pl/portal/Prometheus:Basics
