Support

Please contact the PLGrid Helpdesk (https://helpdesk.plgrid.pl/) regarding any difficulties using the cluster.

Machine description

Available login nodes:

  • ssh <login>@ares.cyfronet.pl

Ares is built with an InfiniBand EDR interconnect and nodes of the following specifications:

Partition            Nodes   CPU                                                       RAM     Accelerator
plgrid and plgrid-*  532     48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz    192GB   -
plgrid-bigmem        256     48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz    384GB   -
plgrid-gpu-v100      9       32 cores, Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz        384GB   Tesla V100-SXM2

Job submission

Jobs should be submitted to the following partitions:

Name             Time limit   Remarks
plgrid           72h          Standard partition.
plgrid-long      168h         Used for jobs with extended runtime.
plgrid-testing   1h           High priority, testing jobs, limited to 3 jobs.
plgrid-bigmem    72h          Jobs using an extended amount of memory.
plgrid-now       12h          The highest priority, interactive jobs, limited to 1 running job.
plgrid-gpu-v100  72h          GPU partition.
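
Jobs are submitted with the standard Slurm sbatch command. The following is a minimal sketch of a batch script targeting the plgrid partition; the job name and resource values are illustrative and should be adjusted to the actual job:

```shell
#!/bin/bash
# Minimal sketch of a Slurm batch script for the standard partition.
# Resource values below are illustrative, not recommendations.
#SBATCH --job-name=example
#SBATCH --partition=plgrid
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --mem=4G

# Payload: report which node the job landed on.
echo "Running on $(hostname)"
```

Submit the script with sbatch job.sh and monitor it with hpc-jobs.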

Storage

Available storage spaces are described in the following table:

Please note that the storage system was modified: the old $SCRATCH is
available only on the login01 node, as a read-only filesystem under
/net/ascratch/people/<login>. It will remain available until 28 March.
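
Data still needed from the old scratch should be copied to the new location before that date. A sketch of the copy step follows; since the real source path is read-only and mounted only on login01, stand-in temporary directories are used here for illustration:

```shell
# Sketch of migrating data off the old scratch. On the cluster the
# source would be /net/ascratch/people/<login> and the destination
# $SCRATCH; stand-in temporary directories are used below.
old=$(mktemp -d)                  # stands in for the old scratch
new=$(mktemp -d)                  # stands in for $SCRATCH
echo "data" > "$old/results.txt"  # pretend leftover job output
cp -r "$old"/. "$new"/            # copy everything, preserving layout
ls "$new"                         # results.txt is now in the new location
```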


Location        Physical location                        Purpose
$HOME           /net/people/<login>                      Storing own applications and configuration files.
$SCRATCH        /net/pr2/scratch/people/<login>          High-speed storage for short-lived data heavily used in computations.
group storage   /net/pr2/projects/plgrid/<group name>    Long-term storage for data retained for the duration of the computing grant.

If you are using an account named aresXX, please create a proper PLGrid grant,
as aresXX temporary accounts will be disabled in the near future.

Current usage, capacity and other storage attributes can be checked by issuing the hpc-fs command.

System utilities

Please use the following commands for interacting with the account and storage management system:

  • hpc-grants - shows available grants and resource allocations
  • hpc-fs - shows available storage
  • hpc-jobs - shows currently pending/running jobs
  • hpc-jobs-history - shows information about past jobs

Software

Applications and libraries are available through the modules system. A list of available modules can be obtained by issuing:

module avail

A module can be loaded with:

module add openmpi/4.1.1-gcc-11.2.0

and the environment can be purged with:

module purge
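
Module loads are typically placed at the top of a batch script, so that every job starts from a known environment. A sketch follows; the application binary my_app is a placeholder, and the module version is only the example used above:

```shell
#!/bin/bash
#SBATCH --partition=plgrid
#SBATCH --time=00:30:00
#SBATCH --ntasks=4

module purge                          # start from a clean environment
module add openmpi/4.1.1-gcc-11.2.0   # load the required toolchain
mpiexec ./my_app                      # my_app is a placeholder binary
```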

Jobs should be submitted to the plgrid, plgrid-long, plgrid-bigmem,
plgrid-gpu-v100 and other plgrid-* partitions.

The standard partition is built with nodes containing 48 cores and 192 GB of
RAM, while plgrid-bigmem contains nodes with 48 cores and 384 GB of RAM. The
GPU partition contains nodes with 32 cores, 384 GB of memory and 8 NVIDIA Tesla
V100 GPUs.
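
GPUs on these nodes are requested through Slurm's generic-resource (--gres) mechanism. A sketch of a single-GPU job script follows; the resource values are illustrative, and the payload merely echoes a Slurm-provided variable:

```shell
#!/bin/bash
# Sketch of requesting a single V100 on the GPU partition.
#SBATCH --partition=plgrid-gpu-v100
#SBATCH --time=01:00:00
#SBATCH --gres=gpu:1                 # one of the node's 8 V100s
#SBATCH --cpus-per-task=4
#SBATCH --mem=40G

# SLURM_GPUS_ON_NODE is set by Slurm inside an allocation;
# outside a job it falls back to "unset".
echo "GPUs on node: ${SLURM_GPUS_ON_NODE:-unset}"
```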