
Preliminary access essentials

Disclaimer

Athena is still under development and, despite our best efforts, it might experience unscheduled outages or even data loss.


Support

Please get in touch with the PLGrid Helpdesk (https://helpdesk.plgrid.pl/) regarding any difficulties in using the cluster.

For important information and announcements, please follow this page and the message displayed at login.

Access to Ares

Computing resources on Ares are assigned based on PLGrid computing grants. To perform computations on Ares, you need to obtain a computing grant and apply for Ares access through the PLGrid portal (https://aplikacje.plgrid.pl/service/dostep-do-klastra-ares-w-osrodku-cyfronet/).

If your grant is active and you have applied for the service access, the request should be accepted in about half an hour. Please report any issues through the Helpdesk.

Machine description

Available login nodes:

  • ssh <login>@ares.cyfronet.pl

Note that Ares uses PLGrid accounts and grants. Make sure to request the "Ares access" service in the PLGrid portal.

Ares is built with an InfiniBand EDR interconnect and nodes of the following specifications:

Partition       | Number of nodes | CPU                                                     | RAM   | Accelerator
plgrid          | 532             | 48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz  | 192GB | -
plgrid-bigmem   | 256             | 48 cores, Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz  | 384GB | -
plgrid-gpu-v100 | 9               | 32 cores, Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz      | 384GB | 8x Tesla V100-SXM2

Job submission

Ares uses the Slurm resource manager; jobs should be submitted to the following partitions (an example batch script is shown after the table):

Name            | Timelimit | Remarks
plgrid          | 72h       | Standard partition.
plgrid-long     | 168h      | Currently unavailable. Used for jobs with extended runtime.
plgrid-testing  | 1h        | Currently unavailable. High priority, testing jobs, limited to 3 jobs.
plgrid-bigmem   | 72h       | Currently unavailable. For jobs using an extended amount of memory.
plgrid-now      | 12h       | Currently unavailable. The highest priority, interactive jobs, 1 running job at most.
plgrid-gpu-v100 | 48h       | GPU partition.
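
A minimal batch script for the standard partition might look as follows (a sketch only: the grant name plgexample and the program my_app are placeholders, substitute your own allocation and application; see the Accounts and computing grants section below for account naming):

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=plgrid
#SBATCH --account=plgexample-cpu
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G

module purge
module add openmpi/4.1.1-gcc-11.2.0

./my_app

The script is submitted with sbatch job.sh, and its state can then be checked with the hpc-jobs command.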

Accounts and computing grants

Ares uses a new scheme for naming accounts for CPU and GPU computing grants. CPU-only grants are named grantname-cpu, while GPU allocations use the grantname-gpu suffix. Please mind that sbatch -A grantname won't work on its own; you need to add the -cpu or -gpu suffix. Available computing grants, with their respective account names (allocations), can be viewed with the hpc-grants command.

Resources allocated on Ares are not normalized: 1 hour of CPU time equals 1 hour spent on a computing core, and the same applies to GPUs.
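
For example (plgexample is a placeholder grant name), a CPU job and a GPU job would be submitted as:

sbatch -A plgexample-cpu -p plgrid job.sh
sbatch -A plgexample-gpu -p plgrid-gpu-v100 --gres=gpu:1 gpu_job.sh

Here --gres=gpu:1 is the generic Slurm syntax for requesting a single GPU and is shown as an assumption; the exact GPU request options used on Ares may differ.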

Storage

Available storage spaces are described in the following table:

Location                         | Location in the filesystem                                        | Purpose
$HOME                            | /net/people/plgrid/<login>                                        | Storing own applications and configuration files
$SCRATCH                         | /net/pr2/scratch/people/<login> or /net/ascratch/people/<login>   | High-speed storage for short-lived data used in computations. Data older than 30 days can be deleted without notice. Note that scratch can have two physical locations; it is best to rely on the $SCRATCH environment variable.
$PLG_GROUPS_STORAGE/<group name> | /net/pr2/projects/plgrid/<group name>                             | Long-term storage, for data living for the period of the computing grant.

Current usage, capacity and other storage attributes can be checked by issuing the hpc-fs command.
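
A common pattern (a sketch only; input.dat and my_app are placeholder names) is to stage data into $SCRATCH for the duration of a job and copy results to long-term storage afterwards:

cd $SCRATCH
mkdir -p $SLURM_JOB_ID
cd $SLURM_JOB_ID
cp $HOME/input.dat .
$HOME/my_app input.dat > output.dat
cp output.dat $PLG_GROUPS_STORAGE/<group name>/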

System Utilities

Please use the following commands for interacting with the account and storage management system:

  • hpc-grants - shows available grants, resource allocations
  • hpc-fs - shows available storage
  • hpc-jobs - shows currently pending/running jobs
  • hpc-jobs-history - shows information about past jobs

Software

Applications and libraries are available through the modules system. Please note that the module structure was flattened and module paths have changed compared to Prometheus! The list of available modules can be obtained by issuing the command:

module avail

The list is searchable using the '/' key. A specific module can be loaded with the add command:

module add openmpi/4.1.1-gcc-11.2.0

and the environment can be purged by:

module purge
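
Currently loaded modules can be listed with the standard module list command:

module list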

More information

Ares follows Prometheus' configuration and usage patterns. Prometheus documentation can be found here: https://kdm.cyfronet.pl/portal/Prometheus:Basics
