...

Ares uses a new naming scheme for CPU and GPU computing accounts, which are supplied via the -A parameter of the sbatch command. Previously, the account name was the same as the grant name. Currently, accounts are named in the following manner:

Resource            Account name
CPU                 grantname-cpu
CPU bigmem nodes    grantname-cpu-bigmem
GPU                 grantname-gpu

Please note that sbatch -A grantname won't work on its own; you need to add the -cpu, -cpu-bigmem, or -gpu suffix. Available computing grants, with their respective account names (allocations), can be viewed using the hpc-grants command.
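
For example, for a hypothetical grant named plggexample (the grant name and the job script job.sh are placeholders, not real names), submissions would look as follows:

Code Block
sbatch -A plggexample-cpu job.sh          # plain CPU nodes
sbatch -A plggexample-cpu-bigmem job.sh   # bigmem CPU nodes
sbatch -A plggexample-gpu job.sh          # GPU nodes
hpc-grants                                # list available grants and account names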

Resources allocated on Ares are not normalized, unlike on Prometheus and previous clusters. One hour of CPU time equals one hour spent on a computing core with a proportional amount of memory (consult the table above). The billing system accounts for jobs that use more memory than this proportional amount: if a job uses more memory per allocated CPU than the proportional amount, it is billed as if it had used more CPUs. The billed amount can be calculated by dividing the memory used by the proportional memory per core. Jobs on CPU partitions are always billed in CPU-hours.

The same principle applies to GPU resources, where the GPU-hour is the billing unit and proportional amounts of memory per GPU and CPUs per GPU are defined (consult the table above).

For example, if a typical CPU job uses the proportional amount of memory per core or less, it is billed simply for the time spent using CPUs. If the job uses more memory than the proportional amount, the cost is expressed by a simple algorithm for CPUs:

Code Block
cost_cpu    = job_cpus_used * job_duration                    # core-hours reserved
cost_memory = job_memory_used/memory_per_cpu * job_duration   # memory expressed in CPU-hours
final_cost  = max(cost_cpu, cost_memory)                      # billed in CPU-hours
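
As a worked illustration, suppose the proportional memory per core were 4 GB (an assumed figure for this sketch; consult the table above for the actual value). A job that ran for 2 hours on 4 cores while using 32 GB of memory would be billed for its memory footprint rather than its cores:

Code Block
# assumed: memory_per_cpu = 4 GB (illustrative value only)
cost_cpu    = 4 * 2          = 8 CPU-hours
cost_memory = (32 / 4) * 2   = 16 CPU-hours
final_cost  = max(8, 16)     = 16 CPU-hours

In other words, the job is billed as if it had used 8 CPUs for 2 hours.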

For GPUs, where each GPU has a proportional amount of memory and a proportional number of CPUs assigned (consult the table above), the cost is computed analogously:

Code Block
cost_gpu    = job_gpus_used * job_duration                    # GPU-hours reserved
cost_cpu    = job_cpus_used/cpus_per_gpu * job_duration       # CPUs expressed in GPU-hours
cost_memory = job_memory_used/memory_per_gpu * job_duration   # memory expressed in GPU-hours
final_cost  = max(cost_gpu, cost_cpu, cost_memory)            # billed in GPU-hours
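
Analogously, assuming 4 CPUs and 40 GB of memory per GPU (again illustrative figures; consult the table above for the actual values), a job that ran for 2 hours on 1 GPU with 8 CPUs and 20 GB of memory would be billed for its CPU usage:

Code Block
# assumed: cpus_per_gpu = 4, memory_per_gpu = 40 GB (illustrative values only)
cost_gpu    = 1 * 2           = 2 GPU-hours
cost_cpu    = (8 / 4) * 2     = 4 GPU-hours
cost_memory = (20 / 40) * 2   = 1 GPU-hour
final_cost  = max(2, 4, 1)    = 4 GPU-hours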

...