
CPU job script

This is a simple script for submitting a basic CPU job:

Example CPU job
#!/bin/bash
#SBATCH --job-name=job_name
#SBATCH --time=01:00:00
#SBATCH --account=grantname-cpu
#SBATCH --partition=plgrid

module load python 
python myapp.py

The job is named "job_name", declares a run time of 1 hour, runs under the "grantname-cpu" account, and is submitted to the "plgrid" partition (the default for CPU jobs). The job operates in the directory where the sbatch command was issued, loads a Python module, and executes a Python application. The job's output is written to a file named slurm-<JOBID>.out in that directory. More information and a detailed explanation of each parameter can be found at https://slurm.schedmd.com/quickstart.html and https://slurm.schedmd.com/sbatch.html
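To submit the job, pass the script to the sbatch command. The commands below are a minimal sketch, assuming the script above was saved as job.sh (the file name is arbitrary):

Submitting and monitoring the job
# Submit the script; SLURM prints the assigned job ID
sbatch job.sh
# List your pending and running jobs
squeue -u $USER
# Once the job has finished, inspect its output file
cat slurm-<JOBID>.out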

A more advanced job could look like the following example:

Example advanced CPU job
#!/bin/bash
#SBATCH --job-name=job_name
#SBATCH --time=01:00:00
#SBATCH --account=grantname-cpu
#SBATCH --partition=plgrid
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
#SBATCH --mem=180G
#SBATCH --output="joblog.txt"
#SBATCH --error="joberr.txt"

module load scipy-bundle 
mpiexec python myapp-mpi.py

Please note the additional parameters and the MPI-enabled application. This job uses 2 nodes with 48 tasks on each node, and each task uses 1 CPU. Each node participating in the execution allocates 180 GB of memory. The job's stdout and stderr are redirected to the joblog.txt and joberr.txt files, respectively. This example assumes that myapp-mpi.py is an MPI application and that the mpiexec command is responsible for spawning the additional application ranks (processes).
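On SLURM clusters the srun command can typically act as the MPI launcher in place of mpiexec, reading the job geometry directly from the SBATCH directives above. A minimal sketch, assuming the site's MPI installation is integrated with SLURM:

Launching MPI ranks with srun
# srun starts 2 x 48 = 96 ranks, as requested by --nodes and --ntasks-per-node
srun python myapp-mpi.py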

GPU job script

A simple script for submitting GPU jobs:

Example GPU job
#!/bin/bash
#SBATCH --job-name=job_name
#SBATCH --time=01:00:00
#SBATCH --account=grantname-gpu
#SBATCH --partition=plgrid-gpu-v100
#SBATCH --cpus-per-task=4
#SBATCH --mem=40G
#SBATCH --gres=gpu:1

module load cuda 
./myapp

Please note the specific account name and partition for GPU jobs. The job allocates one GPU with the --gres parameter. The whole GPU is allocated to the job; the --mem parameter refers to the system (host) memory used by the job. More information on how to use GPUs can be found here: https://slurm.schedmd.com/gres.html
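Inside the job it can be useful to confirm which GPU was allocated before the application starts. A short sketch, assuming an NVIDIA node where the nvidia-smi tool is available:

Checking the allocated GPU
# Show the GPU(s) visible to this job
nvidia-smi
# SLURM exposes the allocated device index through this variable
echo $CUDA_VISIBLE_DEVICES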
