
CPU job script

This is a simple script for submitting a basic CPU job:

Example CPU job
#!/bin/bash
#SBATCH --job-name=job_name
#SBATCH --time=01:00:00
#SBATCH --account=grantname-cpu
#SBATCH --partition=plgrid

module load python 
python myapp.py

The job will be named "job_name", declares a run time of 1 hour, runs under the "grantname-cpu" account, and is submitted to the "plgrid" partition (the default for CPU jobs). The job runs in the directory from which the sbatch command was issued, loads a Python module, and executes a Python application. The job's output is written to a file named slurm-<JOBID>.out in that directory. More information and a detailed explanation of each parameter can be found here: https://slurm.schedmd.com/quickstart.html and https://slurm.schedmd.com/sbatch.html
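Once the script above is saved to a file (the name cpu_job.sh below is only an assumption for this sketch), submitting and inspecting it typically looks like the following. The last two lines show how the default output filename is derived from the job ID; the ID 123456 is a made-up example, not a real submission:

```shell
# Sketch of a typical submit-and-inspect workflow (commands shown as comments,
# since they require a live Slurm cluster):
# sbatch cpu_job.sh      # prints: Submitted batch job <JOBID>
# squeue --me            # list your pending/running jobs

# By default Slurm writes stdout to slurm-<JOBID>.out in the submit directory:
JOBID=123456                      # example ID only, not a real job
OUTFILE="slurm-${JOBID}.out"
echo "$OUTFILE"                   # -> slurm-123456.out
# cat "$OUTFILE"                  # inspect the job's output once it has run
```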

A more advanced job could look like the following example:

Example advanced CPU job
#!/bin/bash
#SBATCH --job-name=job_name
#SBATCH --time=01:00:00
#SBATCH --account=grantname-cpu
#SBATCH --partition=plgrid
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --cpus-per-task=1
#SBATCH --mem=184G
#SBATCH --output="joblog.txt"
#SBATCH --error="joberr.txt"

module load openmpi 
mpiexec myapp.bin

Please note the additional parameters and the MPI-enabled application! This job uses 2 nodes with 48 tasks per node, and each task uses 1 CPU. Each node participating in the execution allocates 184 GB of memory. The job's stdout and stderr are redirected to the joblog.txt and joberr.txt files, respectively. Note that existing output files will be overwritten. This example assumes that myapp.bin is an MPI application and that the mpiexec command is responsible for spawning the additional application ranks (processes). In most cases, mpiexec obtains the application configuration (e.g., the number of processes) by communicating with Slurm, so explicitly specifying the -np parameter is not required.

GPU job script

A simple script for submitting GPU jobs:

Example GPU job
#!/bin/bash
#SBATCH --job-name=job_name
#SBATCH --time=01:00:00
#SBATCH --account=grantname-gpu
#SBATCH --partition=plgrid-gpu-v100
#SBATCH --cpus-per-task=4
#SBATCH --mem=40G
#SBATCH --gres=gpu

module load cuda 
./myapp

Please note the GPU-specific account name and partition. The job allocates one GPU with the --gres parameter. The whole GPU is allocated to the job; the --mem parameter refers to the system (host) memory used by the job, not GPU memory. More information on how to use GPUs can be found here: https://slurm.schedmd.com/gres.html
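Inside a job started with --gres=gpu, Slurm normally exports CUDA_VISIBLE_DEVICES with the indices of the devices allocated to the job. The snippet below is a sketch of how a job script might count them; the fallback value of 0 is only an assumption for illustration (on the cluster the variable is set by Slurm itself):

```shell
# Sketch: count the GPUs Slurm made visible to this job.
# Fallback to a single device index "0" is assumed for illustration only;
# in a real job CUDA_VISIBLE_DEVICES is set by Slurm.
CUDA_VISIBLE_DEVICES="${CUDA_VISIBLE_DEVICES:-0}"

# The variable is a comma-separated list, so the device count is
# (number of commas) + 1:
NGPUS=$(( $(printf '%s' "$CUDA_VISIBLE_DEVICES" | tr -cd ',' | wc -c) + 1 ))
echo "GPUs visible: ${NGPUS}"
```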
