Differences between revisions 12 and 63 (spanning 51 versions)
Revision 12 as of 2020-06-30 13:21:27
Size: 13203
Editor: stroth
Comment:
Revision 63 as of 2023-10-13 18:06:21
Size: 11414
Editor: stroth
Comment: Disabled 1 GPU on biwirender[05,13]
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
#rev 2020-09-10 stroth
Line 3: Line 5:
= Slurm Pilot project for Biwi =
 * The following information is an abbreviated How-To with specific information for the pilot cluster.
 * Our official documentation for slurm is the [[Services/SLURM|Computing wiki article]]; you need to read this as well.
 * Please bear in mind that the final documentation is meant to be the above article. It will be extended with your feedback valid for all slurm users and a specific section or additional page concerning only Biwi.
 * The goal of this pilot is to provide documentation that enables SGE users to migrate their jobs to Slurm. This also means the section [[#Accounts_and_limits|Accounts and limits]] is only informative at the moment.
= CVL Slurm cluster =
The [[https://vision.ee.ethz.ch/|Computer Vision Lab]] (CVL) owns a Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki, specific to the usage of the CVL cluster. Furthermore, CVL maintains its own wiki article to help you get started and to list frequently asked questions. Consult these two articles if the information you're looking for isn't available here:
Line 9: Line 8:
The alpha version of a [[https://people.ee.ethz.ch/~stroth/gpumon_pilot/index.html|GPUMon alternative]] is available. Please don't send feedback yet; use it as it is.
 
== Pilot-specific information ==
Involved machines are
 * `biwirender01` for '''CPU computing'''
 * `biwirender03` for '''GPU-computing'''
All available GPU partitions are overlaid on `biwirender03`. They will be available on different nodes in the final cluster.
 * [[Services/SLURM|Computing wiki main Slurm article]]
 * [[https://wiki.vision.ee.ethz.ch/itet/gpuclusterslurm|CVL wiki slurm article]]
Line 17: Line 11:
/!\ `long` partitions are not yet implemented in the pilot!

== Initialising slurm ==
All slurm commands read the cluster configuration from the environment variable `SLURM_CONF`, so it needs to be set:
== Setting environment ==
The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:
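For example, the variable can be set in your shell profile (e.g. `~/.bashrc`):
{{{#!highlight bash numbers=disable
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
}}}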
Line 24: Line 16:
If you're interested, feel free to have a look at the configuration; feedback is welcome!
Line 26: Line 17:
== Available partitions ==
The equivalent of SGE's queues is called ''partitions'' in slurm.<<BR>>
`sinfo` shows all available partitions:
{{{#!highlight bash numbers=disable
sinfo
}}}
||'''PARTITION'''||'''AVAIL'''||'''TIMELIMIT'''||'''NODES'''||'''STATE'''||'''NODELIST'''||
||cpu.medium.normal||up||2-00:00:00||38||idle||bender[01-06,39-70]||
||gpu.low.normal||up||2-00:00:00||1||idle||biwirender[03,04]||
||gpu.medium.normal||up||2-00:00:00||15||idle||biwirender[05-12,17,20],bmicgpu[01-05]||
||gpu.medium.long||up||5-00:00:00||15||idle||biwirender[05-12,17,20],bmicgpu[01-05]||
||gpu.high.normal||up||2-00:00:00||3||idle||biwirender[13-15]||
||gpu.high.long||up||5-00:00:00||3||idle||biwirender[13-15]||
||gpu.debug||up||8:00:00||1||idle||biwirender[03,04]||
== Hardware ==
The following table summarizes node-specific information:
||'''Server''' ||'''CPU''' ||'''Frequency'''||'''Physical cores'''||'''Logical processors'''||'''Memory'''||'''/scratch SSD'''||'''/scratch Size'''||'''GPUs'''||'''Operating System'''||
||bender[59-70] ||Intel Xeon E5-2665 0 ||2.40 GHz ||32 ||64 ||125 GB||-||3.7 TB||-||Debian 10||
||bmiccomp01 ||Intel Xeon E5-2697 v4||2.30 GHz ||36 ||36 ||251 GB||-||186 GB||-||Debian 10||
||biwirender03 ||Intel Xeon E5-2650 v2||2.60 GHz ||16 ||32 ||125 GB||-||820 GB||4 Tesla K40c (11 GB)||Debian 10||
||biwirender04 ||Intel Xeon E5-2637 v2||3.50 GHz || 8 || 8 ||125 GB||✓||6.1 TB||5 Tesla K40c (11 GB)||Debian 10||
||biwirender05 ||Intel Xeon E5-2637 v2||3.50 GHz || 8 ||16 ||251 GB||✓||6.1 TB||4 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender06 ||Intel Xeon E5-2637 v2||3.50 GHz || 8 ||16 ||251 GB||✓||6.1 TB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender07 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||✓||701 GB||3 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender08 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||32 ||251 GB||✓||701 GB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender09 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||✓||701 GB||3 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender10 ||Intel Xeon E5-2650 v4||2.20 GHz ||24 ||24 ||251 GB||✓||701 GB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender11 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||✓||701 GB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender12 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||32 ||251 GB||✓||701 GB||4 !GeForce RTX 2080 Ti (10 GB)||Debian 10||
||biwirender13 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||503 GB||✓||701 GB||5 TITAN Xp (12 GB)||Debian 10||
||biwirender14 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||✓||701 GB||7 TITAN Xp (12 GB)||Debian 10||
||biwirender15 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||✓||1.1 TB||7 TITAN Xp (12 GB)||Debian 10||
||biwirender17 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||503 GB||✓||403 GB||6 !GeForce GTX 1080 Ti (11 GB)||Debian 10||
||biwirender20 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||376 GB||✓||403 GB||6 !GeForce GTX 1080 Ti (11 GB)||Debian 10||
||bmicgpu01 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||251 GB||✓||1.1 TB||6 TITAN X (12 GB)||Debian 10||
||bmicgpu02 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||✓||692 GB||5 TITAN Xp (12 GB)||Debian 10||
||bmicgpu03 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||40 ||251 GB||✓||1.1 TB||5 TITAN Xp (12 GB)||Debian 10||
||bmicgpu05 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||✓||1.1 TB||5 TITAN Xp (12 GB)||Debian 10||
||bmicgpu05 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||✓||1.1 TB||4 TITAN Xp (12 GB)||Debian 10||
||bmicgpu06 ||AMD EPYC 7742 ||1.50 GHz ||128 ||128 ||503 GB||✓||1.8 TB||4 A100 (40 GB)<<BR>>1 A100 (80 GB) ||Debian 10||
||bmicgpu07 ||AMD EPYC 7763 ||1.50 GHz ||128 ||128 ||755 GB||✓||6.9 TB||8 A6000 (48 GB)||Debian 11||
Line 41: Line 45:
Only the interactive partition `gpu.debug` should be specified explicitly (see below). The scheduler decides in which partition to place a job based on the resources it requests.

=== Interactive jobs ===
For testing purposes a job with an interactive session with 1 GPU can be started:
{{{#!highlight bash numbers=disable
srun --time 10 --partition=gpu.debug --gres=gpu:1 --pty bash -i
}}}
 * Such jobs are placed in `gpu.debug` by the scheduler

== Allocating resources ==
=== GPUs ===
For a job to have access to a GPU, GPU resources need to be requested with the option `--gres=gpu:<n>`<<BR>>
Here's the sample job submission script `primes_1GPU.sh` requesting 1 GPU:
{{{#!highlight bash numbers=disable
#!/bin/sh
#
#SBATCH --mail-type=ALL
#SBATCH --gres=gpu:1
#SBATCH --output=log/%j.out
export LOGFILE=`pwd`/log/$SLURM_JOB_ID.out
# env | grep SLURM_ #Uncomment this line to show environment variables set by slurm for a job
#
# binary to execute
codebin/primes $1
echo ""
echo "Job statistics: "
sstat -j $SLURM_JOB_ID --format=JobID,AveVMSize%15,MaxRSS%15,AveCPU%15
echo ""
exit 0;
}}}
 * Make sure the directory where logfiles are stored exists before submitting a job.
 * Please keep the environment variable LOGFILE; it is used by the scheduler's epilog script to append information to your logfile after your job has ended (when it no longer has access to `$SLURM_JOB_ID`).
 * slurm also sets CUDA_VISIBLE_DEVICES. See the section [[Services/SLURM#GPU_jobs|GPU jobs]] in the main slurm article.
 * A job requesting more GPUs than allowed by the QOS of the user's account (see [[#Accounts_and_limits|Accounts and limits]]) will stay in "PENDING" state; the reason can be checked as shown below.
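To see why such a job has not started, its pending reason can be queried with `squeue`, for example:
{{{#!highlight bash numbers=disable
squeue --user $USER --states PENDING --Format jobarrayid:8,partition:20,reasonlist:25
}}}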

=== Memory ===
If you omit the `--mem` option, the default of 30 GB of memory per GPU and 3 CPUs per GPU will be allocated to your job, which will make the scheduler choose `gpu.medium.normal`:
{{{
sbatch primes_1GPU.sh
sbatch: GRES requested : gpu:1
sbatch: GPUs requested : 1
sbatch: Requested Memory : ---
sbatch: CPUs requested : ---
sbatch: Your job is a gpu job.
Submitted batch job 133
}}}
{{{#!highlight bash numbers=disable
squeue --Format jobarrayid:8,partition:20,reasonlist:20,username:10,tres-alloc:45,timeused:10
}}}
{{{
JOBID PARTITION NODELIST(REASON) USER TRES_ALLOC TIME
133 gpu.medium.normal biwirender03 testuser cpu=3,mem=30G,node=1,billing=3,gres/gpu=1 0:02
}}}
An explicit `--mem` option selects the partition as follows:
||'''--mem'''||'''Partition'''||
||< 30G||gpu.low.normal||
||30G - 50G||gpu.medium.normal||
||>50G - 70G||gpu.high.normal||
||>70G||not allowed||
For example with:
{{{#!highlight bash numbers=disable
sbatch --mem=50G primes_2GPU.sh
}}}
the above `squeue` command shows:
{{{
JOBID PARTITION NODELIST(REASON) USER TRES_ALLOC TIME
136 gpu.high.normal biwirender03 testuser cpu=6,mem=100G,node=1,billing=6,gres/gpu=2 0:28
}}}

== Accounts and limits ==
In slurm lingo, an account is the equivalent of a user group. The following accounts are configured for users to be added to:
{{{#!highlight bash numbers=disable
sacctmgr show account
}}}
{{{
 Account Descr Org
-------- -------------------- ----
deadconf deadline_conference biwi
deadline deadline biwi
    long longer time limit biwi
    root default root account root
   staff staff biwi
 student student biwi
}}}
 * Accounts `isg` and `root` are not accessible to Biwi

GPU limits are stored in so-called QOS; each account is associated with the QOS we want to apply to it. Limits apply to all users added to an account.
{{{#!highlight bash numbers=disable
sacctmgr show assoc format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
{{{
Account User Partition MaxJobs QOS Def QOS
--------- -------- --------- ------- ----- -------
 deadconf ........ gpu_4 gpu_4
 deadline ........ gpu_5 gpu_5
     long ........ gpu_2 gpu_2
    staff ........ gpu_7 gpu_7
  student ........ gpu_3 gpu_3
}}}

The `gpu_x` QOS only contain a limit on the number of GPUs per user:
{{{#!highlight bash numbers=disable
sacctmgr show qos format=name%15,maxtrespu%30
}}}
{{{
          Name MaxTRESPU
--------------- ------------------------------
         normal
          gpu_1 gres/gpu=1
          gpu_2 gres/gpu=2
          gpu_3 gres/gpu=3
          gpu_4 gres/gpu=4
          gpu_5 gres/gpu=5
          gpu_6 gres/gpu=6
}}}

Users with administrative privileges can move a user between accounts such as `deadline` or `deadconf`.<<BR>>

List associations of testuser:
{{{#!highlight bash numbers=disable
sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
{{{
        Account User Partition MaxJobs QOS Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
       deadline testuser gpu_3 gpu_3
}}}
Move testuser from deadline to staff:
{{{#!highlight bash numbers=disable
/home/sladmcvl/slurm/change_account_of_user.sh testuser deadline staff
}}}
List associations of testuser again:
{{{#!highlight bash numbers=disable
sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
{{{
        Account User Partition MaxJobs QOS Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
          staff testuser gpu_2 gpu_2
}}}

Accounts with administrative privileges can be shown with:
{{{#!highlight bash numbers=disable
sacctmgr show user format=user%15,defaultaccount%15,admin%15
}}}

== Last words ==
Have fun using SLURM for your jobs!

= Content for the final page =
Here starts the content which will eventually evolve into the final wiki page. The information won't be available all at once; it is an ongoing process.

== Nodes ==
The following table summarizes node-specific information:
||'''Server'''||'''CPU'''||'''Frequency'''||'''Cores'''||'''Memory'''||'''/scratch SSD'''||'''GPUs'''||'''Operating System'''||
||bender[01-06]||Intel Xeon E5-2670 v2||2.50 GHz||40||125 GB||-||-||Debian 9||
||bender[39-52]||Intel Xeon X5650||2.67 GHz||24||94 GB||-||-||Debian 9||
||bender[53-70]||Intel Xeon E5-2665 0||2.40 GHz||32||125 GB||-||-||Debian 9||
||biwirender03||Intel Xeon E5-2650 v2||2.60 GHz||32||125 GB||-||6 Tesla K40c (11 GB)||Debian 9||
||biwirender04||Intel Xeon E5-2637 v2||3.50 GHz||8||125 GB||✓||5 Tesla K40c (11 GB)||Debian 9||
||biwirender0[5,6]||Intel Xeon E5-2637 v2||3.50 GHz||8||251 GB||✓||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender0[7-9]||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender10||Intel Xeon E5-2650 v4||2.20 GHz||24||251 GB||✓||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender11||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender12||Intel Xeon E5-2640 v3||2.60 GHz||32||251 GB||✓||6 GeForce RTX 2080 Ti (10 GB)||Debian 9||
||biwirender13||Intel Xeon E5-2680 v3||2.50 GHz||24||503 GB||✓||4 TITAN Xp (12 GB)<<BR>>3 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender14||Intel Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||3 TITAN Xp (12 GB)<<BR>>4 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender15||Intel Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||3 TITAN Xp (12 GB)<<BR>>3 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender17||Intel Xeon E5-2620 v4||2.10 GHz||32||503 GB||✓||8 GeForce GTX 1080 Ti (11 GB)||Debian 9||
||biwirender20||Intel Xeon E5-2620 v4||2.10 GHz||32||377 GB||✓||8 GeForce GTX 1080 Ti (11 GB)||Debian 9||
||bmicgpu01||Intel Xeon E5-2680 v3||2.50 GHz||24||251 GB||✓||6 TITAN X (Pascal) (12 GB)||Debian 9||
||bmicgpu02||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 TITAN Xp (12 GB)||Debian 9||
||bmicgpu0[3-5]||Intel Xeon E5-2630 v4||2.20 GHz||20||251 GB||✓||6 TITAN Xp (12 GB)||Debian 9||

Detailled information about all nodes can be seen by issuing the command
Detailed information about all nodes can be seen by issuing the command
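{{{#!highlight bash numbers=disable
scontrol show nodes
}}}
An overview of the utilization of individual nodes' resources can be shown with the following command (adapt the field lengths for gres and gresused to your needs):
{{{#!highlight bash numbers=disable
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
}}}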
Line 226: Line 56:
== Automatic resource assignment ==
As the [[#Hardware|hardware outfit]] of nodes is heterogeneous, resource allocation is controlled automatically to maximise utilization and simplify job submission for most use cases:
 * Jobs not explicitly specifying resource allocations receive defaults
 * Upper limits on resource allocations are imposed on all jobs
These defaults and limits differ by [[#Partitions|partition]]. For details, see the ''job submit script'' `/home/sladmcvl/slurm/job_submit.lua` which is interpreted for each job by the Slurm scheduler to set defaults and enforce limits.

/!\ Don't use the '''--mem''' and/or '''--cpus-per-task''' options for GPU jobs outside of the defaults and limits. This can create conditions which are impossible to satisfy by the Slurm scheduler and obfuscate the reason why a job cannot be scheduled. Such conditions will result in the following error message:
{{{
srun: error: Unable to allocate resources: Requested node configuration is not available
}}}
To properly warn about impossible conditions, the ''job submit script'' would have to duplicate information about partitions and node outfits, which would lead to maintenance overhead and introduce more sources of error. This would defeat its purpose of simplifying job submission.
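In practice this means a GPU job is easiest to submit by requesting GPUs only and letting the defaults fill in memory and CPUs; a minimal sketch (`job_script.sh` is a placeholder for your own batch script):
{{{#!highlight bash numbers=disable
# Request 2 GPUs only; the job submit script applies the default
# memory and CPU allocation of the selected partition
sbatch --gres=gpu:2 job_script.sh
}}}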
Line 227: Line 69:
Partitions including their limits are shown in the following table:
Partitions group nodes with a similar [[#Hardware|hardware outfit]]. Their defaults and limits are shown in the following table:
Line 229: Line 71:
||cpu.medium.normal||-||-||-||-||2 d||
||gpu.low.normal||20 GB||25 GB||3||3||2 d||
||gpu.medium.normal||40 GB||50 GB||3||5||2 d||
||gpu.medium.long||40 GB||50 GB||3||5||5 d||
||gpu.high.normal||70 GB||70 GB||4||4||2 d||
||gpu.high.long||70 GB||70 GB||4||4||5 d||
||gpu.debug||20 GB||25 GB||3||3||8 h||
||gpu.mon||-||-||-||-||15 m||
||cpu.medium.normal||-    ||- ||- ||-||2 d||
||gpu.low.normal   ||30 GB||30 GB||3 ||3||2 d||
||gpu.medium.normal||30 GB||50 GB||3 ||5||2 d||
||gpu.medium.long  ||30 GB||50 GB||3 ||5||5 d||
||gpu.high.normal  ||50 GB||70 GB||4 ||4||2 d||
||gpu.high.long ||50 GB||70 GB||4 ||4||5 d||
||gpu.debug        ||30 GB||70 GB||3 ||5||8 h||
||gpu.bmic ||64 GB||- ||16||-||2 d||
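The configured limits of a partition (e.g. its time limit) can also be queried directly from Slurm; note that the per-GPU memory and CPU defaults above are applied by the ''job submit script'', not by the partition definition itself:
{{{#!highlight bash numbers=disable
scontrol show partition gpu.medium.normal
}}}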
Line 240: Line 82:
This partition is reserved to run interactive jobs for debugging purposes. If a job doesn't run a process on an allocated GPU after 20 minutes, it will be killed.
This partition is reserved to run interactive jobs for debugging purposes.
A typical use case is to start an interactive shell for a short time in this partition to debug a job script. In the following example, the time limit is set to 10 minutes:
{{{#!highlight bash numbers=disable
srun --time 10 --gres=gpu:1 --partition=gpu.debug --pty bash -i
}}}
Line 242: Line 88:
=== gpu.mon ===
This partition is reserved to run interactive jobs for monitoring other running jobs. No GPUs can be allocated; only 1 core per job and 1 job per person is allowed.
=== gpu.bmic ===
Access to this partition is restricted to members of the [[https://bmic.ee.ethz.ch/the-group.html|BMIC group]].
Line 245: Line 91:
/!\ This might be replaced by an explanation of how to reserve a job step in a job for monitoring and attach to such a job step.
==== Access to gpu.bmic ====
Access to the partition `gpu.bmic` is available for members of the Slurm account `bmic`. You can check your Slurm account membership with the following command:
{{{#!highlight bash numbers=disable
sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
}}}
If you're a member of the Slurm account `staff` and have also been added to `bmic`, your default account is the latter and all your jobs will by default be sent to partition `gpu.bmic`.<<BR>>
If you want to have your jobs sent to other partitions, you have to specify the account `staff` as in the following example:
{{{#!highlight bash numbers=disable
sbatch --account=staff job_script.sh
}}}
If you already have a PENDING job in the wrong partition, you can move it to partition `<partition name>` by issuing the following command:
{{{#!highlight bash numbers=disable
scontrol update jobid=<job id> partition=<partition name> account=staff
}}}
If you want to send your jobs to nodes in other partitions, make sure to always specify `--account=staff`. Job quotas are calculated per account; by setting the account to `staff` you make sure not to use up your quota from account `bmic` on nodes in partitions outside of `gpu.bmic`.
Line 248: Line 108:
The *.long partitions are only accessible to members of the account "long". Membership is temporary and granted on demand by <contact to be filled in>.
The *.long partitions are only accessible to members of the account "long". Membership is temporary and granted on demand by [[https://wiki.vision.ee.ethz.ch/itet/gpuclusterslurm|CVL administration]].
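Once added to the account, a longer run time can be requested at submission; a minimal sketch, assuming that membership in account "long" together with a time limit above 2 days routes the job into the matching *.long partition (`job_script.sh` is a placeholder):
{{{#!highlight bash numbers=disable
# Assumption: account "long" plus an extended time limit selects a *.long partition
sbatch --account=long --time=5-00:00:00 --gres=gpu:1 job_script.sh
}}}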
Line 252: Line 112:

=== GPU availability ===
Information about the GPU nodes and current availability of the installed GPUs is updated every 5 minutes to the file `/home/sladmcvl/smon.txt`. Here are some convenient aliases to display the file with highlighting of either free GPUs or those running the current user's jobs:
{{{#!highlight bash numbers=disable
alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"
}}}
To continuously monitor its content, the following aliases can be used:
{{{#!highlight bash numbers=disable
alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""
}}}

=== GPU quota ===
A slurm user is a member of a so-called slurm account. Accounts are associated with so-called quality of service (QOS) rules. The number of GPUs an account member's jobs can use at the same time, i.e. the GPU quota, is defined in a QOS with the same name as the account. These QOS can be shown with the following command:
{{{#!highlight bash numbers=disable
sacctmgr show qos format=name%8,maxtrespu%12
}}}
Line 253: Line 132:
Show a sorted list of users and a summary of the GPUs used by their jobs:
Show a sorted list of users, their account and QOS, and a summary of the GPUs used by their running jobs:
Line 255: Line 134:
scontrol -a show jobs \
    |grep -E '(UserId|TRES)=' \
    |paste - - \
    |grep 'gres/gpu' \
    |sed -E 's:^\s+UserId=([^\(]+).*gres/gpu=([0-9]+)$:\1;\2:' \
(
    echo 'User;Account;QOS;GPUs' \
    && echo '----;-------;---;----' \
    && scontrol -a show jobs \
    |grep -E '(UserId|Account|JobState|TRES)=' \
    |paste - - - - \
    |grep -E 'JobState=RUNNING.*gres/gpu' \
    |sed -E 's:^\s+UserId=([^\(]+).*Account=(\S+)\s+QOS=(\S+).*gres/gpu=([0-9]+)$:\1_\2_\3;\4:' \
Line 261: Line 143:
    |sort
    |sort \
    |tr '_' ';'
) \
|column -s ';' -t
