CVL Slurm cluster
The Computer Vision Lab (CVL) owns a Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki, specific to the usage of the CVL cluster. Furthermore, CVL maintains its own wiki article to help you get started and to list frequently asked questions. Consult these two articles if the information you're looking for isn't available here.
Setting environment
The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
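To make this setting persistent for new shells, the export can for example be appended to your ~/.bashrc (assuming bash is your login shell):
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc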
Hardware
The following table summarizes node-specific information:
Server | CPU | Frequency | Physical cores | Logical processors | Memory | /scratch SSD | GPUs | Operating System
bender[01] | Intel Xeon E5-2670 v2 | 2.50 GHz | 20 | 40 | 125 GB | - | - | Debian 9
bender[02] | Intel Xeon E5-2670 v2 | 2.50 GHz | 20 | 20 | 125 GB | - | - | Debian 9
bender[03-06] | Intel Xeon E5-2670 v2 | 2.50 GHz | 20 | 40 | 125 GB | - | - | Debian 9
bender[39-52] | Intel Xeon X5650 | 2.67 GHz | 24 | 48 | 94 GB | - | - | Debian 9
bender[53-70] | Intel Xeon E5-2665 0 | 2.40 GHz | 32 | 64 | 125 GB | - | - | Debian 9
bmiccomp01 | Intel Xeon E5-2697 v4 | 2.30 GHz | 36 | 36 | 251 GB | - | - | Debian 9
biwirender03 | Intel Xeon E5-2650 v2 | 2.60 GHz | 16 | 32 | 125 GB | - | 6 Tesla K40c (11 GB) | Debian 9
biwirender04 | Intel Xeon E5-2637 v2 | 3.50 GHz | 8 | 8 | 125 GB | ✓ | 5 Tesla K40c (11 GB) | Debian 9
biwirender[05,06] | Intel Xeon E5-2637 v2 | 3.50 GHz | 8 | 16 | 251 GB | ✓ | 5 GeForce GTX TITAN X (12 GB) | Debian 9
biwirender[07,09] | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 16 | 251 GB | ✓ | 5 GeForce GTX TITAN X (12 GB) | Debian 9
biwirender[08] | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 32 | 251 GB | ✓ | 5 GeForce GTX TITAN X (12 GB) | Debian 9
biwirender10 | Intel Xeon E5-2650 v4 | 2.20 GHz | 24 | 24 | 251 GB | ✓ | 5 GeForce GTX TITAN X (12 GB) | Debian 9
biwirender11 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 16 | 251 GB | ✓ | 5 GeForce GTX TITAN X (12 GB) | Debian 9
biwirender12 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 32 | 251 GB | ✓ | 6 GeForce RTX 2080 Ti (10 GB) | Debian 9
biwirender13 | Intel Xeon E5-2680 v3 | 2.50 GHz | 24 | 24 | 503 GB | ✓ | 6 TITAN Xp (12 GB) | Debian 9
biwirender14 | Intel Xeon E5-2680 v4 | 2.40 GHz | 28 | 28 | 503 GB | ✓ | 7 TITAN Xp (12 GB) | Debian 9
biwirender15 | Intel Xeon E5-2680 v4 | 2.40 GHz | 28 | 28 | 503 GB | ✓ | 7 TITAN Xp (12 GB) | Debian 9
biwirender17 | Intel Xeon E5-2620 v4 | 2.10 GHz | 16 | 32 | 503 GB | ✓ | 8 GeForce GTX 1080 Ti (11 GB) | Debian 9
biwirender20 | Intel Xeon E5-2620 v4 | 2.10 GHz | 16 | 32 | 376 GB | ✓ | 8 GeForce GTX 1080 Ti (11 GB) | Debian 9
bmicgpu01 | Intel Xeon E5-2680 v3 | 2.50 GHz | 24 | 24 | 251 GB | ✓ | 6 TITAN X (12 GB) | Debian 9
bmicgpu02 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 16 | 251 GB | ✓ | 5 TITAN Xp (12 GB) | Debian 9
bmicgpu03 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 20 | 251 GB | ✓ | 6 TITAN Xp (12 GB) | Debian 9
bmicgpu[04,05] | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 20 | 251 GB | ✓ | 5 TITAN Xp (12 GB) | Debian 9
Detailed information about all nodes can be seen by issuing the command:
scontrol show nodes
An overview of the utilization of individual nodes' resources can be shown with:
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
(Adapt the field lengths for gres and gresused to your needs.)
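If you are only interested in the nodes of a single partition, the same overview can for example be restricted with the --partition option:
sinfo --partition gpu.medium.normal --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62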
Automatic resource assignment
As the hardware outfit of the nodes is heterogeneous, resource allocation is controlled automatically to maximise utilization:
- Jobs not explicitly specifying resource allocations receive defaults
- Upper limits on resource allocations are imposed on all jobs
These defaults and limits differ by partition. For details, see the script /home/sladmcvl/slurm/job_submit.lua which is interpreted for each job by the Slurm controller to set defaults and enforce limits.
Don't use the --mem and/or --cpus-per-task options for GPU jobs to request resources outside of the defaults and limits, or you will create requests the above script cannot handle. If you do, the following error will be shown:
srun: error: Unable to allocate resources: Requested node configuration is not available
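As an illustration, a minimal GPU job script can simply request the number of GPUs and leave memory and CPU allocation to the defaults set by the above script (a sketch; train.py stands for your own program):
#!/bin/bash
# Request one GPU; memory and CPUs per GPU are taken from the partition defaults
#SBATCH --gres=gpu:1
#SBATCH --output=job_%j.out
python train.py
Submit it with sbatch job.sh, where job.sh is the name of the script.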
Partitions
Partitions group nodes with a similar hardware outfit together. Their defaults and limits are shown in the following table:
Partition | DefMPG | MaxMPG | DefCPG | MaxCPG | Time limit
cpu.medium.normal | - | - | - | - | 2 d
gpu.low.normal | 30 GB | 30 GB | 3 | 3 | 2 d
gpu.medium.normal | 30 GB | 50 GB | 3 | 5 | 2 d
gpu.medium.long | 30 GB | 50 GB | 3 | 5 | 5 d
gpu.high.normal | 50 GB | 70 GB | 4 | 4 | 2 d
gpu.high.long | 50 GB | 70 GB | 4 | 4 | 5 d
gpu.debug | 30 GB | 70 GB | 3 | 5 | 8 h
Def: Default, Max: Maximum, MPG: Memory Per GPU, CPG: CPUs Per GPU
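For example, a job needing 2 GPUs for up to one day could be sent to the gpu.medium.normal partition like this (job.sh stands for your own job script):
sbatch --partition=gpu.medium.normal --gres=gpu:2 --time=1-00:00:00 job.sh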
gpu.debug
This partition is reserved for running interactive jobs for debugging purposes. If a job doesn't run a process on an allocated GPU within 20 minutes, it will be killed.
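Such an interactive session can for example be started by requesting one GPU and an interactive shell on a node of this partition:
srun --partition=gpu.debug --gres=gpu:1 --pty bash -i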
*.long
The *.long partitions are only accessible to members of the account "long". Membership is temporary and granted on demand by <contact to be filled in>.
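Assuming your user has been added to the account "long", a job running for up to 5 days can for example be submitted with (the --account option may be omitted if "long" is already your default account):
sbatch --partition=gpu.medium.long --account=long --gres=gpu:1 --time=5-00:00:00 job.sh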
Display specific information
The following is a collection of command sequences to quickly extract specific summaries.
GPU quota
A Slurm user is a member of a so-called Slurm account. Accounts are associated with so-called quality of service (QOS) rules. The number of GPUs an account member's jobs can use at the same time, a.k.a. a quota, is defined in a QOS with the same name as the account. These QOS can be shown with the following command:
sacctmgr show qos format=name%8,maxtrespu%12
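To see which account your own user belongs to, and hence which QOS applies to your jobs, you can for example list your associations:
sacctmgr show associations user=$USER format=account%12,user%12,qos%20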
GPUs per user
Show a sorted list of users, their account and QOS, and a summary of the GPUs used by their running jobs:
(
echo 'User;Account;QOS;GPUs' \
&& echo '----;-------;---;----' \
&& scontrol -a show jobs \
|grep -E '(UserId|Account|JobState|TRES)=' \
|paste - - - - \
|grep -E 'JobState=RUNNING.*gres/gpu' \
|sed -E 's:^\s+UserId=([^\(]+).*Account=(\S+)\s+QOS=(\S+).*gres/gpu=([0-9]+)$:\1_\2_\3;\4:' \
|awk -F ';' -v OFS=';' '{a[$1]+=$2}END{for(i in a) print i,a[i]}' \
|sort \
|tr '_' ';'
) \
|column -s ';' -t