CVL Slurm cluster

The Computer Vision Lab (CVL) owns a Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki, specific to the usage of the CVL cluster. Furthermore, CVL maintains its own wiki article to help you get started and to list frequently asked questions. Consult these two articles if the information you're looking for isn't available here.

Setting environment

The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:

export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
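
To make the setting persistent across logins and to check that the client tools now read the CVL configuration, something along these lines can be used (this assumes bash; adapt the startup file for other shells):

# Persist the variable for future shells (assumes bash)
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc
source ~/.bashrc

# The reported cluster name should now be that of the CVL cluster
scontrol show config | grep -i clustername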

Hardware

The following table summarizes node-specific information:

||Server ||CPU ||Frequency ||Physical cores ||Logical processors ||Memory ||/scratch SSD ||/scratch Size ||GPUs ||Operating System||
||bender[01] ||Intel Xeon E5-2670 v2||2.50 GHz ||20 ||40 ||125 GB||-||3.7 TB||-||Debian 10||
||bender[02] ||Intel Xeon E5-2670 v2||2.50 GHz ||20 ||20 ||125 GB||-||3.7 TB||-||Debian 10||
||bender[03-06] ||Intel Xeon E5-2670 v2||2.50 GHz ||20 ||40 ||125 GB||-||3.7 TB||-||Debian 10||
||bender[39-52] ||Intel Xeon X5650 ||2.67 GHz ||24 ||48 || 94 GB||-||3.7 TB||-||Debian 10||
||bender[53-58] ||Intel Xeon E5-2665 0 ||2.40 GHz ||32 ||64 ||125 GB||-||897 GB||-||Debian 10||
||bender[59-70] ||Intel Xeon E5-2665 0 ||2.40 GHz ||32 ||64 ||125 GB||-||3.7 TB||-||Debian 10||
||bmiccomp01 ||Intel Xeon E5-2697 v4||2.30 GHz ||36 ||36 ||251 GB||-||186 GB||-||Debian 10||
||biwirender03 ||Intel Xeon E5-2650 v2||2.60 GHz ||16 ||32 ||125 GB||-||820 GB||6 Tesla K40c (11 GB)||Debian 10||
||biwirender04 ||Intel Xeon E5-2637 v2||3.50 GHz || 8 || 8 ||125 GB||✓||6.1 TB||5 Tesla K40c (11 GB)||Debian 10||
||biwirender[05,06]||Intel Xeon E5-2637 v2||3.50 GHz || 8 ||16 ||251 GB||✓||6.1 TB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender[07,09]||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||✓||701 GB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender[08] ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||32 ||251 GB||✓||701 GB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender10 ||Intel Xeon E5-2650 v4||2.20 GHz ||24 ||24 ||251 GB||✓||701 GB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender11 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||✓||701 GB||5 !GeForce GTX TITAN X (12 GB)||Debian 10||
||biwirender12 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||32 ||251 GB||✓||701 GB||6 !GeForce RTX 2080 Ti (10 GB)||Debian 10||
||biwirender13 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||503 GB||✓||701 GB||6 TITAN Xp (12 GB)||Debian 10||
||biwirender14 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||✓||701 GB||7 TITAN Xp (12 GB)||Debian 10||
||biwirender15 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||✓||1.1 TB||7 TITAN Xp (12 GB)||Debian 10||
||biwirender17 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||503 GB||✓||403 GB||8 !GeForce GTX 1080 Ti (11 GB)||Debian 9||
||biwirender20 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||376 GB||✓||403 GB||8 !GeForce GTX 1080 Ti (11 GB)||Debian 9||
||bmicgpu01 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||251 GB||✓||1.1 TB||6 TITAN X (12 GB)||Debian 9||
||bmicgpu02 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||✓||692 GB||5 TITAN Xp (12 GB)||Debian 9||
||bmicgpu03 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||✓||1.1 TB||6 TITAN Xp (12 GB)||Debian 9||
||bmicgpu[04,05] ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||✓||1.1 TB||5 TITAN Xp (12 GB)||Debian 9||

Detailed information about all nodes can be displayed with:

scontrol show nodes
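
To inspect a single node rather than the whole list, scontrol also accepts a node name (biwirender13 is only an example):

scontrol show node biwirender13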

An overview of the utilization of each node's resources can be shown with:

sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10

(Adapt the field lengths for gres and gresused to your needs.)
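
If you use this overview regularly, it may be convenient to wrap the command in an alias (the name sinfo_util is only a suggestion):

alias sinfo_util="sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10"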

Automatic resource assignment

As the hardware configuration of the nodes is heterogeneous, resource allocation is controlled automatically to maximise utilization and simplify job submission for most use cases:

  • Jobs that do not explicitly specify resource allocations receive defaults
  • Upper limits on resource allocations are imposed on all jobs

These defaults and limits differ by partition. For details, see the job submit script /home/sladmcvl/slurm/job_submit.lua, which the Slurm scheduler interprets for each job to set defaults and enforce limits.
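
In practice this means a GPU job can usually just request the number of GPUs it needs and rely on the defaults for memory and CPUs, for example (train.sh is a placeholder for your batch script):

sbatch --gres=gpu:1 train.sh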

/!\ Don't use the --mem and/or --cpus-per-task options for GPU jobs outside of the defaults and limits. Doing so can create conditions that the Slurm scheduler cannot satisfy and obscure the reason why a job cannot be scheduled. Such conditions result in the following error message:

srun: error: Unable to allocate resources: Requested node configuration is not available

To properly warn about impossible conditions, the job submit script would have to duplicate information about partitions and node configurations, which would add maintenance overhead and introduce additional sources of error. This would defeat its purpose of simplifying job submission.

Partitions

Partitions group together nodes with a similar hardware configuration. Their defaults and limits are shown in the following table:

||Partition ||DefMPG||MaxMPG||DefCPG||MaxCPG||Time limit||
||cpu.medium.normal||- ||- ||- ||- ||2 d||
||gpu.low.normal ||30 GB ||30 GB ||3 ||3 ||2 d||
||gpu.medium.normal||30 GB ||50 GB ||3 ||5 ||2 d||
||gpu.medium.long ||30 GB ||50 GB ||3 ||5 ||5 d||
||gpu.high.normal ||50 GB ||70 GB ||4 ||4 ||2 d||
||gpu.high.long ||50 GB ||70 GB ||4 ||4 ||5 d||
||gpu.debug ||30 GB ||70 GB ||3 ||5 ||8 h||

Def: Default, Max: Maximum, MPG: Memory Per GPU, CPG: CPUs Per GPU
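
For example, a job that needs more than the defaults can explicitly request a partition with higher limits, provided you have access to it (job.sh is a placeholder):

sbatch --partition=gpu.high.normal --gres=gpu:1 job.sh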

gpu.debug

This partition is reserved for running interactive jobs for debugging purposes. If a job does not run a process on an allocated GPU within 20 minutes, it will be killed.
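
An interactive debugging session on this partition can be started, for example, with:

srun --partition=gpu.debug --gres=gpu:1 --pty bash -i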

*.long

The *.long partitions are only accessible to members of the account "long". Membership is temporary and granted on demand by <contact to be filled in>.

Display specific information

The following is a collection of command sequences to quickly extract specific summaries.

GPU usage

Information about the GPU nodes and the usage of the installed GPUs is written to the file /home/sladmcvl2/smon.txt every 5 minutes. Here are some convenient aliases to display the file, highlighting either free GPUs or those running the current user's jobs:

alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl2/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl2/smon.txt"

For continuously monitoring its content, the following aliases can be used:

alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl2/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl2/smon.txt\""

GPU quota

A Slurm user is a member of a so-called Slurm account. Accounts are associated with so-called quality of service (QOS) rules. The number of GPUs an account member's jobs can use at the same time, i.e. the GPU quota, is defined in a QOS with the same name as the account. These QOS can be shown with the following command:

sacctmgr show qos format=name%8,maxtrespu%12
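
To see which account and QOS your own user is associated with, a query along these lines can be used (the format widths are only a suggestion):

sacctmgr show assoc where user=$USER format=account%12,user%12,qos%20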

GPUs per user

Show a sorted list of users, their account and QOS, and a summary of the GPUs used by their running jobs:

(
    echo 'User;Account;QOS;GPUs' \
    && echo '----;-------;---;----' \
    && scontrol -a show jobs \
    |grep -E '(UserId|Account|JobState|TRES)=' \
    |paste - - - - \
    |grep -E 'JobState=RUNNING.*gres/gpu' \
    |sed -E 's:^\s+UserId=([^\(]+).*Account=(\S+)\s+QOS=(\S+).*gres/gpu=([0-9]+)$:\1_\2_\3;\4:' \
    |awk -F ';' -v OFS=';' '{a[$1]+=$2}END{for(i in a) print i,a[i]}' \
    |sort \
    |tr '_' ';'
) \
|column -s ';' -t
