CVL Slurm cluster

The Computer Vision Lab (CVL) owns a Slurm cluster with restricted access. This article is an addendum to the main Slurm article in this wiki and is specific to the use of the CVL cluster. If the information you are looking for is covered neither here nor in the main article, consult the following sources:

Access

Access to the CVL Slurm cluster is granted by Kristine Haberer.

Setting the environment

The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:

export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
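
To avoid setting this variable manually in every new shell, you can append the export to your shell startup file. A minimal sketch, assuming you use bash and ~/.bashrc:

# Make the CVL Slurm configuration the default for all future shells
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc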

Hardware

The following table summarizes node-specific information:

Server | CPU | Frequency | Physical cores | Logical processors | Memory | /scratch size | GPUs | GPU architecture | Operating system
biwirender12 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 32 | 661 GB | 701 GB | 4 RTX 2080 Ti (10 GB) | Turing | Debian 11
biwirender13 | Intel Xeon E5-2680 v3 | 2.50 GHz | 24 | 24 | 503 GB | 701 GB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender14 | Intel Xeon E5-2680 v4 | 2.40 GHz | 28 | 28 | 503 GB | 701 GB | 7 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender15 | Intel Xeon E5-2680 v4 | 2.40 GHz | 28 | 28 | 503 GB | 1.1 TB | 7 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender17 | Intel Xeon E5-2620 v4 | 2.10 GHz | 16 | 32 | 503 GB | 403 GB | 6 GTX 1080 Ti (11 GB) | Pascal | Debian 11
biwirender20 | Intel Xeon E5-2620 v4 | 2.10 GHz | 16 | 32 | 376 GB | 403 GB | 6 GTX 1080 Ti (11 GB) | Pascal | Debian 11
bmicgpu01 | Intel Xeon E5-2680 v3 | 2.50 GHz | 24 | 24 | 251 GB | 1.1 TB | 6 TITAN X (12 GB) | Pascal | Debian 11
bmicgpu02 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 16 | 251 GB | 692 GB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu03 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 40 | 251 GB | 1.1 TB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu04 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 20 | 251 GB | 1.1 TB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu05 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 20 | 251 GB | 1.1 TB | 4 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu06 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 503 GB | 1.8 TB | 4 A100 (40 GB), 1 A100 (80 GB), 3 A6000 (48 GB) | Ampere | Debian 11
bmicgpu07 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu08 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu09 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu10 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus01 | AMD EPYC 7H12 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus02 | AMD EPYC 7H12 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus03 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus04 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11

Detailed information about all nodes can be shown by issuing the command:

scontrol show nodes
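
To limit the output to a single node, append its hostname, for example one of the servers from the table above:

scontrol show node biwirender12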

An overview of the utilization of each node's resources can be shown with:

sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10

(Adapt the field lengths for gres and gresused to your needs.)
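
If you use this overview regularly, it can be convenient to wrap the long command in an alias, analogous to the aliases shown further below. A sketch, assuming bash (the alias name sinfo_usage is just a suggestion):

alias sinfo_usage="sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10"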

Automatic/default resource assignment

Limits

Need for longer run time

If you need to run longer jobs, coordinate with your administrative contact to request that you be added to the account long via ISG D-ITET support.
After you have been added to long, specify this account as in the following example to run longer jobs:

sbatch --account=long job_script.sh
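
Alternatively, the account can be requested inside the job script itself via an #SBATCH directive. A minimal sketch of such a script (the workload line is a placeholder, adapt it to your job):

#!/bin/bash
#SBATCH --account=long          # equivalent to passing --account=long to sbatch
#SBATCH --output=job_%j.out     # write the job output to a file named after the job ID
./my_experiment.sh              # placeholder for your actual workload

The script is then submitted as usual with sbatch job_script.sh, without repeating the --account option on the command line.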

Display GPU availability

Information about the GPU nodes and the current availability of the installed GPUs is written to the file /home/sladmcvl/smon.txt every 5 minutes. Here are some convenient aliases to display the file, highlighting either free GPUs or those running the current user's jobs:

alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"

To continuously monitor its content, the following aliases can be used:

alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""
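
The interval of 300 seconds matches the 5-minute update cycle of the file, so the display refreshes in step with the file updates.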

Access local scratch of diskless clients

The local /scratch disks of managed diskless clients are available on remote hosts at /scratch_net/<hostname> as automounts (mounted on demand). Typically, you set up a personal directory named after your username ($USER) on the local /scratch of the managed client you work on.
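
As a sketch of the typical workflow (with <hostname> standing for the name of your managed client):

# On your managed client: create your personal scratch directory
mkdir -p /scratch/$USER
# On a cluster node: the same directory is automounted on first access
ls /scratch_net/<hostname>/$USER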

BMIC specific information

The BMIC group of CVL owns dedicated CPU and GPU resources with restricted access. These resources are grouped into the partitions cpu.bmic, gpu.bmic and gpu.bmic.long.
Access to these partitions is available only to members of the Slurm account bmic. You can check your Slurm account membership with the following command:

sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
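
If you are a member of bmic, jobs are directed to these partitions with the --partition option, for example (job_script.sh is a placeholder for your own script):

sbatch --partition=gpu.bmic job_script.sh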

Notable differences

With access to the BMIC resources, the following differences from the common defaults and limits apply:
