
= CVL Slurm cluster =

The Computer Vision Lab (CVL) owns a Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki, specific to usage of the CVL cluster. Furthermore, CVL maintains its own wiki article to help you get started and to answer frequently asked questions. Consult these two articles if the information you're looking for isn't available here.

== Setting environment ==

The environment variable `SLURM_CONF` needs to be adjusted to point to the configuration of the CVL cluster:

{{{
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
}}}
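The variable only applies to the current shell session. A minimal sketch to set it and confirm it took effect (to keep it across logins, also add the export line to your shell's startup file, e.g. `~/.bashrc`):

```shell
# Point the Slurm client tools at the CVL cluster configuration
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf

# Confirm the variable is set in the current shell
echo "$SLURM_CONF"
```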

== Hardware ==

The following table summarizes node-specific information:

||'''Server'''||'''CPU'''||'''Frequency'''||'''Physical cores'''||'''Logical processors'''||'''Memory'''||'''/scratch size'''||'''GPUs'''||'''GPU architecture'''||'''Operating system'''||
||biwirender12||Intel Xeon E5-2640 v3||2.60 GHz||16||32||251 GB||701 GB||4 RTX 2080 Ti (10 GB)||Turing||Debian 11||
||biwirender13||Intel Xeon E5-2680 v3||2.50 GHz||24||24||503 GB||701 GB||5 TITAN Xp (12 GB)||Pascal||Debian 11||
||biwirender14||Intel Xeon E5-2680 v4||2.40 GHz||28||28||503 GB||701 GB||7 TITAN Xp (12 GB)||Pascal||Debian 11||
||biwirender15||Intel Xeon E5-2680 v4||2.40 GHz||28||28||503 GB||1.1 TB||7 TITAN Xp (12 GB)||Pascal||Debian 11||
||biwirender17||Intel Xeon E5-2620 v4||2.10 GHz||16||32||503 GB||403 GB||6 GTX 1080 Ti (11 GB)||Pascal||Debian 11||
||biwirender20||Intel Xeon E5-2620 v4||2.10 GHz||16||32||376 GB||403 GB||6 GTX 1080 Ti (11 GB)||Pascal||Debian 11||
||bmicgpu01||Intel Xeon E5-2680 v3||2.50 GHz||24||24||251 GB||1.1 TB||6 TITAN X (12 GB)||Pascal||Debian 11||
||bmicgpu02||Intel Xeon E5-2640 v3||2.60 GHz||16||16||251 GB||692 GB||5 TITAN Xp (12 GB)||Pascal||Debian 11||
||bmicgpu03||Intel Xeon E5-2630 v4||2.20 GHz||20||40||251 GB||1.1 TB||5 TITAN Xp (12 GB)||Pascal||Debian 11||
||bmicgpu04||Intel Xeon E5-2630 v4||2.20 GHz||20||20||251 GB||1.1 TB||5 TITAN Xp (12 GB)||Pascal||Debian 11||
||bmicgpu05||Intel Xeon E5-2630 v4||2.20 GHz||20||20||251 GB||1.1 TB||4 TITAN Xp (12 GB)||Pascal||Debian 11||
||bmicgpu06||AMD EPYC 7742||3.41 GHz||128||128||503 GB||1.8 TB||4 A100 (40 GB), 1 A100 (80 GB)||Ampere||Debian 11||
||bmicgpu07||AMD EPYC 7763||3.53 GHz||128||128||755 GB||6.9 TB||8 A6000 (48 GB)||Ampere||Debian 11||
||bmicgpu08||AMD EPYC 7763||3.53 GHz||128||128||755 GB||6.9 TB||8 A6000 (48 GB)||Ampere||Debian 11||
||bmicgpu09||AMD EPYC 7763||3.53 GHz||128||128||755 GB||6.9 TB||8 A6000 (48 GB)||Ampere||Debian 11||
||octopus01||AMD EPYC 7H12||3.41 GHz||128||128||755 GB||1.8 TB||8 A6000 (48 GB)||Ampere||Debian 11||
||octopus02||AMD EPYC 7H12||3.41 GHz||128||128||755 GB||1.8 TB||8 A6000 (48 GB)||Ampere||Debian 11||
||octopus03||AMD EPYC 7742||3.41 GHz||128||128||755 GB||1.8 TB||8 A6000 (48 GB)||Ampere||Debian 11||
||octopus04||AMD EPYC 7742||3.41 GHz||128||128||755 GB||1.8 TB||8 A6000 (48 GB)||Ampere||Debian 11||

Detailed information about all nodes can be seen by issuing the command:

{{{
scontrol show nodes
}}}

An overview of the utilization of individual nodes' resources can be shown with:

{{{
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
}}}

(Adapt the field lengths for `gres` and `gresused` to your needs.)

== Automatic/default resource assignment and limits ==

 * Jobs not explicitly requesting GPU resources receive the default of 1 GPU
 * Jobs receive a default of 2 CPUs and 40 GB of memory per assigned or requested GPU
 * Run time for interactive jobs is limited to 2 hours
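These defaults can be overridden by requesting resources explicitly in a batch script. A minimal sketch (the resource values and the echo payload are illustrative, not recommendations):

```shell
#!/bin/bash
#SBATCH --gres=gpu:2         # request 2 GPUs instead of the default of 1
#SBATCH --cpus-per-task=8    # override the default of 2 CPUs per GPU
#SBATCH --mem=100G           # override the default of 40 GB per GPU

# Illustrative payload; replace with your actual workload
echo "Job running on $(hostname)"
```

Submit the script with `sbatch job_script.sh`; the `#SBATCH` lines are read by Slurm at submission time.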

== cpu.bmic, gpu.bmic, gpu.bmic.long ==

Access to these partitions is restricted to members of the BMIC group.

=== Access to cpu.bmic, gpu.bmic, gpu.bmic.long ===

Access to the partitions cpu.bmic, gpu.bmic and gpu.bmic.long is available to members of the Slurm account bmic only. You can check your Slurm account membership with the following command:

{{{
sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
}}}
 * If you're a member of the Slurm account staff and have also been added to bmic, your default account is the latter, and all your jobs will by default be sent to partition gpu.bmic.
 * If you do not ask for GPU resources, your jobs will be sent to partition cpu.bmic.
 * If you want to run longer jobs in partition gpu.bmic.long, coordinate this request with your group and ask ISG D-ITET support to add you to the account gpu.bmic.long.
 * If you want to have your jobs sent to other partitions, you have to specify the account staff (or bmic.long), as in the following example:
 {{{
sbatch --account=staff job_script.sh
}}}
 * If you already have a PENDING job in the wrong partition, you can move it to partition <partition name> by issuing the following command:
 {{{
scontrol update jobid=<job id> partition=<partition name> account=staff
}}}
 * If you want to send your jobs to nodes in other partitions, make sure to always specify `--account=staff`. Job quotas are calculated per account; by setting the account to staff you make sure not to use up your quota from account bmic on nodes in partitions outside of gpu.bmic.

=== Time limit for interactive jobs ===

In gpu.bmic the time limit for interactive jobs is 8 h.

== gpu.medium.long, gpu.high.long ==

The partitions gpu.medium.long and gpu.high.long are only accessible to members of the account "long". Membership is temporary and granted on demand by CVL administration.

== Display specific information ==

The following is a collection of command sequences to quickly extract specific summaries.

=== GPU availability ===

Information about the GPU nodes and the current availability of their installed GPUs is updated every 5 minutes in the file /home/sladmcvl/smon.txt. Here are some convenient aliases to display the file with highlighting of either free GPUs or those running the current user's jobs:

{{{
alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"
}}}

To monitor its content, the following aliases can be used:

{{{
alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""
}}}

Services/SLURM-Biwi (last edited 2025-03-06 08:06:29 by stroth)