= Slurm Pilot project for Biwi =
 * The following information is an abbreviated how-to with specific information for the pilot cluster.
 * Our official documentation for Slurm is the [[Services/SLURM|Computing wiki article]]; you need to read it as well.
 * Please bear in mind that the final documentation is meant to be the article above. It will be extended with your feedback where it applies to all Slurm users, plus a specific section or an additional page concerning only Biwi.
 * The goal of this pilot is to provide documentation that enables SGE users to migrate their jobs to Slurm. This also means the section [[#Accounts_and_limits|Accounts and limits]] is for information only at the moment.
The alpha version of a [[https://people.ee.ethz.ch/~stroth/gpumon_pilot/index.html|GPUMon alternative]] is available. Please don't send feedback yet; use it as it is.
 
== Pilot-specific information ==
Involved machines are
 * `biwirender01` for '''CPU computing'''
 * `biwirender03` for '''GPU computing'''
All available GPU partitions are overlaid on `biwirender03`. They will be available on different nodes in the final cluster.
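If you want to check which partitions the two pilot nodes currently serve, a quick query could look like this (a sketch, assuming `SLURM_CONF` has already been set as described below):
{{{#!highlight bash numbers=disable
# List the pilot nodes together with the partitions they are part of
sinfo --Node --nodes=biwirender01,biwirender03
}}}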
/!\ `long` partitions are not yet implemented in the pilot!

== Initialising slurm ==
All slurm commands read the cluster configuration from the environment variable `SLURM_CONF`, so it needs to be set:
{{{#!highlight bash numbers=disable
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
}}}
If you're interested, feel free to have a look at the configuration; feedback is welcome!

== Available partitions ==
Slurm's equivalent of SGE's queues is the ''partition''.<<BR>>
`sinfo` shows all available partitions:
{{{#!highlight bash numbers=disable
sinfo
}}}
||'''PARTITION'''||'''AVAIL'''||'''TIMELIMIT'''||'''NODES'''||'''STATE'''||'''NODELIST'''||
||cpu.medium.normal||up||2-00:00:00||38||idle||bender[01-06,39-70]||
||gpu.low.normal||up||2-00:00:00||1||idle||biwirender[03,04]||
||gpu.medium.normal||up||2-00:00:00||15||idle||biwirender[05-12,17,20],bmicgpu[01-05]||
||gpu.medium.long||up||5-00:00:00||15||idle||biwirender[05-12,17,20],bmicgpu[01-05]||
||gpu.high.normal||up||2-00:00:00||3||idle||biwirender[13-15]||
||gpu.high.long||up||5-00:00:00||3||idle||biwirender[13-15]||
||gpu.debug||up||8:00:00||1||idle||biwirender[03,04]||

Only the interactive partition `gpu.debug` should be specified explicitly (see below). For all other jobs, the scheduler decides which partition to place a job in based on the resources the job requests.
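If you want to inspect a partition's limits yourself, its full definition can be queried read-only at any time (shown here for the interactive partition):
{{{#!highlight bash numbers=disable
# Display the complete configuration of a single partition
scontrol show partition gpu.debug
}}}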

=== Interactive jobs ===
For testing purposes, a job with an interactive session and 1 GPU can be started:
{{{#!highlight bash numbers=disable
srun --time 10 --partition=gpu.debug --gres=gpu:1 --pty bash -i
}}}
 * Such jobs are placed in `gpu.debug` by the scheduler
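Once the interactive shell is running, a minimal sanity check of the allocated GPU could look like this (a sketch, assuming the NVIDIA tools are installed on the node):
{{{#!highlight bash numbers=disable
# Inside the interactive session: show the GPU index assigned by slurm and its current state
echo $CUDA_VISIBLE_DEVICES
nvidia-smi
}}}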

== Allocating resources ==
=== GPUs ===
For a job to have access to a GPU, GPU resources need to be requested with the option `--gres=gpu:<n>`<<BR>>
Here's the sample job submission script `primes_1GPU.sh` requesting 1 GPU:
{{{#!highlight bash numbers=disable
#!/bin/sh
#
#SBATCH --mail-type=ALL
#SBATCH --gres=gpu:1
#SBATCH --output=log/%j.out
export LOGFILE=`pwd`/log/$SLURM_JOB_ID.out
# env | grep SLURM_ #Uncomment this line to show environment variables set by slurm for a job
#
# binary to execute
codebin/primes $1
echo ""
echo "Job statistics: "
sstat -j $SLURM_JOB_ID --format=JobID,AveVMSize%15,MaxRSS%15,AveCPU%15
echo ""
exit 0;
}}}
 * Make sure the directory in which to store logfiles exists before submitting a job (see the example below).
 * Please keep the environment variable LOGFILE; it is used in the scheduler's epilog script to append information to your logfile after your job has ended (at which point the epilog no longer has access to `$SLURM_JOB_ID`).
 * Slurm also sets CUDA_VISIBLE_DEVICES. See the section [[Services/SLURM#GPU_jobs|GPU jobs]] in the main slurm article.
 * A job requesting more GPUs than allowed by the QOS of the user's account (see [[#Accounts_and_limits|Accounts and limits]]) will stay in the "PENDING" state.
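Putting the notes above together, a minimal submission sequence for the sample script could look like this (the trailing argument is an arbitrary example value passed through to `codebin/primes` as `$1`):
{{{#!highlight bash numbers=disable
mkdir -p log                   # the logfile directory must exist before the job is submitted
sbatch primes_1GPU.sh 100000   # submit the sample script with one example argument
}}}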

=== Memory ===
If you omit the `--mem` option, the default of 30G of memory per GPU and 3 CPUs per GPU will be allocated to your job, which makes the scheduler choose `gpu.medium.normal`:
{{{
sbatch primes_1GPU.sh
sbatch: GRES requested : gpu:1
sbatch: GPUs requested : 1
sbatch: Requested Memory : ---
sbatch: CPUs requested : ---
sbatch: Your job is a gpu job.
Submitted batch job 133
}}}
{{{#!highlight bash numbers=disable
squeue --Format jobarrayid:8,partition:20,reasonlist:20,username:10,tres-alloc:45,timeused:10
}}}
{{{
JOBID PARTITION NODELIST(REASON) USER TRES_ALLOC TIME
133 gpu.medium.normal biwirender03 testuser cpu=3,mem=30G,node=1,billing=3,gres/gpu=1 0:02
}}}
An explicit `--mem` option selects the partition as follows:
||'''--mem'''||'''Partition'''||
||< 30G||gpu.low.normal||
||30G - 50G||gpu.medium.normal||
||>50G - 70G||gpu.high.normal||
||>70G||not allowed||
For example with:
{{{#!highlight bash numbers=disable
sbatch --mem=50G primes_2GPU.sh
}}}
the above `squeue` command shows:
{{{
JOBID PARTITION NODELIST(REASON) USER TRES_ALLOC TIME
136 gpu.high.normal biwirender03 testuser cpu=6,mem=100G,node=1,billing=6,gres/gpu=2 0:28
}}}
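Conversely, a small memory request should end up in `gpu.low.normal` according to the table above; a minimal sketch:
{{{#!highlight bash numbers=disable
sbatch --mem=20G primes_1GPU.sh   # below 30G, so the scheduler should choose gpu.low.normal
}}}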
== Accounts and limits ==
In slurm lingo an account is equivalent to a user group. The following accounts are configured for users to be added to:
{{{#!highlight bash numbers=disable
sacctmgr show account
}}}
||'''Account'''||'''Descr'''||'''Org'''||
||deadconf||deadline_conference||biwi||
||deadline||deadline||biwi||
||long||longer time limit||biwi||
||root||default root account||root||
||staff||staff||biwi||
||student||student||biwi||
 * Accounts `isg` and `root` are not accessible to Biwi

GPU limits are stored in so-called QOS; each account is associated with the QOS we want to apply to it. Limits apply to all users added to an account.
{{{#!highlight bash numbers=disable
sacctmgr show assoc format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
||'''Account'''||'''User'''||'''Partition'''||'''!MaxJobs'''||'''QOS'''||'''Def QOS'''||
||deadconf||........|| || ||gpu_4||gpu_4||
||deadline||........|| || ||gpu_5||gpu_5||
||long||........|| || ||gpu_2||gpu_2||
||staff||........|| || ||gpu_7||gpu_7||
||student||........|| || ||gpu_3||gpu_3||

The `gpu_x` QOS only contain a limit for the number of GPUs per user:
{{{#!highlight bash numbers=disable
sacctmgr show qos format=name%15,maxtrespu%30
}}}
||'''Name'''||'''MaxTRESPU'''||
||normal||||
||gpu_1||gres/gpu=1||
||gpu_2||gres/gpu=2||
||gpu_3||gres/gpu=3||
||gpu_4||gres/gpu=4||
||gpu_5||gres/gpu=5||
||gpu_6||gres/gpu=6||

Users with administrative privileges can move a user between the accounts `deadline` and `deadconf`.<<BR>>

List associations of testuser:
{{{#!highlight bash numbers=disable
sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
{{{
        Account User Partition MaxJobs QOS Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
       deadline testuser gpu_3 gpu_3
}}}
Move testuser from deadline to staff:
{{{#!highlight bash numbers=disable
/home/sladmcvl/slurm/change_account_of_user.sh testuser deadline staff
}}}
List associations of testuser again:
{{{#!highlight bash numbers=disable
sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
{{{
        Account User Partition MaxJobs QOS Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
          staff testuser gpu_2 gpu_2
}}}

Accounts with administrative privileges can be shown with:
{{{#!highlight bash numbers=disable
sacctmgr show user format=user%15,defaultaccount%15,admin%15
}}}

== Last words ==
Have fun using SLURM for your jobs!

= Content for the final page =
Here starts the content which will eventually evolve into the final wiki page. The information won't be available all at once; it is an ongoing process.


= CVL Slurm cluster =
The [[https://vision.ee.ethz.ch/|Computer Vision Lab]] (CVL) owns a Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki, specific to the usage of the CVL cluster. Furthermore, CVL maintains its own wiki article to help you get started and to list frequently asked questions. Consult these two articles if the information you're looking for isn't available here:
 * [[Services/SLURM|Computing wiki main Slurm article]]
 * [[https://wiki.vision.ee.ethz.ch/itet/gpuclusterslurm|CVL wiki slurm article]]

== Setting environment ==
The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:
{{{#!highlight bash numbers=disable
export SLURM_CONF=/home/sladmitet/slurm/slurm.conf
}}}
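If you don't want to set the variable by hand in every new shell, one option (a suggestion, not an official requirement) is to persist it in your shell startup file:
{{{#!highlight bash numbers=disable
# Optional: make the setting permanent for future logins
echo 'export SLURM_CONF=/home/sladmitet/slurm/slurm.conf' >> ~/.bashrc
}}}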

== Hardware ==
The following table summarizes node-specific information:
||'''Server'''||'''CPU'''||'''Frequency'''||'''Cores (P/L)'''||'''Memory'''||'''/scratch SSD'''||'''GPUs'''||'''Operating System'''||
||bender[01]||Intel Xeon E5-2670 v2||2.50 GHz||20/20||125 GB||-||-||Debian 9||
||bender[02]||Intel Xeon E5-2670 v2||2.50 GHz||20/0||125 GB||-||-||Debian 9||
||bender[03-06]||Intel Xeon E5-2670 v2||2.50 GHz||20/20||125 GB||-||-||Debian 9||
||bender[39-52]||Intel Xeon X5650||2.67 GHz||24/24||94 GB||-||-||Debian 9||
||bender[53-70]||Intel Xeon E5-2665 0||2.40 GHz||32/32||125 GB||-||-||Debian 9||
||bmiccomp01||Intel Xeon E5-2697 v4||2.30 GHz||36/0||251 GB||-||-||Debian 9||
||biwirender03||Intel Xeon E5-2650 v2||2.60 GHz||16/16||125 GB||-||6 Tesla K40c (11 GB)||Debian 9||
||biwirender04||Intel Xeon E5-2637 v2||3.50 GHz||8/0||125 GB|| ||5 Tesla K40c (11 GB)||Debian 9||
||biwirender[05,06]||Intel Xeon E5-2637 v2||3.50 GHz||8/0||251 GB|| ||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender[07,09]||Intel Xeon E5-2640 v3||2.60 GHz||16/0||251 GB|| ||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender[08]||Intel Xeon E5-2640 v3||2.60 GHz||16/16||251 GB|| ||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender10||Intel Xeon E5-2650 v4||2.20 GHz||24/0||251 GB|| ||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender11||Intel Xeon E5-2640 v3||2.60 GHz||16/0||251 GB|| ||5 GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender12||Intel Xeon E5-2640 v3||2.60 GHz||16/16||251 GB|| ||6 GeForce RTX 2080 Ti (10 GB)||Debian 9||
||biwirender13||Intel Xeon E5-2680 v3||2.50 GHz||24/0||503 GB|| ||7 TITAN Xp (12 GB)||Debian 9||
||biwirender14||Intel Xeon E5-2680 v4||2.40 GHz||28/0||503 GB|| ||7 TITAN Xp (12 GB)||Debian 9||
||biwirender15||Intel Xeon E5-2680 v4||2.40 GHz||28/0||503 GB|| ||6 TITAN Xp (12 GB)||Debian 9||
||biwirender17||Intel Xeon E5-2620 v4||2.10 GHz||16/16||503 GB|| ||8 GeForce GTX 1080 Ti (11 GB)||Debian 9||
||biwirender20||Intel Xeon E5-2620 v4||2.10 GHz||16/16||377 GB|| ||8 GeForce GTX 1080 Ti (11 GB)||Debian 9||
||bmicgpu01||Intel Xeon E5-2680 v3||2.50 GHz||24/0||251 GB|| ||6 TITAN X (12 GB)||Debian 9||
||bmicgpu02||Intel Xeon E5-2640 v3||2.60 GHz||16/0||251 GB|| ||5 TITAN Xp (12 GB)||Debian 9||
||bmicgpu[03]||Intel Xeon E5-2630 v4||2.20 GHz||20/0||251 GB|| ||6 TITAN Xp (12 GB)||Debian 9||
||bmicgpu[04,05]||Intel Xeon E5-2630 v4||2.20 GHz||20/0||251 GB|| ||5 TITAN Xp (12 GB)||Debian 9||

Detailed information about all nodes can be shown by issuing the command:
{{{#!highlight bash numbers=disable
scontrol show nodes
}}}
An overview of the utilization of each node's resources can be shown with:
{{{#!highlight bash numbers=disable
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
}}}
(Adapt the field lengths for gres and gresused to your needs.)

== Partitions ==
Partitions including their limits are shown in the following table:
||'''Partition'''||'''DefMPG'''||'''MaxMPG'''||'''DefCPG'''||'''MaxCPG'''||'''Time limit'''||
||cpu.medium.normal||-||-||-||-||2 d||
||gpu.low.normal||20 GB||25 GB||3||3||2 d||
||gpu.medium.normal||40 GB||50 GB||3||5||2 d||
||gpu.medium.long||40 GB||50 GB||3||5||5 d||
||gpu.high.normal||70 GB||70 GB||4||4||2 d||
||gpu.high.long||70 GB||70 GB||4||4||5 d||
||gpu.debug||20 GB||25 GB||3||3||8 h||
Def: Default, Max: Maximum, MPG: Memory Per GPU, CPG: CPUs Per GPU

=== gpu.debug ===
This partition is reserved for running interactive jobs for debugging purposes. If a job hasn't started a process on an allocated GPU within 20 minutes, it will be killed.
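An interactive debugging session in this partition can be requested in the same way as shown in the pilot section above, for example:
{{{#!highlight bash numbers=disable
srun --time 60 --partition=gpu.debug --gres=gpu:1 --pty bash -i
}}}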

=== *.long ===
The *.long partitions are only accessible to members of the account "long". Membership is temporary and granted on demand by <contact to be filled in>.
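Once added to the `long` account, jobs can be submitted under that account with a time limit above two days; a hedged sketch (the exact partition is still chosen by the scheduler based on the requested resources):
{{{#!highlight bash numbers=disable
sbatch --account=long --time=4-00:00:00 primes_1GPU.sh
}}}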

== Display specific information ==
The following is a collection of command sequences to quickly extract specific summaries.

=== GPUs per user ===
Show a sorted list of users, their account and QOS, and a summary of the GPUs used by their running jobs:

{{{#!highlight bash numbers=disable
(
    echo 'User;Account;QOS;GPUs' \
    && echo '----;-------;---;----' \
    && scontrol -a show jobs \
    |grep -E '(UserId|Account|JobState|TRES)=' \
    |paste - - - - \
    |grep -E 'JobState=RUNNING.*gres/gpu' \
    |sed -E 's:^\s+UserId=([^\(]+).*Account=(\S+)\s+QOS=(\S+).*gres/gpu=([0-9]+)$:\1_\2_\3;\4:' \
    |awk -F ';' -v OFS=';' '{a[$1]+=$2}END{for(i in a) print i,a[i]}' \
    |sort \
    |tr '_' ';'
) \
|column -s ';' -t
}}}
