Differences between revisions 1 and 17 (spanning 16 versions)
Revision 1 as of 2020-05-19 11:22:25
Size: 8882
Editor: stroth
Comment:
Revision 17 as of 2020-08-03 12:03:01
Size: 12706
Editor: stroth
Comment:
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
<<TableOfContents(3)>>
Line 3: Line 5:
 * If something is unclear or seems incomplete, check the [[Services/SLURM|Computing wiki article]] for more information  * Our official documentation for slurm is the [[Services/SLURM|Computing wiki article]], you need to read this as well.
Line 5: Line 7:
 * The goal of this pilot is to provide documentation that enables SGE users to migrate their jobs to Slurm. This also means the section [[#Accounts_and_limits|Accounts and limits]] is only informative at the moment.

The alpha version of a [[https://people.ee.ethz.ch/~stroth/gpumon_pilot/index.html|GPUMon alternative]] is available. Please don't send feedback yet, use it as it is.
 
Line 16: Line 21:
{{{ {{{#!highlight bash numbers=disable
Line 19: Line 24:
If you're interested, feel free to have a look at the configuration; feedback is welcome!
Line 23: Line 29:
{{{ {{{#!highlight bash numbers=disable
Line 26: Line 32:
{{{
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
cpu.medium.normal up 2-00:00:00 1 idle biwirender01
gpu.low.normal up 2-00:00:00 1 idle biwirender03
gpu.medium.normal up 2-00:00:00 1 idle biwirender03
gpu.medium.long up 5-00:00:00 1 idle biwirender03
gpu.high.normal up 2-00:00:00 1 idle biwirender03
gpu.high.long up 5-00:00:00 1 idle biwirender03
gpu.debug up 6:00:00 1 idle biwirender03
gpu.mon up 6:00:00 1 idle biwirender03
}}}

Only the interactive partitions `gpu.debug` and `gpu.mon` can and should be specified (see below). The scheduler decides in which partition to put a job based on the requested resources.
||'''PARTITION'''||'''AVAIL'''||'''TIMELIMIT'''||'''NODES'''||'''STATE'''||'''NODELIST'''||
||cpu.medium.normal||up||2-00:00:00||38||idle||bender[01-06,39-70]||
||gpu.low.normal||up||2-00:00:00||1||idle||biwirender[03,04]||
||gpu.medium.normal||up||2-00:00:00||15||idle||biwirender[05-12,17,20],bmicgpu[01-05]||
||gpu.medium.long||up||5-00:00:00||15||idle||biwirender[05-12,17,20],bmicgpu[01-05]||
||gpu.high.normal||up||2-00:00:00||3||idle||biwirender[13-15]||
||gpu.high.long||up||5-00:00:00||3||idle||biwirender[13-15]||
||gpu.debug||up||8:00:00||1||idle||biwirender[03,04]||

Only the interactive partition `gpu.debug` should be specified (see below). The scheduler decides in which partition to put a job based on the requested resources.
Line 42: Line 45:
{{{
srun --time 10 --gres=gpu:1 --pty bash -i
{{{#!highlight bash numbers=disable
srun --time 10 --partition=gpu.debug --gres=gpu:1 --pty bash -i
Line 46: Line 49:

To monitor a running job, an interactive session can be started by explicitly selecting the monitoring partition. The node where the batch job is running needs to be specified as well:
{{{
srun --time 10 --partition=gpu.mon --nodelist=biwirender03 --pty bash -i
}}}
 * Allocating GPU resources is prohibited for such interactive jobs
 * It may be possible to attach an interactive session to an already running job, which will make the above obsolete. This is still under investigation at the moment.
Line 58: Line 54:
{{{ {{{#!highlight bash numbers=disable
Line 78: Line 74:
 * A job requesting more GPUs than allowed by the QOS of the user's account (see [[#Accounts_and_limits|Accounts and limits]]) will stay in the "PENDING" state.
Line 90: Line 87:
{{{ {{{#!highlight bash numbers=disable
Line 103: Line 100:
For example with:
{{{#!highlight bash numbers=disable
sbatch --mem=50G primes_2GPU.sh
}}}
the above `squeue` command shows:
{{{
JOBID PARTITION NODELIST(REASON) USER TRES_ALLOC TIME
136 gpu.high.normal biwirender03 testuser cpu=6,mem=100G,node=1,billing=6,gres/gpu=2 0:28
}}}
Line 106: Line 112:
{{{ {{{#!highlight bash numbers=disable
Line 109: Line 115:
{{{
   Account Descr Org
---------- -------------------- --------------------
  deadconf deadline_conference biwi
  deadline deadline biwi
       isg isg isg
      root default root account root
     staff staff biwi
   student student biwi
}}}
||'''Account'''||'''Descr'''||'''Org'''||
||deadconf||deadline_conference||biwi||
||deadline||deadline||biwi||
||long||longer time limit||biwi||
||root||default root account||root||
||staff||staff||biwi||
||student||student||biwi||
Line 122: Line 125:
{{{ {{{#!highlight bash numbers=disable
Line 125: Line 128:
{{{
        Account User Partition MaxJobs QOS Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
           root normal
           root root normal
       deadconf gpu_4 gpu_4
       deadline gpu_3 gpu_3
       deadline ........ gpu_3 gpu_3
            isg normal
            isg sladmall normal
          staff gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
        student gpu_1 gpu_1
}}}
||'''Account'''||'''User'''||'''Partition'''||'''!MaxJobs'''||'''QOS'''||'''Def QOS'''||
||deadconf||........|| || ||gpu_4||gpu_4||
||deadline||........|| || ||gpu_5||gpu_5||
||long||........|| || ||gpu_2||gpu_2||
||staff||........|| || ||gpu_7||gpu_7||
||student||........|| || ||gpu_3||gpu_3||
Line 145: Line 136:
{{{ {{{#!highlight bash numbers=disable
Line 148: Line 139:
{{{
          Name MaxTRESPU
--------------- ------------------------------
         normal
          gpu_1 gres/gpu=1
          gpu_2 gres/gpu=2
          gpu_3 gres/gpu=3
          gpu_4 gres/gpu=4
          gpu_5 gres/gpu=5
          gpu_6 gres/gpu=6
}}}
||'''Name'''||'''MaxTRESPU'''||
||normal||||
||gpu_1||gres/gpu=1||
||gpu_2||gres/gpu=2||
||gpu_3||gres/gpu=3||
||gpu_4||gres/gpu=4||
||gpu_5||gres/gpu=5||
||gpu_6||gres/gpu=6||
Line 163: Line 151:
{{{ {{{#!highlight bash numbers=disable
Line 172: Line 160:
{{{ {{{#!highlight bash numbers=disable
Line 176: Line 164:
{{{ {{{#!highlight bash numbers=disable
Line 185: Line 173:
Users with administrative privileges can be shown with:
{{{#!highlight bash numbers=disable
sacctmgr show user format=user%15,defaultaccount%15,admin%15
}}}
Line 186: Line 179:
Have fun with using SLURM for your Jobs!



----
describe the interactive queue (no GPU, etc.)
use long options with --

leave out long for now, add it later if there is time

then move it into the wiki
Have fun using SLURM for your jobs!

= Content for the final page =
Here starts the content which will eventually evolve into the final wiki page. The information won't be available all at once; it is an ongoing process.

== Nodes ==
The following table summarizes node-specific information:
||'''Server'''||'''CPU'''||'''Frequency'''||'''Cores'''||'''Memory'''||'''/scratch SSD'''||'''GPUs'''||'''Operating System'''||
||bender[01-06]||Intel Xeon E5-2670 v2||2.50 GHz||40||125 GB||-||-||Debian 9||
||bender[39-52]||Intel Xeon X5650||2.67 GHz||24||94 GB||-||-||Debian 9||
||bender[53-70]||Intel Xeon E5-2665 0||2.40 GHz||32||125 GB||-||-||Debian 9||
||biwirender03||Intel Xeon E5-2650 v2||2.60 GHz||32||125 GB||-||6 Tesla K40c (11 GB)||Debian 9||
||biwirender04||Intel Xeon E5-2637 v2||3.50 GHz||8||125 GB||✓||5 Tesla K40c (11 GB)||Debian 9||
||biwirender[05,06]||Intel Xeon E5-2637 v2||3.50 GHz||8||251 GB||✓||5 !GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender[07-09]||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 !GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender10||Intel Xeon E5-2650 v4||2.20 GHz||24||251 GB||✓||5 !GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender11||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 !GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender12||Intel Xeon E5-2640 v3||2.60 GHz||32||251 GB||✓||6 !GeForce RTX 2080 Ti (10 GB)||Debian 9||
||biwirender13||Intel Xeon E5-2680 v3||2.50 GHz||24||503 GB||✓||4 TITAN Xp (12 GB)<<BR>>3 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender14||Intel Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||3 TITAN Xp (12 GB)<<BR>>4 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender15||Intel Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||3 TITAN Xp (12 GB)<<BR>>3 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender17||Intel Xeon E5-2620 v4||2.10 GHz||32||503 GB||✓||8 !GeForce GTX 1080 Ti (11 GB)||Debian 9||
||biwirender20||Intel Xeon E5-2620 v4||2.10 GHz||32||377 GB||✓||8 !GeForce GTX 1080 Ti (11 GB)||Debian 9||
||bmicgpu01||Intel Xeon E5-2680 v3||2.50 GHz||24||251 GB||✓||6 TITAN X (Pascal) (12 GB)||Debian 9||
||bmicgpu02||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 TITAN Xp (12 GB)||Debian 9||
||bmicgpu[03-05]||Intel Xeon E5-2630 v4||2.20 GHz||20||251 GB||✓||6 TITAN Xp (12 GB)||Debian 9||

Detailed information about all nodes can be seen by issuing the command
{{{#!highlight bash numbers=disable
scontrol show nodes
}}}

An overview of the utilization of each node's resources can be shown with:
{{{#!highlight bash numbers=disable
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
}}}
(Adapt the field length for gres and gresused to your needs)

== Partitions ==
Partitions including their limits are shown in the following table:
||'''Partition'''||'''DefMPG'''||'''MaxMPG'''||'''DefCPG'''||'''MaxCPG'''||'''Time limit'''||
||cpu.medium.normal||-||-||-||-||2 d||
||gpu.low.normal||20 GB||25 GB||3||3||2 d||
||gpu.medium.normal||40 GB||50 GB||3||5||2 d||
||gpu.medium.long||40 GB||50 GB||3||5||5 d||
||gpu.high.normal||70 GB||70 GB||4||4||2 d||
||gpu.high.long||70 GB||70 GB||4||4||5 d||
||gpu.debug||20 GB||25 GB||3||3||8 h||

'''Def''': Default, '''Max''': Maximum, '''MPG''': Memory Per GPU, '''CPG''': CPUs Per GPU
=== gpu.debug ===
This partition is reserved for running interactive jobs for debugging purposes. If a job doesn't run a process on an allocated GPU within 20 minutes, it will be killed.

=== *.long ===
The *.long partitions are only accessible to members of the account "long". Membership is temporary and granted on demand by <contact to be filled in>.

== Display specific information ==
The following is a collection of command sequences to quickly extract specific summaries.
=== GPUs per user ===
Show a sorted list of users, their account and QOS, and a summary of the GPUs used by their running jobs:
{{{#!highlight bash numbers=disable
(
    echo 'User;Account;QOS;GPUs' \
    && echo '----;-------;---;----' \
    && scontrol -a show jobs \
    |grep -E '(UserId|Account|JobState|TRES)=' \
    |paste - - - - \
    |grep -E 'JobState=RUNNING.*gres/gpu' \
    |sed -E 's:^\s+UserId=([^\(]+).*Account=(\S+)\s+QOS=(\S+).*gres/gpu=([0-9]+)$:\1_\2_\3;\4:' \
    |awk -F ';' -v OFS=';' '{a[$1]+=$2}END{for(i in a) print i,a[i]}' \
    |sort \
    |tr '_' ';'
) \
|column -s ';' -t
}}}

Slurm Pilot project for Biwi

  • The following information is an abbreviated How-To with specific information for the pilot cluster
  • Our official documentation for slurm is the Computing wiki article; you need to read this as well.

  • Please bear in mind that the final documentation is meant to be the above article. It will be extended with your feedback that applies to all slurm users, plus a specific section or an additional page concerning only Biwi.
  • The goal of this pilot is to provide documentation that enables SGE users to migrate their jobs to Slurm. This also means the section Accounts and limits is only informative at the moment.

The alpha version of a GPUMon alternative is available. Please don't send feedback yet, use it as it is.

Pilot-specific information

Involved machines are

  • biwirender01 for CPU computing

  • biwirender03 for GPU-computing

All available GPU partitions are overlaid on biwirender03. They will be available on different nodes in the final cluster.

/!\ long partitions are not yet implemented in the pilot!

Initialising slurm

All slurm commands read the cluster configuration from the environment variable SLURM_CONF, so it needs to be set:

export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
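
If you want this set automatically in every new shell, one option (an assumption, not an official recommendation for the pilot) is to append the export to your shell startup file:

# one-time addition to ~/.bashrc so every new shell picks it up
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc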

If you're interested, feel free to have a look at the configuration; feedback is welcome!

Available partitions

The equivalent of SGE's queues is called partitions in slurm.
sinfo shows all available partitions:

sinfo

||'''PARTITION'''||'''AVAIL'''||'''TIMELIMIT'''||'''NODES'''||'''STATE'''||'''NODELIST'''||
||cpu.medium.normal||up||2-00:00:00||38||idle||bender[01-06,39-70]||
||gpu.low.normal||up||2-00:00:00||1||idle||biwirender[03,04]||
||gpu.medium.normal||up||2-00:00:00||15||idle||biwirender[05-12,17,20],bmicgpu[01-05]||
||gpu.medium.long||up||5-00:00:00||15||idle||biwirender[05-12,17,20],bmicgpu[01-05]||
||gpu.high.normal||up||2-00:00:00||3||idle||biwirender[13-15]||
||gpu.high.long||up||5-00:00:00||3||idle||biwirender[13-15]||
||gpu.debug||up||8:00:00||1||idle||biwirender[03,04]||

Only the interactive partition gpu.debug should be specified (see below). The scheduler decides in which partition to put a job based on the requested resources.

Interactive jobs

For testing purposes, an interactive session with 1 GPU can be started:

srun --time 10 --partition=gpu.debug --gres=gpu:1 --pty bash -i
  • Such jobs are placed in gpu.debug by the scheduler
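
Once the interactive shell is running on the compute node, you can verify the allocated GPU before starting your own program (a minimal check; it assumes nvidia-smi is installed on the node, and relies on CUDA_VISIBLE_DEVICES being set by slurm as described below):

# run inside the srun-started shell on the compute node
echo $CUDA_VISIBLE_DEVICES   # index of the GPU slurm allocated to this job
nvidia-smi                   # confirm the GPU is visible and currently idle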

Allocating resources

GPUs

For a job to have access to a GPU, GPU resources need to be requested with the option --gres=gpu:<n>
Here's the sample job submission script primes_1GPU.sh requesting 1 GPU:

#!/bin/sh
#
#SBATCH  --mail-type=ALL
#SBATCH  --gres=gpu:1
#SBATCH  --output=log/%j.out
export LOGFILE=`pwd`/log/$SLURM_JOB_ID.out
# env | grep SLURM_ #Uncomment this line to show environment variables set by slurm for a job
#
# binary to execute
codebin/primes $1
echo ""
echo "Job statistics: "
sstat -j $SLURM_JOB_ID --format=JobID,AveVMSize%15,MaxRSS%15,AveCPU%15
echo ""
exit 0;
  • Make sure the directory where the logfiles are stored exists before submitting a job.
  • Please keep the environment variable LOGFILE; it is used in the scheduler's epilog script to append information to your logfile after your job has ended (and therefore no longer has access to $SLURM_JOB_ID).

  • slurm also sets CUDA_VISIBLE_DEVICES. See the section GPU jobs in the main slurm article.

  • A job requesting more GPUs than allowed by the QOS of the user's account (see Accounts and limits) will stay in the "PENDING" state.
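
The memory examples below submit a script called primes_2GPU.sh, which is not shown on this page. A plausible sketch, assuming it only differs from primes_1GPU.sh in the number of requested GPUs:

#!/bin/sh
#
#SBATCH  --mail-type=ALL
#SBATCH  --gres=gpu:2
#SBATCH  --output=log/%j.out
export LOGFILE=`pwd`/log/$SLURM_JOB_ID.out
#
# binary to execute (same test program as above, hypothetical two-GPU variant)
codebin/primes $1
echo ""
echo "Job statistics: "
sstat -j $SLURM_JOB_ID --format=JobID,AveVMSize%15,MaxRSS%15,AveCPU%15
echo ""
exit 0;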

Memory

If you omit the --mem option, the default of 30G of memory per GPU and 3 CPUs per GPU will be allocated to your job, which will make the scheduler choose gpu.medium.normal:

sbatch primes_1GPU.sh
sbatch: GRES requested     : gpu:1
sbatch: GPUs requested     : 1
sbatch: Requested Memory   : ---
sbatch: CPUs requested     : ---
sbatch: Your job is a gpu job.
Submitted batch job 133

squeue --Format jobarrayid:8,partition:20,reasonlist:20,username:10,tres-alloc:45,timeused:10

JOBID   PARTITION           NODELIST(REASON)    USER      TRES_ALLOC                                   TIME
133     gpu.medium.normal   biwirender03        testuser  cpu=3,mem=30G,node=1,billing=3,gres/gpu=1    0:02

An explicit --mem option selects the partition as follows:

||'''--mem'''||'''Partition'''||
||< 30G||gpu.low.normal||
||30G - 50G||gpu.medium.normal||
||>50G - 70G||gpu.high.normal||
||>70G||not allowed||

For example with:

sbatch --mem=50G primes_2GPU.sh

the above squeue command shows:

JOBID   PARTITION           NODELIST(REASON)    USER      TRES_ALLOC                                   TIME
136     gpu.high.normal     biwirender03        testuser  cpu=6,mem=100G,node=1,billing=6,gres/gpu=2   0:28

Accounts and limits

In slurm lingo an account is equivalent to a user group. The following accounts are configured for users to be added to:

sacctmgr show account

||'''Account'''||'''Descr'''||'''Org'''||
||deadconf||deadline_conference||biwi||
||deadline||deadline||biwi||
||long||longer time limit||biwi||
||root||default root account||root||
||staff||staff||biwi||
||student||student||biwi||

  • Accounts isg and root are not accessible to Biwi
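
To check which of these accounts your own user has been added to, the association listing shown further down can be restricted to a single user (a sketch; $USER expands to your login name):

sacctmgr show assoc where user=$USER format=account%15,user%15,qos%15,defaultqos%15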

GPU limits are stored in so-called QOS; each account is associated with the QOS we want to apply to it. Limits apply to all users added to an account.

sacctmgr show assoc format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15

||'''Account'''||'''User'''||'''Partition'''||'''!MaxJobs'''||'''QOS'''||'''Def QOS'''||
||deadconf||........|| || ||gpu_4||gpu_4||
||deadline||........|| || ||gpu_5||gpu_5||
||long||........|| || ||gpu_2||gpu_2||
||staff||........|| || ||gpu_7||gpu_7||
||student||........|| || ||gpu_3||gpu_3||

The gpu_x QOS only contain a limit on the number of GPUs per user:

sacctmgr show qos format=name%15,maxtrespu%30

||'''Name'''||'''MaxTRESPU'''||
||normal|| ||
||gpu_1||gres/gpu=1||
||gpu_2||gres/gpu=2||
||gpu_3||gres/gpu=3||
||gpu_4||gres/gpu=4||
||gpu_5||gres/gpu=5||
||gpu_6||gres/gpu=6||
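
To look up the limit behind a single QOS directly, the listing can be filtered by name (a sketch, using gpu_2 as an example):

sacctmgr show qos where name=gpu_2 format=name%15,maxtrespu%30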

Users with administrative privileges can move a user between accounts, e.g. deadline or deadconf.

List associations of testuser:

sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15

        Account            User       Partition  MaxJobs             QOS         Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
       deadline        testuser                                    gpu_3           gpu_3

Move testuser from deadline to staff:

/home/sladmcvl/slurm/change_account_of_user.sh testuser deadline staff

List associations of testuser again:

sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15

        Account            User       Partition  MaxJobs             QOS         Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
          staff        testuser                                    gpu_2           gpu_2

Users with administrative privileges can be shown with:

sacctmgr show user format=user%15,defaultaccount%15,admin%15

Last words

Have fun using SLURM for your jobs!

Content for the final page

Here starts the content which will eventually evolve into the final wiki page. The information won't be available all at once; it is an ongoing process.

Nodes

The following table summarizes node-specific information:

||'''Server'''||'''CPU'''||'''Frequency'''||'''Cores'''||'''Memory'''||'''/scratch SSD'''||'''GPUs'''||'''Operating System'''||
||bender[01-06]||Intel Xeon E5-2670 v2||2.50 GHz||40||125 GB||-||-||Debian 9||
||bender[39-52]||Intel Xeon X5650||2.67 GHz||24||94 GB||-||-||Debian 9||
||bender[53-70]||Intel Xeon E5-2665 0||2.40 GHz||32||125 GB||-||-||Debian 9||
||biwirender03||Intel Xeon E5-2650 v2||2.60 GHz||32||125 GB||-||6 Tesla K40c (11 GB)||Debian 9||
||biwirender04||Intel Xeon E5-2637 v2||3.50 GHz||8||125 GB||✓||5 Tesla K40c (11 GB)||Debian 9||
||biwirender[05,06]||Intel Xeon E5-2637 v2||3.50 GHz||8||251 GB||✓||5 !GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender[07-09]||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 !GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender10||Intel Xeon E5-2650 v4||2.20 GHz||24||251 GB||✓||5 !GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender11||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 !GeForce GTX TITAN X (12 GB)||Debian 9||
||biwirender12||Intel Xeon E5-2640 v3||2.60 GHz||32||251 GB||✓||6 !GeForce RTX 2080 Ti (10 GB)||Debian 9||
||biwirender13||Intel Xeon E5-2680 v3||2.50 GHz||24||503 GB||✓||4 TITAN Xp (12 GB)<<BR>>3 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender14||Intel Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||3 TITAN Xp (12 GB)<<BR>>4 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender15||Intel Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||3 TITAN Xp (12 GB)<<BR>>3 TITAN Xp COLLECTORS EDITION (12 GB)||Debian 9||
||biwirender17||Intel Xeon E5-2620 v4||2.10 GHz||32||503 GB||✓||8 !GeForce GTX 1080 Ti (11 GB)||Debian 9||
||biwirender20||Intel Xeon E5-2620 v4||2.10 GHz||32||377 GB||✓||8 !GeForce GTX 1080 Ti (11 GB)||Debian 9||
||bmicgpu01||Intel Xeon E5-2680 v3||2.50 GHz||24||251 GB||✓||6 TITAN X (Pascal) (12 GB)||Debian 9||
||bmicgpu02||Intel Xeon E5-2640 v3||2.60 GHz||16||251 GB||✓||5 TITAN Xp (12 GB)||Debian 9||
||bmicgpu[03-05]||Intel Xeon E5-2630 v4||2.20 GHz||20||251 GB||✓||6 TITAN Xp (12 GB)||Debian 9||

Detailed information about all nodes can be seen by issuing the command

scontrol show nodes
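
To inspect a single node instead of the whole cluster, pass its name to scontrol (biwirender03 is just an example):

scontrol show node biwirender03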

An overview of the utilization of each node's resources can be shown with:

sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10

(Adapt the field length for gres and gresused to your needs)
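
The same overview can be restricted to individual nodes (a sketch; any node list accepted by sinfo works):

sinfo --nodes=biwirender03 --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10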

Partitions

Partitions including their limits are shown in the following table:

||'''Partition'''||'''DefMPG'''||'''MaxMPG'''||'''DefCPG'''||'''MaxCPG'''||'''Time limit'''||
||cpu.medium.normal||-||-||-||-||2 d||
||gpu.low.normal||20 GB||25 GB||3||3||2 d||
||gpu.medium.normal||40 GB||50 GB||3||5||2 d||
||gpu.medium.long||40 GB||50 GB||3||5||5 d||
||gpu.high.normal||70 GB||70 GB||4||4||2 d||
||gpu.high.long||70 GB||70 GB||4||4||5 d||
||gpu.debug||20 GB||25 GB||3||3||8 h||

Def: Default, Max: Maximum, MPG: Memory Per GPU, CPG: CPUs Per GPU
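
Partition definitions can also be read directly from the scheduler (a sketch for a single partition; the field names in the output differ from the column names used above):

scontrol show partition gpu.medium.normal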

gpu.debug

This partition is reserved for running interactive jobs for debugging purposes. If a job doesn't run a process on an allocated GPU within 20 minutes, it will be killed.

*.long

The *.long partitions are only accessible to members of the account "long". Membership is temporary and granted on demand by <contact to be filled in>.
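
As an illustration (an assumption, not verified on the pilot): once a user is a member of the long account, requesting a time limit above two days should make the scheduler route the job to the matching *.long partition, e.g.:

sbatch --time=4-00:00:00 --mem=50G primes_2GPU.sh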

Display specific information

The following is a collection of command sequences to quickly extract specific summaries.

GPUs per user

Show a sorted list of users, their account and QOS, and a summary of the GPUs used by their running jobs:

(
    echo 'User;Account;QOS;GPUs' \
    && echo '----;-------;---;----' \
    && scontrol -a show jobs \
    |grep -E '(UserId|Account|JobState|TRES)=' \
    |paste - - - - \
    |grep -E 'JobState=RUNNING.*gres/gpu' \
    |sed -E 's:^\s+UserId=([^\(]+).*Account=(\S+)\s+QOS=(\S+).*gres/gpu=([0-9]+)$:\1_\2_\3;\4:' \
    |awk -F ';' -v OFS=';' '{a[$1]+=$2}END{for(i in a) print i,a[i]}' \
    |sort \
    |tr '_' ';'
) \
|column -s ';' -t
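
For a quick, unaggregated view of the same information, squeue's own formatting can be used instead (a sketch; it prints one line per running job rather than a per-user sum):

squeue --states=RUNNING --Format=username:12,account:12,qos:10,tres-alloc:45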
