<<TableOfContents(3)>>
= CVL Slurm cluster =
The [[https://vision.ee.ethz.ch/|Computer Vision Lab]] (CVL) owns a Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki, specific to the usage of the CVL cluster. Furthermore, CVL maintains its own wiki article to help you get started and to list frequently asked questions. Consult these two articles if the information you're looking for isn't available here:
 * [[Services/SLURM|Computing wiki main Slurm article]]
 * [[https://wiki.vision.ee.ethz.ch/itet/gpuclusterslurm|CVL wiki slurm article]]
== Setting environment ==
The environment variable `SLURM_CONF` needs to be adjusted to point to the configuration of the CVL cluster:
{{{#!highlight bash numbers=disable
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
}}}
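If you want the setting to persist across shells, you can append it to your shell startup file and check that the Slurm commands now read the CVL configuration. This is a minimal sketch assuming a bash login shell; adjust the file name for other shells:
{{{#!highlight bash numbers=disable
# Persist the setting for future shells (assumes bash reads ~/.bashrc)
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc

# Quick check: sinfo should now list the CVL partitions
sinfo --summarize
}}}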
== Hardware ==
The following table summarizes node-specific information:
||'''Server''' ||'''CPU''' ||'''Frequency'''||'''Physical cores'''||'''Logical processors'''||'''Memory'''||'''/scratch SSD'''||'''/scratch Size'''||'''GPUs'''||'''Operating System'''||
||biwirender12 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||32 ||251 GB||✓||701 GB||4 !GeForce RTX 2080 Ti (10 GB)||Debian 11||
||biwirender13 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||503 GB||✓||701 GB||5 TITAN Xp (12 GB)||Debian 11||
||biwirender14 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||✓||701 GB||7 TITAN Xp (12 GB)||Debian 11||
||biwirender15 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||✓||1.1 TB||7 TITAN Xp (12 GB)||Debian 11||
||biwirender17 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||503 GB||✓||403 GB||6 !GeForce GTX 1080 Ti (11 GB)||Debian 11||
||biwirender20 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||376 GB||✓||403 GB||6 !GeForce GTX 1080 Ti (11 GB)||Debian 11||
||bmicgpu01 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||251 GB||✓||1.1 TB||6 TITAN X (12 GB)||Debian 11||
||bmicgpu02 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||✓||692 GB||5 TITAN Xp (12 GB)||Debian 11||
||bmicgpu03 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||40 ||251 GB||✓||1.1 TB||5 TITAN Xp (12 GB)||Debian 11||
||bmicgpu04 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||✓||1.1 TB||5 TITAN Xp (12 GB)||Debian 11||
||bmicgpu05 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||✓||1.1 TB||4 TITAN Xp (12 GB)||Debian 11||
||bmicgpu06 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||503 GB||✓||1.8 TB||4 A100 (40 GB)<<BR>>1 A100 (80 GB)||Debian 11||
||bmicgpu07 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||✓||6.9 TB||8 A6000 (48 GB)||Debian 11||
||bmicgpu08 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||✓||6.9 TB||8 A6000 (48 GB)||Debian 11||
||bmicgpu09 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||✓||6.9 TB||8 A6000 (48 GB)||Debian 11||
Detailed information about all nodes can be seen by issuing the command:
{{{#!highlight bash numbers=disable
scontrol show nodes
}}}
An overview of the utilization of each node's resources can be shown with:
{{{#!highlight bash numbers=disable
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
}}}
(Adapt the field lengths for `gres` and `gresused` to your needs.)
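If you are only interested in a single node, its details can be filtered from `scontrol` instead; a small sketch, with `bmicgpu07` standing in for whichever node from the table above you care about:
{{{#!highlight bash numbers=disable
# Show the GPU (gres) configuration and current allocations of one node
scontrol show node bmicgpu07 | grep -iE 'gres|tres'
}}}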
== Automatic resource assignment ==
As the [[#Hardware|hardware outfit]] of nodes is heterogeneous, resource allocation is controlled automatically to maximise utilization and simplify job submission for most use cases:
 * Jobs not explicitly specifying resource allocations receive defaults
 * Upper limits on resource allocations are imposed on all jobs
 * Run time for interactive jobs is limited to 2 hours
These defaults and limits differ by [[#Partitions|partition]]. For details, see the ''job submit script'' `/home/sladmcvl/slurm/job_submit.lua`, which is interpreted for each job by the Slurm scheduler to set defaults and enforce limits.

/!\ Don't use the '''--mem''' and/or '''--cpus-per-task''' options for GPU jobs outside of the defaults and limits. This can create conditions which are impossible for the Slurm scheduler to satisfy and obfuscate the reason why a job cannot be scheduled. Such conditions will result in the following error message:
{{{
srun: error: Unable to allocate resources: Requested node configuration is not available
}}}
To properly warn about impossible conditions, the job submit script would have to duplicate information about partitions and node outfits, which leads to maintenance overhead and introduces more sources for errors. This would defeat its purpose of simplifying job submission.
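In practice a GPU job therefore only needs to state how many GPUs it wants; memory and CPUs are filled in by the defaults described above. A minimal sketch, where `job_script.sh` is a placeholder for your own batch script:
{{{#!highlight bash numbers=disable
# Batch job: request one GPU and accept the automatic memory/CPU defaults
sbatch --gres=gpu:1 job_script.sh

# Interactive test session with one GPU (subject to the interactive run time limit)
srun --gres=gpu:1 --pty bash -i
}}}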
== Partitions ==
Partitions group nodes with a similar [[#Hardware|hardware outfit]] together. Their defaults and limits are shown in the following table:
||'''Partition''' ||'''DefMPG'''||'''MaxMPG'''||'''DefCPG'''||'''MaxCPG'''||'''Time limit'''||
||gpu.medium.normal||30 GB ||50 GB ||3 ||5 ||2 d||
||gpu.medium.long ||30 GB ||50 GB ||3 ||5 ||5 d||
||gpu.high.normal ||50 GB ||70 GB ||4 ||4 ||2 d||
||gpu.high.long ||50 GB ||70 GB ||4 ||4 ||5 d||
||gpu.bmic ||64 GB ||- ||16 ||- ||2 d||
||gpu.bmic.long ||64 GB ||- ||16 ||- ||2 w||
'''Def''': Default, '''Max''': Maximum, '''MPG''': Memory Per GPU, '''CPG''': CPUs Per GPU
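The basic settings Slurm stores for a partition (such as its time limit) can be inspected directly; note that the per-GPU defaults above are enforced by the ''job submit script'' and may not all appear in this output. A small sketch, taking `gpu.high.normal` as an arbitrary example:
{{{#!highlight bash numbers=disable
# Show the time limit and further settings of one partition
scontrol show partition gpu.high.normal
}}}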
=== cpu.bmic, gpu.bmic, gpu.bmic.long ===
Access to these partitions is restricted to members of the [[https://bmic.ee.ethz.ch/the-group.html|BMIC group]].
==== Access to cpu.bmic, gpu.bmic, gpu.bmic.long ====
Access to the partitions `cpu.bmic`, `gpu.bmic` and `gpu.bmic.long` is available to members of the Slurm account `bmic` only. You can check your Slurm account membership with the following command:
{{{#!highlight bash numbers=disable
sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
}}}
 * If you're a member of the Slurm account `staff` and have also been added to `bmic`, your default account is the latter and all your jobs will by default be sent to partition `gpu.bmic`.
 * If you do not ask for GPU resources, your jobs will be sent to partition `cpu.bmic`.
 * If you want to run longer jobs in partition `gpu.bmic.long`, coordinate this request with your group and ask to be added to the account `gpu.bmic.long` at [[mailto:support@ee.ethz.ch|ISG D-ITET support]].
 * If you want to have your jobs sent to other partitions, you have to specify the account `staff` (or `bmic.long`), as in the following example:
 {{{#!highlight bash numbers=disable
sbatch --account=staff job_script.sh
}}}
 * If you already have a PENDING job in the wrong partition, you can move it to partition `<partition name>` by issuing the following command:
 {{{#!highlight bash numbers=disable
scontrol update jobid=<job id> partition=<partition name> account=staff
}}}
 * If you want to send your jobs to nodes in other partitions, make sure to always specify `--account=staff`. Job quotas are calculated per account; by setting the account to `staff` you make sure not to use up your quota from account `bmic` on nodes in partitions outside of `gpu.bmic`.
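To double-check which partition and account your own jobs were assigned, a `squeue` query in the same formatting style as elsewhere in this article can be used; the field widths are only a suggestion:
{{{#!highlight bash numbers=disable
# List your own jobs with their assigned partition and account
squeue --user ${USER} --Format jobid:10,partition:20,account:12,statecompact:8,reasonlist:22
}}}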
==== Time limit for interactive jobs ====
In `gpu.bmic` the time limit for interactive jobs is 8 h.
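An interactive session within this limit could be started as follows; a sketch assuming you are a member of the `bmic` account and need a single GPU:
{{{#!highlight bash numbers=disable
# Interactive session with one GPU in gpu.bmic, capped at 8 hours
srun --partition=gpu.bmic --gres=gpu:1 --time=08:00:00 --pty bash -i
}}}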
=== gpu.medium.long, gpu.high.long ===
The partitions `gpu.medium.long` and `gpu.high.long` are only accessible to members of the account `long`. Membership is temporary and granted on demand by [[https://wiki.vision.ee.ethz.ch/itet/gpuclusterslurm|CVL administration]].
== Display specific information ==
The following is a collection of command sequences to quickly extract specific summaries.
=== GPU availability ===
Information about the GPU nodes and current availability of the installed GPUs is updated every 5 minutes to the file `/home/sladmcvl/smon.txt`. Here are some convenient aliases to display the file with highlighting of either free GPUs or those running the current user's jobs:
{{{#!highlight bash numbers=disable
alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"
}}}
To monitor its content, the following aliases can be used:
{{{#!highlight bash numbers=disable
alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""
}}}
=== GPU quota ===
A Slurm user is a member of a so-called Slurm account. Accounts are associated with so-called quality of service (QOS) rules. The number of GPUs an account member's jobs can use at the same time, i.e. a quota, is defined in a QOS with the same name as the account. These QOS can be shown with the following command:
{{{#!highlight bash numbers=disable
sacctmgr show qos format=name%8,maxtrespu%12
}}}
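To see which of these QOS, and therefore which GPU quota, applies to your own user, the association can be listed as well; a small sketch using the same `sacctmgr` formatting as above:
{{{#!highlight bash numbers=disable
# Show the account, QOS and default QOS associated with your user
sacctmgr show assoc where user=${USER} format=account%15,user%15,qos%15,defaultqos%15
}}}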
=== GPUs per user ===
Show a sorted list of users, their account and QOS, and a summary of the GPUs used by their running jobs:
{{{#!highlight bash numbers=disable
(
echo 'User;Account;QOS;GPUs' \
&& echo '----;-------;---;----' \
&& scontrol -a show jobs \
|grep -E '(UserId|Account|JobState|TRES)=' \
|paste - - - - \
|grep -E 'JobState=RUNNING.*gres/gpu' \
|sed -E 's:^\s+UserId=([^\(]+).*Account=(\S+)\s+QOS=(\S+).*gres/gpu=([0-9]+)$:\1_\2_\3;\4:' \
|awk -F ';' -v OFS=';' '{a[$1]+=$2}END{for(i in a) print i,a[i]}' \
|sort \
|tr '_' ';'
) \
|column -s ';' -t
}}}