CVL Slurm cluster
The Computer Vision Lab (CVL) owns a Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki, specific to the CVL cluster. Furthermore, CVL maintains its own wiki article to help you get started and to answer frequently asked questions. Consult these two articles if the information you're looking for isn't available here.
Setting environment
The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
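To avoid setting the variable manually in every new shell, the export can be added to your shell startup file. A minimal sketch, assuming bash is your login shell:

{{{#!highlight bash numbers=disable
# Make the CVL cluster configuration the default for future shells
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc
}}}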
Hardware
The following table summarizes node-specific information:
||'''Server''' ||'''CPU''' ||'''Frequency'''||'''Physical cores'''||'''Logical processors'''||'''Memory'''||'''/scratch Size'''||'''GPUs'''||'''GPU architecture'''||'''Operating system'''||
||biwirender12 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||32 ||251 GB||701 GB||4 RTX 2080 Ti (10 GB) ||Turing||Debian 11||
||biwirender13 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||503 GB||701 GB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender14 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||701 GB||7 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender15 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||1.1 TB||7 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender17 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||503 GB||403 GB||6 GTX 1080 Ti (11 GB) ||Pascal||Debian 11||
||biwirender20 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||376 GB||403 GB||6 GTX 1080 Ti (11 GB) ||Pascal||Debian 11||
||bmicgpu01 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||251 GB||1.1 TB||6 TITAN X (12 GB) ||Pascal||Debian 11||
||bmicgpu02 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||692 GB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu03 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||40 ||251 GB||1.1 TB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu04 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||1.1 TB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu05 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||1.1 TB||4 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu06 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||503 GB||1.8 TB||4 A100 (40 GB)<<BR>>1 A100 (80 GB)||Ampere||Debian 11||
||bmicgpu07 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||bmicgpu08 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||bmicgpu09 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus01 ||AMD EPYC 7H12 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus02 ||AMD EPYC 7H12 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus03 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus04 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
Detailed information about all nodes can be seen by issuing the command
scontrol show nodes
An overview of the utilization of individual nodes' resources can be shown with:
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
(Adapt the field length for gres and gresused to your needs)
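Detailed information for a single node can be shown by passing its name to scontrol, for example:

{{{#!highlight bash numbers=disable
# Show the full state of one node (biwirender13 is just an example from the table above)
scontrol show node biwirender13
}}}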
Automatic/default resource assignment and limits
- Jobs not explicitly requesting GPU resources receive the default of 1 GPU
- Jobs receive a default of 2 CPUs and 40 GB of memory per assigned or requested GPU (see the example below for overriding these defaults)
- Run time for interactive jobs is limited to 2 hours
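These defaults can be overridden by requesting resources explicitly at submission time. A minimal sketch using standard Slurm options (`job_script.sh` is a placeholder for your own batch script):

{{{#!highlight bash numbers=disable
# Request 2 GPUs with 4 CPUs and 40 GB of memory per GPU, for at most 12 hours
sbatch --gres=gpu:2 --cpus-per-gpu=4 --mem-per-gpu=40G --time=12:00:00 job_script.sh
}}}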
Partitions
Partitions group together nodes with a similar hardware configuration. Their defaults and limits are shown in the following table:
||'''Partition''' ||'''DefMPG'''||'''MaxMPG'''||'''DefCPG'''||'''MaxCPG'''||'''Time limit'''||
||gpu.medium.normal||30 GB||50 GB||3 ||5||2 d||
||gpu.medium.long ||30 GB||50 GB||3 ||5||5 d||
||gpu.high.normal ||50 GB||70 GB||4 ||4||2 d||
||gpu.high.long ||50 GB||70 GB||4 ||4||5 d||
||gpu.bmic ||64 GB||- ||16||-||2 d||
||gpu.bmic.long ||64 GB||- ||16||-||2 w||
Def: Default, Max: Maximum, MPG: Memory Per GPU, CPG: CPUs Per GPU
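If a job should run in a specific partition, the partition can be requested explicitly at submission time. A minimal sketch using standard Slurm options (`job_script.sh` is a placeholder):

{{{#!highlight bash numbers=disable
# Request the gpu.high.normal partition and stay within its 2-day time limit
sbatch --partition=gpu.high.normal --time=2-00:00:00 job_script.sh
}}}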
cpu.bmic, gpu.bmic, gpu.bmic.long
Access to these partitions is restricted to members of the BMIC group.
Access to cpu.bmic, gpu.bmic, gpu.bmic.long
The partitions cpu.bmic, gpu.bmic and gpu.bmic.long are available to members of the Slurm account bmic only. You can check your Slurm account membership with the following command:
sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
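In the output, the default account is the one your jobs are charged to when no --account option is given; the account column lists all accounts you are associated with.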
If you're a member of the Slurm account staff and have also been added to bmic, your default account is the latter and all your jobs will by default be sent to partition gpu.bmic.
If you do not ask for GPU resources your jobs will be sent to partition cpu.bmic.
If you want to run longer jobs in partition gpu.bmic.long, coordinate this request with your group and request to be added to the account gpu.bmic.long at ISG D-ITET support.
If you want to have your jobs sent to other partitions, you have to specify the account staff (or bmic.long) as in the following example:
sbatch --account=staff job_script.sh
If you already have a PENDING job in the wrong partition, you can move it to partition <partition name> by issuing the following command:
scontrol update jobid=<job id> partition=<partition name> account=staff
If you want to send your jobs to nodes in other partitions, make sure to always specify --account=staff. Job quotas are calculated per account; by setting the account to staff you make sure not to use up your quota from the account bmic on nodes in partitions outside of gpu.bmic, as in the sketch below.
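A minimal sketch combining both options (the partition name is only an example, `job_script.sh` is a placeholder):

{{{#!highlight bash numbers=disable
# Charge the job to the account "staff" and send it to a non-BMIC partition
sbatch --account=staff --partition=gpu.medium.normal job_script.sh
}}}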
Time limit for interactive jobs
In gpu.bmic the time limit for interactive jobs is 8 h.
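A minimal sketch of an interactive session within this limit, using standard Slurm options:

{{{#!highlight bash numbers=disable
# Open an interactive shell with one GPU for at most 8 hours
srun --gres=gpu:1 --time=08:00:00 --pty bash -i
}}}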
gpu.medium.long, gpu.high.long
The partitions gpu.medium.long and gpu.high.long are accessible only to members of the account "long". Membership is temporary and granted on demand by the CVL administration.
Display specific information
The following is a collection of command sequences to quickly extract specific summaries.
GPU availability
Information about the GPU nodes and the current availability of their installed GPUs is written to the file /home/sladmcvl/smon.txt every 5 minutes. Here are some convenient aliases to display the file with highlighting of either free GPUs or those running the current user's jobs:
alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"
For continuously monitoring its content, the following aliases can be used:
alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""