CVL Slurm cluster
The Computer Vision Lab (CVL) owns a Slurm cluster with restricted access. This article is an addendum to the main Slurm article in this wiki, specific to the usage of the CVL cluster. If the information you are looking for is not available here or in the main article, consult the following sources:
All articles listed under Data access
Matrix room for update and maintenance information
Matrix room CVL cluster community help
Access
Access to the CVL Slurm cluster is granted by Kristine Haberer.
Setting environment
The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
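To make this setting persistent across sessions, the export can be added to your shell startup file. A minimal sketch, assuming a bash shell:
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc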
Hardware
The following table summarizes node-specific information:
Server | CPU | Frequency | Physical cores | Logical processors | Memory | /scratch Size | GPUs | GPU architecture | Operating system
biwirender12 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 32 | 661 GB | 701 GB | 4 RTX 2080 Ti (10 GB) | Turing | Debian 11
biwirender13 | Intel Xeon E5-2680 v3 | 2.50 GHz | 24 | 24 | 503 GB | 701 GB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender14 | Intel Xeon E5-2680 v4 | 2.40 GHz | 28 | 28 | 503 GB | 701 GB | 7 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender15 | Intel Xeon E5-2680 v4 | 2.40 GHz | 28 | 28 | 503 GB | 1.1 TB | 7 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender17 | Intel Xeon E5-2620 v4 | 2.10 GHz | 16 | 32 | 503 GB | 403 GB | 6 GTX 1080 Ti (11 GB) | Pascal | Debian 11
biwirender20 | Intel Xeon E5-2620 v4 | 2.10 GHz | 16 | 32 | 376 GB | 403 GB | 6 GTX 1080 Ti (11 GB) | Pascal | Debian 11
bmicgpu01 | Intel Xeon E5-2680 v3 | 2.50 GHz | 24 | 24 | 251 GB | 1.1 TB | 6 TITAN X (12 GB) | Pascal | Debian 11
bmicgpu02 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 16 | 251 GB | 692 GB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu03 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 40 | 251 GB | 1.1 TB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu04 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 20 | 251 GB | 1.1 TB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu05 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 20 | 251 GB | 1.1 TB | 4 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu06 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 503 GB | 1.8 TB | 4 A100 (40 GB), 1 A100 (80 GB), 3 A6000 (48 GB) | Ampere | Debian 11
bmicgpu07 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu08 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu09 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu10 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus01 | AMD EPYC 7H12 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus02 | AMD EPYC 7H12 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus03 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus04 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
Detailed information about all nodes can be seen by issuing the command
scontrol show nodes
An overview of the utilization of individual nodes' resources can be shown with:
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
(Adapt the field lengths for gres and gresused to your needs.)
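For example, the full state of a single node, including its allocated and total resources, can be inspected with scontrol (bmicgpu06 is just one node picked from the table above):
scontrol show node bmicgpu06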
Automatic/default resource assignment
- Jobs not explicitly requesting GPU resources receive a default of 1 GPU
- Jobs receive a default of 2 CPUs and 40 GB of memory per assigned or requested GPU (see the example below for overriding these defaults)
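These defaults can be overridden by requesting resources explicitly at submission time. A minimal sketch using standard Slurm options; job_script.sh is a placeholder for your own batch script and the values are illustrative:
sbatch --gres=gpu:2 --cpus-per-task=8 --mem=80G job_script.sh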
Limits
- Run time for interactive jobs is limited to 2 hours
- Run time for batch jobs is limited to 48 hours
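For example, an interactive session within the 2-hour limit can be started with srun, and a batch job can declare its expected run time with --time (values are illustrative, job_script.sh is a placeholder):
srun --time=02:00:00 --gres=gpu:1 --pty bash -i
sbatch --time=24:00:00 job_script.sh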
Need for longer run time
If you need to run longer jobs, coordinate with your administrative contact and request to be added to the account long via ISG D-ITET support.
After you've been added to long, specify this account as in the following example to run longer jobs:
sbatch --account=long job_script.sh
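To check which account a pending or running job is billed to and how much run time it has left, a standard squeue format string can be used (adjust the columns to your needs):
squeue --user=$USER --format="%.10i %.14P %.10a %.12L"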
Display GPU availability
Information about the GPU nodes and the current availability of their installed GPUs is written to the file /home/sladmcvl/smon.txt every 5 minutes. Here are some convenient aliases to display the file with highlighting of either free GPUs or those running the current user's jobs:
alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"
To continuously monitor its content, the following aliases can be used:
alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""
Access local scratch of diskless clients
Local /scratch disks of managed diskless clients are available on a remote host at /scratch_net/<hostname> as an automount (on demand). Typically you set up your personal directory with your username $USER on the local /scratch of the managed client you work on.
- Locally (on the client <hostname>), it is accessible under /scratch/$USER, resp. /scratch-second/$USER. The command hostname shows the name of your local client.
- Remotely (on a cluster node, from a Slurm job), it is accessible under /scratch_net/<hostname>/$USER, resp. /scratch_net/<hostname>_second/$USER.
- On demand means: the path to a remote /scratch appears at first access, e.g. after issuing ls /scratch_net/<hostname>, and disappears again when unused.
- Mind the difference between - used to designate a local additional disk and _ used in naming remote mounts of such additional disks.
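As an illustration of accessing your workstation's scratch from within a job, here is a sketch of a batch script; mycomputer is a placeholder for your client's hostname and the dataset path and training command are hypothetical:
#!/bin/bash
#SBATCH --gres=gpu:1
# "mycomputer" is a placeholder; run `hostname` on your workstation to get the real name
DATA=/scratch_net/mycomputer/$USER/dataset
ls "$DATA"    # first access triggers the automount of the remote /scratch
python train.py --data "$DATA"    # hypothetical training command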
BMIC specific information
The BMIC group of CVL owns dedicated CPU and GPU resources with restricted access. These resources are grouped in partitions cpu.bmic, gpu.bmic and gpu.bmic.long.
Access to these partitions is available for members of the Slurm account bmic only. You can check your Slurm account membership with the following command:
sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
Notable differences
With access to the BMIC resources, the following differences to the common defaults and limits apply:
- Jobs not explicitly requesting GPU resources do not receive a default of 1 GPU but are sent to cpu.bmic, the partition dedicated to CPU-only jobs
- Need for longer run time: as above, but apply to be added to the account bmic.long (see the example below)
- Run time for interactive jobs is limited to 8 hours
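Assuming the request for longer run times works the same way as with the account long above, a longer BMIC job would then be submitted as follows (job_script.sh is a placeholder):
sbatch --account=bmic.long job_script.sh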