#rev 2020-09-10 stroth

= CVL Slurm cluster =
The [[https://vision.ee.ethz.ch/|Computer Vision Lab]] (CVL) owns a Slurm cluster with restricted access. The information in this article is an addendum to the '''[[Services/SLURM|main Slurm article]]''' in this wiki, specific to the usage of the CVL cluster.

In addition to this and the main article, consult the following sources if what you're looking for isn't covered here:
 * All articles listed under [[Services#Data_Access|Data access]]
 * Matrix room [[https://element.ee.ethz.ch/#/room/!zPmwFDrehDvrInFNPq:matrix.ee.ethz.ch?via=matrix.ee.ethz.ch|for update and maintenance information]]
 * Matrix room [[https://element.ee.ethz.ch/#/room/!jIyCiHKGuXIgKLDBYr:matrix.ee.ethz.ch?via=matrix.ee.ethz.ch|CVL cluster community help]]

== Access ==
Access to the CVL Slurm cluster is granted by [[https://vision.ee.ethz.ch/people-details.kristine-haberer.html|Kristine Haberer]].

== Setting environment ==
The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:
{{{#!highlight bash numbers=disable
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
}}}

== Hardware ==
The following table summarizes node-specific information:
||'''Server''' ||'''CPU''' ||'''Frequency'''||'''Physical cores'''||'''Logical processors'''||'''Memory'''||'''/scratch Size'''||'''GPUs'''||'''GPU architecture'''||'''Operating system'''||
||biwirender12 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||32 ||661 GB||701 GB||4 RTX 2080 Ti (10 GB) ||Turing||Debian 11||
||biwirender13 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||503 GB||701 GB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender14 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||701 GB||7 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender15 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||1.1 TB||7 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender17 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||503 GB||403 GB||6 GTX 1080 Ti (11 GB) ||Pascal||Debian 11||
||biwirender20 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||376 GB||403 GB||6 GTX 1080 Ti (11 GB) ||Pascal||Debian 11||
||bmicgpu01 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||251 GB||1.1 TB||6 TITAN X (12 GB) ||Pascal||Debian 11||
||bmicgpu02 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||692 GB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu03 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||40 ||251 GB||1.1 TB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu04 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||1.1 TB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu05 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||1.1 TB||4 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu06 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||503 GB||1.8 TB||4 A100 (40 GB)<<BR>>1 A100 (80 GB)<<BR>>3 A6000 (48 GB)||Ampere||Debian 11||
||bmicgpu07 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||bmicgpu08 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||bmicgpu09 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus01 ||AMD EPYC 7H12 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus02 ||AMD EPYC 7H12 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus03 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus04 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||

Detailed information about all nodes can be displayed by issuing the command:
{{{#!highlight bash numbers=disable
scontrol show nodes
}}}
An overview of the utilization of individual nodes' resources can be shown with:
{{{#!highlight bash numbers=disable
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
}}}
(Adapt the field lengths for `gres` and `gresused` to your needs.)

== Automatic/default resource assignment ==
 * Jobs not explicitly requesting GPU resources receive a default of 1 GPU
 * Jobs receive a default of 2 CPUs and 40 GB of memory per assigned or requested GPU (resources can also be requested explicitly, as shown in the example below)

== Limits ==
 * Run time for interactive jobs is limited to 2 hours
 * Run time for batch jobs is limited to 48 hours
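The defaults and limits above can be combined with explicit resource requests at submission time. The following is a minimal sketch, not an official template; `job_script.sh` and all chosen values are placeholders to adapt to your needs:
{{{#!highlight bash numbers=disable
# Batch job: explicitly request 2 GPUs, 8 CPUs, 64 GB of memory and a
# 24-hour run time (within the 48-hour batch limit)
sbatch --gres=gpu:2 --cpus-per-task=8 --mem=64G --time=24:00:00 job_script.sh

# Interactive job: 1 GPU for at most 2 hours (the interactive limit)
srun --gres=gpu:1 --time=02:00:00 --pty bash -i
}}}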
=== Need for longer run time ===
If you need to run longer jobs, coordinate with your administrative contact and request to be added to the account `long` from [[mailto:support@ee.ethz.ch|ISG D-ITET support]]. After you've been added to `long`, specify this account as in the following example to run longer jobs:
{{{#!highlight bash numbers=disable
sbatch --account=long job_script.sh
}}}

== Display GPU availability ==
Information about the GPU nodes and the current availability of their installed GPUs is written to the file `/home/sladmcvl/smon.txt` every 5 minutes. Here are some convenient aliases to display the file with highlighting of either free GPUs or those running the current user's jobs:
{{{#!highlight bash numbers=disable
alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"
}}}
To continuously monitor its content, the following aliases can be used:
{{{#!highlight bash numbers=disable
alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""
}}}

== Access local scratch of diskless clients ==
Local `/scratch` disks of managed diskless clients are available on a remote host at `/scratch_net/` as an ''automount'' (on demand). Typically you set up a personal directory named after your username `$USER` on the local `/scratch` of the managed client you work on (see the example after the following list).
 * Locally (on the client `<client>` itself) it is accessible under `/scratch/$USER`, resp. `/scratch-second/$USER`. The command `hostname` shows the name of your local client.
 * Remotely (on a cluster node, from a Slurm job) it is accessible under `/scratch_net/<client>/$USER`, resp. `/scratch_net/<client>_second/$USER`
 * ''On demand'' means: the path to a remote `/scratch` appears at first access, e.g. after issuing `ls /scratch_net/<client>`, and disappears again when unused
 * Mind the difference between `-`, used to designate a local additional disk, and `_`, used in naming remote mounts of such additional disks
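As an illustration of the naming scheme above, here is a minimal sketch; `myclient` stands for the hypothetical output of `hostname` on your managed client, and the directory name is a placeholder:
{{{#!highlight bash numbers=disable
# On your managed client (here hypothetically named "myclient"):
mkdir -p /scratch/$USER/experiment         # create your personal scratch directory

# On a cluster node, e.g. inside a Slurm job script:
ls /scratch_net/myclient/$USER/experiment  # first access triggers the automount
}}}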
== BMIC specific information ==
The [[https://bmic.ee.ethz.ch/the-group.html|BMIC group]] of CVL owns dedicated CPU and GPU resources with restricted access. These resources are grouped in the [[Services/SLURM#sinfo_.2BIZI_Show_partition_configuration|partitions]] `cpu.bmic`, `gpu.bmic` and `gpu.bmic.long`. Access to these partitions is available to members of the Slurm account `bmic` only.

You can check your Slurm account membership with the following command:
{{{#!highlight bash numbers=disable
sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
}}}

=== Notable differences ===
With access to the BMIC resources, the following differences from the common defaults and limits apply:
 * Jobs not explicitly requesting GPU resources do not receive a default of 1 GPU but are sent to `cpu.bmic`, the partition dedicated to CPU-only jobs
 * Need for longer run time: as [[#Need_for_longer_run_time|above]], but apply to be added to `bmic.long` (see the example below)
 * Run time for interactive jobs is limited to 8 hours
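A short sketch of how the BMIC partitions can be inspected and used; the `sbatch` line simply mirrors the `long` account example above and assumes you have already been added to `bmic.long` (`job_script.sh` is a placeholder):
{{{#!highlight bash numbers=disable
# Show the current state of the BMIC partitions
sinfo --partition=cpu.bmic,gpu.bmic,gpu.bmic.long

# Analogous to the `long` account above: run a longer job under the BMIC long account
sbatch --account=bmic.long job_script.sh
}}}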