CVL Slurm cluster
The Computer Vision Lab (CVL) owns a Slurm cluster with restricted access. The information in this article is an addendum to the main Slurm article in this wiki, specific to the usage of the CVL cluster. If the information you're looking for isn't available here or in the main Slurm article, consult the following sources:
- All articles listed under Data access
- Matrix room for update and maintenance information
- Matrix room CVL cluster community help
Access
Access to the CVL Slurm cluster is granted by Kristine Haberer.
Setting environment
The environment variable SLURM_CONF needs to be adjusted to point to the configuration of the CVL cluster:
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
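To avoid setting the variable in every new shell, the export can be made persistent; a minimal sketch, assuming a bash login shell (adapt to your shell of choice):
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc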
Hardware
The following table summarizes node-specific information:
Server | CPU | Frequency | Physical cores | Logical processors | Memory | /scratch Size | GPUs | GPU architecture | Operating system
biwirender12 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 32 | 661 GB | 701 GB | 4 RTX 2080 Ti (10 GB) | Turing | Debian 11
biwirender13 | Intel Xeon E5-2680 v3 | 2.50 GHz | 24 | 24 | 503 GB | 701 GB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender14 | Intel Xeon E5-2680 v4 | 2.40 GHz | 28 | 28 | 503 GB | 701 GB | 7 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender15 | Intel Xeon E5-2680 v4 | 2.40 GHz | 28 | 28 | 503 GB | 1.1 TB | 7 TITAN Xp (12 GB) | Pascal | Debian 11
biwirender17 | Intel Xeon E5-2620 v4 | 2.10 GHz | 16 | 32 | 503 GB | 403 GB | 6 GTX 1080 Ti (11 GB) | Pascal | Debian 11
biwirender20 | Intel Xeon E5-2620 v4 | 2.10 GHz | 16 | 32 | 376 GB | 403 GB | 6 GTX 1080 Ti (11 GB) | Pascal | Debian 11
bmicgpu01 | Intel Xeon E5-2680 v3 | 2.50 GHz | 24 | 24 | 251 GB | 1.1 TB | 6 TITAN X (12 GB) | Pascal | Debian 11
bmicgpu02 | Intel Xeon E5-2640 v3 | 2.60 GHz | 16 | 16 | 251 GB | 692 GB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu03 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 40 | 251 GB | 1.1 TB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu04 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 20 | 251 GB | 1.1 TB | 5 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu05 | Intel Xeon E5-2630 v4 | 2.20 GHz | 20 | 20 | 251 GB | 1.1 TB | 4 TITAN Xp (12 GB) | Pascal | Debian 11
bmicgpu06 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 503 GB | 1.8 TB | 4 A100 (40 GB), 1 A100 (80 GB), 3 A6000 (48 GB) | Ampere | Debian 11
bmicgpu07 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu08 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu09 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
bmicgpu10 | AMD EPYC 7763 | 3.53 GHz | 128 | 128 | 755 GB | 6.9 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus01 | AMD EPYC 7H12 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus02 | AMD EPYC 7H12 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus03 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
octopus04 | AMD EPYC 7742 | 3.41 GHz | 128 | 128 | 755 GB | 1.8 TB | 8 A6000 (48 GB) | Ampere | Debian 11
Detailed information about all nodes can be seen by issuing the command
scontrol show nodes
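The same command also accepts a single node name, for example bmicgpu06 from the table above:
scontrol show node bmicgpu06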
An overview of the utilization of individual nodes' resources can be shown with:
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
(Adapt the field length for gres and gresused to your needs)
Automatic/default resource assignment
- Jobs not explicitly requesting GPU resources receive the default of 1 GPU
- Jobs receive a default of 2 CPUs and 40 GB of memory per assigned or requested GPU (see the example after this list)
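As an illustration of these defaults, the following is a minimal batch script sketch; the script contents are assumptions to adapt to your own workload. Requesting 2 GPUs without specifying CPUs or memory would, by the rules above, result in 4 CPUs and 80 GB of memory:
#!/bin/bash
#SBATCH --gres=gpu:2             # explicit GPU request; CPUs and memory follow the defaults above
#SBATCH --output=slurm-%j.out    # write job output to slurm-<jobid>.out
python train.py                  # assumed payload command, replace with your own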
Limits
- Run time for interactive jobs is limited to 2 hours (see the sketch after this list)
- Run time for batch jobs is limited to 48 hours
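A minimal sketch of an interactive job within the 2-hour limit, assuming one GPU is needed (adjust time and resources to your needs):
srun --time=02:00:00 --gres=gpu:1 --pty bash -i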
Need for longer run time
If you need to run longer jobs, coordinate with your administrative contact and request to be added to the account long at ISG D-ITET support.
After you've been added to long, specify this account as in the following example to run longer jobs:
sbatch --account=long job_script.sh
Display GPU availability
Information about the GPU nodes and the current availability of their installed GPUs is written every 5 minutes to the file /home/sladmcvl/smon.txt. Here are some convenient aliases to display the file with highlighting of either free GPUs or those running the current user's jobs:
alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"
For monitoring its content the following aliases can be used:
alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""
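A sketch of how these aliases might be used, assuming they have been added to your ~/.bashrc and a new shell has been started:
smon_free          # show the file once, highlighting free GPUs
watch_smon_mine    # keep refreshing the view, highlighting your own jobs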
Access local scratch of diskless clients
Local /scratch disks of managed diskless clients are available on a remote host at /scratch_net/<hostname> as an automount (on demand). Typically you set up your personal directory with your username $USER on the local /scratch of the managed client you work on.
- Locally (on the client <hostname>) it is accessible under /scratch/$USER, resp. /scratch-second/$USER. The command hostname shows the name of your local client.
- Remotely (on a cluster node, from a Slurm job) it is accessible under /scratch_net/<hostname>/$USER, resp. /scratch_net/<hostname>_second/$USER (see the sketch after this list)
- On demand means: The path to a remote /scratch will appear at first access, for example after issuing ls /scratch_net/<hostname>, and disappear again when unused.
- Mind the difference between - used to designate a local additional disk and _ used in naming remote mounts of such additional disks
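A minimal sketch of a batch script reading data from such a remote scratch directory; the client name, data path and payload command are placeholders to replace with your own:
#!/bin/bash
#SBATCH --output=slurm-%j.out
CLIENT=your_client_hostname                    # replace with the output of `hostname` on your managed client
DATA_DIR=/scratch_net/$CLIENT/$USER/datasets   # assumed data location, adjust to your setup
ls "$DATA_DIR"                                 # first access triggers the automount
python train.py --data "$DATA_DIR"             # assumed payload command, replace with your own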
BMIC specific information
The BMIC group of CVL owns dedicated CPU and GPU resources with restricted access. These resources are grouped in partitions cpu.bmic, gpu.bmic and gpu.bmic.long.
Access to these partitions is available for members of the Slurm account bmic only. You can check your Slurm account membership with the following command:
sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
Notable differences
With access to the BMIC resources, the following differences to the common defaults and limits apply:
- Jobs not explicitly requesting GPU resources do not receive a default of 1 GPU but are sent to cpu.bmic, the partition dedicated to CPU-only jobs
- Need for longer run time: As above, but apply to be added to bmic.long (see the sketch after this list)
- Run time for interactive jobs is limited to 8 hours
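By analogy with the long example above (assuming bmic.long is specified the same way), a longer-running job would then be submitted as:
sbatch --account=bmic.long job_script.sh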