Differences between revisions 1 and 96 (spanning 95 versions)
Revision 1 as of 2020-05-19 11:22:25
Size: 8882
Editor: stroth
Comment:
Revision 96 as of 2025-03-06 08:06:29
Size: 8994
Editor: stroth
Comment: Add bmicgpu10
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
= Slurm Pilot project for Biwi =
 * The following information is an abbreviated how-to, specific to the pilot cluster
 * If something is unclear or seems incomplete, check the [[Services/SLURM|Computing wiki article]] for more information
 * Please bear in mind that the final documentation is meant to be the article linked above. It will be extended, based on your feedback, with information valid for all Slurm users and a specific section or additional page concerning only Biwi.
#rev 2020-09-10 stroth
Line 6: Line 3:
== Pilot-specific information ==
Involved machines are
 * `biwirender01` for '''CPU computing'''
 * `biwirender03` for '''GPU-computing'''
All available GPU partitions are overlaid on `biwirender03`. They will be available on different nodes in the final cluster.
<<TableOfContents(3)>>
Line 12: Line 5:
/!\ `long` partitions are not yet implemented in the pilot!
= CVL Slurm cluster =
The [[https://vision.ee.ethz.ch/|Computer Vision Lab]] (CVL) owns a Slurm cluster with restricted access. The information in this article is an addendum to the '''[[Services/SLURM|main Slurm article]]''' in this wiki, specific for usage of the CVL cluster.
If the information you are looking for is not available here or in the main article, consult the following additional sources:
Line 14: Line 9:
== Initialising slurm ==
All Slurm commands read the cluster configuration from the environment variable `SLURM_CONF`, so it needs to be set:
{{{
 * All articles listed under [[Services#Data_Access|Data access]]
 * Matrix room [[https://element.ee.ethz.ch/#/room/!zPmwFDrehDvrInFNPq:matrix.ee.ethz.ch?via=matrix.ee.ethz.ch|Update and maintenance information]]
 * Matrix room [[https://element.ee.ethz.ch/#/room/!jIyCiHKGuXIgKLDBYr:matrix.ee.ethz.ch?via=matrix.ee.ethz.ch|CVL cluster community help]]

== Access ==
Access to the CVL Slurm cluster is granted by [[https://vision.ee.ethz.ch/people-details.kristine-haberer.html|Kristine Haberer]].


== Setting environment ==
The environment variable `SLURM_CONF` needs to be adjusted to point to the configuration of the CVL cluster:
{{{#!highlight bash numbers=disable
Line 20: Line 23:
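For reference, here is a minimal sketch of setting the variable and checking that Slurm commands pick it up; persisting it in `~/.bashrc` is an assumption about your shell setup:
{{{#!highlight bash numbers=disable
# Point all Slurm commands at the CVL cluster configuration
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf

# Optionally persist it for future shells (assumes bash is your login shell)
echo 'export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf' >> ~/.bashrc

# Quick check: sinfo should now list the CVL partitions
sinfo --summarize
}}}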
== Available partitions ==
The equivalent of SGE's queues is called ''partitions'' in Slurm.<<BR>>
`sinfo` shows all available partitions:
{{{
sinfo
}}}
{{{
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
cpu.medium.normal up 2-00:00:00 1 idle biwirender01
gpu.low.normal up 2-00:00:00 1 idle biwirender03
gpu.medium.normal up 2-00:00:00 1 idle biwirender03
gpu.medium.long up 5-00:00:00 1 idle biwirender03
gpu.high.normal up 2-00:00:00 1 idle biwirender03
gpu.high.long up 5-00:00:00 1 idle biwirender03
gpu.debug up 6:00:00 1 idle biwirender03
gpu.mon up 6:00:00 1 idle biwirender03

== Hardware ==
The following table summarizes node-specific information:
||'''Server''' ||'''CPU''' ||'''Frequency'''||'''Physical cores'''||'''Logical processors'''||'''Memory'''||'''/scratch Size'''||'''GPUs'''||'''GPU architecture'''||'''Operating system'''||
||biwirender12 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||32 ||661 GB||701 GB||4 RTX 2080 Ti (10 GB) ||Turing||Debian 11||
||biwirender13 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||503 GB||701 GB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender14 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||701 GB||7 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender15 ||Intel Xeon E5-2680 v4||2.40 GHz ||28 ||28 ||503 GB||1.1 TB||7 TITAN Xp (12 GB) ||Pascal||Debian 11||
||biwirender17 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||503 GB||403 GB||6 GTX 1080 Ti (11 GB) ||Pascal||Debian 11||
||biwirender20 ||Intel Xeon E5-2620 v4||2.10 GHz ||16 ||32 ||376 GB||403 GB||6 GTX 1080 Ti (11 GB) ||Pascal||Debian 11||
||bmicgpu01 ||Intel Xeon E5-2680 v3||2.50 GHz ||24 ||24 ||251 GB||1.1 TB||6 TITAN X (12 GB) ||Pascal||Debian 11||
||bmicgpu02 ||Intel Xeon E5-2640 v3||2.60 GHz ||16 ||16 ||251 GB||692 GB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu03 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||40 ||251 GB||1.1 TB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu04 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||1.1 TB||5 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu05 ||Intel Xeon E5-2630 v4||2.20 GHz ||20 ||20 ||251 GB||1.1 TB||4 TITAN Xp (12 GB) ||Pascal||Debian 11||
||bmicgpu06 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||503 GB||1.8 TB||4 A100 (40 GB)<<BR>>1 A100 (80 GB)<<BR>>3 A6000 (48 GB)||Ampere||Debian 11||
||bmicgpu07 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||bmicgpu08 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||bmicgpu09 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||bmicgpu10 ||AMD EPYC 7763 ||3.53 GHz ||128 ||128 ||755 GB||6.9 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus01 ||AMD EPYC 7H12 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus02 ||AMD EPYC 7H12 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus03 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||
||octopus04 ||AMD EPYC 7742 ||3.41 GHz ||128 ||128 ||755 GB||1.8 TB||8 A6000 (48 GB) ||Ampere||Debian 11||

Detailed information about all nodes can be seen by issuing the command
{{{#!highlight bash numbers=disable
scontrol show nodes
Line 38: Line 53:
Only the interactive partitions `gpu.debug` and `gpu.mon` can and should be specified explicitly (see below). The scheduler decides which partition to put a job in based on the resources it requests.
An overview of the utilization of individual nodes' resources can be shown with:
{{{#!highlight bash numbers=disable
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
}}}
(Adapt the field length for gres and gresused to your needs)
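For a single node, a hedged sketch of closer inspection; `bmicgpu06` is merely an example hostname taken from the hardware table above:
{{{#!highlight bash numbers=disable
# Full details for a single node: state, CPUs, memory, configured GPUs
scontrol show node bmicgpu06

# Node-oriented view of GPU allocation on that node only
sinfo --Nodes --nodes=bmicgpu06 --Format nodehost:14,statecompact:7,gres:40,gresused:50
}}}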
Line 40: Line 59:
=== Interactive jobs ===
For testing purposes, an interactive session with 1 GPU can be started:
{{{
srun --time 10 --gres=gpu:1 --pty bash -i
}}}
 * Such jobs are placed in `gpu.debug` by the scheduler
Line 47: Line 60:
To monitor a running batch job, an interactive session can be started by explicitly selecting the monitoring partition. The node where the batch job is running needs to be specified as well:
{{{
srun --time 10 --partition=gpu.mon --nodelist=biwirender03 --pty bash -i
}}}
 * Allocating GPU resources is prohibited for such interactive jobs
 * It may be possible to attach an interactive session to an already running job, which would make the above obsolete. This is still under investigation.
== Automatic/default resource assignment ==
 * Jobs not explicitly requesting GPU resources receive the default of 1 GPU
 * Jobs receive a default of 2 CPUs and 40 GB of memory per assigned or requested GPU
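A minimal sketch of requesting resources explicitly instead of relying on these defaults; the numbers are purely illustrative and `job_script.sh` is a placeholder script name:
{{{#!highlight bash numbers=disable
# Request 1 GPU, 4 CPUs and 20 GB of memory instead of the 1 GPU / 2 CPUs / 40 GB defaults
sbatch --gres=gpu:1 --cpus-per-task=4 --mem=20G job_script.sh
}}}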
Line 54: Line 64:
== Allocating resources ==
=== GPUs ===
For a job to have access to a GPU, GPU resources need to be requested with the option `--gres=gpu:<n>`<<BR>>
Here's the sample job submission script `primes_1GPU.sh` requesting 1 GPU:
{{{
#!/bin/sh
#
#SBATCH --mail-type=ALL
#SBATCH --gres=gpu:1
#SBATCH --output=log/%j.out
export LOGFILE=`pwd`/log/$SLURM_JOB_ID.out
# env | grep SLURM_ #Uncomment this line to show environment variables set by slurm for a job
#
# binary to execute
codebin/primes $1
echo ""
echo "Job statistics: "
sstat -j $SLURM_JOB_ID --format=JobID,AveVMSize%15,MaxRSS%15,AveCPU%15
echo ""
exit 0;
}}}
 * Make sure the directory in which log files are stored exists before submitting a job.
 * Please keep the environment variable `LOGFILE`: it is used in the scheduler's epilog script to append information to your log file after your job has ended (at which point `$SLURM_JOB_ID` is no longer available).
 * Slurm also sets `CUDA_VISIBLE_DEVICES`. See the section [[Services/SLURM#GPU_jobs|GPU jobs]] in the main Slurm article.
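To see from inside a job which GPUs Slurm assigned, a small hedged check can be used (this assumes `nvidia-smi` is available on the GPU nodes):
{{{#!highlight bash numbers=disable
# Print the GPUs Slurm made visible to a 1-GPU job
srun --gres=gpu:1 bash -c 'echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"; nvidia-smi -L'
}}}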
== Limits ==
 * Run time for interactive jobs is limited to 2 hours
 * Run time for batch jobs is limited to 48 hours
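A hedged example of requesting an explicit run time within these limits; the value is illustrative and `job_script.sh` is a placeholder script name:
{{{#!highlight bash numbers=disable
# Request a 24-hour wall-clock limit, within the 48-hour maximum for batch jobs
sbatch --time=24:00:00 --gres=gpu:1 job_script.sh
}}}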
Line 79: Line 68:
=== Memory ===
If you omit the `--mem` option, the default of 30 GB of memory and 3 CPUs per GPU will be allocated to your job, which makes the scheduler choose `gpu.medium.normal`:
{{{
sbatch primes_1GPU.sh
sbatch: GRES requested : gpu:1
sbatch: GPUs requested : 1
sbatch: Requested Memory : ---
sbatch: CPUs requested : ---
sbatch: Your job is a gpu job.
Submitted batch job 133
}}}
{{{
squeue --Format jobarrayid:8,partition:20,reasonlist:20,username:10,tres-alloc:45,timeused:10
}}}
{{{
JOBID PARTITION NODELIST(REASON) USER TRES_ALLOC TIME
133 gpu.medium.normal biwirender03 testuser cpu=3,mem=30G,node=1,billing=3,gres/gpu=1 0:02
}}}
An explicit `--mem` option selects the partition as follows:
||'''--mem'''||'''Partition'''||
||< 30G||gpu.low.normal||
||30G - 50G||gpu.medium.normal||
||>50G - 70G||gpu.high.normal||
||>70G||not allowed||
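For illustration, an explicit memory request that, according to the table above, makes the scheduler choose `gpu.medium.normal`:
{{{#!highlight bash numbers=disable
# 40G falls into the 30G - 50G range, so the job is placed in gpu.medium.normal
sbatch --mem=40G primes_1GPU.sh
}}}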
=== Need for longer run time ===
Line 104: Line 70:
== Accounts and limits ==
In Slurm lingo, an account is the equivalent of a user group. The following accounts are configured for users to be added to:
{{{
sacctmgr show account
}}}
{{{
   Account Descr Org
---------- -------------------- --------------------
  deadconf deadline_conference biwi
  deadline deadline biwi
       isg isg isg
      root default root account root
     staff staff biwi
   student student biwi
}}}
 * Accounts `isg` and `root` are not accessible to Biwi

GPU limits are stored in so-called QOSs; each account is associated with the QOS that should apply to it. Limits apply to all users added to an account.
{{{
sacctmgr show assoc format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
{{{
        Account User Partition MaxJobs QOS Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
           root normal
           root root normal
       deadconf gpu_4 gpu_4
       deadline gpu_3 gpu_3
       deadline ........ gpu_3 gpu_3
            isg normal
            isg sladmall normal
          staff gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
          staff ........ gpu_2 gpu_2
        student gpu_1 gpu_1
If you need to run longer jobs, coordinate with your administrative contact and request to be added to the account `long` at [[mailto:support@ee.ethz.ch|ISG D-ITET support]].<<BR>>
After you've been added to `long`, specify this account as in the following example to run longer jobs: {{{#!highlight bash numbers=disable
sbatch --account=long job_script.sh
Line 144: Line 75:
The `gpu_x` QOSs only contain a limit on the number of GPUs per user:
{{{
sacctmgr show qos format=name%15,maxtrespu%30

== Display GPU availability ==
Information about the GPU nodes and the current availability of the installed GPUs is written every 5 minutes to the file `/home/sladmcvl/smon.txt`. Here are some convenient aliases to display the file, highlighting either free GPUs or those running the current user's jobs:
{{{#!highlight bash numbers=disable
alias smon_free="grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt"
alias smon_mine="grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt"
Line 148: Line 82:
{{{
          Name MaxTRESPU
--------------- ------------------------------
         normal
          gpu_1 gres/gpu=1
          gpu_2 gres/gpu=2
          gpu_3 gres/gpu=3
          gpu_4 gres/gpu=4
          gpu_5 gres/gpu=5
          gpu_6 gres/gpu=6
To monitor its content continuously, the following aliases can be used:
{{{#!highlight bash numbers=disable
alias watch_smon_free="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp 'free|$' /home/sladmcvl/smon.txt\""
alias watch_smon_mine="watch --interval 300 --no-title --differences --color \"grep --color=always --extended-regexp '${USER}|$' /home/sladmcvl/smon.txt\""
Line 160: Line 88:
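Once the aliases are defined, e.g. in your `~/.bashrc`, they can be used directly:
{{{#!highlight bash numbers=disable
smon_free         # one-off view, free GPUs highlighted
watch_smon_mine   # auto-refreshing view, your own jobs highlighted
}}}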
Users with administrative privileges can move a user between the accounts `deadline` or `deadconf`.<<BR>>
== Access local scratch of diskless clients ==
Line 162: Line 90:
List associations of testuser:
{{{
sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
{{{
        Account User Partition MaxJobs QOS Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
       deadline testuser gpu_3 gpu_3
}}}
Move testuser from deadline to staff:
{{{
/home/sladmcvl/slurm/change_account_of_user.sh testuser deadline staff
}}}
List associations of testuser again:
{{{
sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
}}}
{{{
        Account User Partition MaxJobs QOS Def QOS
--------------- --------------- --------------- -------- --------------- ---------------
          staff testuser gpu_2 gpu_2
Local `/scratch` disks of managed diskless clients are available on a remote host at `/scratch_net/<hostname>` as an ''automount'' (on demand). Typically, you set up a personal directory named after your username (`$USER`) on the local `/scratch` of the managed client you work on.

 * Locally (on the client `<hostname>`), it is accessible under `/scratch/$USER` or `/scratch-second/$USER`, respectively.<<BR>>The command `hostname` shows the name of your local client.
 * Remotely (on a cluster node, from a Slurm job), it is accessible under `/scratch_net/<hostname>/$USER` or `/scratch_net/<hostname>_second/$USER`, respectively.
 * ''On demand'' means: the path to a remote `/scratch` appears on first access, e.g. after issuing `ls /scratch_net/<hostname>`, and disappears again when unused.
 * Mind the difference between `-`, used to designate an additional local disk, and `_`, used in naming remote mounts of such additional disks.
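A short sketch of the typical workflow, assuming your managed client is `<hostname>` and following the `$USER` directory convention described above:
{{{#!highlight bash numbers=disable
# On your managed client: create your personal scratch directory once
mkdir -p /scratch/$USER

# From a Slurm job on a cluster node: the same data, automounted on first access
# (replace <hostname> with the output of `hostname` on your client)
ls /scratch_net/<hostname>/$USER
}}}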


== BMIC specific information ==
The [[https://bmic.ee.ethz.ch/the-group.html|BMIC group]] of CVL owns dedicated CPU and GPU resources with restricted access. These resources are grouped in [[Services/SLURM#sinfo_.2BIZI_Show_partition_configuration|partitions]] `cpu.bmic`, `gpu.bmic` and `gpu.bmic.long`.<<BR>>
Access to these partitions is available for members of the Slurm account `bmic` only. You can check your Slurm account membership with the following command: {{{#!highlight bash numbers=disable
sacctmgr show users WithAssoc Format=User%-15,DefaultAccount%-15,Account%-15 ${USER}
Line 185: Line 104:
== Last words ==
Have fun using Slurm for your jobs!



----
describe the interactive queue (no GPU, etc.)
use long options with `--`

leave out `long` for now, add it if there is still time

then move this to the wiki
=== Notable differences ===
With access to the BMIC resources, the following differences to the common defaults and limits apply:
 * Jobs not explicitly requesting GPU resources do not receive a default of 1 GPU but are sent to `cpu.bmic`, the partition dedicated to CPU-only jobs
 * Need for longer run time: As [[#Need_for_longer_run_time|above]], but apply to be added to `bmic.long`
 * Run time for interactive jobs is limited to 8 hours
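By analogy with the `long` example above, a job needing the longer BMIC run time would presumably be submitted with the `bmic.long` account; this is an assumption based on that analogy, not something stated explicitly here:
{{{#!highlight bash numbers=disable
sbatch --account=bmic.long job_script.sh
}}}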
