= TIK Slurm information =

The Computer Engineering and Networks Laboratory (TIK) owns nodes in the Slurm cluster with restricted access. The following information is an addendum to the [[Services/SLURM|main Slurm article]] in this wiki, specific to accessing these TIK nodes.<<BR>>
If the information you're looking for isn't available here, please consult the [[Services/SLURM|main Slurm article]].

== Hardware ==
The following GPU nodes are reserved for exclusive use by TIK:
||'''Server'''||'''CPU'''||'''Frequency'''||'''Cores'''||'''Memory'''||'''/scratch SSD'''||'''/scratch Size'''||'''GPUs'''||'''GPU Memory'''||'''Operating System'''||
||tikgpu01||Dual Tetrakaideca-Core Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||1.1 TB||5 Titan Xp<<BR>>2 GTX Titan X||12 GB<<BR>>12 GB||Debian 10||
||tikgpu02||Dual Tetrakaideca-Core Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||1.1 TB||8 Titan Xp||12 GB||Debian 10||
||tikgpu03||Dual Tetrakaideca-Core Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||1.1 TB||8 Titan Xp||12 GB||Debian 10||
||tikgpu04||Dual Hexadeca-Core Xeon Gold 6242||2.80 GHz||32||754 GB||✓||1.8 TB||8 Titan RTX||24 GB||Debian 10||
||tikgpu05||AMD EPYC 7742||1.50 GHz||256||503 GB||✓||7.0 TB||5 Titan RTX<<BR>>2 Tesla V100||24 GB<<BR>>32 GB||Debian 10||
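
If you want to check the live state of these nodes before submitting (for example, how many GPUs are currently allocated), the standard Slurm query tools can be used. A minimal sketch, with node names taken from the table above:
{{{#!highlight bash numbers=disable
# Summarised state of all TIK GPU nodes
sinfo --Node --long --nodes=tikgpu[01-05]

# Configured and allocated generic resources (GPUs) on one node
scontrol show node tikgpu05 | grep -i gres
}}}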

== Accounts and partitions ==
The nodes are grouped into partitions to prioritize access for different accounts:

||'''Partition'''||'''Nodes'''||'''Slurm accounts with access'''||'''Account membership'''||
||tikgpu.medium||tikgpu[01-03]||tik-external||On request* for guests and students||
||tikgpu.all||tikgpu[01-05]||tik-internal<<BR>>tik-highmem||Automatic for staff members<<BR>>On request* for guests and students||

* Please contact the person vouching for your guest access - or your supervisor if you're a student - and ask them to request account membership for you
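
To verify which accounts you are already a member of, and which one is your default, you can query the Slurm accounting database; a sketch using standard Slurm tooling:
{{{#!highlight bash numbers=disable
# List your account associations and your default account
sacctmgr show user $USER withassoc format=User,DefaultAccount,Account%20
}}}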

If you're a member of the account ''tik-external'' and have also been added to ''tik-highmem'', the latter becomes your default account and your jobs will be sent to partition ''tikgpu.all'' by default. To run jobs in partition ''tikgpu.medium'' instead, specify the account ''tik-external'' explicitly, as in the following example:
{{{#!highlight bash numbers=disable
sbatch --account=tik-external job_script.sh
}}}
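
The same applies to interactive jobs. A sketch of an interactive session on ''tikgpu.medium'' with one GPU (the `--gres=gpu:1` resource name is an assumption; check the `scontrol show node` output for the exact GRES name used on these nodes):
{{{#!highlight bash numbers=disable
# Interactive shell on a tikgpu.medium node with one GPU allocated
srun --account=tik-external --partition=tikgpu.medium --gres=gpu:1 --pty bash
}}}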

== Rules of conduct ==
No limits are imposed on the resources a job may request. Please be considerate and share the available resources sensibly. If you need above-average resources, please [[https://matrix.ee.ethz.ch/_matrix/client/#/room/#gpu_scheduling:matrix.ee.ethz.ch|coordinate with other TIK Slurm users]].
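
Before submitting a job with a large resource request, it helps to check what is currently running, for example (partition names from the table above):
{{{#!highlight bash numbers=disable
# Running and pending jobs on the TIK partitions
squeue --partition=tikgpu.all,tikgpu.medium
}}}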