= TIK Slurm information =

The Computer Engineering and Networks Laboratory (TIK) owns nodes in the Slurm cluster with restricted access. The following information is an addendum to the [[Services/SLURM|main Slurm article]] in this wiki, specific to accessing these TIK nodes.<<BR>>
If the information you're looking for isn't available here, please consult the [[Services/SLURM|main Slurm article]].

== Hardware ==

The following GPU nodes are reserved for exclusive use by TIK:

||'''Server'''||'''CPU'''||'''Frequency'''||'''Cores'''||'''Memory'''||'''/scratch SSD'''||'''/scratch Size'''||'''GPUs'''||'''GPU Memory'''||'''Operating System'''||
||tikgpu01||Dual Tetrakaideca-Core Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||1.1 TB||5 Titan Xp<<BR>>2 GTX Titan X||12 GB<<BR>>12 GB||Debian 10||
||tikgpu02||Dual Tetrakaideca-Core Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||1.1 TB||8 Titan Xp||12 GB||Debian 10||
||tikgpu03||Dual Tetrakaideca-Core Xeon E5-2680 v4||2.40 GHz||28||503 GB||✓||1.1 TB||8 Titan Xp||12 GB||Debian 10||
||tikgpu04||Dual Hexakaideca-Core Xeon Gold 6242||2.80 GHz||32||754 GB||✓||1.8 TB||8 Titan RTX||24 GB||Debian 10||
||tikgpu05||AMD EPYC 7742||1.50 GHz||256||503 GB||✓||7.0 TB||5 Titan RTX<<BR>>2 Tesla V100||24 GB<<BR>>32 GB||Debian 10||

== Partitions ==

The nodes are grouped into partitions to prioritize access for different accounts:

||'''Partition'''||'''Nodes'''||'''Slurm accounts with access'''||'''Account membership'''||
||tikgpu.medium||tikgpu[01-03]||tik-external||On request* for guests and students||
||tikgpu.all||tikgpu[01-05]||tik-internal||Automatic for staff members||
||tikgpu.all||tikgpu[01-05]||tik-highmem||On request* for guests and students||

 * Please contact the person vouching for your guest access - or your supervisor if you're a student - and ask them to request account membership for you.

An example batch script targeting these partitions and accounts is sketched at the end of this page.

== Rules of conduct ==

There are no limits imposed on resources requested by jobs. Please be polite and share available resources sensibly. If you need above-average resources, please [[https://matrix.ee.ethz.ch/_matrix/client/#/room/#gpu_scheduling:matrix.ee.ethz.ch|coordinate with other TIK Slurm users]].
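== Example job submission ==

As an illustration only, not an official template, the sketch below shows what a batch script targeting the TIK partitions might look like. The partition and account names come from the tables above; the job name, resource requests, and the command being run are placeholder assumptions you will need to adapt to your own workload.

{{{
#!/bin/bash
#SBATCH --job-name=example_job        # placeholder job name (assumption)
#SBATCH --partition=tikgpu.medium     # TIK partition from the table above
#SBATCH --account=tik-external        # Slurm account with access to that partition
#SBATCH --gres=gpu:1                  # request a single GPU
#SBATCH --cpus-per-task=4             # placeholder CPU request (assumption)
#SBATCH --mem=32G                     # placeholder memory request (assumption)
#SBATCH --output=%j.out               # write job output to <jobid>.out

# Placeholder workload; replace with your own command.
srun python train.py
}}}

Submit the script with {{{sbatch <scriptname>}}}; see the [[Services/SLURM|main Slurm article]] for general usage details.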