TIK Slurm information
The Computer Engineering and Networks Laboratory (TIK) owns nodes in the Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki, specific to accessing these TIK nodes.
If the information you're looking for isn't available here, please consult the main Slurm article.
Hardware
The following GPU nodes are reserved for exclusive use by TIK:
Server   | CPU                                    | Frequency | Cores | Memory | /scratch SSD | /scratch Size | GPUs        | GPU Memory | Operating System
tikgpu01 | Dual Tetrakaideca-Core Xeon E5-2680 v4 | 2.40 GHz  | 28    | 503 GB | ✓            | 1.1 TB        | 5 Titan Xp  | 12 GB      | Debian 10
tikgpu02 | Dual Tetrakaideca-Core Xeon E5-2680 v4 | 2.40 GHz  | 28    | 503 GB | ✓            | 1.1 TB        | 8 Titan Xp  | 12 GB      | Debian 10
tikgpu03 | Dual Tetrakaideca-Core Xeon E5-2680 v4 | 2.40 GHz  | 28    | 503 GB | ✓            | 1.1 TB        | 8 Titan Xp  | 12 GB      | Debian 10
tikgpu04 | Dual Hectakaideca-Core Xeon Gold 6242 v4 | 2.80 GHz | 32   | 754 GB | ✓            | 1.8 TB        | 8 Titan RTX | 24 GB      | Debian 10
tikgpu05 | AMD EPYC 7742                          | 1.50 GHz  | 256   | 503 GB | ✓            | 7.0 TB        | 5 Titan RTX | 24 GB      | Debian 10
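As a quick sanity check, the specifications above can be compared against what Slurm itself reports. A sketch using standard Slurm commands (node names taken from the table above; the exact output format depends on the cluster's Slurm version):

```shell
# List the TIK GPU nodes and their current state
sinfo --Node --long --nodes=tikgpu[01-05]

# Show full details (CPUs, real memory, gres/gpu) for a single node
scontrol show node tikgpu01
```

These commands only query the controller and are safe to run at any time.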
Partitions
The nodes are grouped in partitions to prioritize access for different accounts:
Partition     | Nodes         | Slurm accounts with access | Account membership
tikgpu.medium | tikgpu[01-03] | tik-external               | On request* for guests and students
tikgpu.all    | tikgpu[01-05] | tik-internal               | Automatic for staff members
tikgpu.all    | tikgpu[01-05] | tik-highmem                | On request* for guests and students
* Please contact the person vouching for your guest access (or your supervisor if you're a student) and ask them to request account membership for you
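A job script for one of these partitions might look like the following sketch. The partition and account names come from the table above; the GPU count, time limit, and workload line are placeholders to adapt to your job:

```shell
#!/bin/bash
#SBATCH --partition=tikgpu.medium   # partition from the table above
#SBATCH --account=tik-external      # a Slurm account with access to it
#SBATCH --gres=gpu:1                # request one GPU (placeholder)
#SBATCH --time=04:00:00             # wall-clock limit (placeholder)

# your workload goes here (placeholder)
srun python train.py
```

Submit it with `sbatch job.sh`; Slurm rejects the job at submission time if your user is not a member of the requested account.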
Rules of conduct
There are no limits imposed on the resources a job may request. Please be polite and share the available resources sensibly. If you need above-average resources, please coordinate with other TIK Slurm users.
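Since nothing is enforced, sharing works best when each job requests only what it actually needs rather than whole nodes. A hypothetical submission with explicit, modest limits (all values are placeholders):

```shell
# Request explicit CPU, memory, GPU and time limits instead of defaults
sbatch --partition=tikgpu.all --account=tik-internal \
       --cpus-per-task=4 --mem=32G --gres=gpu:1 --time=08:00:00 job.sh
```

Explicit `--mem` and `--time` values also help the scheduler backfill smaller jobs around larger ones.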