Revision 2 as of 2024-09-03 07:47:52


LBB Slurm information

The Laboratory of Biosensors and Bioelectronics (LBB) owns nodes in the Slurm cluster with restricted access. The following information is an addendum to the main Slurm article in this wiki specific for accessing these LBB nodes.
If the information you're looking for isn't available here, please consult the main Slurm article.

Hardware

The following GPU nodes are reserved for exclusive use by LBB:

Server   | CPU                  | Frequency | Cores | Memory | /scratch SSD size | GPUs          | GPU Memory | Operating System
---------|----------------------|-----------|-------|--------|-------------------|---------------|------------|-----------------
lbbgpu01 | Intel Xeon Gold 5120 | 2.20 GHz  | 56    | 376 GB | 1.1 TB            | 8 RTX 2080 Ti | 11 GB      | Debian 11
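As a sketch of how a GPU on this node might be requested interactively (the node and partition names are from this article; the resource values and the `--pty bash` workflow are illustrative and follow standard Slurm syntax, so adjust them to your needs):

```shell
# Hypothetical interactive session with one RTX 2080 Ti on lbbgpu01;
# resource amounts below are examples, not recommendations.
srun --partition=lbbgpu.normal \
     --nodelist=lbbgpu01 \
     --gres=gpu:1 \
     --cpus-per-task=4 \
     --mem=40G \
     --pty bash -i
```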

Shared /scratch_net

Access to the local /scratch of each node is available as an automount (mounted on demand) under /scratch_net/lbbgpuNM on each node (replace NM with an existing hostname number).
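A short sketch of using the automount (the `$USER` subdirectory and dataset path are assumptions for illustration; only the /scratch_net/lbbgpu01 mount point itself is taken from this article):

```shell
# Listing the path triggers the on-demand automount of lbbgpu01's /scratch.
ls /scratch_net/lbbgpu01

# Hypothetical example: stage data from the shared automount onto the
# local /scratch of the node you are currently running on.
cp -r /scratch_net/lbbgpu01/$USER/dataset /scratch/$USER/
```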

Accounts and partitions

The nodes are grouped in partitions to prioritize access for different accounts:

Partition     | Nodes    | Slurm accounts with access | Account membership
--------------|----------|----------------------------|-------------------------------------------
lbbgpu.normal | lbbgpu01 | lbb                        | On request* for staff, guests and students

* Please contact the technical contact of LBB to be granted account membership
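A minimal batch-script sketch using this partition and account (the partition and account names are from the table above; the GPU request, memory amount, and workload line are illustrative assumptions):

```shell
#!/bin/bash
#SBATCH --partition=lbbgpu.normal   # LBB partition from the table above
#SBATCH --account=lbb               # LBB Slurm account from the table above
#SBATCH --gres=gpu:1                # example: request one GPU
#SBATCH --mem=40G                   # example memory request
#SBATCH --output=%j.out             # write job output to <jobid>.out

python train.py                     # hypothetical workload
```

Submit it with `sbatch job.sh` once your account membership has been granted.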

Overflow into gpu.normal

Jobs from LBB users will overflow to the partition gpu.normal in case all LBB nodes are busy, as LBB is a laboratory contributing to the Slurm cluster besides owning nodes.

Rules of conduct

There are no limits imposed on resources requested by jobs. Please be polite and share available resources sensibly. If you're in need of above-average resources, please coordinate with other LBB Slurm users.

Improving the configuration

If you think the current configuration of LBB nodes, partitions etc. could be improved, please get in touch with the technical contact of LBB:
The technical contact will streamline your ideas into a concrete change request, which we (ISG D-ITET) will implement for you.