XRM Slurm information

The X-ray Tomography Group (XRM) owns nodes with restricted access in the Slurm cluster. The following information is an addendum to the main Slurm article in this wiki, specific to accessing these XRM nodes. If the information you're looking for isn't available here, please consult the main Slurm article.

Hardware

The following GPU nodes are reserved for exclusive use by XRM:

hardin01
    CPU:              AMD EPYC 7763 64-Core Processor
    Frequency:        3.5 GHz
    Cores:            128
    Memory:           755 GB
    /scratch (SSD):   7.0 TB
    GPUs:             2 RTX A6000 (48 GB each), 1 Titan RTX (24 GB)
    Operating System: Debian 11
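
To inspect the node's configured resources and current state yourself, you can query Slurm directly. A minimal sketch using a standard Slurm command (the output depends on the cluster configuration):

    # Show configuration and state of the XRM node
    scontrol show node hardin01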

Accounts and partitions

The nodes are grouped into partitions to prioritize access for different accounts:

Partition:                  xrmgpu.normal
Nodes:                      hardin01
Slurm accounts with access: xrm
Account membership:         On request* for staff, guests and students

* Please contact the technical contact of XRM to be granted account membership.
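
Once granted membership of the xrm account, you can submit jobs to the XRM partition. Below is a minimal sketch of a batch script: the partition and account names are taken from the table above, while all resource values and the program name are placeholders to adapt to your job.

    #!/bin/bash
    #SBATCH --partition=xrmgpu.normal  # XRM partition (see table above)
    #SBATCH --account=xrm              # Slurm account with access
    #SBATCH --gres=gpu:1               # placeholder: request one GPU
    #SBATCH --cpus-per-task=4          # placeholder: CPU cores per task
    #SBATCH --mem=32G                  # placeholder: memory
    #SBATCH --time=04:00:00            # placeholder: time limit

    # Placeholder for your actual workload
    ./my_program

Submit the script with 'sbatch <script>'.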

Overflow into gpu.normal

If all XRM nodes are busy, jobs from XRM users overflow into the partition gpu.normal, since XRM contributes to the common Slurm cluster in addition to owning its own nodes.
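
To check which partition a job actually ended up in, e.g. whether it overflowed into gpu.normal, you can list your jobs together with their partition. A minimal sketch using standard squeue options:

    # List your own jobs including the partition each one runs in
    squeue -u "$USER" -o "%.10i %.15P %.20j %.8T %.10M"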

Rules of conduct

No limits are imposed on the resources jobs may request. Please be considerate and share the available resources sensibly. If you need above-average resources, please coordinate with the other XRM Slurm users.
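
Before submitting an unusually large job, it helps to check what is currently running on the XRM nodes. A minimal sketch using standard Slurm commands:

    # Jobs currently queued or running in the XRM partition
    squeue --partition xrmgpu.normal

    # Per-node view of the partition's current state
    sinfo --partition xrmgpu.normal --Node --long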

Improving the configuration

If you think the current configuration of XRM nodes, partitions etc. could be improved, please get in touch with the technical contact of XRM. They will consolidate your ideas into a concrete change request which we (ISG D-ITET) will implement for you.
