Contents
Slurm Pilot project for Biwi
- The following information is an abbreviated how-to with specific information for the pilot cluster.
Our official documentation for slurm is the Computing wiki article; you need to read it as well.
- Please bear in mind that the above article is meant to become the final documentation. It will be extended with your feedback where it applies to all slurm users, plus a specific section or an additional page concerning only Biwi.
The goal of this pilot is to provide documentation that enables SGE users to migrate their jobs to Slurm. This also means the section Accounts and limits is only informative at the moment.
The alpha version of a GPUMon alternative is available. Please don't send feedback yet; use it as it is.
Pilot-specific information
Involved machines are
biwirender01 for CPU computing
biwirender03 for GPU computing
All available GPU partitions are overlaid on biwirender03. They will be available on different nodes in the final cluster.
long partitions are not yet implemented in the pilot!
Initialising slurm
All slurm commands read the cluster configuration from the file named in the environment variable SLURM_CONF, so it needs to be set:
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
If you're interested, feel free to have a look at the configuration, feedback is welcome!
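Since every slurm command needs this variable, it is convenient to set it once in your shell startup file (e.g. ~/.bashrc) instead of per session. A minimal sketch; the path is the one given above, and the echo line is only there to confirm the setting:

```shell
# Put this in ~/.bashrc (or equivalent) so all slurm commands
# find the pilot configuration automatically.
export SLURM_CONF=/home/sladmcvl/slurm/slurm.conf
echo "Using slurm config: $SLURM_CONF"
```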
Available partitions
The equivalent of SGE's queues in slurm is called partitions.
sinfo shows all available partitions:
sinfo

PARTITION          AVAIL  TIMELIMIT   NODES  STATE  NODELIST
cpu.medium.normal  up     2-00:00:00  38     idle   bender[01-06,39-70]
gpu.low.normal     up     2-00:00:00  1      idle   biwirender[03,04]
gpu.medium.normal  up     2-00:00:00  15     idle   biwirender[05-12,17,20],bmicgpu[01-05]
gpu.medium.long    up     5-00:00:00  15     idle   biwirender[05-12,17,20],bmicgpu[01-05]
gpu.high.normal    up     2-00:00:00  3      idle   biwirender[13-15]
gpu.high.long      up     5-00:00:00  3      idle   biwirender[13-15]
gpu.debug          up     8:00:00     1      idle   biwirender[03,04]
Only the interactive partition gpu.debug should be specified explicitly (see below). The scheduler decides which partition to put a job in based on the resources the job requests.
Interactive jobs
For testing purposes a job with an interactive session with 1 GPU can be started:
srun --time 10 --partition=gpu.debug --gres=gpu:1 --pty bash -i
Such jobs are placed in gpu.debug by the scheduler.
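If you start such sessions often, a small wrapper can assemble the command line for you. This is a hypothetical helper (not part of the pilot); it only prints the srun command instead of running it, so the actual allocation stays explicit:

```shell
# Hypothetical helper: build (but do not run) the srun command line
# for an interactive debug session with a given number of GPUs.
debug_session_cmd() {
    gpus=${1:-1}
    echo "srun --time 10 --partition=gpu.debug --gres=gpu:$gpus --pty bash -i"
}

debug_session_cmd 1
```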
Allocating resources
GPUs
For a job to have access to a GPU, GPU resources need to be requested with the option --gres=gpu:<n>
Here's the sample job submission script primes_1GPU.sh requesting 1 GPU:
#!/bin/sh
#
#SBATCH --mail-type=ALL
#SBATCH --gres=gpu:1
#SBATCH --output=log/%j.out
export LOGFILE=`pwd`/log/$SLURM_JOB_ID.out
# env | grep SLURM_ #Uncomment this line to show environment variables set by slurm for a job
#
# binary to execute
codebin/primes $1
echo ""
echo "Job statistics: "
sstat -j $SLURM_JOB_ID --format=JobID,AveVMSize%15,MaxRSS%15,AveCPU%15
echo ""
exit 0;
- Make sure the directory for the logfiles exists before submitting a job.
Please keep the environment variable LOGFILE; it is used in the scheduler's epilog script to append information to your logfile after your job has ended (at which point $SLURM_JOB_ID is no longer available).
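The LOGFILE convention from the script can be tried outside a job. The sketch below fakes SLURM_JOB_ID (slurm sets it for a running job) just to show how the path is derived:

```shell
# Sketch of how LOGFILE is derived in the job script above.
# SLURM_JOB_ID is normally set by slurm; it is faked here for illustration.
SLURM_JOB_ID=${SLURM_JOB_ID:-133}
mkdir -p log                          # must exist before the job is submitted
export LOGFILE=$(pwd)/log/$SLURM_JOB_ID.out
echo "$LOGFILE"
```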
slurm also sets CUDA_VISIBLE_DEVICES. See the section GPU jobs in the main slurm article.
A job requesting more GPUs than allowed by the QOS of the user's account (see Accounts and limits) will stay in the "PENDING" state.
Memory
If you omit the --mem option, a default of 30 GB of memory per GPU and 3 CPUs per GPU will be allocated to your job, which makes the scheduler choose gpu.medium.normal:
sbatch primes_1GPU.sh

sbatch: GRES requested     : gpu:1
sbatch: GPUs requested     : 1
sbatch: Requested Memory   : ---
sbatch: CPUs requested     : ---
sbatch: Your job is a gpu job.
Submitted batch job 133
squeue --Format jobarrayid:8,partition:20,reasonlist:20,username:10,tres-alloc:45,timeused:10
JOBID  PARTITION          NODELIST(REASON)  USER      TRES_ALLOC                                 TIME
133    gpu.medium.normal  biwirender03      testuser  cpu=3,mem=30G,node=1,billing=3,gres/gpu=1  0:02
An explicit --mem option selects the partition as follows:
--mem       Partition
< 30G       gpu.low.normal
30G - 50G   gpu.medium.normal
>50G - 70G  gpu.high.normal
>70G        not allowed
For example with:
sbatch --mem=50G primes_2GPU.sh
the above squeue command shows:
JOBID  PARTITION        NODELIST(REASON)  USER      TRES_ALLOC                                  TIME
136    gpu.high.normal  biwirender03      testuser  cpu=6,mem=100G,node=1,billing=6,gres/gpu=2  0:28
Accounts and limits
In slurm lingo an account is equivalent to a user group. The following accounts are configured for users to be added to:
sacctmgr show account
Account   Descr                 Org
deadconf  deadline_conference   biwi
deadline  deadline              biwi
long      longer time limit     biwi
root      default root account  root
staff     staff                 biwi
student   student               biwi
The accounts isg and root are not accessible to Biwi.
GPU limits are stored in so-called QOS (Quality of Service) entries; each account is associated with the QOS we want to apply to it. The limits apply to all users added to an account.
sacctmgr show assoc format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
Account   User      Partition  MaxJobs  QOS    Def QOS
deadconf  ........                      gpu_4  gpu_4
deadline  ........                      gpu_5  gpu_5
long      ........                      gpu_2  gpu_2
staff     ........                      gpu_7  gpu_7
student   ........                      gpu_3  gpu_3
The gpu_x QOS entries only contain a limit on the number of GPUs per user:
sacctmgr show qos format=name%15,maxtrespu%30
Name    MaxTRESPU
normal
gpu_1   gres/gpu=1
gpu_2   gres/gpu=2
gpu_3   gres/gpu=3
gpu_4   gres/gpu=4
gpu_5   gres/gpu=5
gpu_6   gres/gpu=6
Users with administrative privileges can move a user between the accounts deadline and deadconf.
List associations of testuser:
sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
Account   User      Partition  MaxJobs  QOS    Def QOS
deadline  testuser                      gpu_3  gpu_3
Move testuser from deadline to staff:
/home/sladmcvl/slurm/change_account_of_user.sh testuser deadline staff
List associations of testuser again:
sacctmgr show assoc where user=testuser format=account%15,user%15,partition%15,maxjobs%8,qos%15,defaultqos%15
Account   User      Partition  MaxJobs  QOS    Def QOS
staff     testuser                      gpu_2  gpu_2
Accounts with administrative privileges can be shown with:
sacctmgr show user format=user%15,defaultaccount%15,admin%15
Last words
Have fun using SLURM for your jobs!
Content for the final page
Here starts the content which will eventually evolve into the final wiki page. The information won't all be available at once; filling it in is an ongoing process.
Nodes
The following table summarizes node-specific information:
Server            CPU                    Frequency  Cores  Memory  /scratch SSD  GPUs                           Operating System
bender[01-06]     Intel Xeon E5-2670 v2  2.50 GHz   40     125 GB  -             -                              Debian 9
bender[39-52]     Intel Xeon X5650       2.67 GHz   24     94 GB   -             -                              Debian 9
bender[53-70]     Intel Xeon E5-2665 0   2.40 GHz   32     125 GB  -             -                              Debian 9
biwirender03      Intel Xeon E5-2650 v2  2.60 GHz   32     125 GB  -             6 Tesla K40c (11 GB)           Debian 9
biwirender04      Intel Xeon E5-2637 v2  3.50 GHz   8      125 GB  ✓             5 Tesla K40c (11 GB)           Debian 9
biwirender0[5,6]  Intel Xeon E5-2637 v2  3.50 GHz   8      251 GB  ✓             5 GeForce GTX TITAN X (12 GB)  Debian 9
biwirender0[7-9]  Intel Xeon E5-2640 v3  2.60 GHz   16     251 GB  ✓             5 GeForce GTX TITAN X (12 GB)  Debian 9
biwirender10      Intel Xeon E5-2650 v4  2.20 GHz   24     251 GB  ✓             5 GeForce GTX TITAN X (12 GB)  Debian 9
biwirender11      Intel Xeon E5-2640 v3  2.60 GHz   16     251 GB  ✓             5 GeForce GTX TITAN X (12 GB)  Debian 9
biwirender12      Intel Xeon E5-2640 v3  2.60 GHz   32     251 GB  ✓             6 GeForce RTX 2080 Ti (10 GB)  Debian 9
biwirender13      Intel Xeon E5-2680 v3  2.50 GHz   24     503 GB  ✓             4 TITAN Xp (12 GB)             Debian 9
biwirender14      Intel Xeon E5-2680 v4  2.40 GHz   28     503 GB  ✓             3 TITAN Xp (12 GB)             Debian 9
biwirender15      Intel Xeon E5-2680 v4  2.40 GHz   28     503 GB  ✓             3 TITAN Xp (12 GB)             Debian 9
biwirender17      Intel Xeon E5-2620 v4  2.10 GHz   32     503 GB  ✓             8 GeForce GTX 1080 Ti (11 GB)  Debian 9
biwirender20      Intel Xeon E5-2620 v4  2.10 GHz   32     377 GB  ✓             8 GeForce GTX 1080 Ti (11 GB)  Debian 9
bmicgpu01         Intel Xeon E5-2680 v3  2.50 GHz   24     251 GB  ✓             6 TITAN X (Pascal) (12 GB)     Debian 9
bmicgpu02         Intel Xeon E5-2640 v3  2.60 GHz   16     251 GB  ✓             5 TITAN Xp (12 GB)             Debian 9
bmicgpu0[3-5]     Intel Xeon E5-2630 v4  2.20 GHz   20     251 GB  ✓             6 TITAN Xp (12 GB)             Debian 9
Detailed information about all nodes can be seen by issuing the command
scontrol show nodes
An overview of utilization of individual node's resources can be shown with:
sinfo --Format nodehost:14,statecompact:7,cpusstate:16,cpusload:11,memory:8,allocmem:10,gres:55,gresused:62,reason:10
(Adapt the field lengths for gres and gresused to your needs.)
Partitions
Partitions including their limits are shown in the following table:
Partition          DefMPG  MaxMPG  DefCPG  MaxCPG  Time limit
cpu.medium.normal  -       -       -       -       2 d
gpu.low.normal     20 GB   25 GB   3       3       2 d
gpu.medium.normal  40 GB   50 GB   3       5       2 d
gpu.medium.long    40 GB   50 GB   3       5       5 d
gpu.high.normal    70 GB   70 GB   4       4       2 d
gpu.high.long      70 GB   70 GB   4       4       5 d
gpu.debug          20 GB   25 GB   3       3       8 h
gpu.mon            -       -       -       -       15 m
Def: Default, Max: Maximum, MPG: Memory Per GPU, CPG: CPUs Per GPU
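The per-GPU values multiply out by the number of GPUs a job requests. A small sketch of that arithmetic, assuming the DefMPG/DefCPG values for gpu.medium.normal from the table above:

```shell
# Default memory and CPUs a 2-GPU job in gpu.medium.normal would get,
# using DefMPG=40 GB and DefCPG=3 from the table above (assumed values).
gpus=2
def_mpg=40   # GB per GPU
def_cpg=3    # CPUs per GPU
echo "mem=$((gpus * def_mpg))G cpus=$((gpus * def_cpg))"
```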
gpu.debug
This partition is reserved for running interactive jobs for debugging purposes. If a job hasn't run a process on an allocated GPU after 20 minutes, it will be killed.
gpu.mon
This partition is reserved for running interactive jobs that monitor other running jobs. No GPUs can be allocated; only 1 core per job and 1 job per user is allowed. This partition may later be replaced by an explanation of how to reserve a job step for monitoring inside a job and attach to such a job step.
*.long
The *.long partitions are only accessible to members of the account "long". Membership is temporary and granted on demand by <contact to be filled in>.
Display specific information
The following is a collection of command sequences to quickly extract specific summaries.
GPUs per user
Show a sorted list of users and a summary of the GPUs used by their jobs:
scontrol -a show jobs \
|grep -E '(UserId|TRES)=' \
|paste - - \
|grep 'gres/gpu' \
|sed -E 's:^\s+UserId=([^\(]+).*gres/gpu=([0-9]+)$:\1;\2:' \
|awk -F ';' -v OFS=';' '{a[$1]+=$2}END{for(i in a) print i,a[i]}' \
|sort
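To follow the text processing without a live cluster, the same pipeline can be run on canned `scontrol show jobs` output. The sample below is made up (job ids and the testuser name are illustrative), but mimics the UserId/TRES lines the pipeline relies on:

```shell
# The pipeline above, run on canned `scontrol show jobs` output so the
# grep/paste/sed/awk steps can be followed step by step.
sample_scontrol_output() {
cat <<'EOF'
JobId=133 JobName=primes_1GPU.sh
   UserId=testuser(20001) GroupId=biwi(400)
   TRES=cpu=3,mem=30G,node=1,billing=3,gres/gpu=1
JobId=136 JobName=primes_2GPU.sh
   UserId=testuser(20001) GroupId=biwi(400)
   TRES=cpu=6,mem=100G,node=1,billing=6,gres/gpu=2
EOF
}

sample_scontrol_output \
|grep -E '(UserId|TRES)=' \
|paste - - \
|grep 'gres/gpu' \
|sed -E 's:^\s+UserId=([^\(]+).*gres/gpu=([0-9]+)$:\1;\2:' \
|awk -F ';' -v OFS=';' '{a[$1]+=$2}END{for(i in a) print i,a[i]}' \
|sort
```

For the sample above this prints `testuser;3`: the two jobs' GPU counts (1 and 2) are summed per user.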