Working with GPU or CPU in data sciences

This article is based on the guide "Setting up a personal python development infrastructure", which is required reading to understand some of the concepts used here. The following information is intended for users of grid computing clusters, i.e. for staff members. It is a collection of hints and explanations for using tools in the field of data sciences on the D-ITET computing infrastructure.

For an introduction to data sciences, have a look at the Guided Data Science Resources. It is a community-sourced repository containing open source learning material about data sciences in general.

Platform information

Information about platform components can be shown by issuing the following commands in a shell:
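For example, the following standard Linux commands show details about CPU, memory, GPUs and the operating system (a minimal selection, other tools exist):

# CPU model, core count and NUMA layout
lscpu
# Total and available memory
free -h
# Installed NVIDIA GPUs, driver version and current utilization
nvidia-smi
# Operating system release
lsb_release -a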

GPU numbering

The numbering of GPUs can be confusing as it is non-uniform across different sources of information. One source of information is the so-called PCI bus number, the other is the PCI device minor number. They are generated differently, and although their order might match, this cannot be taken for granted!


By CUDA_DEVICE_ORDER: CUDA

The environment variable CUDA_DEVICE_ORDER controls the numbering of GPUs in a CUDA context. Its default is FASTEST_FIRST, which makes the fastest available GPU number 0 in CUDA_VISIBLE_DEVICES.
For details, see the section CUDA Environment Variables in the CUDA toolkit documentation.
As long as a node only has one type of GPU installed, this numbering can be identical to the ordering enforced by setting CUDA_DEVICE_ORDER=PCI_BUS_ID.
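A minimal Python sketch of enforcing a deterministic numbering, assuming pytorch as the CUDA application:

import os

# Both variables must be set before the first CUDA initialization,
# i.e. before importing a CUDA-using framework such as torch
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # expose only the first GPU on the PCI bus

import torch
print(torch.cuda.device_count())  # prints 1 if a GPU is visible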

By kernel driver enumeration: nvidia-smi/NVML

The command nvidia-smi, which uses the NVIDIA Management Library (NVML), numbers GPUs based on the enumeration by the kernel driver. As this can change between node reboots, it should not be used as a constant value.
For details, see the related section in the nvidia-smi man page by issuing the command man --pager='less +/--id=ID' nvidia-smi in your shell.
A GPU can consistently be detected by its UUID or PCI bus ID as follows:

nvidia-smi -q |grep -E '(GPU UUID|Minor Number|Bus Id)\s+:' |paste - - - |column -t
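Since the UUID is stable across reboots, it can for example be used to pin a process to a specific GPU, as CUDA_VISIBLE_DEVICES also accepts UUIDs (the UUID below is a placeholder):

# List GPUs with their stable UUIDs
nvidia-smi -L
# Restrict CUDA applications in this shell to one specific GPU
export CUDA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx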

By PCI device minor number: Operating system/Kernel driver

The GPU ID used by the operating system in /dev/nvidia[0..n] is based on the PCI device minor number. This number is generated by the kernel driver in a non-transparent way and can change after a reboot.
A GPU can consistently be detected by its UUID or PCI bus ID as follows:

grep -h -E '(GPU UUID|Device Minor|Bus Location):' /proc/driver/nvidia/gpus/*/information |paste - - - |column -t


CUDA toolkit

The CUDA toolkit provides a development environment for creating high performance GPU-accelerated applications. It is a necessary software dependency for tools used in GPU computing.

Matching toolkit versions to installed driver

The version of the NVIDIA driver installed on a platform limits the range of CUDA toolkit versions that work with it. The driver version is subject to operating system update policies and cannot be changed by a user with normal privileges. It is typically older on desktop clients and newer on Slurm GPU nodes.
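The driver version installed on the system you are working on can be shown, for example, with:

nvidia-smi --query-gpu=driver_version --format=csv,noheader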

CUDA toolkits matching the driver of a system are provided in the SEPP package cuda_toolkit-1x.x-sr. A toolkit command like nvcc is started through a wrapper which selects the command version matching the driver of the system it is invoked from.
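To check which toolkit version the wrapper selected on the current system, you can for example run:

nvcc --version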

If you set up your project in a conda environment, a CUDA toolkit is likely installed as a dependency of another tool you install in your environment.
When you install such a project, it is crucial to make sure the CUDA toolkit version pulled in matches the NVIDIA driver of the system the environment will run on.
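To see which toolkit version ended up in an environment, you can for example list it with conda:

conda list cudatoolkit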

Installing a specific toolkit version with conda

The easiest way to install the CUDA toolkit is by using conda. Available versions can be shown with

conda search cudatoolkit

And the version matching the driver can be installed with the following command in an active environment:

conda install cudatoolkit=<version number>

/!\ conda defines virtual packages to satisfy dependencies of real packages on features of the operating system it is running on. They can be shown with

conda info

The virtual package __cuda=<version number> matches the NVIDIA driver installed on the system. In order to force-install an environment depending on a different driver version, the virtual package can be overridden by setting the environment variable

export CONDA_OVERRIDE_CUDA=<version number>

A typical use case is preparing an environment locally on a Linux workstation which will later run on a GPU cluster node:
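A minimal sketch of this workflow; the version numbers and environment name are placeholders, adapt them to your systems:

# The driver on the cluster nodes supports CUDA 11.3, the local driver does not
export CONDA_OVERRIDE_CUDA=11.3
conda create -n gpu-project python=3.9
conda activate gpu-project
conda install cudatoolkit=11.3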

Missing features

The feature set of the anaconda package cudatoolkit is incomplete compared to a toolkit installed with the official installer by NVIDIA. The NVIDIA CUDA Compiler nvcc is missing, for example. At the time of writing this article, the alternative was to install the package cudatoolkit-dev, which downloads and installs a full CUDA toolkit.
Make sure to set TMPDIR to a location with enough space before installing cudatoolkit-dev.
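A sketch of such an installation, assuming a scratch directory with sufficient space is used as TMPDIR:

# The scratch location is a placeholder, choose any directory with enough space
export TMPDIR=/scratch/${USER}/tmp
mkdir -p "${TMPDIR}"
conda install -c conda-forge cudatoolkit-dev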

Installing a specific toolkit version with its official installer

A complete toolkit can be installed outside of a conda virtual environment by using the official installer for the version of choice.

Download the installer

Select your operating system and architecture on NVIDIA's CUDA toolkit download page. This will show either a download button or a wget command with the URL to download the installer.
Note that the minor versions of the toolkit and driver might not be reflected in NVIDIA's dependency matrix.

Install with normal user privileges

The following script facilitates installation by passing options to the installer to install the toolkit in a custom location without elevated privileges. Please adapt the variables containing version numbers to the version of your choice.


#!/bin/bash

# Adapt the following version numbers according to your needs
# (the values below are examples for CUDA toolkit 10.2)
cuda_version_major='10.2'
cuda_version_minor='89'
nvidia_driver_version='440.33.01'
cuda_installer="cuda_${cuda_version_major}.${cuda_version_minor}_${nvidia_driver_version}_linux.run"
cuda_download_url="https://developer.download.nvidia.com/compute/cuda/${cuda_version_major}/Prod/local_installers/${cuda_installer}"

# Adapt the following directory locations according to your needs
cuda_install_dir="${HOME}/cuda-${cuda_version_major}.${cuda_version_minor}"
TMPDIR="${TMPDIR:-/tmp}"

mkdir -p "${cuda_install_dir}" "${TMPDIR}"
if [[ ! -f "${TMPDIR}/${cuda_installer}" ]]; then
    wget "${cuda_download_url}" -O "${TMPDIR}/${cuda_installer}"
fi
if [[ ! -x "${TMPDIR}/${cuda_installer}" ]]; then
    chmod 700 "${TMPDIR}/${cuda_installer}"
fi
echo 'Installing, please be patient.'
if "${TMPDIR}/${cuda_installer}" --silent --override --toolkit --installpath="${cuda_install_dir}" --toolkitpath="${cuda_install_dir}" --no-man-page --tmpdir="${TMPDIR}"; then
    echo 'Done.'
    echo "To use CUDA Toolkit ${cuda_version_major}.${cuda_version_minor}, extend your environment as follows:"
    if [[ -z ${PATH} ]]; then
        echo "export PATH=${cuda_install_dir}/bin"
    else
        echo "export PATH=${cuda_install_dir}/bin:\${PATH}"
    fi
    if [[ -z ${LD_LIBRARY_PATH} ]]; then
        echo "export LD_LIBRARY_PATH=${cuda_install_dir}/lib64"
    else
        echo "export LD_LIBRARY_PATH=${cuda_install_dir}/lib64:\${LD_LIBRARY_PATH}"
    fi
else
    cat /tmp/cuda-installer.log
fi

Important reminder about working locally

If you're working locally, meaning on a managed Linux desktop or your private machine, always keep in mind: the NVIDIA driver installed there is typically older than the one on the Slurm GPU nodes. A CUDA toolkit working locally may therefore not be the version you need on the cluster, and preparing an environment for the cluster locally may require overriding the virtual package with CONDA_OVERRIDE_CUDA as described above.

cuDNN library

The cuDNN library is a GPU-accelerated library of primitives for deep neural networks. It is another dependency for GPU computing. In order to use it, NVIDIA asks you to read the Software License Agreement for the library. The library is registered by ISG to be used for research at D-ITET. If you use the library differently, you are obliged to register it yourself.

conda automatically installs this library if it is a dependency of another package being installed.


pytorch

pytorch is one of the main open source deep learning platforms in use at the time of writing this page. If you haven't done so already, read this installation example.

A good starting point for further information is the official pytorch documentation.

Testing pytorch

To verify the successful installation of pytorch, run the following python code in your python interpreter:

import torch
x = torch.rand(5, 3)
print(x)

The output should be similar to the following:

tensor([[0.4813, 0.8839, 0.1568],
        [0.0485, 0.9338, 0.1582],
        [0.1453, 0.5322, 0.8509],
        [0.2104, 0.4154, 0.9658],
        [0.6050, 0.9571, 0.3570]])

Environment and platform information

The following example shows how to gather information you can use, for example, to decide whether to run your code on the CPU or a GPU:

import sys
from subprocess import call

import torch

print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION:', torch.version.cuda)
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
if torch.cuda.is_available():
    print('__Device capability:', torch.cuda.get_device_capability())
    print('Active CUDA Device: GPU', torch.cuda.current_device())
# Show the GPUs as seen by the driver for comparison
call(['nvidia-smi', '--format=csv', '--query-gpu=index,name,driver_version,memory.total,memory.used'])
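A common follow-up pattern is to select the device once and move tensors to it; a minimal sketch:

import torch

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.rand(5, 3).to(device)
print(x.device)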


tensorflow

tensorflow is another popular open source platform for machine learning. If you haven't done so already, read this installation example.

Choose from the available tutorials to learn how to use it.

Platform information

The following code prints information about the capabilities of the platform you run your environment on:

import tensorflow as tf

# Session/ConfigProto are tensorflow 1.x APIs, available in tensorflow 2.x under tf.compat.v1
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))

Lines containing device:XLA_ show which CPU/GPU devices are available.

A line containing cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version means the NVIDIA driver installed on the system you run the code on is not compatible with the CUDA toolkit installed in the environment you run the code from.

An extensive list of device information can be shown with:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

The module tf.test contains helpful functions to gather platform information, for example:
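import tensorflow as tf

# Was this tensorflow build compiled with CUDA support?
print(tf.test.is_built_with_cuda())
# Name of the default GPU device, or an empty string if none is available
print(tf.test.gpu_device_name())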

Managing GPU resources

If your code is going to run on a GPU cluster, you need to manage your use of GPU resources with the following recommended configuration:

config = tf.compat.v1.ConfigProto()
# Allocate GPU memory on demand instead of reserving all of it at once
config.gpu_options.allow_growth = True
# Fall back to the CPU if an operation has no GPU implementation
config.allow_soft_placement = True
sess = tf.compat.v1.Session(config=config)
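With native tensorflow 2.x code, roughly the same behaviour can be configured as follows; a sketch, to be run before any GPU memory is allocated:

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving all of it at once
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
# Fall back to the CPU if an operation has no GPU implementation
tf.config.set_soft_device_placement(True)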
