Set up a python development environment for data science

The following procedure shows how to set up a python development environment with the conda package manager and install pytorch and tensorflow, including non-python dependencies like the CUDA toolkit and the cuDNN library.

Install conda

  • Time to install: ~1 minute
  • Space required: ~350M

To provide conda, the minimal anaconda distribution miniconda can be installed and configured for the D-ITET infrastructure with the following bash script:

#!/bin/bash

# Locations to store environments
# net_scratch is used as default, local scratch needs to be chosen explicitly
LOCAL_SCRATCH="/scratch/${USER}"
NET_SCRATCH="/itet-stor/${USER}/net_scratch"

# Installer of choice for conda
CONDA_INSTALLER_URL='https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh'

# Unset pre-existing python paths
[[ -z ${PYTHONPATH} ]] || unset PYTHONPATH

# Download the latest version of miniconda and install it
wget -O miniconda.sh "${CONDA_INSTALLER_URL}" \
    && chmod +x miniconda.sh \
    && ./miniconda.sh -b -p "${NET_SCRATCH}/conda" \
    && rm ./miniconda.sh

# Configure conda
eval "$(${NET_SCRATCH}/conda/bin/conda shell.bash hook)"
conda config --add pkgs_dirs "${NET_SCRATCH}/conda_pkgs" --system
conda config --add envs_dirs "${LOCAL_SCRATCH}/conda_envs" --system
conda config --add envs_dirs "${NET_SCRATCH}/conda_envs" --system
conda config --set auto_activate_base false
conda deactivate

# Show how to initialize conda
echo
echo 'Initialize conda immediately:'
echo "eval \"\$(${NET_SCRATCH}/conda/bin/conda shell.bash hook)\""
echo
echo 'Automatically initialize conda for future shell sessions:'
echo "echo 'eval \"\$(${NET_SCRATCH}/conda/bin/conda shell.bash hook)\"' >> ${HOME}/.bashrc"

# Show how to remove conda
echo
echo 'Completely remove conda:'
echo "rm -r ${NET_SCRATCH}/conda ${NET_SCRATCH}/conda_pkgs ${NET_SCRATCH}/conda_envs ${LOCAL_SCRATCH}/conda_envs ${HOME}/.conda"

Save this script as install_conda.sh, make it executable with

chmod +x install_conda.sh

and execute the script by issuing

./install_conda.sh

Choose your preferred method of initializing conda as recommended by the script.
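
For reference, with the default locations used by the install script above, initializing conda in the current shell session looks like this:

# Initialize conda in the current shell session
# (path as configured by the install_conda.sh script above)
eval "$(/itet-stor/${USER}/net_scratch/conda/bin/conda shell.bash hook)"

To make the initialization permanent, append the same line to your ~/.bashrc as shown in the script output.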

Conda storage locations

The directories listed in the command for complete conda removal contain the following data:

/itet-stor/$USER/net_scratch/conda: The miniconda installation
/itet-stor/$USER/net_scratch/conda_pkgs: Downloaded packages
/itet-stor/$USER/net_scratch/conda_envs: Virtual environments on NAS
/scratch/$USER/conda_envs: Virtual environments on local disk
/home/$USER/.conda: Personal conda configuration

The purpose of this configuration is to store reproducible and space-consuming data outside of your $HOME to prevent using up your quota.
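
The resulting settings can be inspected at any time:

# Show the configured package cache and environment directories
conda config --show pkgs_dirs envs_dirs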

Using Conda

conda allows you to separate installed software packages from each other by creating so-called environments. Using environments is best practice for creating deterministic and reproducible toolsets.

conda takes care of dependencies common to the packages it is asked to install. If two packages share a dependency but specify differing version ranges for it, conda chooses the highest version that satisfies both. This means the dependency installed in an environment containing both packages might have a lower version than in environments keeping the packages separate.
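
The following sketch illustrates this behaviour; the package names package_a, package_b and libdep are hypothetical placeholders:

# Hypothetical example: package_a requires libdep>=1.0,<2.0 and package_b requires libdep>=1.2,<1.5.
# Installed together, conda resolves libdep to the highest version both accept,
# which is lower than what package_a alone would receive.
conda create --name combined_env package_a package_b

# Inspect which version of the shared dependency was actually installed
conda list --name combined_env libdep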

It is best practice to separate packages into different environments if they don't need to interact.

For a complete guide to conda see the official documentation.

Common commands

The official cheat sheet is a compact summary of common commands to get you started. An abbreviated list is shown here:

Create an environment called "my_env" with packages "package1" and "package2" installed

conda create --name my_env package1 package2

Activate the environment called "my_env"

conda activate my_env

Deactivate the current environment

conda deactivate

List available environments

conda env list

Remove the environment called "my_env"

conda remove --name my_env --all

Create a cloned environment named "cloned_env" from "original_env"

conda create --name cloned_env --clone original_env

Export the active environment definition to the file "my_env.yml"

conda env export > my_env.yml

Recreate a previously exported environment

conda env create --file my_env.yml

List packages installed in the active environment

conda list

Create the environment "my_env" in a specified location

This example creates the environment on local scratch for faster disk access

conda create --prefix /scratch/$USER/conda_envs/my_env
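
An environment created with --prefix is addressed by its path rather than by a name and can be activated like this:

# Activate an environment by its full path instead of its name
conda activate /scratch/$USER/conda_envs/my_env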

Remove index cache, lock files, unused cache packages, and tarballs

conda clean --all

The name of the default environment is base.

Installation examples

  • Time to install: ~5 minutes per environment
  • Space required: ~1.5G packages, ~3G per environment

The following examples show how to install pytorch and tensorflow in an environment intended to be run either on a Linux diskless client or on the GPU cluster. The differences between the examples stem from the version of the NVIDIA driver available on each system. For details see the explanation below.

pytorch on diskless client: CUDA toolkit 9

conda create --name pytcu9 pytorch torchvision cudatoolkit=9.0 --channel pytorch

pytorch on diskless client: CPU-only

conda create --name pytcpu pytorch-cpu torchvision-cpu --channel pytorch

pytorch on GPU cluster: CUDA toolkit 10

conda create --name pytcu10 pytorch torchvision cudatoolkit=10.0 --channel pytorch

tensorflow on diskless client: CUDA toolkit 9

conda create --name tencu9 tensorflow-gpu cudatoolkit=9.0

tensorflow on diskless client: CPU-only

conda create --name tencpu tensorflow

tensorflow on GPU cluster: CUDA toolkit 10

conda create --name tencu10 tensorflow-gpu cudatoolkit=10.0

A CPU version of tensorflow optimized for Intel CPUs exists, which might be a tempting choice. Be aware that this build of tensorflow and its installed dependencies will differ from the versions installed from the default channel in the examples above.
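
As a sketch only: Intel distributes such builds through its own conda channel, so an installation could look like the following (verify the channel and package availability before relying on it):

# Assumption: the 'intel' conda channel provides an Intel-optimized tensorflow build
conda create --name tenintel tensorflow --channel intel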

Maintenance

The cache of installed packages will consume a lot of space over time. The default location for the package cache resides on NetScratch; the terms of use for this storage area require cleaning up the cache regularly.
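
A minimal cleanup that can also be run from a scheduled job; the --yes flag skips the confirmation prompt:

# Remove index cache, lock files, unused cache packages and tarballs without prompting
conda clean --all --yes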

Backup

Regular backups of environments are recommended so that an environment used at a certain point in time can be reproduced. Before installing into or updating an environment, always create a backup to be able to revert the changes.

For a simple backup of all environments the following script can be used:

#!/bin/bash

BACKUP_DIR="${HOME}/conda_env_backup"
MY_TIME_FORMAT='%Y-%m-%d_%H-%M-%S'

NOW=$(date "+${MY_TIME_FORMAT}")
[[ ! -d "${BACKUP_DIR}" ]] && mkdir "${BACKUP_DIR}"
ENVS=$(conda env list | grep '^\w' | cut -d' ' -f1)
for env in $ENVS; do
    echo "Exporting ${env} to ${BACKUP_DIR}/${env}_${NOW}.yml"
    conda env export --name "${env}" > "${BACKUP_DIR}/${env}_${NOW}.yml"
done
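
To roll back to one of these backups, recreate the environment from the exported file; the file name below is a hypothetical example:

# Recreate an environment under a new name from a previously exported definition
# (replace the file name with an actual export from ${HOME}/conda_env_backup)
conda env create --name my_env_restored --file "${HOME}/conda_env_backup/my_env_2019-05-13_08-00-00.yml"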

Testing installations

Testing pytorch

To verify the successful installation of pytorch, run the following python code in your python interpreter:

from __future__ import print_function
import torch
x = torch.rand(5, 3)
print(x)

The output should be similar to the following:

tensor([[0.4813, 0.8839, 0.1568],
        [0.0485, 0.9338, 0.1582],
        [0.1453, 0.5322, 0.8509],
        [0.2104, 0.4154, 0.9658],
        [0.6050, 0.9571, 0.3570]])

To verify CUDA availability in pytorch, run the following code:

import torch
torch.cuda.is_available()

It should return True.

Testing TensorFlow

The following code prints information about your tensorflow installation:

import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

Lines containing device: XLA_ show which CPU/GPU devices are available.

A line containing cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version means the NVIDIA driver installed on the system you run the code on is not compatible with the CUDA toolkit installed in the environment you run the code from.

NVIDIA CUDA Toolkit

Which version of the CUDA toolkit is usable depends on the version of the NVIDIA driver installed on the machine your programs run on. The version can be checked by issuing the command nvidia-smi and looking for the number next to the text Driver Version.
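
Assuming a version of nvidia-smi that supports query options, the driver version can also be printed directly:

# Print only the installed NVIDIA driver version
nvidia-smi --query-gpu=driver_version --format=csv,noheader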

The CUDA compatibility document by NVIDIA shows a dependency matrix matching driver and toolkit versions.
