User:Jeblad/TensorFlow
To make an efficient environment for development it might be necessary to tweak and adapt the setup. That implies diverging versions of the various libraries, and even libraries that should not be installed system-wide at all. To make this possible either ''virtualenv'' or ''vagrant'' can be used. CUDA and TensorFlow are no different in this respect.
Virtualenv on main computer
For some reason you may want to run CUDA and/or TensorFlow on the bare metal.[1][2] Especially note that PCI passthrough must work for Vagrant to be used,[3] and if it does not work then this is the option of choice.
Environment at main computer
Installation of CUDA:[4]
- Open "System Settings"
- Open "Software and Updates"
- Open "Additional Drivers"
- Select one of the Nvidia Drivers and click "Apply Changes"
- Reboot the system.
- Open a terminal window and run nvidia-smi; this will identify the graphics card.
CUDA
Pre-install actions:[5]
$ sudo apt-get install linux-headers-$(uname -r)
Installation Instructions:[6]
$ sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb
$ sudo apt-get update
$ sudo apt-get install cuda
Post-install actions:[7]
Add the following to .profile,[8] but note the versioning; it must match the names in use!
# additional exports
export PATH=/usr/local/cuda/bin:/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Note the ${PATH:+:${PATH}} construct: it only emits the separating colon when PATH is already set. Putting an explicit colon in front of it would generate a double colon, which names the current directory and is a bit dangerous; later on it will also create a warning.
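The ${VAR:+...} idiom can be tried directly in a shell; the variable names below are just for the demonstration:

```shell
# The :+ expansion only adds the separating colon when the variable is non-empty,
# so no empty (current-directory) entry sneaks into the search path.
demo_empty=""
demo_set="/usr/bin"
echo "new${demo_empty:+:${demo_empty}}"   # -> new
echo "new${demo_set:+:${demo_set}}"       # -> new:/usr/bin
```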
Install cuDNN; note that the download is behind a registration wall (a free NVIDIA Developer Program membership):[9]
$ cd <installpath>
$ export LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH
Add <installpath> to your build and link process by adding -I<installpath> to your compile line and -L<installpath> -lcudnn to your link line, or simply copy the files into the CUDA install directory.
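The second option, copying into the CUDA install directory, can be sketched as follows; the archive name is hypothetical and must be replaced with the file actually downloaded:

```shell
# Unpack the downloaded archive (hypothetical name; use the real one)
tar -xzvf cudnn-8.0-linux-x64-v5.1.tgz
# Copy header and libraries into the CUDA install directory
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
# Make them readable for all users
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
```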
Samples
The samples are found at /usr/local/cuda/samples if the install is the usual deb-package for Ubuntu. They are, however, created for "nvidia-367" and not "nvidia-375", which is the latest as of this writing (10:38, 8 May 2017 (UTC)). To update the code (preferably on a local copy) do a
$ find . -type f -execdir sed -i 's/UBUNTU_PKG_NAME = "nvidia-367"/UBUNTU_PKG_NAME = "nvidia-375"/g' '{}' \;
and then a simple make.
Failing to update the samples will give a missing libnvcuvid.
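The find/sed one-liner can be tried safely on a scratch copy first; the directory and file name below are only for the demonstration:

```shell
# Try the substitution on a scratch file before touching the samples
mkdir -p /tmp/samples-demo
printf 'UBUNTU_PKG_NAME = "nvidia-367"\n' > /tmp/samples-demo/findgllib.mk
cd /tmp/samples-demo
find . -type f -execdir sed -i 's/UBUNTU_PKG_NAME = "nvidia-367"/UBUNTU_PKG_NAME = "nvidia-375"/g' '{}' \;
cat findgllib.mk
```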
virtualenv
Install virtualenv
$ sudo apt-get install python-pip python-dev python-virtualenv
It is not quite clear whether this should be a Python 3 specific version; I have had no problems with it so far.
Now create a working directory, typically something like tensorflow, create the virtualenv for the directory, and then activate it
$ virtualenv --system-site-packages tensorflow
$ source tensorflow/bin/activate
Note that this sets up Python 2, not Python 3! (Pass -p python3 to virtualenv if a Python 3 environment is wanted.)
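The same create/activate/deactivate cycle can be sketched with the standard library venv module, the Python 3 counterpart to virtualenv; the /tmp path is only for illustration:

```shell
# Create and activate a throwaway Python 3 environment
python3 -m venv /tmp/tf-venv
. /tmp/tf-venv/bin/activate
# Inside the environment, "python" points at the venv interpreter
python -c 'import sys; print(sys.prefix)'
deactivate
```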
TensorFlow
Do a cd tensorflow.
Play it safe and set up both Python 2.7 and 3.5:
$ sudo apt-get install python-pip python-dev
$ sudo apt-get install python3-pip python3-dev
Assuming there is a GPU in the machine
$ pip3 install tensorflow-gpu
Finally, bring the remaining system packages up to date
$ sudo apt-get update && sudo apt-get upgrade
Virtualization of PCI hardware for Vagrant
Your PC must satisfy the following:[3]
- Your motherboard has an IOMMU unit.
- Your CPU supports the IOMMU.
- The IOMMU is enabled in the BIOS.
- The VM must run with VT-x/AMD-V and nested paging enabled.
- Your Linux kernel was compiled with IOMMU support. The PCI stub driver is required as well.
- Your Linux kernel recognizes and uses the IOMMU unit. Search for DMAR and PCI-DMA in kernel boot log.
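Some of the points above can be checked from a shell; the exact output is hardware dependent, and reading the kernel log may need root:

```shell
# CPU virtualization support (vmx = Intel VT-x, svm = AMD-V)
grep -m 1 -o -E 'vmx|svm' /proc/cpuinfo || echo "no VT-x/AMD-V flag found"
# Did the kernel find and use an IOMMU? (dmesg may need root)
dmesg 2>/dev/null | grep -E 'DMAR|PCI-DMA|IOMMU' || echo "no IOMMU lines visible"
```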
Environment at host
mkdir tensorspace
cd tensorspace
vagrant init bento/ubuntu-16.10
vagrant box update
vagrant up --provider virtualbox
vagrant plugin install vagrant-vbguest
vagrant reload
Now edit the Vagrantfile (for example with nano) and add new limits for memory and cpus.
vagrant reload
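The memory and cpu limits go in the VirtualBox provider block of the Vagrantfile; the values below are only examples and should be sized to the host:

```ruby
# Inside the Vagrant.configure block of the Vagrantfile
config.vm.provider "virtualbox" do |vb|
  vb.memory = 4096   # MB of RAM for the VM (example value)
  vb.cpus = 2        # number of virtual CPUs (example value)
end
```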
Environment at client
There are additional notes for CUDA at Ubuntu's help pages.[10]
CUDA
Pre-install actions:[11]
$ sudo apt-get install linux-headers-$(uname -r)
Installation Instructions:[12]
$ sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb
$ sudo apt-get update
$ sudo apt-get install cuda
I put the deb in my vagrant dir on the host, and then referred to it as sudo dpkg -i /vagrant/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb inside the client.
Post-install actions:[13]
Add the following to .profile,[8] but note the versioning; it must match the names in use!
# additional exports
export PATH=/usr/local/cuda/bin:/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Install cuDNN; note that the download is behind a registration wall (a free NVIDIA Developer Program membership):[14]
$ cd <installpath>
$ export LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH
Add <installpath> to your build and link process by adding -I<installpath> to your compile line and -L<installpath> -lcudnn to your link line.
TensorFlow
Play it safe and set up both Python 2.7 and 3.5:
$ sudo apt-get install python-pip python-dev
$ sudo apt-get install python3-pip python3-dev
$ pip3 install tensorflow-gpu
$ sudo apt-get update && sudo apt-get upgrade
Manual testing
Open the interactive shell with python3 and run
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
References
1. AskUbuntu: How can I install CUDA on Ubuntu 16.04? (post 28)
2. Virtualenv
3. 9.6. PCI passthrough
4. NVIDIA CUDA Installation Guide for Linux
5. CUDA Toolkit Documentation: Pre-installation actions
6. CUDA Toolkit Download
7. CUDA Toolkit Documentation: Post-installation actions
8. Configuring your login sessions with dot files
9. NVIDIA Developer: cuDNN Download
10. Ubuntu: Cuda
11. CUDA Toolkit Documentation: Pre-installation actions
12. CUDA Toolkit Download
13. CUDA Toolkit Documentation: Post-installation actions
14. NVIDIA Developer: cuDNN Download