Show HN: Dockerized GPU Deep Learning Solution (Code and Blog and TensorFlow Demo)
I don't understand why you need to do that; TensorFlow is already dockerized for GPUs, using the nvidia-docker images: https://github.com/tensorflow/tensorflow/tree/master/tensorf...
Interesting - does anyone know if there is a dockerized install of NumPy with the right BLAS etc. libraries?
I'm lost trying to figure out the best way to configure all the dependencies for decent performance.
I don't have a GPU, but I suppose that would make a difference?
Does Docker support GPUs, or how does this work?
The "--device" flag allows you to map devices through to a Docker container. It runs in 'privileged' mode though, so isn't suitable for a shared host.
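To make that concrete, here's a minimal sketch of what the mapping can look like. The `/dev/nvidia*` device paths and the `tensorflow/tensorflow` image name are assumptions - the actual device nodes depend on your driver version and GPU count. The script only prints the command (dry run) so it can be reviewed before use:

```shell
#!/bin/sh
# Hypothetical device nodes; actual /dev/nvidia* names vary with the
# driver version and the number of GPUs installed.
DEVICES="--device=/dev/nvidia0 --device=/dev/nvidiactl --device=/dev/nvidia-uvm"

# Print the command rather than executing it, so it can be inspected first.
echo docker run $DEVICES -it tensorflow/tensorflow bash
```

Running it would show the full `docker run` invocation with each device mapped through individually, which avoids the blanket `--privileged` flag where your setup allows it.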
Nvidia makes it pretty straightforward now, but we had to diverge from that approach a bit for the CoreOS deployment.
https://github.com/NVIDIA/nvidia-docker (Nice pictures)
https://docs.docker.com/engine/reference/run/ (Docker documentation, search for 'privileged')
The approach is a bit different depending on your host operating system. You'll also find there are constraints when you introduce a virtualisation layer, like VirtualBox or Parallels on your desktop - GPUs can be mapped through, but it's painful(ish).
There's also a blog entry about accessing GPU from Docker at http://marconijr.com/posts/docker-exposing-gpu/ .
Great! Thank you!
Three buzzwords in a single submission! (just joking, looks like a good project!)
Nice. Does the host OS need to have CUDA installed?
The first stage of the process is to take a vanilla CoreOS host and inject the CUDA drivers (a one-time process). After that, you can reboot the box and still retain the devices, for mapping into Docker containers.
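As a rough sketch, that injection step boils down to loading the driver module and creating the device nodes on the host. The module path `/opt/nvidia/nvidia.ko` is hypothetical (build and placement vary per setup); the character-device major number 195 follows NVIDIA's documented convention. Shown here as a dry run that only prints each command:

```shell
#!/bin/sh
# Dry-run sketch of a one-time CUDA driver injection on a CoreOS host.
# /opt/nvidia/nvidia.ko is a hypothetical path; where the module ends up
# depends on how the driver was built for the running kernel.
run() { echo "$@"; }   # print each command instead of executing it

run insmod /opt/nvidia/nvidia.ko
run mknod -m 666 /dev/nvidiactl c 195 255   # control device
run mknod -m 666 /dev/nvidia0 c 195 0       # first GPU
```

Dropping the `run` wrapper (and executing as root) would perform the real injection; the device nodes then survive into containers that map them with `--device`.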