TensorFlow 0.6.0 Release
Excellent news. Hopefully with this release we will be able to lift the remaining limitations of the TensorFlow version of Keras (tensor contraction, float<->bool casting, and RNNs over sequences of arbitrary length). https://github.com/fchollet/keras/wiki/Keras,-now-running-on...
Congrats to the TensorFlow team!
Thank you for your work on putting Keras on top!
To answer your questions:
- We don't (yet) have a tensor contraction op -- it's just a matter of finding some dev time to wrap the existing Eigen contraction code in an op. Hopefully in the next release!
- More casting between types is, I think, in this release (a rough sketch of what that looks like is below).
- Dynamic RNNs are not yet in this one, but they're also in our sights.
And with all of that, we still need to work on better performance and memory efficiency. Still lots to do!
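(For anyone wondering what the casting bit means in practice, here's a minimal sketch -- just illustrative values, assuming tf.cast handles the float<->bool round trip once the new casts are in:)

    import tensorflow as tf

    x = tf.constant([0.0, 0.5, 1.0])
    mask = tf.cast(x, tf.bool)        # float -> bool: nonzero becomes True
    back = tf.cast(mask, tf.float32)  # bool -> float: True becomes 1.0

    with tf.Session() as sess:
        print(sess.run([mask, back]))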
FYI, we're building the pip packages today -- we'll send out an announcement to the TensorFlow discussion list and update the website when things are actually done. :)
So that's why I can't install it :)
Okay, after a bunch of last-minute Python 3 issues, we've updated the documentation and packages, so everything should be ready.
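(A quick smoke test once the new pip package is installed -- nothing release-specific, just the usual graph-and-session hello world:)

    import tensorflow as tf

    hello = tf.constant("Hello from TensorFlow 0.6.0")
    with tf.Session() as sess:
        print(sess.run(hello))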
"Open source software library for numerical computation using data flow graphs. http://tensorflow.org"
(hopefully this saves others asking the "what is it?" question some trouble...)
Should be https://www.tensorflow.org (you managed to include the closing quote in the URL).
Does this version support the latest CUDA/cuDNN? I spent hours with the initial release trying to get GPU acceleration working before giving up in frustration (and as a seasoned Arch Linux user, I'm not easily discouraged). I can't remember what ultimately caused me to throw my hands up in defeat, but it had to do with the old CUDA version requirement.
I've read that some people have managed to hack it into working, but I agree we need to make it easier. The full configurability for other cuDNN versions didn't make 0.6.0, but it might be in a patch update sometime soon. It's part of our work on improving GPU performance in general.
Cool, thanks!
We need OpenCL support. Please don't contribute to maintaining the CUDA monopoly.
If you compare the effort AMD puts into OpenCL with what Nvidia puts into CUDA, you'll see why everyone just uses Nvidia.
I'm not a big fan of vendor "standards", but I have very limited sympathy for OpenCL here.
I think the best hope for portability is at the higher level programming API layer. For example TensorFlow is careful to make switching between CPU and GPU painless.
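(Roughly what that painless switching looks like -- a minimal sketch using explicit device placement; the "/cpu:0" / "/gpu:0" strings are the standard device names:)

    import tensorflow as tf

    # The same graph definition runs on either device;
    # only the tf.device annotation changes.
    with tf.device("/cpu:0"):  # swap for "/gpu:0" on a CUDA machine
        a = tf.random_normal([1000, 1000])
        b = tf.random_normal([1000, 1000])
        c = tf.matmul(a, b)

    with tf.Session() as sess:
        sess.run(c)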
It would not have to be like that if Nvidia opened up the source code for cuFFT/cuDNN/cuBLAS. My guess is they are not doing that because it is fairly trivial to port code from CUDA to OpenCL. It can even be automated.
Unfortunately, that ship sailed years ago. Also, NVIDIA is heavily investing in deep learning (cuDNN), so this won't happen unless someone in the OpenCL camp builds something equally or more performant.
Somebody like Google?
That could work, but Google doesn't seem to have a stake in OpenCL (so it's not a priority). Nvidia has a stake in CUDA, so I wouldn't be surprised if it's a top business priority there.
This supports OpenCL (Samsung needed it for TVs).
However, the OpenCL benchmarks look about 5x slower than CUDA.
AMD might support CUDA soon - http://www.anandtech.com/show/9792/amd-sc15-boltzmann-initia...
Can they make it run on computers that don't have NVIDIA GPUs, like a MacBook Pro with Intel Iris?
You can already do that. It's just several orders of magnitude slower if you don't run it on a dedicated GPU.
The link to the convnet benchmarks is a 30-day-old issue, with numbers that look significantly worse than native cuDNN v2. Are there more up-to-date numbers?
I'm really looking forward to updated cuDNN and CUDA 7.5 support. My machines are all configured for Theano, and I've been sorta waiting to try TensorFlow until I can install it without downgrading everything, as it was tricky to get things working and I'd rather not reconfigure things I don't have to.
There will eventually be more up-to-date numbers -- we can only ask so much of Soumith's time. In addition, getting on par with cuDNN v2 is just the first step; cuDNN v3/v4 and CUDA 7.5 are next up.
Cool, I'm actually happy to hear that the numbers are out of date, I just wasn't sure. Can't wait for the next update, development is moving fast!
There are a few pics of a slide from yesterday at NIPS 2015 showing that TensorFlow 0.6 improves performance by 30-40% on AlexNet and OverFeat.
Thanks for the great work! I had no issues installing and getting it working with my nvidia card, I've been having fun with it whenever I get a chance :)
I'm running a TensorFlow Google Developer Group meeting in Boulder every couple of weeks. If any of the authors/contributors are in town and want to come say hi to the group, we'd love to have you. gdgboulder.github.io
I can't get the original public release running on Ubuntu 15.04 -- an obscure install error with no remedy I could find online.
The original public release was great for identifying many potential installation issues that we couldn't possibly test ahead of time -- hopefully this upcoming release will address some of yours.
Otherwise, please file an issue at github and we'll do our best to help!
Good news:

    changing mode of /usr/local/bin/f2py to 755
    Successfully installed tensorflow six numpy
    Cleaning up...
Thanks!
What's the install error?
I'm good to go now, thanks!
Python 3 support please
Python 3.3+ support via changes to the Python codebase, plus the ability to specify the Python version via ./configure.
Excellent!
Anyone got a Dockerfile for this yet? I've no idea what they mean by ./configure
Cool. I'm still looking forward to when I can run TensorFlow on a Windows machine and harness my GPU.
Right now I can only run TensorFlow in a Docker container in a VirtualBox Linux virtual machine running in Windows... so I guess there's that!
How does your Linux VM even get to see your real GPU? I didn't know it was possible with VirtualBox.
It doesn't, which is why I'm hoping for native Windows support soon.