TensorFlow 1.5.0

github.com

137 points by connorgreenwell 8 years ago · 29 comments

minimaxir 8 years ago

The big feature is CUDA 9 and cuDNN 7 support, which promises double-speed training with FP16 on Volta GPUs. (It should be noted that TF 1.5 does not yet support CUDA 9.1, which I found out the hard way.)

I updated my Keras container with the TF 1.5 RC, CUDA 9, and cuDNN 7 (https://github.com/minimaxir/keras-cntk-docker), but did not notice a significant speed increase on a K80 GPU (I'm also unsure whether Keras makes use of FP16 yet).
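The FP16 point above can be sketched with plain NumPy (purely illustrative, nothing TF-specific): casting to float16 halves storage and memory traffic, but an actual compute speedup still depends on the GPU having native FP16 support, as the replies below note.

```python
import numpy as np

# Half-precision storage: float16 uses half the bytes of float32,
# halving memory traffic. Compute speedups, however, require
# hardware with native FP16 arithmetic (e.g. Volta).
x32 = np.random.rand(1024, 1024).astype(np.float32)
x16 = x32.astype(np.float16)

print(x32.nbytes)  # 4194304 bytes (4 MiB of float32)
print(x16.nbytes)  # 2097152 bytes (2 MiB of float16)

# The trade-off: float16 carries roughly 3 decimal digits of
# precision, so the cast introduces a small rounding error.
print(np.max(np.abs(x32 - x16.astype(np.float32))))
```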

  • jorgemf 8 years ago

    Speed in FP16 on a K80 is almost the same as FP32; the architecture doesn't work well with FP16. As you said, you need Volta to notice improvements with FP16.

    The other two main features are: Eager execution and TensorFlow Lite

  • puzzle 8 years ago

    You need Pascal hardware or later for FP16. Amazon's G3 Maxwell instances, which are newer than K80s, don't support it, either.

  • make3 8 years ago

    The K80 is pretty old now; a 1080 Ti has about 5x the performance of a K80 at one-tenth the price.

    • minimaxir 8 years ago

      When I say K80, I mean running a GPU in the cloud, not running it locally. (And as an aside, due to the crypto boom, buying a physical GPU for cheap is more difficult.)

      • zeptomu 8 years ago

        A nice compromise is the GTX 1080, which you can rent at Hetzner for €100/month [0]. I am not sure how it compares to other GPUs, but for experimenting it works quite well.

        [0] https://www.hetzner.de/dedicated-rootserver/matrix-ex

        • TheAlchemist 8 years ago

          It's actually €118/month (the prices on the website exclude VAT).

          I'm using it, and I can only recommend it: zero problems, a pretty decent box for a low price. A great solution, and now waiting for the crypto bubble to burst and GPU prices to fall!

        • make3 8 years ago

          Ah, €100. It's still really good.

        • make3 8 years ago

          That sounds really nice, thanks for the link.

      • make3 8 years ago

        Oh yeah. What I meant was that since K80s are old, they don't have the fancy features like fast FP16, which came later. K80s date from slightly before the deep-learning explosion, AFAIK.

NelsonMinar 8 years ago

Eager execution is appealing for folks new to TensorFlow. The deferred execution style is powerful, but if you just want to tinker in a REPL, it's nice to have imperative programming. https://github.com/tensorflow/tensorflow/tree/r1.5/tensorflo...

  • pacala 8 years ago

    What if I have data-dependent computational graphs? For example, recursive NNs.

    • akshayka 8 years ago

      I'm a member of the team that works on eager execution.

      When eager execution is enabled, you no longer need to worry about graphs: operations are executed immediately. The upshot is that eager execution lets you implement dynamic models, like recursive NNs, using Python control flow. We've published some example implementations of such models on Github:

      https://github.com/tensorflow/tensorflow/tree/master/tensorf...

      I'd be happy to answer other questions about eager execution, and feedback is welcome.

      EDIT: Just because you don't have to worry about graphs doesn't mean that graph construction and eager execution aren't related; take a look at our research blog post for more information if you're curious about the ways in which they relate to each other (https://research.googleblog.com/2017/10/eager-execution-impe...).
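The eager-vs-graph distinction above can be sketched with a toy interpreter in plain Python (purely illustrative; none of this is TensorFlow API): in the deferred style you build a graph of nodes and only later run it, whereas in the eager style operations execute immediately, so ordinary Python control flow, including the recursion a recursive NN needs, just works.

```python
# Deferred style: build a graph of ops first, evaluate it later.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        vals = [n.run() for n in self.inputs]
        return self.op(*vals) if vals else self.op()

const = lambda v: Node(lambda: v)
add = lambda a, b: Node(lambda x, y: x + y, a, b)

graph = add(const(2), add(const(3), const(4)))  # nothing computed yet
print(graph.run())  # 9 -- computed only when the graph is explicitly run

# Eager style: operations execute immediately, so data-dependent
# Python recursion works directly -- e.g. folding over a tree the
# way a recursive NN combines child representations.
def tree_sum(tree):
    if isinstance(tree, (int, float)):  # leaf
        return tree
    left, right = tree
    return tree_sum(left) + tree_sum(right)  # control flow depends on data

print(tree_sum(((1, 2), (3, (4, 5)))))  # 15
```

In the deferred style, the recursion in `tree_sum` would have to be unrolled into the graph per input shape; with eager execution it is just a Python function call.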

yolobey 8 years ago

I've been dreading version updates ever since they dropped Mac binary support. There are always obscure things to patch that I have to find out myself, and the build easily wastes a whole day.

I think I'm either going to change my workflow and use another OS or switch fully to PyTorch.

wmf 8 years ago

And zero mention of AMD support.

arunmandal53 8 years ago

For anyone having trouble with the installation, here's a tutorial for installing the official pre-built TensorFlow 1.5.0 pip package, covering both the CPU and GPU versions on Windows and Ubuntu. There is also a tutorial on building TensorFlow from source for CUDA 9.1: http://www.python36.com

zitterbewegung 8 years ago

So is it an easy process to convert a TensorFlow program from 1.4 to 1.5? When I tried converting something from 0.9 to 1.0, I couldn't figure it out.

  • connorgreenwellOP 8 years ago

    For most cases it should just be a drop-in replacement. IIRC they promise not to break the API between point releases (except tf.contrib.*, which may change or disappear entirely...)

  • jorgemf 8 years ago

    0.9 wasn't production-ready, and they didn't guarantee backward compatibility until 1.0. So there's nothing to change from 1.4 to 1.5; you may get some warnings about features that will change in the future, but it will work.
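The compatibility rule described above is essentially semantic versioning, which TF follows from 1.0 onward. A sketch of that rule (an illustrative helper, not a TF function):

```python
# Semver-style rule: within the same major version, a newer minor
# release keeps the public API backward-compatible (tf.contrib.*
# excluded). Pre-1.0 releases made no such guarantee.
def api_compatible(old, new):
    """True if code written against version `old` should run on `new`."""
    old_major, old_minor = map(int, old.split(".")[:2])
    new_major, new_minor = map(int, new.split(".")[:2])
    return new_major == old_major and new_minor >= old_minor

print(api_compatible("1.4.0", "1.5.0"))  # True  -- point release within 1.x
print(api_compatible("0.9.0", "1.0.0"))  # False -- the pre-1.0 break
```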

blueyes 8 years ago

> Starting from 1.6 release, our prebuilt binaries will use AVX instructions. This may break TF on older CPUs.

  • htsh 8 years ago

    This primarily affects pre-2011 CPUs, though, right? It looks like everything since Sandy Bridge and Bulldozer will be okay.

    I guess we never know what's running on our cloud instances.

    • jabl 8 years ago

      IIRC, TensorFlow has required at least sm_30 support (Kepler or newer) since the beginning. I imagine the combination of a pre-AVX CPU and a Kepler-or-newer GPU is uncommon.
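If you're unsure whether a cloud instance's CPU has AVX, one quick check (Linux-specific, since it parses /proc/cpuinfo; the helper names here are my own) is to look for `avx` in the CPU flags:

```python
# Check whether the CPU advertises AVX by parsing /proc/cpuinfo
# (Linux-only; on other platforms the file doesn't exist).
def has_flag(cpuinfo_text, flag):
    """Return True if `flag` appears in any 'flags' line of cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            if flag in line.split(":", 1)[1].split():
                return True
    return False

def cpu_has_avx():
    try:
        with open("/proc/cpuinfo") as f:
            return has_flag(f.read(), "avx")
    except OSError:
        return False  # not Linux, or cpuinfo unavailable

sample = "flags\t\t: fpu vme de pse sse sse2 avx avx2\n"
print(has_flag(sample, "avx"))      # True
print(has_flag(sample, "avx512f"))  # False
```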
