Deep Learning on ROCm
rocm.github.io
I wonder whether the fact that the new Mac Pro will feature a dedicated AMD Vega GPU will have a positive impact on the development of this library.
EDIT: there is more discussion going on at /r/MachineLearning than here: https://www.reddit.com/r/MachineLearning/comments/6kv3rs/mop...
Are there any benchmarks comparing performance with CuDNN?
Not that I know of, but at their analyst day in May 2017 AMD showed slides in which they beat Nvidia's GP100 in Baidu's DeepBench with their new Vega GPU. [0][1] Since AMD recently released the Vega Frontier Edition [3][4], I posted this here in the hope of seeing some benchmarks from users here on HN.
[0] http://i.imgur.com/twhTpcC.jpg
[1] http://i.imgur.com/1peXVnq.png
[3] tl;dr: 16 GB HBM2, 25 TFLOPS FP16, ~$1,000-1,500 (rough arithmetic sketched after this list).
[4] https://www.newegg.com/Product/Product.aspx?Item=N82E1681410...
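For context on the FP16 figure in [3]: it follows from the published Vega Frontier Edition shader count with double-rate packed FP16; the ~1.6 GHz boost clock used here is an assumption, so treat this as a back-of-the-envelope check rather than a spec sheet.

$$4096\ \text{SPs} \times 2\ \tfrac{\text{FLOPs}}{\text{FMA}} \times 2\ (\text{packed FP16}) \times 1.6\ \text{GHz} \approx 26\ \text{TFLOPS FP16}$$

Halving that (no packed FP16) gives the ~13 TFLOPS FP32 figure usually quoted for the card.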
When you benchmark unknown code and datasets under OpenCL, it isn't hard to produce any delta you want. Running the OpenCL code with a batch size that favors one vendor or the other is enough to cause this delta, or a much higher one.
DeepBench isn't a benchmark, it's a benchmarking tool. Overall, given the current state of NVIDIA's BLAS libraries and the rest of their ecosystem, there is very little chance that Vega is going to beat their hardware.
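None of the slides show the actual DeepBench configurations, but a minimal sketch makes the batch-size point concrete: the very same SGEMM call reports very different throughput depending only on the shapes you feed it, so whoever picks the shapes largely picks the "delta". The matrix sizes and iteration count below are illustrative assumptions, not DeepBench's; build with nvcc sgemm_shapes.cu -lcublas (the analogous experiment on ROCm would go through rocBLAS).

```cuda
// Sketch: time one SGEMM at several "batch"-like N dimensions to show how
// much the chosen shape alone moves the measured TFLOPS. Sizes are
// illustrative assumptions, not DeepBench configurations.
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

static void time_sgemm(cublasHandle_t handle, int m, int n, int k) {
    float *A, *B, *C;
    cudaMalloc((void**)&A, sizeof(float) * m * k);
    cudaMalloc((void**)&B, sizeof(float) * k * n);
    cudaMalloc((void**)&C, sizeof(float) * m * n);

    const float alpha = 1.0f, beta = 0.0f;
    // Warm-up call so the timed loop does not include library setup.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, A, m, B, k, &beta, C, m);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    const int iters = 50;
    for (int i = 0; i < iters; ++i) {
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                    &alpha, A, m, B, k, &beta, C, m);
    }
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // 2*m*n*k FLOPs per GEMM (one multiply + one add per inner product step).
    double tflops = 2.0 * m * n * k * iters / (ms * 1e-3) / 1e12;
    printf("m=%d n=%d k=%d -> %.2f TFLOPS\n", m, n, k, tflops);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(A); cudaFree(B); cudaFree(C);
}

int main() {
    cublasHandle_t handle;
    cublasCreate(&handle);
    // Same kernel, only the N ("batch") dimension changes: small N tends to
    // leave the GPU underutilized, large N approaches peak throughput.
    for (int n : {1, 16, 128, 1024}) {
        time_sgemm(handle, 4096, n, 4096);
    }
    cublasDestroy(handle);
    return 0;
}
```

The spread between the smallest and largest N on a single card is typically far larger than any plausible cross-vendor gap, which is why shape selection dominates this kind of comparison.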
> the first open-source HPC/Hyperscale-class platform for GPU computing that’s also programming-language independent
Weird website; it took me a while to figure out its purpose. So, is this like an AMD version of NVIDIA's cuDNN? Can we run TensorFlow on AMD GPUs?
Looks like the title was changed. Yesterday it was called "AMD deep learning library ROCm 1.6 released (Caffee support and more)"
It seems that at the moment only Caffe is public, and it's a port of the CUDA version [0][1] to allow it to run on AMD GPUs like the recently released Vega Frontier Edition. From the wording on the page, it looks like it could be an equivalent to cuDNN.
[0] through AMD's tool called HIP, which can convert CUDA code into portable HIP C++
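To make the footnote concrete: HIP's hipify tooling rewrites CUDA runtime calls into their hip* equivalents so the same source can be built against ROCm. Below is a hypothetical, minimal CUDA example (not code from the Caffe port) with comments noting the usual 1:1 mappings hipify applies.

```cuda
// Hedged sketch of what HIP translation looks like for a trivial CUDA
// program. The code is ordinary CUDA; comments note the hipify mappings.
#include <cstdio>
#include <cuda_runtime.h>   // hipify: #include <hip/hip_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // unchanged by hipify
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc((void**)&x, n * sizeof(float));   // hipify: hipMalloc
    cudaMalloc((void**)&y, n * sizeof(float));   // hipify: hipMalloc
    cudaMemset(x, 0, n * sizeof(float));         // hipify: hipMemset
    cudaMemset(y, 0, n * sizeof(float));         // hipify: hipMemset

    // Kernel launch: hipify either keeps the <<<...>>> syntax or rewrites it
    // as hipLaunchKernelGGL(saxpy, grid, block, 0, 0, n, 2.0f, x, y).
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);

    cudaDeviceSynchronize();                     // hipify: hipDeviceSynchronize
    cudaFree(x);                                 // hipify: hipFree
    cudaFree(y);                                 // hipify: hipFree
    printf("done\n");
    return 0;
}
```

Because the mapping is mostly mechanical, a large CUDA codebase like Caffe can be converted largely automatically, with hand-porting reserved for the parts that call into cuDNN/cuBLAS directly.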