Cuda.jl v3.3: union types, debug info, graph APIs

juliagpu.org

156 points by ViralBShah 5 years ago · 34 comments

xvilka 5 years ago

I wish more attention would be paid to open source alternatives to CUDA, such as AMD's ROCm[1][2] and the Julia framework built on it, AMDGPU.jl[3]. It's sad to see so many people praise NVIDIA, which is an enemy of open source, openly hostile to anything except its oversized proprietary binary blobs.

[1] https://rocmdocs.amd.com/en/latest/index.html

[2] https://github.com/RadeonOpenCompute/ROCm

[3] https://github.com/JuliaGPU/AMDGPU.jl

  • jpsamaroo 5 years ago

    AMD has done great work in a very short amount of time, but let's not forget that they're still very new to the GPU compute game. The ROCm stack is overall still pretty buggy, and definitely hard to build in ways other than what AMD deems officially supported.

    As AMDGPU.jl's maintainer, I do certainly appreciate more users using AMDGPU.jl if they have the ability to, but I don't want people to think that it's anywhere close to CUDA.jl in maturity, overall performance, or feature-richness. If you already have access to an NVIDIA GPU, CUDA.jl is painless to set up and should work really well for basically anything you want to do with it. I can't say the same about AMDGPU.jl right now (although we are definitely getting there).

  • pjmlp 5 years ago

    This is not a charity.

    If the competition to CUDA wants to be taken seriously, then it should provide the tooling and polyglot support to do so.

    OpenCL was stuck in its "C only with printf debugging" mentality for too long; now it is too late.

    AMD ROCm still isn't available on Windows.

    If I learned anything from wasting my time with Khronos stuff, it was that I should have switched to DirectX much earlier.

  • krapht 5 years ago

    Most people hack on this stuff for work, and time is money. OpenCL is just a lot less productive than CUDA for most tasks. The NVIDIA price premium isn't big enough to make people switch over.

    • andi999 5 years ago

      Agreed. I've been doing some CUDA as part of my job since 2009, and it works very smoothly: you can write non-trivial programs after less than a week of learning (if you already know C). When OpenCL got a little hype, I had a look, didn't manage to run even a simple example, and asked myself: do I really want to deal with all this boilerplate code? Also, at least a while back there was no good FFT outside of CUDA, though maybe that has changed.
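
      For comparison, writing a custom kernel in CUDA.jl takes about as little ceremony as CUDA C. A minimal saxpy-style sketch (this assumes a working NVIDIA GPU and driver; the launch configuration is illustrative):

      ```julia
      using CUDA

      # 1-based global thread index, standard CUDA.jl idiom
      function axpy!(y, a, x)
          i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
          if i <= length(y)
              @inbounds y[i] = a * x[i] + y[i]
          end
          return nothing
      end

      x = CUDA.fill(1.0f0, 1024)
      y = CUDA.fill(2.0f0, 1024)
      @cuda threads=256 blocks=4 axpy!(y, 3.0f0, x)   # y .= 3x .+ y on the GPU
      @assert all(Array(y) .== 5.0f0)
      ```

      (On the FFT point: CUDA.jl also wraps NVIDIA's CUFFT library behind the standard AbstractFFTs interface.)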

  • mixedCase 5 years ago

    Maybe if AMD starts caring about ROCm, users might. To this day Navi and newer cards are unsupported.

    • kwertzzz 5 years ago

      This is really the main problem. As far as I know, ROCm requires an expensive data-center GPU (if you want a current one), which makes it quite difficult to build a community around ROCm.

  • dvfjsdhgfv 5 years ago

    I would love to, but for practically all the tasks I need, only the CUDA backend is available, not ROCm, and that is of course bad not just for open source but also for owners of AMD GPUs. The popularity of NVIDIA's solution is so overwhelming that AMD would need to do something revolutionary in order to catch up (supporting Windows and macOS wouldn't hurt either).

up6w6 5 years ago

A fun fact is that GPUCompiler, which compiles code to run on GPUs, is currently the way to generate binaries without bundling the whole ~200 MB Julia runtime into the binary.

https://github.com/JuliaGPU/GPUCompiler.jl/

https://github.com/tshort/StaticCompiler.jl/

aardvarkr 5 years ago

Who knew that Julia could do CUDA work too? Every day I grow more and more impressed with the language.

  • eigenspace 5 years ago

    It also happens to be one of the easiest and most reliable ways I know of to install CUDA on your machine. Everything is handled through the artifact system, so you don't have to mess with downloading it yourself and making sure you have the right versions and such.

    (Before someone complains, you can also opt out of this and direct the library to a version you installed yourself)
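
    In practice that means getting started is just a sketch like the following (the toolkit artifacts are fetched automatically on first use; exact output depends on your driver):

    ```julia
    using Pkg
    Pkg.add("CUDA")        # pulls CUDA.jl; matching toolkit artifacts follow

    using CUDA
    CUDA.versioninfo()     # reports the driver and the artifact-provided toolkit
    ```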

    • krastanov 5 years ago

      Could you elaborate a bit on this? I know that a gigantic pile of libraries is involved that is usually terrible to install, and Julia does it for you in the equivalent of a Python virtualenv. However, aren't there also Linux kernel components that are necessary? How are those installed?

      • KenoFischer 5 years ago

        Yes, there's a kernel component which needs to be installed, but that's usually pretty easy these days, because it's usually one of

        1) You're using a container-ish environment where the host kernel has the CUDA drivers installed anyway (but your base container image probably doesn't have the userspace libraries)

        2) The kernel driver comes with your OS distribution, but the userspace libraries are outdated (userspace libraries here includes things like JIT compilers, which have lots of bugs and need frequent updates) or don't have some of the optional components that have restrictive redistribution clauses

        3) Your sysadmin installed everything, but then helpfully moved the CUDA libraries into some obscure system-specific directory where no software can find them.

        4) You need to install the kernel driver yourself, so you find it on the NVIDIA website, but don't realize there are another 5 separate installers you need for all the optional libraries.

        5) Maybe you have the NVIDIA-provided libraries, but then you need to figure out how to get the third-party libraries that depend on them installed. Given the variety of ways to install CUDA, this is a pretty hard problem to solve for other ecosystems.

        In Julia, as long as you have the kernel driver, everything else will get automatically set up and installed for you. As a result, people are usually up and running with GPUs in a few minutes in Julia.
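
        That "up and running in minutes" workflow, sketched (assuming only the NVIDIA kernel driver is present):

        ```julia
        using CUDA   # first use triggers download of the userspace stack

        if CUDA.functional()              # true once driver + artifacts line up
            a = CUDA.rand(Float32, 10_000)
            b = CUDA.rand(Float32, 10_000)
            c = a .+ b                    # broadcast executes on the GPU
            @show sum(c)
        else
            @warn "No usable NVIDIA driver found"
        end
        ```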

      • ChrisRackauckas 5 years ago

        CUDA.jl installs all of the CUDA drivers and associated libraries like cuDNN for you if you don't have them. Those are all vendored via the Yggdrasil system so that users don't have to deal with it.

        • krastanov 5 years ago

          CUDA.jl does not install the actual kernel driver, right? I do not really see how it can do that and the sibling comment does confirm that the kernel driver is not managed by Julia.

          • kwertzzz 5 years ago

            Yes, you would still need the NVIDIA kernel driver (preferably the most current one). Desktop users typically have it installed already. But the main difficulty, in my opinion, is installing CUDA (with cuDNN, ...). Even the TensorFlow documentation [0] is outdated in this regard, as it covers only Ubuntu 18.04. The installation process of CUDA.jl is really quite good and reliable. By default it downloads its own version of CUDA and cuDNN, or you can use a system-wide CUDA installation by setting some environment variables [1].

            [0] https://www.tensorflow.org/install/gpu

            [1] https://cuda.juliagpu.org/stable/installation/overview/
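
            A sketch of the system-wide route, per the CUDA.jl v3 installation docs linked above (the variable must be set before `using CUDA`; treat the exact names as something to double-check against [1]):

            ```julia
            # Tell CUDA.jl not to download artifacts and to discover
            # a locally installed CUDA toolkit instead.
            ENV["JULIA_CUDA_USE_BINARYBUILDER"] = "false"

            using CUDA
            CUDA.versioninfo()   # should now report the local toolkit
            ```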

  • version_five 5 years ago

    I last experimented with CUDA.jl a year ago, and it was very usable then. This is a good reminder to re-evaluate the Julia deep learning ecosystem. If I were working for myself, I would definitely try to do more with Julia (for machine learning). Realistically, Python has such an established base that it will take some time for orgs that are already all-in on Python to come over.

    • dnautics 5 years ago

      I think it's not dumb to target greenfield users: just installing Python GPU wheels is often difficult enough that several companies exist (indirectly) because it's so hard to do right (e.g. selling a GPU PC with that stuff preinstalled).

    • queuebert 5 years ago

      I just finished setting up a new machine to run some Kaggle stuff. Both TensorFlow and PyTorch had issues with CUDA versions and dependencies that weren't immediately fixed by a clean virtualenv, while both Knet.jl and Flux.jl installed flawlessly.

      • wdroz 5 years ago

        For PyTorch and TensorFlow, you can use conda to install them with the right CUDA and cuDNN versions.

        • kwertzzz 5 years ago

          For PyTorch, I had no issues with conda. But with TensorFlow from conda, the training process just hangs (consuming 100% CPU but no GPU resources, even though my GPUs are recognized). I had more luck installing TensorFlow with pip. Given that the TensorFlow documentation does not mention conda, I'm wondering how well this is supported.

          • wdroz 5 years ago

            You can install cudatoolkit from conda and then TensorFlow with pip.

pabs3 5 years ago

Are there similar things for other types of GPUs?

Edit: the site has one project per GPU type; a shame there isn't one interface that works with every GPU type instead.

Karrot_Kream 5 years ago

Really looking forward to Turing.jl gaining CUDA support
