Ask HN: How do I run PyTorch with GPU on a Windows machine with an AMD Radeon?
I feel I'm ready to leave the Colab environment, and I want to start developing PyTorch projects on my local machine. The problem: I have a Windows machine with an AMD Radeon GPU (cries in WSL).
What is the preferred way to get started? I've tried WSL, ROCm, and PyTorch-DirectML, but I always run into problems when I call `to(device)`.

Look at Microsoft's Antares. It uses a weird combination of LoadLibrary and WSL to run ROCm workloads. TL;DR: not a good use of your time if AMD doesn't want to ship ROCm on Windows for all customers. Read this if you want to do something like it: https://github.com/microsoft/antares/blob/v0.3.x/backends/c-... Source: been on this path before and moved on.
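For the DirectML route specifically, here is a minimal sketch of what device selection looks like, assuming the torch-directml package is installed (`pip install torch-directml`); with DirectML you pass the device object from torch_directml to `to(device)` instead of the usual "cuda" string:

```python
# Minimal sketch, assuming torch-directml is installed (pip install torch-directml).
# DirectML exposes the Radeon GPU as its own device type rather than CUDA,
# so .to(device) takes the device object returned by torch_directml.
import torch
import torch_directml

dml = torch_directml.device()   # default DirectML device (your AMD GPU)

x = torch.randn(4, 4).to(dml)   # the .to(device) call that was failing
y = x @ x                       # matrix multiply runs on the GPU via DirectML
print(y.cpu())                  # move back to CPU to inspect the result
```

If a small test like this still fails, at least it narrows the problem to the DirectML plugin itself rather than a CUDA/ROCm mismatch.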