Setting up a Python project that relies on PyTorch, so that it works across different accelerators and operating systems, is a nightmare.
Most PyTorch projects I work on are internal or run on a server that my colleagues and I control. In these situations, I am fine with combining optional dependencies with custom indices, as recommended in the uv documentation.
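For reference, that pattern looks roughly like this in pyproject.toml (a minimal sketch based on uv's documentation; the index names and version bounds are placeholders, not FileChat's actual configuration):

[project.optional-dependencies]
cpu = ["torch>=2.7.1"]
cu126 = ["torch>=2.7.1"]

# Tell uv which index to use for torch, depending on the chosen extra
[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", extra = "cpu" },
    { index = "pytorch-cu126", extra = "cu126" },
]

# Declare the PyTorch indices; `explicit = true` means they are only
# used for packages that explicitly point at them
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"
explicit = true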
However, I recently started working on FileChat — my own opinionated AI coding assistant (stay tuned). I plan to distribute FileChat as a Python package. As it turns out, once you build a wheel for distribution, any custom indices go out the window — they are not included in the package metadata. It would be up to the user to configure them properly during installation.
But this isn’t what I want. I want a single-command install, no matter what hardware or OS you are using. So, what’s the solution?
PEP 508 — the standard for dependency specification — offers two neat tricks that, when combined, tackle this issue pretty well.
First, you can specify a wheel URL for each dependency like this:
torch @ https://url.to/the.wheel
Second, you can add environment markers — constraints that tell whatever tool manages your virtual environment when to install a certain dependency. For example, you might want to install a package only if the user is running Python 3.12 (and not some other version):
torch ; python_version == '3.12'
I can combine these two features and specify dependencies in the following format:
[package-name] @ [wheel-url] ; python_version == '[python-version]'
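For example, pinning a specific CPU wheel to Python 3.12 looks like this (using one of the real wheel URLs that appear later in this post):

torch @ https://download.pytorch.org/whl/cpu/torch-2.7.1%2Bcpu-cp312-cp312-manylinux_2_28_x86_64.whl ; python_version == '3.12'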
So, how am I using this to solve the PyTorch problem? On Windows and macOS, I simply install the default version of PyTorch (on Windows, I later want to add support for CUDA and other hardware accelerators). Since I am interested in PyTorch only because it's a dependency of sentence-transformers, I don't even need to add it as a dependency explicitly. Here is how the dependencies are defined in FileChat's pyproject.toml file:
dependencies = [
    "einops>=0.8.1",
    "faiss-cpu>=1.11.0.post1",
    "mistralai>=1.9.3",
    "pydantic>=2.11.7",
    "sentence-transformers>=5.0.0",
    "textual>=5.3.0",
    "watchdog>=6.0.0",
]
On Linux, I rely on three mutually exclusive optional dependency groups, each installing PyTorch with support for different hardware. To achieve this without setting up custom package indices, I specify the wheel URLs directly. However, since PyTorch ships a different wheel for each Python version, I need to add a separate dependency for each version I want to support, each with its own Python version constraint. Here is the result:
[project.optional-dependencies]
cpu = [
    "torch @ https://download.pytorch.org/whl/cpu/torch-2.7.1%2Bcpu-cp312-cp312-manylinux_2_28_x86_64.whl ; python_version == '3.12'",
    "torch @ https://download.pytorch.org/whl/cpu/torch-2.7.1%2Bcpu-cp313-cp313-manylinux_2_28_x86_64.whl ; python_version == '3.13'",
]
xpu = [
    "pytorch-triton-xpu @ https://download.pytorch.org/whl/pytorch_triton_xpu-3.3.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl ; python_version == '3.12'",
    "pytorch-triton-xpu @ https://download.pytorch.org/whl/pytorch_triton_xpu-3.3.1-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl ; python_version == '3.13'",
    "torch @ https://download.pytorch.org/whl/xpu/torch-2.7.1%2Bxpu-cp312-cp312-linux_x86_64.whl ; python_version == '3.12'",
    "torch @ https://download.pytorch.org/whl/xpu/torch-2.7.1%2Bxpu-cp313-cp313-linux_x86_64.whl ; python_version == '3.13'",
]
cuda = [
    "torch @ https://download.pytorch.org/whl/cu126/torch-2.7.1%2Bcu126-cp312-cp312-manylinux_2_28_x86_64.whl ; python_version == '3.12'",
    "torch @ https://download.pytorch.org/whl/cu126/torch-2.7.1%2Bcu126-cp313-cp313-manylinux_2_28_x86_64.whl ; python_version == '3.13'",
]
[tool.uv]
conflicts = [[{ extra = "cpu" }, { extra = "xpu" }, { extra = "cuda" }]]
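The conflicts table tells uv that these three extras are mutually exclusive: if more than one is requested at the same time, resolution fails instead of silently picking one wheel over another. During development, something like the following is rejected (I won't reproduce the exact error message here):

uv sync --extra cpu --extra cuda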
This setup still requires some work on the user’s side. When installing FileChat, they need to specify the correct optional dependency group, depending on which hardware they want to use. For example:
pip install filechat[xpu]
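One small caveat: some shells (zsh in particular) treat the square brackets as a glob pattern, so the extra needs quoting there:

pip install 'filechat[xpu]'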
However, I think this is much simpler than setting up package indices during installation.
The downside of this setup is a bit more work on my side. If I decide to switch to a different version of PyTorch, add support for a new Python version, or support a new type of hardware, it will be up to me to update the wheel URLs accordingly.
I wonder if pyx, the new Python package registry from Astral, will solve this problem in a more graceful manner. That's one of their promises. But until this tool is adopted en masse, I'll stick to updating the wheel URLs.
Update as of September 18, 2025
As it turns out, you cannot use this approach if you want to publish the package on PyPI. In the end, I had to replace PyTorch with ONNX Runtime, which has packages for different hardware accelerators published directly on PyPI; no additional indices or wheel URLs are required.
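For the curious, the PyPI-only equivalent looks roughly like this (a sketch, not FileChat's actual configuration; onnxruntime and onnxruntime-gpu are the real PyPI package names, but the version bounds are placeholders):

[project.optional-dependencies]
# Both packages come straight from PyPI, so the built wheel's
# metadata is all a user needs — no custom indices, no wheel URLs
cpu = ["onnxruntime>=1.20"]
cuda = ["onnxruntime-gpu>=1.20"]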