Overview of Python dependency management tools
People always get up in arms about this, but as someone who has used Python as her daily driver for years it's really... never been this serious of an issue for me?
I have used virtualenv/venv and pip to install dependencies for years and years, since I was a teen hacking around with Python. Packaging files with setup.py doesn't really seem that hard. I've published a few packages on pypi for my own personal use and it's not been too frustrating.
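For readers who haven't seen it, that workflow really is just a handful of commands. A minimal sketch (directory and file names are arbitrary):

```shell
# One throwaway environment per project, using the stdlib venv module.
python3 -m venv .venv

# Calling the venv's interpreter directly (instead of "activate")
# behaves the same in scripts, Makefiles, and CI.
.venv/bin/python -m pip --version

# Snapshot the exact installed versions so the environment can be rebuilt.
.venv/bin/python -m pip freeze > requirements.txt
```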
A lot of the issues people have with Python packaging seem like they can get replaced with a couple shell aliases. Dependency hell with too many dependencies becomes unruly in any package manager I've tried.
Is the "silent majority" just productive with the status quo and getting work done with Python behind the scenes? Why is my experience apparently so atypical?
I think it depends on the use case. If I'm developing my own stuff, plain package management is fine.
If I'm trying to run various existing Python programs to analyze biology data, I soon run into problems: is this a Conda package, or can I use my regular Python environment? Which version of Python will let me run the thing, and what libraries do I need? Does this break in that version?
Sometimes I feel that one kinda-OK way of doing things would be better than having six ways, one of which will suit my use case perfectly.
This problem is not unique to python.
> Is this a Conda?/ or can I use my Python environment?
Can you elaborate a bit there? I use conda because I like some of its features over standard virtualenv (being able to specify a Python version when I create my venv), but I've never had a problem running code in envs created by one vs. the other.
Sometimes developers distribute their software via Conda installs; in those cases they sometimes don't provide instructions for running it other ways (e.g. using the pyenv environment, which is my default). I'm OK with this, but when conda fails, as sometimes happens, it can mean some digging to get the install to work.
I was thinking of my latest install, which was CRISPRESSO2, which installs via Docker or Bioconda... I was able to get it going, but it took a bit of work on some systems (Python 2.7, old libraries, etc.). Docker didn't seem to work.
I like virtual env, but sometimes I feel I have to have a new environment for each piece of software I'm running, which feels weird.
This sort of attitude is the reason why the world doesn't move away from awful solutions. It is a testament to the lack of ability to see beyond your own nose.
A lot of people who use Python, don't have the luxury of it being their "daily driver for years", so the conflicting documentation, decision paralysis and other problems that come with it end up being a huge time sink.
A lot of non-programmers are being forced to use Python for various automation tasks. A lot of the CAD software that construction engineers use supports Python plugins. Network admins who have been configuring switches and routers on the CLI for decades now have to configure them using Python.
Look at "cargo" to see what the world could be like.
You're right of course.
Still, it's worth keeping in mind that Rust was born 20 years after Python was. Python was being written before Mosaic, Netscape, and Yahoo! were around. I think it can be forgiven for failing to conceive of a perfect package management system in the 1990s. There were bigger fish to fry back then, so to speak.
Over the decades (!) there have been many, well-documented attempts at coming up with a package management story. pip and virtualenv have been the obvious winners here for years.
So, in conclusion, again you're right. But 30 years of history produces a lot of "conflicting documentation". It's only in the last 10 years or so that people have fought over the superiority of one language's package management ecosystem over another.
This comment is rewriting Python history quite a bit.
First of all, Python was created around 1989 yet Python 1.0 was released in 1994. Secondly, Python was a pretty obscure language until Python 2.0 (and even long after that...), released in 2000. So realistically, Python had "only" about 15 years of historical baggage :-)
Also, cargo can be ignored because it's "new", but there was a lot of prior art in the area of good programming language specific package managers. CPAN (Perl) was launched in 1993. Maven (Java) was launched in 2004.
Python just botched its package management story, that's it. Sometimes stuff happens just because it happens, there's no good excuse for how things are. Sad, but true.
Python's first posting to Usenet (v 0.9) was in 1991. The 1.0 Misc/ACKS from 1994 includes 50 or so external contributions to that point, showing that "1.0" is a somewhat artificial point.
Rust's 1.0 was 2015, which is indeed "20 years after Python was" at 1.0, so how is gen220's comment a rewrite?
I started using Python around 1.3, and advocating for its broader use (instead of Perl) by 1997. In 1998 I had a job using Python full-time. It was made easier because tools like SWIG already supported Python. Here's a talk I gave in 1999 - https://www.daylight.com/meetings/mug99/Dalke/index.html - and a writeup I did for Dr. Dobb's - https://www.drdobbs.com/cpp/making-c-extensions-more-pythoni... .
In 2000 I helped a company with the minor work to port their 1.5 code base to 2.x.
So I certainly didn't see it as obscure in the 1.x days.
But sure, I'm part of that environment so have a different view on things. If I use your definition, I'll argue that Rust is still "a pretty obscure language".
Rust is a pretty obscure language, when compared to Java, C#, C/C++, Javascript, Python, etc.
Rust is a lot better known because of internet fame, which didn't really exist to this magnitude until well after the dotcom crash.
My point is, Python's popularity took off in recent years, basically the last 10 years: https://insights.stackoverflow.com/trends?tags=python%2Cjava, primarily due to data science, machine learning, science in general.
Until around 2005 at least, it was known as a friendly scripting language with a few web frameworks that were not that popular (Django was first released in 2005), and as a language that was starting to be adopted by distributions for scripting tools (the first Ubuntu version launched in 2004 and was one of the first distros to use it extensively). It wasn't really present for development work in most cases; DevOps was the domain of bash/Perl (for older stuff) or Ruby (for newer stuff).
People tend to forget how obscure Python was before 2000, compared to the mainstream language it is today. And I say that as someone who likes Python ;-)
Chiming back in to say that, while everything you're saying is correct (i.e. Python was not a ubiquitous language until "relatively" recently), it doesn't change the point: the best packaging solutions, done right, need to be done early in a language's history.
To illustrate the point with an example: you could invent cargo for Python yesterday or in 2005, but it wouldn't have solved the problem, because you would still have decades' worth of third-party libraries that don't comply with py-cargo's packaging requirements.
In contexts like these, it's the package manager with the fewest hard-asks (i.e. pip, or npm for node) that wins.
Go, for example, endured major controversies over migrating away from GOPATH-managed-with-third-party-dep-managers to go modules. Even though `go mod` would have been the best solution to start with from scratch, inertia and breaking changes are a real thing.
Where is gen220 rewriting Python history?
Rust is a pretty obscure language now in pretty much the same way that Python was an obscure language then.
Of course the world of programmers was smaller in the 1990s. But if your baseline is the entire world, then probably every programming language outside of Basic, C/C++, and Pascal was obscure in the 1990s. Just like Rust is now.
It feels very much like you have shifted baselines to determine what "obscure" means.
From my view, Python's popularity took off around 2000. That's when I no longer had to tell people what Python was, and when people in my field (cheminformatics) started shifting new code development from Perl to Python. It's also about when I co-founded the Biopython project for bioinformatics. And SWIG in the mid-1990s included Python support because Python was being used to steer supercomputing calculations at LANL.
So your statement that Python's popularity and use in science in general started only in 2010 sounds like revisionism which distorts the actual history with an artificial baseline.
You wrote "with a few web frameworks which were not that popular".
Ummm.... what? Zope was quite popular. The 2001 Python conference had its own Zope track, and the 2002 conferences felt like it was 50% Zope programmers.
Quoting its Wikipedia entry, "Zope has been called a Python killer app, an application that helped put Python in the spotlight". One of the citations is from 2000, at https://web.archive.org/web/20000302033606/http://www.byte.c... , with "there's no killer app that leads people to Perl in the same way that Zope leads people to Python."
As an individual that probably works just fine. In a team setup, it takes a lot of training and effort for everyone to consistently follow a manual pip/venv workflow, so it becomes valuable to minimize and standardize it.
Especially if you have to deploy to production and you want fast, reproducible builds, or you don't want to run a bunch of tests for things that haven't changed.
Let me start by saying: I love python, and I love developing in it. It's the "a pleasure to have in class" of languages: phenomenal library support, not too painful to develop in, nice and lightweight so it's easy to throw together test scripts in the shell (contrast that with Java!), easy to specify simple dependencies + install them. (contrast that with C!).
That said... if you work on software that is distributed to less-technical users and has any number of dependencies, Python package management is a nightmare. Specifying dependencies is just a minefield of bad results.
- If you specify a version that's too unbounded, users will often find themselves unable to install previous versions of your software with a simple `pip install foo==version`, because some dependency has revved in some incompatible way, or even worse specified a different dependency version that conflicts with another dependency. pip does a breadth-first search on dependencies and will happily resolve totally incompatible dependencies when a valid satisfying dependency exists.[1]
- If you specify a version with strict version bounds to avoid that problem, users will whine about not getting the newest version/conflicting packages that they also want to install. Obviously you just ignore them or explain it, but it's much more of a time sink than anyone wants.
- In theory you can use virtualenvs to solve that problem, but explaining how those work to a frustrated Windows user who just spent hours struggling to get Python installed and into their `PATH` is no fun for anyone. Python's made great strides here with their Windows installers, but it's frankly still amateur hour over there.
- Binary packages are hell. Wheels were supposed to make Conda obsolete but as a packager, it's no fun at all to have to build binary wheels for every Python version/OS/bitness combination. `manylinux` and the decline of 32-bit OSes has helped here, but it's still super painful. Having a hard time tracking down a Windows machine in your CI env that supports Python 3.9? Too bad, no wheels for them. When a user installs with the wrong version, Python spits out a big ugly error message about compilers because it found the sdist instead of a wheel. It's super easy as a maintainer to just make a mistake and not get a wheel uploaded and cut out some part of your user base from getting a valid update, and screw over everyone downstream.
- Heaven help you if you have to link with any C libraries you don't have control over and have shitty stability policies (looking at you, OpenSSL[2]). Users will experience your package breaking because of simple OS updates. Catalina made this about a million times worse on macos.
- Python has two setup libraries (`distutils` and `setuptools`) and on a project of any real complexity you'll find yourself importing both of them in your setup.py file. I guess I should be grateful it's just the two of them.
- Optional dependencies are very poorly implemented. It still isn't possible to say "users can opt in to just a specific dependency, but by default get all options". This is such an obvious feature; instead you're supposed to write a post-install hook or something into distutils.
- Sometimes it feels like nobody in the Python packaging ecosystem has ever written a project using PEP 420 namespace packages. It's been, what, 8 years now, and we're just starting to get real support. Ridiculous.
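As a concrete illustration of the bounds and extras points above, here is a minimal setup.py sketch (all package names and versions are hypothetical; the file is only written to disk, not installed):

```shell
# Write a toy setup.py showing loose vs. strict bounds and an opt-in extra.
cat > setup.py <<'EOF'
from setuptools import setup

setup(
    name="myapp",
    version="1.0.0",
    install_requires=[
        "requests>=2.20",       # loose bound: future releases may break installs
        "cryptography==3.4.7",  # strict bound: conflicts with neighbours
    ],
    # Extras are opt-in only ("pip install myapp[plots]"); there is no
    # built-in "included by default, opt out" mode, per the complaint above.
    extras_require={"plots": ["matplotlib>=3.0"]},
)
EOF
```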
I could go on about this for days. Nothing makes me feel more like finding a new job in a language with a functioning dependency manager than finding out that someone updated a dependency's dependency's dependency and therefore I have to spend half my day tracking down obscure OS-specific build issues to add version bounds instead of adding actual features or fixing real bugs. I have to put tons of dependencies' dependencies into my package's setup.py, not because I care about the version, but because otherwise pip will just fuck it up every time for some percentage of my users.
[1] I am told that this is "in progress", and if you look at pip's codebase the current code is indeed in a folder marked "legacy".
[2] I 100% understand the OpenSSL team's opinion on this and as an open source maintainer I even support it to some degree, but man oh man is it a frustrating situation to be in from a user perspective. Similarly, as someone who cares about security, I understand Apple's perspective on the versioned dylib matter, but that doesn't make it suck any less to develop against.
> struggling to get Python installed and into their `PATH` ... it's frankly still amateur hour over there
But that has been solved on Windows for quite a while hasn't it?
Python installs the "py" launcher on the path, which allows you to run whichever version you want of those you have installed. Just type "py" instead of "python". Or "py -3.5-32" to specifically run 32-bit Python 3.5, or "py -0" to list the available versions.
It's gotten a lot better, but we still hit tons of issues with users who don't know what Python version they installed their application in. Oh and of course our "binaries" in Scripts/bin don't seem to show up in the PATH by default. So I get to tell people "py -3.8-64 -m foo" on windows, "foo" everywhere else.
This gets much much worse when a new version of Python comes out and we don't support it yet (because of the build system issues I mentioned). I spent several weeks teaching people how to uninstall 3.8 and install 3.7 before we finally got a functioning package out for 3.8.
I like Mozilla's build system on Windows: you click "start-shell.bat" and it opens a console. Python, Mercurial, Rust: it just works; I never had to check PATH.
https://firefox-source-docs.mozilla.org/setup/windows_build....
Sure, but telling people to run "py -3.7" seems a lot easier than walking them through uninstalling and reinstalling Python, as you would have had to in the bad old days. It's reliable and consistent and doesn't depend on what's installed where or how it's configured. If you run "py -3.7 -m venv my_env", it just works, always, with no special context required.
Although I don't handle user support for Python packages, if I did, that would be my go-to approach.
If only there was some graphical tool that allows the user to see conflicts, relax version dependencies, and of course rollback changes if things didn't work out.
Or an error message like:
    There's a version conflict. In order to resolve, try one of the following:
      pip relax-dep package1 >= 1.0
      pip relax-dep package2 >= 2.0
      pip remove package3

And then you would want to have `pip undo`.

(Just brainstorming here.)

Looks like any other package manager:
* developers install with language packager
* in between install with OS package manager
* users install bundle
Those who have trouble with pip, gems, cabal, etc. should check the other options first.
Wait, Bundler's Gemfile.lock has listed installed versions for at least ten years; what is "too unbounded" in pip?
It depends! Sometimes I have to lock a dependency at minor releases because every.single.release from the author breaks something new, and I've already worked around the locked version's failings. Sometimes I have to lock a dependency at a major version and everything is fine after that. Usually when the latter happens, eventually the developer releases something that fits within the version bounds and breaks. Sometimes they fix it in the next release, but then I have to deal with a week of bug reports from users that "I couldn't pip install the latest release!". A big complaint I get with my flask/werkzeug app is that something or other broke because users installed something else with strict version requirements alongside it (because the authors of that program have experienced the same bullshit, I assume).
Maybe I'm spoiled from working with cargo and npm (I have almost no ruby experience so I can't comment there), but both of them have way fewer such version conflicts in my experience. Obviously there are tradeoffs and I don't want the node_modules experience for my users, but often it seems that would be a much better experience than pip for everyone. With either of those, I just "npm install" or "cargo install" and all my dependencies end up there working.
You can generate a requirements.txt file using "pip freeze" on a functioning system, but then you have to figure out a way to point users at it instead of using "pip install myapp". Also you might have to do it for each OS since windows vs mac vs linux can have different package dependencies specified, and even if you don't do that, a dependency doing it means you have to account for it.
You can copy+paste the "pip freeze" output into your setup.py and add quotes+commas, but then you're back to breaking side-by-side packages.
So what am I, a developer trying to distribute my command-line application to less-technical users, supposed to do? Distribute two entirely different packages, "myapp-locked" and "myapp"? Tell people to install from a copy+pasted "requirements.txt" file? I've started distributing docker containers that have the application installed via the requirements.txt method, which is fucking stupid but at least the users of that complain less about versioning issues... until the day someone yanks a package I guess.
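One middle ground, not mentioned by the commenter but sketched here as an assumption: keep loose bounds in setup.py and ship the tested versions as a pip constraints file (`-c`), which caps versions without forcing the whole frozen tree on users.

```shell
# Capture the exact, known-good versions from a working environment.
python3 -m pip freeze > constraints.txt

# Users who hit resolution trouble can then install against that set;
# "myapp" is a hypothetical package, so the line is shown commented out:
#   pip install myapp -c constraints.txt
```

Unlike a frozen requirements.txt, constraints only pin packages that would be installed anyway, so they don't add dependencies of their own.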
I recently reported a bug on xmonad's GitHub; they have
### Checklist
- [ ] I've read [CONTRIBUTING.md](https://github.com/xmonad/xmonad/blob/master/CONTRIBUTING.md)
- [ ] I tested my configuration with [xmonad-testing](https://github.com/xmonad/xmonad-testing)

I think it is a brilliant idea; I immediately checked the latest git versions. I assume you may add:

- [ ] I tested my application with [latest stable requirements.txt](...)

And something about triangulation and reporting to another repo too.

Sorry to hear about breaks on major versions. Ruby gems (libraries) freeze dependencies on major, sometimes minor, versions; see an example [0]. But applications are shipped with Gemfile and Gemfile.lock [1], [2]. So `bundle install` is reproducible [3]:
> The presence of a `Gemfile.lock` in a gem's repository ensures that a fresh checkout of the repository uses the exact same set of dependencies every time. We believe this makes repositories more friendly towards new and existing contributors. Ideally, anyone should be able to clone the repo, run `bundle install`, and have passing tests. If you don't check in your `Gemfile.lock`, new contributors can get different versions of your dependencies, and run into failing tests that they don't know how to fix.
Yes: Docker, MSI, Flatpak, AppImage, whatever works for you and your users. It is sad that with scripting languages we can't easily compile statically into one file.
[0] https://github.com/teamcapybara/capybara/blob/master/capybar...
[1] https://github.com/Shopify/example-ruby-app/blob/master/Gemf...
[2] https://github.com/Shopify/example-ruby-app/blob/master/Gemf...
When it comes to shipping Python server-side apps, Pipenv is a godsend. Before discovering it, I had 3 requirements.txt files (common, dev, prod) which I had to edit manually. This often meant forgetting to include something that I just installed and only finding out after a full round of QA. It also meant a separate couple of steps for full-tree dependency freezing which never worked quite properly anyways. Pipenv just....works. Dependencies are saved as I install them, I only have to deal with the top-level ones, but the whole tree is locked.
To be blunt, maybe you just don't know what you're missing out on? Of course Python's package management system works, and it is merely an annoyance to those of us who are used to more modern package managers. By the way, your comment reminded me of this classic: https://news.ycombinator.com/item?id=9224 :)
I mean, possibly? What's considered the gold standard in package management these days?
I use yarn for managing javascript dependencies and do a lot of work with Cargo too. The community seems to love both these tools outside of slow compile and install times.
Cargo is my ideal, but really anything that doesn't make me manage virtualenvs or take 30 minutes to resolve dependencies. Note that "managing my own virtualenvs" is tricky because you have to make sure everyone has all of the same versions of the same dependencies in their virtualenv across your entire team (including production). I'm sure there are workflows that allow for this (probably with some tradeoffs), but we haven't figured it out. For a while we used Docker, but performance degraded exponentially as our test base grew (Docker for Mac filesystem problems, probably). Eventually we settled on pantsbuild.org, which has a lot of problems, is super buggy, no one can figure out its plugin architecture, etc., but as long as you stay on the happy path it generally works okay, which puts it somewhere between any other Python dependency management scheme I've tried and Go/Rust/etc. package management.
Great experience report: thank you! Wanted to point out that the Pants project has been focusing on widening that happy path recently (...by narrowing its focus to Python-only in the short term), and is ramping up to ship a 2.0.
This page covers some of the differences between v1 and v2 of the engine, and particularly its impact on Python: https://pants.readme.io/docs/pants-v1-vs-v2 ... We're using Rust and haven't bootstrapped yet, so we also appreciate Cargo and think that there is a lot to learn there.
We'd love feedback (via any of these channels: https://pants.readme.io/docs/community) on how to make it even better. Thanks!
That’s great to hear. Is there any page that documents the architecture of pants? I understand build systems of various kinds quite well, but I can’t tease out the design philosophy behind pants, especially how the different target types / plugins end and the “core” begins.
I’m going to dig into that 2.0 link though!
There isn't, but that's a good idea. At a very high level, the v2 engine is "Skyframe but using async Rust". All filesystem access, snapshotting, UI, and networking is in Rust, and we use the Bazel remote execution API to allow for remote execution. The v2 plugin API is side-effect free Python 3 async functions (called `@rules`) with a statically checked, monomorphized, dependency-injected graph connecting plugins (ie, we know before executing whether the combination of plugins forms a valid graph). Unlike plugins in Bazel, `@rules` are monadic, so they can inspect the outputs of other `@rules` in order to decide what to do.
This file contains a few good examples of `@rule`s that collectively partition python targets to generate `setup.py` files for them automatically: https://github.com/pantsbuild/pants/blob/5e4f123a1dbc47313fe...
yarn. fast and correct. every package.json is a virtualenv, but you still have a global cache and it just symlinks. (sort of like the global wheels cache for pip, but even more deduplication.)
cargo is great because it manages the build flow and it's extensible (clippy). The global cache thing is a bit harder because of Rust package features (and other knobs like RUSTFLAGS), and it's not done by default, but it's as easy as setting CARGO_TARGET_DIR as far as I know.
I've used a few (modern and not). Any package management system becomes uglier the more use cases it needs to support.
The only time I run into problems is when someone else is trying to use Conda. Then it can be hell trying to get their code running in standard pip/venv or vice versa.
I'm sure Anaconda filled a niche at some point, but we have wheels now; can we all just agree to stop using Conda? What value does it actually bring now that makes it worth screwing up the standard distribution tools?
Isn't a conda environment just python installed into an isolated directory where someone can run pip? One can just run pip and pretend it isn't a conda environment.
It's way more than that. Firstly, most Anaconda installations come shipped with libraries like Matplotlib, NumPy, etc. So a lot of people who use conda write software that assumes those libraries are always available, e.g. leaving them out of requirements.txt or setup.py.
Then there's the issue of Anaconda using its own package repos, so even if you do manage to figure out what packages an Anaconda-developed piece of software needs, you're getting a subtly, or maybe not so subtly, different version of it using standard pip, which creates the worst kind of hard-to-trace bugs.
Lastly, certain installations of Anaconda overwrite the system Python version with their own (so you can just use numpy or whatever anywhere), causing a huge headache with other system software and making the standard distribution tools even harder to use.
I get that it's convenient for scientists that just want to write scripts and have them work, but if you're creating any kind of collaborative software, especially if you'll be working with SW engineers down the line, avoid Conda at all costs.
> So a lot of people that use conda write software that assumes those libraries are always available e.g leaving them out of requirements.txt or setup.py.
How is that any different than using python.org python? You'd still be unaware of what versions to use.
> you're getting a subtlety or maybe not so subtlety different version of it using standard pip, which creates the worst kind of hard to trace bugs.
That's way more of a problem with pip. You have no idea what versions a pip package is pulling in until install and then what binary actually gets installed depends on your compilers.
> certain installations of Anaconda overwrite the system python version with it's own (so you can just use numpy or whatever anywhere) causing a huge headache with other system software and making using the standard distribution tools even harder.
That's impossible unless one is actually copying binaries manually overtop of system binaries. You'd have to be root or use sudo to overwrite the system python manually. The whole point of isolation is to keep system python isolated and stable for system stability. That can happen if someone installs python from python.org and copies it into place.
> but if you're creating any kind of collaborative software, especially if you'll be working with SW engineers down the line, avoid Conda at all costs.
If you are working with SW engineers, you better know what versions you are pulling in, because you are going to be in serious pain using pip and trying to understand the provenance of your packages. Conda is way more powerful here for serious engineers to specify exact versions and reproducible and exact builds.
> How is that any different than using python.org python? You'd still be unaware of what versions to use.
Because python.org doesn't ship with numpy, matplotlib, or any of those other packages. Anaconda does, which makes it possible to import those libraries in projects without explicitly listing them as dependencies.
> That's way more of a problem with pip. You have no idea what versions a pip package is pulling in until install and then what binary actually gets installed depends on your compilers.
What? The problem here is that conda has its own repos, which contain different packages than are contained in PyPI. What exactly do you mean by "no idea what versions a pip package is pulling"? You realize you can set versions, right? numpy==1.13.2. The problem is numpy 1.13 on Anaconda can be different from numpy 1.13 on PyPI.
> That's impossible unless one is actually copying binaries manually overtop of system binaries. You'd have to be root or use sudo to overwrite the system python manually. The whole point of isolation is to keep system python isolated and stable for system stability. That can happen if someone installs python from python.org and copies it into place.
This is just wrong. Anaconda overrides the system Python by messing with the user's $PATH, regardless of whether you are in a conda environment or not (it's probably easy to disable this "feature", but I've seen a lot of people with this setup). This causes major headaches.
> If you are working with SW engineers, you better know what versions you are pulling in, because you are going to be in serious pain using pip and trying to understand the provenance of your packages. Conda is way more powerful here for serious engineers to specify exact versions and reproducible and exact builds.
I'm not sure why you think you can't specify exact versions with pip. Projects like Pipfile take it even further. The issue with conda is its different package repos, not the ability to lock package versions.
> Anaconda does, which makes it possible to import those libraries in projects without explicitly listing them as dependencies.
I think your main problems are very naive users of conda. If you bring years of experience using pip, but use conda thoughtlessly, I can see your point.
If you don't want packages included, just use miniconda and install the ones you like. You could just create a new empty environment: `conda create -n py36 python=3.6`
Either way, it's completely reproducible.
When not using wheels, pip can be pulling in various versions of dependencies. Conda makes it easy to see all of them before they are dumped into your environment.
> Anaconda overwrites the system python by messing with the user's $PATH regardless
I understand what you are saying now. It's covering up system Python in the PATH, but it isn't overwritten. Using `type python` (or `which python`) will be correct 99% of the time.
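A quick way to check that kind of PATH shadowing, as a generic sketch (conda itself isn't needed for this):

```shell
# Which interpreter will the shell actually run?
command -v python3   # portable: prints the first match on PATH
type python3         # bash also reports aliases and shell functions

# And which environment does that interpreter belong to?
python3 -c "import sys; print(sys.prefix)"
```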
> The issue with conda is it's different package repos, not the ability to lock package versions.
I thought this was your major argument: "collaboration is difficult", when in fact it is much, much easier. You are getting the same binary every time, without slight differences in how it ends up compiled on the user's system.
I don't think your experience is atypical but I do think your acceptance of something that is quite awful is fairly atypical.
It's also possible that you use Python on Linux, where it is at least tolerable. Try again on Windows.
I develop Python exclusively on Windows (that then is deployed on Linux) and my experience is identical to the original poster. It's not a perfect system, but it's good enough and I have dealt with dependency management systems.
pip-tools is almost never mentioned because it's boring but great. I always default to it.
Yes.
In particular, the way it effectively gets you to "I control the lock file; it will be altered when I explicitly request it and never as a side-effect of any other action".
For some reason many other languages' systems (which have had opportunities to learn from others' mistakes) don't seem to treat this as a requirement.
Yes!! I just create a Makefile target and pip-tools is all I need. I create a requirements.in and that is all. So far I have never felt it has to be more complicated than that. And when I want to upgrade a package, I update requirements.in if needed and run `make -B` for this:

```make
default: requirements-develop.txt
	pip install -r requirements-develop.txt

requirements.txt:
	pip-compile -v requirements.in

requirements-develop.txt: requirements.txt
	pip-compile -v requirements-develop.in
```

It's so nice to just write `make` instead of doing all the Poetry and Pipenv stuff, which honestly I feel doesn't add anything really useful to the workflow.

Does it pin versions of 2nd-degree dependencies too, like pip freeze would do? Also, when you remove a package, does it know to clear packages that were its deps and are not needed anymore?
pip-tools can do both of those, yes.
For the second, pip-compile computes the new requirements.txt (which is effectively the lockfile) from scratch, and pip-sync (not shown in that Makefile fragment) removes packages that are no longer listed there.
Thanks a lot for the reply. I'll include it some time soon in the article update
Yes, and yes
I came to the comments to say exactly this.
There's a decent summary of why someone might still prefer pip-tools even in a world where pipenv and poetry exist here: https://hynek.me/articles/python-app-deps-2018/
For my purposes, the primary downside of this approach is that adding dependencies takes slightly more effort, because you have to edit a file and then execute a shell command, rather than just executing a shell command. But managing dependencies takes up about 0.001% of my time, so this is not an area where I have much to gain by micro-optimizing my workflow.
I do, on the other hand, have a lot to lose by switching to something that's newer and shinier and less stable.
I fully agree. But I see editing the file manually as an advantage. I can pip install whatever I want and then I only need to worry about having a clean requirements.in file.
With that, I know the compiled requirements.txt will only have what I need. Now it is just pip install -r requirements.txt or pip-sync.
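For anyone unfamiliar with the split, here is roughly what the two files look like side by side. The package names and version ranges below are made up for illustration; only the direct dependencies live in requirements.in, while pip-compile emits the fully pinned lockfile:

```text
# requirements.in -- direct dependencies only, loosely pinned
django>=3.0,<4.0
requests

# pip-compile then writes requirements.txt with every version pinned,
# including transitive deps such as urllib3 (pulled in by requests),
# and can add hashes if --generate-hashes is passed.
```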
That's a very good point. I don't think it had even occurred to me that it would be more difficult with pipenv or poetry. But, admittedly, I haven't made it far into either of them - a little noodling around, just enough to figure out that they don't really address any pain points for me.
Thanks for mentioning it! How would you compare it to the other tools from the original post?
Pipenv is almost exactly pip-tools + venv + a custom UI over the two. And slightly easier / more automatic venv management.
From the times I've looked at it (almost a year ago and older): pip-tools is / was the core of Pipenv... but it has been a fair distance ahead in terms of bug fixes and usable output for diagnosing conflicts. It seemed like pipenv forked from pip-tools a couple years prior, and didn't keep up.
Given that, I've been a happy user of v(irtual)env and pip-tools. Pipenv has remained on the "maybe some day, when it's better" side of things, since I think it does have potential.
Every attempt to solve this problem in Python seems to eventually end up in a pretty terrible place. Pipenv got off to a great start but got slower and slower to the point that it was more painful to use than not. Poetry (which is still my preferred option) started off with something seemingly beautifully thought through, and very fast too. But after only a few version updates, it seems to be hitting the same problems Pipenv did. On one project I was working on recently I managed to screw up the Poetry.lock file, so I ran `poetry lock` and it took 18 minutes. I still have high hopes for poetry, but I spend way more time trying to work around its shortcomings now (v1.0.5) than I did when it was at version 0.10.0 two years ago.
Poetry is better than Pipenv by a mile. It solves almost all of the problems, and the remaining ones are already on the Poetry roadmap.
Oh, totally agree. But it also seems to have a lot more problems than it did two years ago.
I'd chalk this up to dependency management and resolution being a hard problem.
Ruby's Bundler had these exact same issues 5 or so years ago. I remember attending a talk on Bundler given by its core devs and asking how they make dependency resolution faster. Turns out it was never really a solved problem there either; Bundler just uses a bunch of heuristics to avoid cases like the 18-minute `poetry lock` described above.
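To make the cost concrete, here is a toy backtracking resolver over a made-up four-package index. This is not any real tool's algorithm, just a sketch of why resolution is expensive: in the worst case such a search has to try every combination of versions, which is exactly what the heuristics try to avoid.

```python
# Toy backtracking resolver over a made-up package index.
INDEX = {
    # package -> {version: {dependency: set of allowed versions}}
    "app":  {1: {"libA": {1, 2}, "libB": {2}}},
    "libA": {1: {"libC": {1}}, 2: {"libC": {2}}},
    "libB": {2: {"libC": {1}}},
    "libC": {1: {}, 2: {}},
}

def resolve(todo, chosen):
    """Depth-first search: pick a version for each pending requirement."""
    if not todo:
        return chosen
    (pkg, allowed), rest = todo[0], todo[1:]
    if pkg in chosen:  # already pinned: must satisfy this constraint too
        return resolve(rest, chosen) if chosen[pkg] in allowed else None
    for version in sorted(INDEX[pkg], reverse=True):  # prefer newest
        if version not in allowed:
            continue
        deps = INDEX[pkg][version]
        found = resolve(rest + list(deps.items()), {**chosen, pkg: version})
        if found is not None:
            return found  # this branch worked
    return None  # dead end: caller backtracks

# libA 2 needs libC 2, but libB 2 needs libC 1, so the search has to
# backtrack and settle on the older libA 1.
solution = resolve([("app", {1})], {})
print(solution)
```

With only four packages and two versions each the backtracking is instant; with hundreds of packages and dozens of versions, the search space explodes.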
I get that resolving dependencies is a SAT problem and inherently intensive; however, I don't understand why it's so much slower in Python. Is it just that all of these resolvers are implemented in Python (and Python is really that much slower than other languages?), or does Python require you to download an entire package just to determine its dependencies? In the latter case, that seems pretty dumb, right? Like as bad as exposing the entire interpreter as the extension interface, rendering optimizations and competing interpreters virtually impossible.
> does Python require you to download an entire package just to determine its dependencies?
yes - the standard way of defining dependencies in Python is in setup.py, which has to be invoked as a Python script in order to work. this script may also need to read files from the rest of the project, so you do indeed need to download the whole package to determine its dependencies.
even if the Python community were to agree on a new configuration format tomorrow, there would still be a ton of packages out there that wouldn't migrate for years.
It seems that this information should be cacheable after an invocation of setup.py, at least for an installation without any extras. And even with extras requested, perhaps.
Or is there any even greater hidden challenge from using setup.py?
setup.py can check the OS and pick necessary requirements. so it can have different dependencies in different OSes.
I've used it like this - https://github.com/JaDogg/pydoro/blob/b1b3de38ac15b9254ef1be...
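For anyone wondering what that looks like, here is a sketch of the relevant part of such a setup.py. `windows-curses` is a real package (it's what the linked pydoro project uses); the Linux package name is invented for illustration:

```python
# Dependencies computed at install time based on the OS -- this is why
# a resolver has to execute setup.py to learn a package's requirements.
import sys

install_requires = ["requests"]
if sys.platform.startswith("win"):
    install_requires.append("windows-curses")  # curses isn't bundled on Windows
elif sys.platform.startswith("linux"):
    install_requires.append("example-linux-helper")  # hypothetical name

# setuptools.setup(..., install_requires=install_requires) would follow;
# nothing outside this script can know the final list without running it.
```

(Modern environment markers like `windows-curses; sys_platform == "win32"` let you declare the same thing statically, but plenty of older packages still do it imperatively like this.)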
Does this imply that poetry will "solve" dependencies quicker the more of the dependencies use `pyproject.toml`? Or is that "hidden" once something is sent to PyPI anyways?
With several asterisks attached, but yes, the move would make dependency resolution faster.
That's why conda stores the metadata in a repodata file. Solving dependencies happens first; then binary packages are downloaded.
The missing ingredient to really, REALLY solve these problems once and for all is an authoritative decision to switch package formats and run the whole dependency resolution stack by the core python language contributor team.
I get backwards compatibility and open-source governance and bla-bla, but the reality is that this cannot be done by a third-party library author and needs to become part of the core stack, including proper support rather than just shipping a tool which covers 90% of cases. It's crazy that apart from venv and pip, nothing else comes with python and you're left on your own.
npm + the registry is part of node
apt-get + registry is part of a normal linux distro
bundler comes with ruby
This is a solved problem elsewhere. What we lack is a fully-supported, agreed upon, working DEFAULT choice, so people don't have to make their own choices. I don't know if not having that DEFAULT is a function of how the python community thinks or its diversity, but it's painful to watch. I've almost given up myself and seen many newcomers give up because of a trivial problem like this.
While npm comes with node, and bundler comes with Ruby, the governance of these projects/tools are separate from the language.
Yet someone has made the decision to bundle them with the language and provide a default. I'm not suggesting we have common governance, but these decisions need to be made.
The default is pip + python3 -m venv.
Then I guess they are not good enough?! I use them (without other tools) and I'm happy, but I do fairly simple Python development. There's a reason many people choose other tools if these don't cover common cases.
Dependency management can be pretty overwhelming for a lot of people entering Python. This is especially true in the data science realm, where many don't have a SWE background. Even after you have selected a tool, it can be easy to use it in a poor way. I have recently written a short article on how I use conda in a disciplined way to manage dependencies safely: https://haveagreatdata.com/posts/data-science-python-depende...
Python dependency management for packages using C or C++ behind the scenes is really problematic, and sometimes the installation may fail. In that case, a solution is to use Conda or Miniconda, which provide many pre-compiled packages and even a Clang C++ compiler.
An alternative way to let people without a software engineering background play with Python data science and machine learning tools may be to provide pre-built Docker images with everything installed, which can save them from configuration trouble.
Docker is also useful for trying out new programming languages without installing anything. With a single command like `docker run --rm -it julia-image`, one can get a container with a Julia installation, or a Go compiler, or a Rust development environment, with everything pre-configured. Docker is really a wonderful tool.
Docker is definitely an interesting tool for that, but my biggest problems is that I have to teach them Docker, which is a totally new layer of abstraction they haven't seen before.
How do you approach this? How technical are people you prepare Docker images for?
You don't need to teach Docker. All you need is to provide a Docker image with everything pre-installed, such as Julia, R, Python, numpy, pandas, TensorFlow and maybe VSCode, on top of any Linux distribution. Then one can just type `docker run --rm -it -v $PWD:/cwd -w /cwd my-image ipython`. For convenience, it is better to create a command line wrapper or shell script that saves one from typing all that, such as `./run-my-image ipython`.

I don't prepare images for anyone, but I guess that if I knew nothing about Docker and was given an image with everything ready and pre-configured, plus a shell script encapsulating all the Docker command line switches, I would find it more convenient than installing everything myself or fighting some dependency conflict or dependency hell. So Docker can be used as a portable development environment. VSCode (Visual Studio Code) also supports remote development inside Docker containers, with extensions installed per container.

I am a mechanical engineer by training, but I found Docker pretty convenient for getting Julia, Octave, R, Python or a Jupyter Notebook server without installing anything, and without fighting the package manager of my Linux distribution when attempting to install a different version of R, Julia or Python. This approach makes it easier to get bleeding edge development tools without breaking anything that is already installed. I even created a command line wrapper tool for using Docker this way that simplifies all those cases: `mytool bash jupyter-image`; `mytool daemon jupyter-notebook` ...
> Pipenv or poetry?
If you used pipenv for a complex project with a huge dependency tree, or used it for a long time, you definitely ran into a blocker issue with it. It is the worst package manager of all, and probably the reason why Python has such a bad reputation in this area, because its fundamentals are terrible.
Just go with Poetry. It's very stable, easy to use, has a superior dependency resolver and way faster than Pipenv.
Am I misunderstanding Poetry? Because it seems to me more suited to packaging your Python code up ready to be pushed to PyPI. As in, starting a project creates an `__init__.py` and a Python file referencing distutils: neither of which I need or want if I'm writing an app to go into a Docker container.
Its first use case is actually handling project dependencies. If I remember correctly, it couldn't build packages at first, so the `build` subcommand was only introduced later. It's the same type of package manager (with a lock file) that other languages already had, like Cargo, Bundler or npm.
Pipenv has had a couple releases recently, but I've had an easier time with Poetry. Poetry is almost always[0] faster than Pipenv, and I find its commands more intuitive.
I've been meaning to take another look at Pipenv, but the huge pause without a release makes me nervous that it could happen again.
[0] https://johnfraney.ca/posts/2019/11/19/pipenv-poetry-benchma...
Agreed. A large part of Pipenv's raison d'être is resolving dependencies. Yet it seems to install packages in random order. This sometimes fails, so it retries failures at the end.
We actually used this in production before switching to poetry.
The last time I tried poetry, it had dependency problems. The maintainers acknowledged them and shipped a patch, but somehow it still wasn't working for me. This was at poetry 1.0.0b3.
If you're interested in the technical issues behind Python packaging, a recent Podcast.__init__ episode features three people working on improving Pip's dependency resolution algorithm. My use cases are simple enough that I've gotten by for years just using pip and venv with requirements.txt files, but it was still fascinating to listen to how package management is approached in more complex situations.
Dependency Management in Pip's Resolver: https://www.pythonpodcast.com/pip-resolver-dependency-manage...
I've spent far too many hours fighting with these tools in two completely different scenarios
* Developing and deploying production Python solutions
* Helping beginners run their first script
While it's great for beginners to use the same tools that are used in industry, I strongly believe that the problem nearly all of these tools face is that they can't decide whether they want to _manage_ complexity or _hide_ complexity.
You can't do both.
Some of them do a fairly good job at managing complexity. None of them do a good job of hiding it. The dream of getting Python to "just work" on any OS is close to impossible (online tools like repl.it are the closest I've found, but they introduce their own limitations). I recently saw a place force their beginner students onto Conda in Docker because getting people started with Conda alone was too hard. If you're battling with the complexity of your current layer of abstraction, sometimes it's better to start removing abstraction rather than adding more.
That said, I'm also a happy user of `pip` and `virtualenv` and while I'm sure that many people can use the others for more specific needs, I think defaulting to them because they aim to be "simpler" is nearly always a mistake. I still teach beginners to install packages system wide without touching venv at first - it's enough to get you through your first 2-3 years of programming usually.
This is a good point about complexity. I started with pip + virtualenv, and I'd recommend pip + venv to anyone learning Python. venv is in the standard library, so there's official documentation for it.
I picked up Pipenv when a point-point release of a dependency broke a production deployment. Pipenv's dependency locking meant that I wouldn't get surprised like that again.
Part of why this topic comes up so much is the desire to run with a language before learning to walk with it, perhaps. I'm a big fan of Poetry, but I like it because I know what it gives me compared to vanilla pip and a setup.py file.
Installing dependencies at the OS level will get you far as a beginner. And when the time comes that you need a virtual environment, you'll probably know.
I've been working in python roles for some years now and I never understood why the python dependency tooling is so poor.
Pip feels like an outdated package manager, lacking essential functionality that package managers of other languages have implemented for years. For example, credential redacting in pip was only introduced in 2019, 8 years after its initial release!
Not to mention the global-first nature of pip (a package is installed globally unless the user explicitly requests a local installation). You can still install packages locally, but this only shows that pip was not built with environment reproducibility in mind. As a consequence, the need for additional environment tooling (like venv) arose, which increased the complexity of the local Python setup.
Tools wrapped around pip are also subpar. I cannot see why Pipenv is that resource intensive, leading to long and noisy builds (my machine gets close to exploding on a `pipenv lock`), with very fragile lock files. Debugging an unsuccessful lock in the CI of an enterprise project is a mystery that can take an entire week to solve. Its JavaScript counterpart (npm) does the exact same thing, faster and with less CPU usage.
Trusting the open-source community, I understand that there may be very good reasons for Pipenv to perform like this, but as the consumer of a package management tool all I see is the same generation of file hashes I see on npm, with npm doing it far more efficiently. I really see value in the principles Pipenv promotes, but the developer experience of using it is suboptimal.
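As an aside on the global-vs-local point: you can check from inside Python whether you're in a virtual environment, because an active venv makes `sys.prefix` differ from `sys.base_prefix`. A minimal sketch:

```python
# Detect whether this interpreter runs inside a venv/virtualenv:
# in a venv, sys.prefix points at the environment, while sys.base_prefix
# still points at the base Python installation.
import sys

in_venv = sys.prefix != sys.base_prefix
print("inside a venv" if in_venv else "using the global interpreter")
print("packages land under:", sys.prefix)
```

This is handy for sanity-checking where a `pip install` is about to put things.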
Serious question: What is the difference between virtual environments and just having several Python installs like:
/home/foo/a/usr/bin/python3
/home/foo/b/usr/bin/python2
Python is so fast to compile and install that I just install as many throwaway Pythons as needed. I do not recall any isolation issues between those installs, unlike with conda or venv, which are both subtly broken on occasion.
But I dislike opaque automation in general.
That basically is what a venv is, an entirely separate Python install. Some files are linked rather than being copied, but it looks the same. venv gets you a couple extra conveniences, like the activation script.
I wouldn't call venv "opaque automation," there's not much magic going on there.
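To back that up: a venv really is mostly a directory plus a small config file, and you can build and inspect one with nothing but the standard library (the directory name below is arbitrary):

```python
# Create a throwaway venv and peek inside it.
import tempfile
import venv
from pathlib import Path

root = Path(tempfile.mkdtemp()) / "demo-env"
venv.create(root, with_pip=False)  # with_pip=False keeps this fast

# pyvenv.cfg records which base interpreter the environment "borrows".
cfg = (root / "pyvenv.cfg").read_text()
print(cfg)
# Alongside it sit bin/ (Scripts\ on Windows) with a python launcher
# and the activate script -- the convenience layer.
```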
The trouble is these tools all do different things and aren't really comparable. I wouldn't even include Docker in this kind of thing as it doesn't really do anything on its own.
For me, there are two main choices today:
* An ensemble of single-purpose tools: pip, venv, pip-tools, setuptools, twine, tox,
* An all-in-one tool, for example Poetry, Pipenv or Anaconda (or Miniconda).
I prefer the former approach, but if I had to choose an all-in-one tool it would be poetry.
Besides dependency management, another major problem with Python is deployment. Although Docker is not a dependency management tool, it can be used as a deployment tool that encapsulates the application to be deployed, the Python runtime, and all other shared library dependencies, alongside the configuration. Another deployment tool worth mentioning is PyInstaller. It can pack the Python runtime, the application and all dependencies into a single native executable. PyInstaller is better suited for desktop applications and for building single-file executables.
Docker is an invaluable tool, but it's a general purpose one. It's good for Python because it's good for everything.
I have used Pyinstaller. It's good but it's a bit too magical for my tastes.
Unless you're on macOS, where you have to manually edit the PyInstaller package to comment out or point to an updated NLTK data dump.
Well, Docker locally can be used as an expensive isolation tool. In some sense it plays the same role as a venv.
I wouldn't use it over a clean venv management, but I have seen people who prefer using docker containers because they find venvs to not be clean/elegant.
I agree with you that Docker should not be there, but the reality is that people use it to replace some other tools (like venv).
I wonder why you prefer the former approach.
On the contrary, thanks for having included Docker in that list. It's the obvious answer to so many problems (developing, running and deploying apps, replicating deterministic Python environments, not installing linux dependencies required by Python packages directly on your machine, and so on).
BTW, to comment on one of the points you made in the article: it's not that hard to run CUDA inside a container. It's less straightforward but quite well documented. You basically need nvidia-docker [1] on the host and start your containers with the 'nvidia' runtime. docker-compose still doesn't support it officially but there are workarounds. [2] I'm running it on ~50 instances in production and automated all the setup with ansible successfully.
Why thank him for including Docker if it's the "obvious answer"? You're already using it, that's great.
Docker doesn't do anything Python specific on its own. It can be part of a pipeline but only with support from the Python specific tools which is what should be discussed in this kind of article.
The former approach is more like the Unix way: each tool does one thing and does it well. I prefer that because I can then assemble a workflow that works for me. It's easier to build pipelines when you can drop in each piece one by one.
All-in-one tools almost never do things exactly the way you want. They have a higher barrier to entry as well as a stronger lock in effect than smaller tools. If I fall out of love with venv, I can replace it with Docker. I can't just do that with an all-in-one tool.
Having said that, poetry is quite well designed and I do encourage junior developers to explore it for themselves instead of just doing what I do. If I was a junior developer today I might be quite glad for a single all-in-one tool that gets me on my feet with good practices from day one.
I think Docker is great for providing an isolated environment, and venv has similar goals. pip-tools + Docker is a powerful combination, but the article doesn't mention pip-tools for some reason.
I think this is a good basic overview of the dependency management landscape. I have a few things to add.
One is that because Python has been around for so long, it's easy to find outdated or conflicting advice about how to manage Python packages.
I think it's important to stress that pyenv isn't strictly a dependency manager, too, and depending on your OS, isn't necessary. (Supported Python versions are in the AUR[0].)
A lot of pain from Python 2 -> 3 is that many operating systems were so slow to switch their default Python version to 3. Unless something has changed in the last month or so, Mac OS _still_ uses Python 2 as the default.
It's a shame to see Python take a beating for OS-level decisions.
> [Pipenv] loads packages from PyPI so it does not suffer from the same problem as Conda does.
False. Conda manages packages installed from PyPI. This is discussed under the Conda section, so I'm surprised the quoted line wound up in the article.
Hey xapata, thanks for pointing this out.
Any chance you could give me some reference so I can fix it in the original article?
https://docs.conda.io/projects/conda/en/latest/user-guide/ta...
Basically, use Conda to manage environments, use Pip to install packages. If you're using Conda to install anything, do that first.
pipenv is terrible. poetry ain't there yet. It seems the author forgot to mention the problems with pipenv and poetry. virtualenv + pip will take you far; then, to reproduce an environment, pip freeze into a requirements.txt. poetry etc. still use pip under the hood.
I've always used conda since I use the scipy stack. Can anyone clue me in on whether I can instead use pipenv and it will download all the requisite binaries etc.?
Could someone summarize the issues with Pipenv (and by extension Poetry)? I've been using them happily for the last few years and didn't know people disliked them.
With Pipenv, ownership switched last year from the Requests library's author to the PyPA, so it's more or less an officially blessed solution.
The only downside in this thread that I've understood so far is that it can be slow to install dependencies on larger projects; I can't think of anything else.
I've been using pipenv happily for a few years now, but on projects that don't have a huge number of dependencies (Django, DRF, MySQL/Postgres, AWS, Kubernetes, a few other random libraries), and haven't seen too much slowness. I suppose data-science projects with large dependencies that pull in many other dependencies might have more issues.
I've been the person to document setting up development environments for others in macOS (and Homebrew) with a view to deploying in Linux, and pipenv (and pyenv, and Docker/docker-compose for setting up software context/datasets) definitely overall minimized the complexity for those configuring their dev environments.
(EDIT: documenting dev environments)
After hitting some weird PyInstaller bugs, I gave up and started compiling Python myself. One interpreter for every project. Shell scripts to set the paths. All libraries go directly in site-packages, not some other layer. A little more complicated at the outset, but this approach has yet to let me down. And compared to the nightmares I was trying to fix, building Python is dead easy.
I can get PyInstaller working with venv, but not conda.
Reading this makes me grateful that I can get away with just apt-get. I wonder how prevalent this is, since not every project needs specific or latest versions of the runtime and libraries, only a minimum. Some are just plumbing tools that stick to the stable core, and the Python 3 ecosystem has been mature for enough years that older distro packages are still useful and capable.
Personally I just try to avoid Python development because I hate feeling like I'm dealing with what should be a solved problem. Recently I had to work with an outdated Python Tensorflow framework and the only way we could get it to work correctly across different dev and deployment machines was with a fat Docker image that took hours of head scratching to build. It was miserable.
Using conda for environment and dependency management works really well.
Conda is a great tool.
But it forks the ecosystem, twice:
First, Conda packages have to be maintained separately from PyPI packages.
Second, the "default" repo is maintained by Anaconda, but the community maintained Conda Forge repo is also separate, and officially the packages in one are not compatible with the packages in the other. (In practice they usually play nice).
Having three incompatible package repos is not ideal.
pip-tools should really be included here. That's the single-purpose tool that handles environment reproducibility, if you're going with the single-purpose-tool route of pyenv + pip + venv instead of the all-in-one route of poetry/pipenv.
Tbh, this is one of the reasons why I moved away from Python to Ruby for my side projects.
Are bundler, rvm, and rbenv not as confusing?
Definitely not. In Ruby I can immediately understand how to run a correct copy of any project because they all use bundler. Furthermore rbenv makes switching between specific Ruby versions for those apps trivial.
IMO they are much cleaner—plus you only need one of rvm and rbenv anyway.
You know what pathlib does for paths and files? Python needs something like that for distribution/versions/import hacking, eh?
Glyph (of Twisted fame, whence pathlib IIRC) pointed this out ages ago: model your domain [in Python] with objects.
Oof, that footnote:
> It’s 2020, but reliably compiling software from source, on different computer setups, is still an unsolved problem. There are no good ways to manage different versions of compilers, different versions of libraries needed to compile the main program etc.
I wonder how much stuff like this has to do with python's popularity. When I have opaque issues like "libaslkdjfasf.so is angry with you and/or out to lunch and/or not doing expected things," it's the most frustrating part of programming. I'd pay devops people infinite money to not have to deal with installation/setup issues anymore.
I think this is not a problem specific to python packages, but a general problem of how we compile C/C++ software. There is no concept of packages and compiling one thing often requires installing a -dev package of some other library.
The issue is that the lack of packaging in the C/C++ world spreads to all other communities that depend on it.
Every time I read an article about all these tools, I can't help but think about what would have happened if Linux had taken over the desktop. These tools all largely seem to poorly replicate Linux package management, and because of that, devs no longer care about API stability and always build against the latest and greatest.
I admit pyenv is nice for testing against different Python versions if necessary. But on my Linux systems I'm generally fine with just installing system packages and doing pip install --user for the odd package that is not in the repositories.
I think that works when you use Python cli tools, but not when you're working on 5 different projects, each running different python version.
Outside of Python 2/3 differences, are Python interpreters not backwards compatible?
In other words, while obviously a program written for 3.3 won't work on 2.7, will a program written for 3.3 fail to run on 3.8?
If it runs fine, why the need for multiple interpreters? I'd think you'd get by just fine by having the latest 2.x and 3.x installed.
Because underscore-functions aren't truly private, I have once seen an upgrade from 2.7.8 to 2.7.13 fail. A commonly-used package was importing one from a core python module.
It's just about backward compatibility. If you don't run the exact version of python that's running in production, how do you know that you're not using some method that does not exist in production yet (because you run older version there)?
Also, libraries with binary component often have to be compiled against specific version of python.
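One small mitigation (not a substitute for actually matching the production interpreter) is to fail fast when the running interpreter is older than what the code targets. The version numbers below are just examples:

```python
# Refuse to run on interpreters older than the one we develop against.
import sys

REQUIRED = (3, 6)
if sys.version_info < REQUIRED:
    raise SystemExit("need Python >= " + ".".join(map(str, REQUIRED)))
print("interpreter OK:", sys.version.split()[0])
```

It won't catch every incompatibility, but it turns a mysterious AttributeError deep in production into an immediate, readable failure.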
Sometimes! A very simple example is code that uses "async" as a variable name. It arrived with async/await in 3.5 and became a full keyword in 3.7, which was an enormous pain in the ass.
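You can see that breakage directly: the same source is either a valid assignment or a syntax error depending on the interpreter. This sketch checks it with `compile`, without executing anything:

```python
# "async" was an ordinary identifier before becoming a hard keyword
# (soft keyword in 3.5/3.6, fully reserved from 3.7 on).
def is_valid_identifier(src):
    try:
        compile(src, "<example>", "exec")
        return True
    except SyntaxError:
        return False

print(is_valid_identifier("async = 1"))         # False on Python >= 3.7
print(is_valid_identifier("asynchronous = 1"))  # True: still a plain name
```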
I just use pyenv and pip-tools. I create a requirements.in and Makefile targets to build the requirements.txt from it. So far I haven't found a scenario where that combination is detrimental.
This is a good overview of something that took me an annoyingly long time to learn. My personal preference is to keep things simple with pyenv, venv, and pip.
Tangentially related is the tool tox [1], which is often used to run a test suite inside of virtual environments created by venv, on multiple versions of Python managed by pyenv.
Now if only setuptools could work well without hackery...
Poetry does a good job of managing/publishing a package without having to custom-code a setup.py. I have a blog post about it but I don't want to spam. Poetry works well for a package of average complexity, and it's configured entirely in pyproject.toml. I don't have experience publishing something complicated with it, though.
Wouldn’t you still need something like pip-tools to lock down subdependencies and handle conflicts?
Plain old pip and venv can do that. just "pip freeze >requirements.txt" and elsewhere "pip install -r requirements.txt", inside venvs.
I think that will end up installing the subdependency version of whatever is last in the requirements.txt. You need a dependency resolver to deal with problems with conflicting versions.
More details here: https://medium.com/knerd/the-nine-circles-of-python-dependen...
pip handles the simple cases: if you install a new pkgA that depends on 'pkgB<3', it installs the latest appropriate version of that, e.g. 'pkgB==2.5.6'. This works even if you already installed 'pkgB==3.0.2', it will uninstall that first. The problem is if some 'pkgC' depends on 'pkgB>=3'. You probably want for pip (or similar) to figure out that an older version of 'pkgC' is compatible with 'pkgB>2'.
But I actually don't want it to be too smart. Better to keep your dependencies minimal and explicit, and manually specify older 'pkgC' if you need to. I have a few non-trivial services in production, the most complex one with 16 total dependencies + sub-dependencies. That is quite manageable.
So, I strongly recommend manually curating the most appropriate versions of the few tastefully chosen dependencies you really need. Then, pip+venv can easily reproduce that exact set of dependencies anytime. I also do something very similar to this with C applications, and Go. Sub-dependencies should be a big factor in how you choose your direct dependencies.
The problem with doing things this way is that you’re not going to know there’s a problem until there’s an issue in your tests (hopefully) or production. You’ll eventually install something new, it’ll update some subdependencies to a version that another library doesn’t support, and then things get broken. Pip-tools is easy to use and it tells you there’s a problem before it’s too late.
It's worth noting that on Linux it's slightly different because most of the popular libraries can be installed with the system package manager (no problem of dependency management, updates, ...), I rely on alternative solutions only when I want to use a version of a library different from the one shipped with the package manager (which is not that frequent with fast paced distros like Fedora) or when the library is not packaged.
No. Don't mix OS packages with your dev setup. On a very small scale I can adapt to use the OS package version, but when you start to work on 4, 5, 15 projects, each of which needs some specific version of some package, you need to detach from the OS.
And my point is: use the system packages whenever you can, they are there for a reason. While developing atbswp[0], I faced a situation with wxPython where the only package available on Linux was the one provided by the package manager. It's a known situation[1]. The workaround I used, instead of "detaching from the OS", was to work with the OS: install the package with the system package manager, then copy the wx folder from the system's site-packages into the venv's site-packages[2].
0: https://github.com/rmpr/atbswp 1: https://wxpython.org/pages/downloads/ 2: https://github.com/RMPR/atbswp/blob/master/Makefile
For all the problems that Node.js and JS at large have, I for one am very happy that its centralized package manager is not in the Python situation: 100% of the packages I've tried to install in the last 3+ years were a simple `npm install PKG`.
pip install <package>
works even on iPhone (for pure Python packages in Pythonista for iOS)
Do you mean pip3 install <package>? Tried that too; it didn't work. I had to learn about Python versions, pip vs pip3, pipenv, conda, how an old Python package doesn't work with a modern one, etc.
All I was trying to do was combine TensorFlow Lite with OpenCV, IIRC. Just look at the installation instructions:
- https://www.tensorflow.org/install/pip
- https://www.tensorflow.org/lite/guide/python
- https://docs.opencv.org/3.4/d2/de6/tutorial_py_setup_in_ubun... vs https://stackoverflow.com/a/52880211/938236
The instructions that you've linked use `pip install tensorflow`
Yes, but those instructions did not work for me. I am not sure if they didn't work because the required previous libraries were wrong somehow (which would just be `dependencies` in package.json), a different version (non-issue with `dependencies`), my python version was wrong (non-issue with `engines`), etc.
I'd love to see how npm would deal with tensorflow. It's a tough package to compile from scratch.
DONT FORGET sudo pip
Do popular Node packages rely on C and Fortran?
I don’t recall any popular Node package relying on Fortran, but there are two popular packages that rely on C: fsevents and node-sass
It works on macOS and Linux without any issues. Windows usually requires some extra steps to setup node-gyp
Even worse, they sometimes rely on C++
Python dependency management is much like Python 2 to 3, a mess. It's shocking to think pip and pipenv are so widely used and still such terrible tools.
Great article Mario! Was a pleasant surprise to open HN and find this
Thank you :)
pipenv also loads any .env file it finds in the directory, which makes it a little more convenient to use than poetry, so I didn't make the switch.
I like this overview. However, it points to a fundamental problem with the Python ecosystem, one going very much against its own credo:
"There should be one - and preferably only one - obvious way to do it." - The Zen of Python; see also https://xkcd.com/1987/.
When it comes to package, environment, and dependency management, I think that, ironically, the JavaScript ecosystem is light years ahead; see: https://p.migdal.pl/2020/03/02/types-tests-typescript.html
Anyone installs Conda on my shit I hit the roof... I sympathize with the individual but cannot tolerate the act.
Can you elaborate?
Perhaps I can guess some causes of the irritation. Let me start by saying that on Windows conda is probably an improvement.
On Linux, however, I do not see much benefit, unless you frequently install large binary C library based packages.
To me it feels cleaner to compile these packages from source. You are sure to have no glibc mismatches etc.
Conda, despite its advertising, does have library issues: C libraries are shared between environments, and compiling inside an environment can lead to surprising results when stale libraries are in the miniconda path.
All in all, it feels like a second OS shoehorned into the user's home directory. Compared to apt-get it is really slow and bloated.
It feels too intrusive on a Unix system.
Also, I'm not sure if the repositories are secured in any meaningful way.
This is a perfect summary which exactly captures my frustrations.
I should have said that on Windows Conda is a much more understandable choice.
Anyone with insight into Debian/Ubuntu packaging care to comment?
It's 2020, but the python community still has not converged to a small set of sane solutions.
It seems to me that Ruby, PHP, JS, and Rust communities have solved the problem.
I'm only familiar with Python, Javascript and Rust. It seems to me that Rust is the only one that has "solved" this problem.
I don't think there are any real Python devs who think dependency management is solved. However, why would you claim JavaScript has a good solution? The inconsistencies between Node and web dev are odd at best. Babel compilation is annoying and slow. Have we even standardized on webpack yet?
Can anyone say with a straight face that getting a new JS dev caught up on what all these different parts to compile a JS program is a solved problem?
I don't fault either the Python or the JS ecosystems. As pioneers in dependency management, they went through a lot of trial and error. Newer languages like Rust benefited from it, and that's OK.
> Can anyone say with a straight face that getting a new JS dev caught up on what all these different parts to compile a JS program is a solved problem?
You are intermixing dependency management with build tools. Webpack and Babel have very little to do with dependency management.
I loosely included Babel and Webpack as part of dependency management since Python does not have a similar compile step. I don't think it is unfair since they fall into the same category of wtf when trying to get your dev environment working.
With that said, I should have included Yarn and Npm with their own problems. I can't remember how often I solved my dependency problems with rm -rf node_modules.
Python does have a compile step if your packages contain C extensions and aren't bundled into binary wheels. This shit tends to only work because of a whole lot of effort by package maintainers to make sure it compiles on a handful of supported systems, but if you aren't running such a system you get to debug a bunch of C dependency errors buried under a mile of gcc warnings and other make output.
One thing node.js got right is the module resolution logic.
Node doesn't even need something like venv since module lookup is always local.
Also no problem with dependency hell. Each dependency can have its own private dependencies, even different versions of dependencies shared by sibling modules. Tools like yarn/npm can remove duplicates across a project.
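Concretely, the on-disk layout lets two versions of the same package coexist because each dependency can carry its own nested node_modules (illustrative tree, hypothetical packages):

```
node_modules/
├── pkg-a/
│   └── node_modules/
│       └── lodash/    (3.x, private to pkg-a)
└── pkg-b/
    └── node_modules/
        └── lodash/    (4.x, private to pkg-b)
```

Resolution walks up from the requiring module, so each package finds its own copy first.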
It's a lot more of a pressing concern when you have an average of 1,200 dependencies per project.
(I'm actually not sure if it is the average but my anecdotal experience is that it's an order of magnitude higher than python, and 1200 wouldn't be unusual).
That's probably quite accurate for front-end projects which pull in a ton of packages for 1. A compatibility layer wih older browsers. 2. Build tooling.
Node projects tend to have a lot smaller dependency graphs.
Yes we all learned that virtual envs are not the right way to do it. Node definitely got this portion right.
I wish Python could abandon virtual envs. This is the most annoying part of setting up a project.
How come other tools don't just replicate successful models?
That's very true. Starting fresh definitely helps.
I would not say JS solved it, but yarn (and even npm these days) seem to be superior to all python dependency management tools before pipenv. None of them had a proper lockfile, for example.
Agreed. I think pipenv had a really good start but sputtered. Poetry does look hopeful to solve the dependency resolution.
> As pioneers in dependency management
Maybe a noob question, but how come they are pioneers? Weren't there languages with package systems before Python and JS?
Actually, no. As diegof79 pointed out, Perl was the first with a centralized package management system, CPAN. Perl was first released in 1987, Python in 1991, and JavaScript in 1995. The internet was the launch pad for these open source languages, which allowed the sharing of code written in them.
Another thing to note is that dependencies were not that complex in the 90s. The idea of dependency management started when there were too many dependencies to manage and the existing tools could not reliably reproduce builds. My guess is that package use took off after the dotcom boom in 2000, when everyone started building websites with Ruby, Python, and eventually Node.js.
My first encounter with something like a package manager was CPAN for Perl... maybe they were the pioneers
.NET has its issues, mostly due to the historical mess of frameworks, but with development converging around .NET Core things have been getting a lot better.
I think the environment system in Python is a confusing design flaw that could have been avoided with project-specific installations. I vastly prefer installing packages on a project-by-project basis. Python introduces dependency nightmares because two projects with different needs end up using the same central local package source unless you set up separate environments. So when you install a package foo for project B, package bar might stop working for project A due to a dependency on an earlier version of foo.
Hasn't it?
PyPI is really the de-facto package index; Pipenv/Poetry/Conda are all venv handlers (using the standard venv tools) plus a dependency graph, and they use pip, which is standard as well.
I would call this a small set of solutions (3), and they are all sane (any will do, just pick one).
Inviting the question: why don't the core devs have a serious bakeoff and bring that functionality into the core distro?
poetry and pipenv are really just layers of abstraction over what is part of the Python core (pip, PyPI, venv, wheels).
I don't even see what problems they solve, it seems like you end up with more problems using them.
pip, PyPI access, venv, and wheel have all been included for a while.
I meant poetry/pipenv.
Perhaps poetry after it matures.
In the node world I see half of projects telling you how to install it with npm and half with yarn. In Python at least pip is a standard that always works to install, even if it doesn't solve the other problems.
Unless you're in data science. Then you'll be split between conda and pip, which is worse than npm and yarn. You can always swap npm and yarn, but can't do that with conda and pip.
Those are literally equivalent, though: anything you can install using yarn you can install using npm. They connect to the same registry and use the same package.json format. Also, pip is relatively recent; before that, it was a mix of setuptools, easy_install, and various manual procedures.
I wouldn't call nine years relatively recent