Python extensions should be lazy

gauge.sh

115 points by 0x63_Problems a year ago · 64 comments

raymondh a year ago

This is an impressive post showing some nice investigative work that isolates a pain point and produces a performant work-around.

However, the conclusion is debatable. Not everyone has this problem. Not everyone would benefit from the same solution.

Sure, if your data can be loaded, manipulated, and summarized outside of Python land, then lazy object creation is a good way to go. But then you're giving up all of the Python tooling that likely drove you to Python in the first place.

Most of the Python ecosystem from sets and dicts to the standard library is focused on manipulating native Python objects. While the syntax supports method calls to data encapsulated elsewhere, it can be costly to constantly "box and unbox" data to move back and forth between the two worlds.
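
A tiny illustration of that boxing cost (a sketch; timings vary by machine and library version): summing a numpy array with Python's built-in sum() boxes every element into a Python int, while arr.sum() stays in native code.

    import timeit

    import numpy as np

    arr = np.arange(1_000_000, dtype=np.int64)

    # Boxes one million elements into Python ints on every pass
    print(timeit.timeit(lambda: sum(arr), number=10))

    # Stays entirely in numpy's native code; no per-element objects
    print(timeit.timeit(lambda: arr.sum(), number=10))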

  • 0x63_ProblemsOP a year ago

    First off, thank you for all your contributions to Python!

    I completely take your point that there are many places where this approach won't fit. It was a surprise for me to trace the performance issue to allocations and GC, precisely because that situation is rare.

    WRT boxing and unboxing, I'd imagine it depends primarily on access patterns - given I was extracting a small portion of data from the AST, each piece only once, it was a good fit. But I can imagine that the boxing and unboxing could be a net loss for more read-heavy use cases.

    • jhylton a year ago

      You could create a custom C type that wrapped an arbitrary AST node and dynamically created values for attributes when you accessed them. The values would also be wrappers around the next AST node, and they could generate new AST nodes on writes. Python objects would be created on traversal, but each one would be smaller. It wouldn't use Python lists to handle repeated fields. It seems like a non-trivial implementation, but not fundamentally hard.

      The analogy with numpy doesn’t seem quite right, as Raymond observes, because numpy depends on lots of builtin operations that operate on the underlying data representation. We don’t have any such code for the AST. You’ll still want to write Python code to traverse, inspect, and modify the AST.

      • 0x63_ProblemsOP a year ago

        Very fair points. For general-purpose AST work from Python, your design should be more efficient while essentially keeping the existing interface.

        When I referenced numpy, I was thinking of a query layer which could push traversal into the extension as well. Something that could have given me “.select(ast.Import).all()”, which in my head is kind of like doing a filtered sum in numpy.
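
        A pure-Python mock of roughly that interface (the `select`/`all` names here are hypothetical, not a real API; in the imagined design the walk would run inside the extension and only the matches would be materialized as Python objects):

            import ast

            class Query:
                def __init__(self, tree, node_type):
                    self._tree, self._node_type = tree, node_type

                def all(self):
                    # In the real design, this traversal would happen in native code
                    return [n for n in ast.walk(self._tree)
                            if isinstance(n, self._node_type)]

            def select(tree, node_type):
                return Query(tree, node_type)

            tree = ast.parse("import os\nimport sys\nx = 1")
            print(select(tree, ast.Import).all())  # -> the two Import nodes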

        Very cool to get your thoughts on this, thanks for making an account :)

  • coldtea a year ago

    >However, the conclusion is debatable. Not everyone has this problem. Not everyone would benefit from the same solution.

    Everyone would benefit from developers being more performance-minded and not doing unnecessary work, though! Especially in Python, which has long suffered from performance issues.

    Love your work btw!

    • BiteCode_dev a year ago

      No. Days are only 24h. If you focus on perf, you leave something else out.

      Python is Python because people cared about other things for many years.

      • coldtea a year ago

        >If you focus on perf, you leave something else out

        That's the second benefit of focusing on perf.

        We could do with fewer, better-tested, and faster features in most apps I know of.

      • sgarland a year ago

        You can focus on multiple things, you know. There is some low-hanging fruit in Python for performance in certain circumstances (mostly hot loops, at least in my experiments). For example, if you need to extract a string from a datetime object, doing so manually with f-strings is about 20% faster than strftime. If you use the format-spec mini-language instead, it's 40% faster.
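
        A minimal benchmark sketch of that comparison (my reading of the claim; the ~20%/40% figures are the commenter's, and results will vary by machine and Python version):

            import timeit
            from datetime import datetime

            dt = datetime.now()

            # Baseline: strftime
            t1 = timeit.timeit(lambda: dt.strftime("%Y-%m-%d %H:%M:%S"),
                               number=1_000_000)

            # Manual extraction with an f-string and the format-spec
            # mini-language, bypassing the strftime machinery entirely
            t2 = timeit.timeit(
                lambda: (f"{dt.year}-{dt.month:02d}-{dt.day:02d} "
                         f"{dt.hour:02d}:{dt.minute:02d}:{dt.second:02d}"),
                number=1_000_000,
            )

            print(f"strftime: {t1:.2f}s  f-string: {t2:.2f}s")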

        • BiteCode_dev a year ago

          That's literally the opposite of focusing.

          • sgarland a year ago

            What you're describing is myopia. Focusing purely on performance at the expense of everything else would probably result in highly unreadable code, yes. Being aware of and caring about performance, and choosing to prioritize it when reasonable, is not the same thing.

        • halfcat a year ago

          > You can focus on multiple things

          You can, but each added focus degrades the quality of the others.

          The key principle is thinking with a mindset of cost.

          Even if it's low-hanging fruit, there's a world of difference between assuming we can work it in, and saying, "this is what it will cost, and this is what we won't work on as a result".

          And similarly, saying ”that’s impossible” is not in the same universe as “the cost is extremely high and it’s not important enough to us to pay that cost”.

          • sgarland a year ago

            IME from the perspective of an SRE / DBRE, performance is nearly always given up (if it’s ever even considered) in favor of making things easier, and this tends to have large consequences later on.

            People seem to have taken the quote, “premature optimization is the root of all evil” to mean “don’t focus on performance until you absolutely have to,” but when you push them to prove that they don’t have to, they often haven’t even profiled their code! Cloud compute – and especially K8s – has made it such that it’s expected that you simply over-provision and scale as necessary to get the throughput you need. I don’t personally see running through a profiler (ideally as part of CI) as being particularly difficult or onerous.
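
            For instance, a stdlib-only profiling pass is a few lines and could run in CI (a sketch; `hot_path` and any regression threshold are placeholders):

                import cProfile
                import pstats

                def hot_path():
                    return sum(i * i for i in range(100_000))

                # Profile the call and dump stats to a file
                cProfile.run("hot_path()", "profile.out")

                # Print the five most expensive entries by cumulative time
                stats = pstats.Stats("profile.out")
                stats.sort_stats("cumulative").print_stats(5)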

          • coldtea a year ago

            >You can, but each added focus degrades the quality of the others.

            Only if the same people do both.

            Or if devs that would rather focus on perf find it equally motivating and fun to work on something else instead.

jay-barronville a year ago

Evan, just a tip…

When linking to code on GitHub in an article like this, for posterity, it’s a good idea to link based on a specific commit instead of a branch.

It might be a good idea to change your link to the `Py_CompileStringObject()` function in CPython’s `Python/pythonrun.c` [0] to a commit-based link [1].

[0]: https://github.com/python/cpython/blob/main/Python/pythonrun...

[1]: https://github.com/python/cpython/blob/967a4f1d180d4cd669d5c...

formerly_proven a year ago

> In the case of ASTs, one could imagine a kind of ‘query language’ API for Python that operates on data that is owned by the extension - analogous to SQL over the highly specialized binary representations that a database would use. This would let the extension own the memory, and would lazily create Python objects when necessary.

You could make the API transparently lazy, i.e. ast.parse creates only one AstNode object or whatever and when you ask that object for e.g. its children those are created lazily from the underlying C struct. To preserve identity (which I assume is something users of ast are more likely to rely on than usual) you'd have to add some extra book-keeping to make it not generate new objects for each access, but memoize them.
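
A rough Python-level sketch of that memoized lazy wrapper (the real implementation would live in the C extension; here a plain dict stands in for the underlying C struct, and all names are hypothetical):

    class LazyNode:
        def __init__(self, raw):
            self._raw = raw    # stand-in for a pointer to the C AST node
            self._memo = {}    # field name -> already-created wrapper

        def __getattr__(self, name):
            # Build the child wrapper on first access, then memoize it so
            # that `node.body is node.body` holds, preserving identity.
            if name not in self._memo:
                value = self._raw[name]
                self._memo[name] = (LazyNode(value)
                                    if isinstance(value, dict) else value)
            return self._memo[name]

    tree = LazyNode({"body": {"value": 42}})
    assert tree.body is tree.body    # same wrapper on repeated access
    assert tree.body.value == 42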

  • 0x63_ProblemsOP a year ago

    This seems like it could be implemented without much trouble for consumers, but I actually think for the common case of full AST traversal you'd still want to avoid building objects for the nodes while traversing.

    That is to say, ast.NodeVisitor living in Python is part of the problem for use cases like mine. I need the extension to own the traversal as well so that I can avoid building objects except for the result set (which is typically a very small subset). That was what led me to imagine a query-like interface instead, so that Python can give concise traversal instructions.

pdhborges a year ago

How much time did PyAST_mod2obj actually take? The rewrite is 16x faster, but the article doesn't make it clear whether most of the speedup came from switching to the ruff parser (especially because it puts the GC overhead at only 35% of the runtime).

  • 0x63_ProblemsOP a year ago

    That's a good question. I don't have an easy way to rerun the comparison since this actually happened a while ago, but I do remember some relevant numbers.

    In the first iteration of the Rust extension, I actually used the parser from RustPython. Although I can't find it at the moment, I think the RustPython parser was actually benchmarked as worse than the built-in ast.parse (when both returned Python objects).

    Even with this parser, IIRC the relevant code was around 8-11x faster when it avoided the Python objects. Apart from just the 35% spent in GC itself, the memory pressure appeared to be causing CPU cache thrashing (`perf` showed much poorer cache hit rates). I'll admit though that I am far from a Valgrind expert, and there may have been another consequence of the allocations that I missed!

lalaland1125 a year ago

Optimizing Python extensions is becoming increasingly important as Python is used in more and more compute intensive environments.

The key for optimizing a Python extension is to minimize the number of times you have to interact with Python.

A couple of other tips in addition to what this article provides:

1. Object pooling is quite useful as it can significantly cut down on the number of allocations.

2. Be very careful about tools like pybind11 that make it easier to write extensions for Python. They come with a significant amount of overhead. For critical hotspots, always use the raw Python C extension API.

3. Use numpy arrays whenever possible when returning large lists to Python. A Python list of Python integers is amazingly inefficient compared to a numpy array of integers (see the sketch below).
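
A rough sense of the gap in tip 3 (numbers are approximate and depend on the CPython build; sys.getsizeof only counts the list's pointer array, so the per-int object overhead is summed by hand):

    import sys

    import numpy as np

    n = 1_000_000
    py_list = list(range(n))
    np_arr = np.arange(n, dtype=np.int64)

    # List: an 8-byte pointer plus a ~28-byte int object per element
    list_bytes = sys.getsizeof(py_list) + sum(sys.getsizeof(i) for i in py_list)
    print(f"list:  ~{list_bytes / 1e6:.0f} MB")     # roughly 35-40 MB

    # numpy: 8 bytes of raw int64 data per element
    print(f"numpy: ~{np_arr.nbytes / 1e6:.0f} MB")  # 8 MB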

  • RhysU a year ago

    > Optimizing Python extensions is becoming increasingly important as Python is used in more and more compute intensive environments.

    I have always loved how the trick to making Python better eventually comes down to not writing Python.

    • Spivak a year ago

      Is this not expected? You're never going to have any language with the kind of dynamism that Python/Ruby/JS have while also having performant number crunching, simply because Python has to do significantly more work for the same line of code. You could envision a world where a JIT could recognize cases where all that dynamism falls away and generate code similar to what you would get in the equivalent C, but that's just a fancy way to not write Python again. You would be writing in an informal, not-super-well-defined restricted subset of Python that JITs cleanly.

      The problem from time immemorial is language complexity vs. the ability to hint to your compiler that it can make much stronger assumptions about your code than it could naturally, which is where we got __slots__. And there are lots of easy wins you could get in Python that eliminate a significant amount of dynamism: you could tell your compiler that you'll never shadow names, for example, that this list is fixed-size, or that you don't want number promotion. But they all require adding stuff to the language to indicate that.
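
      __slots__ as a concrete example of such a hint (a quick sketch):

          class Point:
              # Declares a fixed attribute set: no per-instance __dict__,
              # so instances are smaller and attribute access is faster.
              __slots__ = ("x", "y")

              def __init__(self, x, y):
                  self.x = x
                  self.y = y

          p = Point(1.0, 2.0)
          # p.z = 3.0  # AttributeError: dynamism traded away for speed/space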

      When you're looking from the bottom up, you end up making different trade-offs: while you get nice primitives that generate very tight assembly, when you need that dynamism you end up with an object model that exists in the abstract, one you orchestrate and poke at but don't really see, like GObject. Ironically, HN's love-to-hate language C++ gives you both simultaneously, but at the cost of a very complicated language.

      • rurban a year ago

        > You're never going to have any language with the kind of dynamism that Python/Ruby/JS have while also having performant number crunching simply because Python has to do more significantly more work for the same line of code.

        Wrong. With strong types you do have the ability to tell the compiler that most of the dynamic checks and hooks can be omitted, and values can stay unboxed. Python, Ruby, and Perl have chosen to ignore types so far. And JavaScript and PHP at least did dynamic optimizations with their type hints.

        • halfcat a year ago

          >> You're never going to have any language with the kind of dynamism that Python/Ruby/JS have while also having performant number crunching

          > Wrong

          Can you give examples of languages that achieve both? Or is this all just on a spectrum? Like if we say C# is performant, it’s still the case that (eventually) the way to make it faster is “stop writing C#”.

          • neonsunset a year ago

            > Like if we say C# is performant, it’s still the case that (eventually) the way to make it faster is “stop writing C#”.

            That stopped being true a few years ago, and in some cases was never true.

            The way to a faster C# codebase is writing faster C#. .NET CoreLib, including all performance-sensitive paths like memmove, is written in pure C#, and the VM itself (by that I mean all reflection bits, TypeLoader, etc.), excluding GC, is also pure C# when you are using NativeAOT.

            The optimization techniques to achieve this include, but are not limited to: monomorphized struct generics; stack buffers, arenas, and memory pooling; the SIMD API (which has the same performance characteristics as intrinsics in C/C++); avoiding allocation by using structs, or making object lifetimes GC-friendly when allocations cannot be avoided; making otherwise-safe code friendly to bounds-check elision; and reducing indirection. Many of these are exactly what you would do in languages like C, C++, or Rust.

            As a result, the use of FFI to call into C/C++/Rust/ObjC/Swift(upcoming native Swift ABI support)/etc. today is predominantly relegated to accessing necessary OS APIs and libraries.

            https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

            Of course most of these optimizations are at odds with "dynamism" and yield the speed-up by making the dispatch static and inlining-friendly, and giving the compiler more information it can prove and make use of. Not to mention C# (together with Swift) sits the closest to the metal among otherwise high-level languages by virtue of what .NET is and what its ILC and JIT compile IL to.

            • halfcat a year ago

              So, no examples? :)

              That’s my gut feeling, that writing fast C# basically ends up looking like C++/Rust/etc where the niceties of C# are no longer present.

              Which is the same as rewriting Python in Cython. It's way faster, and it doesn't look like Python, or have the benefits of Python, and now just looks like weird C.

              • neonsunset a year ago

                Most common optimization path is simply removing the junk and making the code drastically simpler, using appropriate CoreLib APIs and similar. A C family language that looks like other C family languages, very surprising.

                Is there something specific you would like to see an example for?

  • stabbles a year ago

    Why not go all the way and limit the times you have to interact with Python to zero ;)

    • mkoubaa a year ago

      Because if your users want Python, you have to convince them they don't want it. If you fail, you'll have made optimized code that nobody uses.

      Another strategy is to actually serve your users

    • coldtea a year ago

      Because then you have the warts of the new language, and the pain of migrating code, which could be hundreds of thousands or millions of lines, to worry about...

      • rbanffy a year ago

        People have been trying to move away from COBOL since I was a kid and I’m not even young.

  • 0x63_ProblemsOP a year ago

    Totally agree, keeping the interface with the extension as thin as possible makes sense.

    I hadn't considered object pooling in this context. It might be more involved since each node has distinct data, but for my use case it might still be a performance win.

    Have you ever used PyO3 for Rust bindings? I haven't measured the overhead, but I have been assuming that it's worth the tradeoff vs. rolling my own.

    (I'm the author)

    • hansvm a year ago

      My last workplace used pyo3 for a project. It was slower than vanilla Python, and you picked up all the normal compiled-language problems like slow builds and cross-compilation toolchains.

      I wouldn't take away from that observation that PyO3 is slow (it was just a poor fit; FFI for minuscule amounts of work), but the fact that the binding costs were higher than vanilla Python computations suggests that the overhead is (was?) meaningful. I don't know how it compares to a hand-written extension.

      • lifthrasiir a year ago

        That's pretty surprising, because I have also used PyO3 extensively in my daily job and it was quite performant. Your comment does seem to suggest that you were also using the `numpy` crate or similar in addition to `pyo3`, whose performance might be more variable than I would expect from PyO3 alone. (I personally minimized the use of `numpy` for that reason, but didn't have a particular performance issue with it anyway.)

      • ameliaquining a year ago

        I'd definitely be curious to know what specific runtime operations PyO3 inserts that you don't have to do with the C API. Naively it doesn't seem like there should be any, since Rust has zero-overhead C FFI.

        • hansvm a year ago

          Sorry, "FFI" was a shorthand for "mixing and matching two languages' GC expectations, memory layouts, ..." and all the overhead associated with merging something opinionated, like Rust, with something dynamic, like Python. You almost certainly _can_ reduce that overhead further, but unless somebody has gone out of their way to do so, the default expectation for cross-language calls like that should be that somebody opted for maintainable code that has actually shipped instead of shaving off every last theoretical bit of overhead.

          It's been a few years, so I really can't tell you exactly what the problem was (other than the general observation that you should try to do nontrivial amounts of work in your python extensions rather than trivial amounts), but PyO3 agrees with the general sentiment [0] [1], or at least did at roughly the same time I was working there.

          [0] https://github.com/PyO3/pyo3/issues/679

          [1] https://github.com/PyO3/pyo3/issues/1470

  • tomjakubowski a year ago

    re: 3, Python has a native numeric array type https://docs.python.org/3/library/array.html

    • raymondh a year ago

      We should probably get rid of that. It is old (predating numpy) and has limited functionality. In almost every case I can think of, you would be better off with numpy.

      • jph00 a year ago

        If you don't want to add a dep on numpy (which is a big complex module) then it's nice to have a stdlib option. So there are certainly at least some cases where you're not better off with numpy.

        • coldtea a year ago

          Even better if Python adds a mainline pandas/numpy-like C-based table structure, with a very small subset of the pandas/numpy functionality, that's also convertible to pandas/numpy/etc.

          • ameliaquining a year ago

            What kind of subset would you have in mind? I think that any kind of numeric operation would be off the table, for the reasons given in PEP 465:

            "Providing a quality implementation of matrix multiplication is highly non-trivial. Naive nested loop implementations are very slow and shipping such an implementation in CPython would just create a trap for users. But the alternative – providing a modern, competitive matrix multiply – would require that CPython link to a BLAS library, which brings a set of new complications. In particular, several popular BLAS libraries (including the one that ships by default on OS X) currently break the use of multiprocessing."

            • im3w1l a year ago

              Numpy is incredibly widespread and basically a standard, so I would propose: it should have exactly the same layout in memory as a numpy array. It's fine if it has a very limited set of operations out of the box; maybe something like get, set, and elementwise arithmetic. Work with the numpy project to make it possible to cast it to a numpy array, to help the common case where someone is fine with a dep on numpy and wants the full set of numpy operations.

            • coldtea a year ago

              The best they can do without BLAS. Doesn't have to be as fast as numpy, just faster and more memory efficient than doing it in native Python, without the dependency.

          • KptMarchewa a year ago

            It should just be native support for Apache Arrow.

          • dumah a year ago

            A performant table data structure with powerful and convenient syntax for interaction is one great feature Q has that Python lacks.

    • f33d5173 a year ago

      array is for serializing to/from binary data. It isn't useful for returning from a library, because the only way a Python programmer can consume it is by converting it into Python objects, at which point there is no efficiency benefit. numpy has a library of functions for operating directly on the referenced data, as well as a cottage industry of libraries that will take a numpy array as input. Obviously someone might end up casting it to a list anyway, but there is at least the opportunity for them not to do that.
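
      The serialization role in a nutshell (a small stdlib-only sketch):

          from array import array

          a = array("d", [1.0, 2.5, 3.25])
          raw = a.tobytes()    # compact native representation
          b = array("d")
          b.frombytes(raw)     # round-trips without boxing each value
          assert a == b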

      • sgarland a year ago

        multiprocessing.shared_memory.ShareableList can be useful in some circumstances, even if you don’t intend on sharing it across processes. It allows direct access to the data, elements are mutable (to an extent; you can’t increase the size of either the overall list or its elements once built), and since the underlying shm is exposed, you can get memoryviews for zero-copy.

        The downside is they’re on the more esoteric side of Python, so people may not be as familiar with them as other structures.
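
        A sketch of that in-process use (unlink releases the segment when done):

            from multiprocessing import shared_memory

            sl = shared_memory.ShareableList([1, 2, 3, "abc"])
            sl[0] = 42            # elements are mutable in place
            view = sl.shm.buf     # zero-copy memoryview of the backing shm
            print(sl[0], sl[3], len(view))
            sl.shm.close()
            sl.shm.unlink()       # release the shared memory segment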

      • dumah a year ago

        Cython implements a C API for accessing the underlying data structure.

        Arrays implement the buffer interface so they can be used efficiently with tools like numpy.

  • sgarland a year ago

    Re: 3, you can also use Python's array.array in some circumstances. If you have homogeneous types, don't need multiple dimensions, and don't need Fortran memory layout, they're a good choice IMO, and one that doesn't require pulling in 3rd-party packages (see the sketch below).
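
    A minimal array.array sketch ("q" is the signed 64-bit integer type code):

        from array import array

        a = array("q", range(1_000_000))
        print(a.itemsize)     # 8 bytes per element, stored contiguously
        print(a[0], a[-1])    # list-like indexing
        mv = memoryview(a)    # zero-copy view via the buffer protocol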

    • jmkr a year ago

      I have thought Python's arrays have been overlooked for years. So much so that people call a list an array.

  • alkh a year ago

    Re: 2, is there any good repo using the raw Python C API that can be used as a reference by someone who is not too proficient in C? I took a look at numpy but it seems too complicated for me.

    • maxmorlocke a year ago

      I've found rapidfuzz to be a good, digestible C/Python integration. It's especially nice as the algorithms implemented in C frequently have good pseudocode or other-language representations, so you can cross-reference really well. The docs are in reasonable shape as well:

      https://github.com/rapidfuzz/RapidFuzz

    • jay-barronville a year ago

      You mind elaborating on what exactly you’re looking for? Maybe I can help point you in the right direction, but right now, it’s not clear given your description.

  • lifthrasiir a year ago

    > 2. Be very careful about tools like pybind11 that make it easier to write extensions for Python. They come with a significant amount of overhead. For critical hotspots, always use the raw Python C extension API.

    Agreed on the broader point (and I don't like pybind11 that much anyway), but the raw Python C extension API is often hard to use correctly. I would suggest at least having a rough idea of how higher-level libraries like pybind11 translate to the C API, so that you can recognize performance pitfalls in advance.

    > 3. Use numpy arrays whenever possible when returning large lists to Python. A python list of python integers is amazingly inefficient compared to a numpy array of integers.

    Or use the `array` module in the standard library if that matters. numpy is not a small library and has quite a large impact on initial startup time. (Libraries like PyTorch are even worse, to be fair.)

  • dumah a year ago

    3. The memoryview interface is often a good solution.

    • lalaland1125 a year ago

      I think you mean the buffer interface?

      I think the buffer interface is too complex to provide directly to users. I think an API that returns numpy arrays is simpler and easier to understand.

      • sgarland a year ago

        Memoryviews abstract the buffer interface as an object, so perhaps that’s what was meant.

        I disagree with the inclination to jump to numpy. I much prefer minimizing 3rd party libraries, especially if the performance is equivalent or nearly so.
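
        A quick sketch of that zero-copy behavior:

            from array import array

            data = array("q", range(10))
            view = memoryview(data)   # buffer protocol, no copy
            half = view[5:]           # slicing yields another view, still no copy
            half[0] = 99              # writes through to the original array
            print(data[5])            # -> 99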

    • jmkr a year ago

      I discovered memoryview when looking at the JACK Python library. It is pretty neat. But also one of those things I wouldn't have known to look for.

truth_seeker a year ago

Preloading the jemalloc binary can help here. Of course it won't be as efficient as using numpy 2.x, especially for dealing with larger datasets.

jemalloc also gave good results in Node.js and Ruby projects I did.

hackan a year ago

Nice article!

But I couldn't help but notice that when `_PyCompile_AstOptimize` fails (returns <0), `arena` is never freed. I think this is a bug :thinking:.
