Introducing Conjure, Palantir’s Toolchain for HTTP/JSON APIs
This looks interesting. How do you (if at all) tackle API versioning, or do you see that as orthogonal?
We require all teams to use semver and usually define the Conjure API in the same repo as the server that implements it (so they are versioned together). Most of the time teams want to preserve wire-format back compat (i.e. only add optional fields to requests and don’t remove things from responses). Devs are pretty good at this, but we actually run a conjure-backcompat checker as part of CI to catch any unintended breaks they might have missed.
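As a concrete sketch of the kind of wire-compatible change described above (the package and field names here are hypothetical, not from the Conjure docs), an addition to a Conjure definition might look like:

```yaml
types:
  definitions:
    default-package: com.example.users
    objects:
      CreateUserRequest:
        fields:
          name: string
          # Added after the initial release as optional<...>, so requests
          # from older clients that omit the field remain valid on the wire.
          displayName: optional<string>
```

Removing `name` or changing its type, by contrast, is exactly the kind of unintended break a CI-level back-compat check is there to catch.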
When it comes to intentional breaks, teams save up a bunch of carefully documented changes and then tag the next semver major version (e.g. 2.0.0). This is safe to do because all microservices and frontends we build actually contain embedded minimum and maximum version constraints on the backend services they require. Our deployment infrastructure knows about all these constraints and can then ensure that new major releases will only be rolled out in production once all their callers have explicitly declared support.
The deployment infra constraints mentioned in the parent post are generally managed by tooling we also publish[1]. The encoding ends up similar to Helm’s, but happens through this tooling just by virtue of taking a dependency on a published jar, npm package or conda package, and that removes a lot of programmer error, guesswork and maintenance. (I think we’d be open to also emitting Helm’s formats in our packaging tools; if that’s interesting to someone reading this, feel free to open an issue on our repo and reference my post.)
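To make the encoding concrete, here is a hypothetical sketch of what such a declared constraint could look like in a deployment manifest (the field names are illustrative, not necessarily the exact format the tooling emits):

```yaml
# Hypothetical manifest fragment: this service declares which versions of a
# backend it can talk to; deployment tooling checks every caller's declared
# range before rolling out a new major version of that backend.
product-dependencies:
  - product-group: com.example
    product-name: user-service
    minimum-version: 1.2.0
    maximum-version: 1.x.x
```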
I'm one of the authors of this project -- happy to answer questions!
At first glance it seems like a step backwards from already-established solutions like Thrift or Protobuf? What are some of the reasons you decided to go in this direction?
Two big motivations for us: (1) we had a large footprint of JAX-RS annotated Java services and a correspondingly large footprint of TypeScript frontends communicating with those services, both hand-maintained; (2) we wanted something that felt just as native and ergonomic in browsers as it did on the backend.
Migrating from that setup required first having a declarative API format to translate through, and Conjure was our answer (starting in 2016) to generate human-quality code that would drop-in-replace our hand-maintained, language-specific client and server definitions.
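For a flavor of what that declarative format looks like, here is a minimal Conjure-style service definition (a sketch based on the public docs; the names are illustrative):

```yaml
types:
  definitions:
    default-package: com.example.widgets
    objects:
      Widget:
        fields:
          id: string
          tags: list<string>
services:
  WidgetService:
    name: Widget Service
    package: com.example.widgets
    base-path: /widgets
    endpoints:
      getWidget:
        http: GET /{id}
        args:
          id: string
        returns: Widget
```

From a definition like this, the generators can emit both the Java server interface and the TypeScript client, replacing the hand-maintained pairs described above.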
That said, we're fans of gRPC/Protobuf and heavy Cassandra users (so also have a good breadth of experience with Thrift), and gave both serious looks before getting going, and again before deciding to open source our work today.
When we started on this tooling there really weren't great gRPC options for the browser, and the balance of our developer pain was around frontend/backend rather than backend/backend RPC. We also took a long look at Swagger/OpenAPI, but ultimately moved on because it focused more on the full coverage of any kind of HTTP/JSON API and as a result was too general to end up with consistent APIs across many services.
Over the last two years of development (and conversion of all our hand-maintained clients) we found that Conjure held a lot of value as an easy-to-adapt declarative definition format, and that it applied constraints strong enough to keep API development focused on semantics and behaviors rather than syntax or specification. We thought that had sufficient value to open it up to others.
Beyond that, we've got some work underway to use protobufs as the wire format, and enough flexibility built into the framework that we can use that or other non-JSON wire formats alongside JSON with the same client and server interfaces and code implementations.
Thanks for your answer. I think I initially misunderstood the intended purpose of this tool (probably because of how many times the word RPC appears in the text) and assumed it was used for backend-to-backend communication, for which HTTP/JSON seemed extremely sub-optimal given the existing open-source solutions.
While it does make a lot more sense in the context of FE<->BE, the second part of your answer, where you speak about consistency of APIs across many services, is still a little confusing, as it suggests BE<->BE communication again. Unless you have a huge, monolithic, internet/browser-facing service (which sounds rather undesirable), it's hard to imagine how a browser<->middleware API could get out of hand to the point where it requires a dedicated unification framework.
Rephrasing my question - even though Conjure is not a commercial product (respect to Palantir for contributing to open source), you must have had a target audience in mind for it: who is it for? What exactly is the problem that Conjure solves better than its existing alternatives?
Our systems look more wide than deep, and the breadth happens both in FE and in BE, so our diverse FE apps communicate with a relatively diverse set of BE services — we’d like those interactions to look unified no matter where you hit the system. In other words: we have a large number of microservices without a consolidating middleware.
Compared to other frameworks, Conjure more easily retrofits into HTTP/JSON service boundaries (no surprise based on its origins) and, IMO, provides great ergonomics for mixed FE/BE teams, especially where FEs are big and complex with lots of BE interaction.
In terms of audience, we’d hope this helps FE/BE teams with easy-to-use, ergonomic API defs, and think, as above, it’d especially help others who have existing surface area to convert to declarative APIs, even if only as a stepping stone.
On the BE/BE comms point: we use Conjure for all RPCs in our systems — while JSON is obviously not as compact as Protobuf or Thrift, we’ve found serialization and transmission are rarely bottlenecks, and that, instead, a unified format and common treatment for clients is a boon for operability and stability — and oftentimes also for aggregate system performance.