REST vs GraphQL vs gRPC

danhacks.com

168 points by gibbonsd1 5 years ago · 167 comments

stickfigure 5 years ago

No mention of what I see as the biggest con of GraphQL: You must build a lot of rate limiting and security logic, or your APIs are easily abused.

A naive GraphQL implementation makes it trivial to fetch giant swaths of your database. That's fine with a 100% trusted client, but if you're using this for a public API or web clients, you can easily be DOSed. Even accidentally!

Shopify's API is a pretty good example of the lengths you have to go to in order to harden a GraphQL API. It's ugly:

https://shopify.dev/concepts/about-apis/rate-limits

You have to limit not just the number of calls, but the quantity of data fetched. And pagination is gross, with `edges` and `node`. This is straight from their examples:

    {
      shop {
        id
        name
      }
      products(first: 3) {
        edges {
          node {
            handle
          }
        }
      }
    }
Once you fetch a few layers of edges and nodes, queries become practically unreadable.

The more rigid fetching behavior of REST & gRPC provides more predictable performance and security characteristics.

  • tshaddox 5 years ago

    I'm not convinced that it's any more difficult to implement robust rate limiting or other performance guarantees in GraphQL than in a REST API with comparable functionality. As soon as you start implementing field/resource customizability in a REST API you have roughly the same problems guaranteeing performance. JSON:API [0], for example, specifies how to request fields on related objects with a syntax like `/articles/1?include=author,comments.author`, which is comparable to the extensibility you get by default in GraphQL. Different libraries which help you implement JSON:API or GraphQL may differ in how you opt in or out of this sort of extensibility, and perhaps in practice GraphQL libraries tend to require opting out (and GraphQL consumers might tend to expect a lot of this extensibility), but at the end of the day there's little difference in principle between two APIs with comparable functionality. And, as others have noted, the popular GraphQL implementations I've seen all make it fairly straightforward to limit things like query depth or the total number of entities requested.
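
    For illustration (my sketch, not from the thread): capping query depth with the `graphql-depth-limit` package looks roughly like this, assuming an Apollo server and a placeholder schema.

        import { ApolloServer, gql } from 'apollo-server';
        import depthLimit from 'graphql-depth-limit';

        // Placeholder schema; the interesting part is the validation rule.
        const typeDefs = gql`type Query { hello: String }`;
        const resolvers = { Query: { hello: () => 'world' } };

        const server = new ApolloServer({
          typeDefs,
          resolvers,
          // Reject queries nested more than 5 levels deep before executing them.
          validationRules: [depthLimit(5)],
        });

        server.listen().then(({ url }) => console.log(`ready at ${url}`));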

    Of course, if the argument is simply that it tends to be more challenging to manage performance of GraphQL APIs simply because GraphQL APIs tend to offer a lot more functionality than REST APIs, then of course I agree, but that's not a particularly useful observation. Indeed having no API at all would further reduce the challenge!

    [0] https://jsonapi.org/format/#fetching-includes

    • ucarion 5 years ago

      > Of course, if the argument is simply that it tends to be more challenging to manage performance of GraphQL APIs simply because GraphQL APIs tend to offer a lot more functionality than REST APIs, then of course I agree, but that's not a particularly useful observation. Indeed having no API at all would further reduce the challenge!

      On their own, such arguments are indeed not useful. But if you can further point out that GraphQL has more functionality than is required, then you can basically make a YAGNI-style argument against GraphQL.

    • jkoudys 5 years ago

      Often the rates I'll end up limiting in REST aren't even bottlenecks at all in GraphQL, like when I want to grab a relationship that hasn't been implemented with its own resource endpoint.

      E.g. to get all the comments in every article written by one author, I might hit `/author/john smith` to return all their articles, then run an `/articles/{}?include=comments` for each one. That'll run a separate query server-side for each one, which can get very heavy if I'm doing thousands of queries. In GQL this is trivial as `{ author(name: "john smith") { articles { comments `, and because it's one request the server-side fetch can be run _way_ more efficiently. We have dataloaders written for the SQL that'll collapse every big query like this into (often) an `IN (?, ?...)` query, or sometimes subselects. The same concept works on any SQL or NoSQL approach. So yeah, it might be "a lot" of data were it RESTful, but we're not going to bottleneck on a single indexed query and a ~10MB payload.
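
      A rough sketch of that batching pattern with the `dataloader` package (the `db` handle and table names are placeholders):

          import DataLoader from 'dataloader';

          // Placeholder DB handle; swap in your actual client.
          declare const db: { query(sql: string, params: unknown[]): Promise<any[]> };

          // Collapses the N per-article comment fetches issued during one tick of
          // resolution into a single `IN (...)` query.
          const commentsByArticle = new DataLoader(async (articleIds: readonly number[]) => {
            const rows = await db.query(
              'SELECT * FROM comments WHERE article_id IN (?)',
              [articleIds],
            );
            // DataLoader expects results in the same order as the input keys.
            return articleIds.map((id) => rows.filter((r) => r.article_id === id));
          });

          // Each resolver just asks for one article's comments:
          const resolvers = {
            Article: { comments: (article: { id: number }) => commentsByArticle.load(article.id) },
          };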

      The real advantage I see for REST in that scenario is that it can _feel_ faster to the end-user, since you'll get some data back earlier. Running a small query on thousands of requests is slower overall, but you can display the first little one's result to the user faster than a big GQL payload.

  • hn_throwaway_99 5 years ago

    This is what I see as a huge misconception of GraphQL, one that unfortunately proliferates due to lots of simple "Just expose your whole DB as a GraphQL API!" type tools.

    It's quite simple (easier, in my opinion, than in REST) to build a targeted set of GraphQL endpoints that fit end-user needs while being secure and performant. Also, as the other user posted, "edges" and "nodes" have nothing to do with the core GraphQL spec itself.

    • slow_donkey 5 years ago

      I don't disagree with you, but GraphQL just lends itself well to bad decisions, and many times when I've poked at GraphQL endpoints they share these issues (missing auth past the first layer, exposing the schema by accident, no depth/cost limit). I think a combination of new technology without standardized best practices and startups being resource constrained proliferates poor security with GraphQL.

      Of course, the same could happen for standard REST as well, but I think the foot guns are more limited.

      • hn_throwaway_99 5 years ago

        I think I would agree. I'm a huge GraphQL fanboy, but one of the things I've posted many many times that I hate about GraphQL is that it has "QL" in the name, so a lot of people think it is somehow analogous to SQL or some other generic query language.

        So you get these very generic GraphQL APIs that map closely to the DB, when the exact opposite should be the case: the APIs should map as closely as possible to the front-end use cases, and data should be presented so that the front ends need little, if any, customized view display logic. It even says so at the beginning of the spec:

        > Product‐centric: GraphQL is unapologetically driven by the requirements of views and the front‐end engineers that write them. GraphQL starts with their way of thinking and requirements and builds the language and runtime necessary to enable that.

      • Aeolun 5 years ago

        > no depth/cost limit

        Or you can do like us: there's no depth at all, since our types do not have any possible subqueries.

  • andrew_ 5 years ago

    Rate limiting and security are trivial these days, with an abundance of directive libs available, ready to use out of the box, and every major third party auth provider boasting ease of use with common GraphQL patterns. I'd argue what you see as the biggest con is actually a strength now.

    > And pagination is gross, with `edges` and `node`

    This just reads like an allergic reaction to "the new" and towards change. Edges and nodes are elegant, less error prone than limits and skips, and, most importantly, datasource independent.

    • verdverm 5 years ago

      I'd be interested to see a graphql library that makes security trivial. Could you add some links?

      In my experience, securing nested assets based on owner/editor/reader/anon was rather difficult and required inspecting the schema stack. I was using the Apollo stack.

      This was in the context of apps in projects in accounts (a common pattern for SaaS, where one email can have permissions in multiple orgs or projects).

      • 5Qn8mNbc2FNCiVV 5 years ago

        Hasura makes that pretty easy as can be seen here: https://github.com/firatoezcan/hasura-cms

        This is also easy to do with self-written servers; maybe take a look at the metadata folder to get the gist of what Hasura would be doing behind the scenes (running a query and then checking the claim against the condition for the given field that permission is requested for).

        (Just a repo I started one evening, it doesn't do much but the concept of projects with owners and collaborators should work)

        • verdverm 5 years ago

          That's an end-user experience on a platform. A library is something I can import into my own code to implement auth, without having to adopt a given stack. I wrote one; it's not simple (https://www.npmjs.com/package/graphql-autharoo)

          Looking at the SQL and metadata, it does not look all that simple for such a simple case. The complex part is behind all that, written by Hasura.

          Imagine what that would look like with Org, Group, and User permissions all existing on a single object, or even resource type, and how a single email (user) could have permissions at all of these levels on any object. Then consider that GraphQL allows nested query objects, so am I listing the objects as a top-level query, or is the list from a 1-to-many relation nested under another query, where the query parsing system now batches these subqueries and presents them to the resolver in a big list? You have to understand the context of the incoming queries in each resolver, and then make auth decisions about it.

          Think about using Hasura vs writing the auth systems in Hasura. Or how complex things get when you want to implement auth for multi-tenant SaaS.

  • QuinnWilton 5 years ago

    I'm a huge fan of GraphQL, and work full-time on a security scanner for GraphQL APIs, but denial of service is a huge (but easily mitigated) risk of GraphQL APIs, simply because of the lack of education and resources surrounding the topic.

    One fairly interesting denial of service vector that I've found on nearly every API I've scanned has to do with error messages. Many APIs don't bound the number of error messages that are returned, so you can query for a huge number of fields that aren't in the schema, and then each of those will translate to an error message in the response.
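
    A simple mitigation (my sketch, not something from a particular library) is to cap the errors array before it goes out the door:

        import { graphql, GraphQLSchema } from 'graphql';

        // Execute a query, but never return more than maxErrors error entries,
        // so a query full of unknown fields can't inflate the response.
        async function executeCapped(schema: GraphQLSchema, source: string, maxErrors = 10) {
          const result = await graphql({ schema, source });
          if (result.errors && result.errors.length > maxErrors) {
            return { ...result, errors: result.errors.slice(0, maxErrors) };
          }
          return result;
        }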

    If the server supports fragments, you can also sometimes construct a recursive payload that expands, like the billion laughs attack, into a massive response that can take down the server, or eat up their egress costs.

    • Aeolun 5 years ago

      I kind of feel that the server itself should protect against attacks like that. Of course it isn't inherent in the specification, but I don't think it's something that an implementer should have to think about either (beyond "have I enabled DoS mitigation?", anyway).

  • stevebmark 5 years ago

    edges and node come from Relay, not from the core GraphQL spec. They're just one way to do pagination.

    I like edges and node, it gives you a place to encode information about the relationship between the two objects, if you want to. And if all your endpoints standardize on this Relay pagination, you get standard cursor/offset fetching, along with the option to add relationship metadata in the future if you want, without breaking your schema or clients.

    edit: the page you linked to has similar rate limiting behavior for both REST and GraphQL lol

    • andrewingram 5 years ago

      Technically the spec is part of GraphQL itself now, but an optional recommendation, not something you’re obliged to do.

      That said, like you I am a fan.

      It’s a pretty defensible pattern, more here for those interested: https://andrewingram.net/posts/demystifying-graphql-connecti...

      The overall verbosity of a GraphQL query tends to not be a huge issue either, because in practice individual components are only concerning themselves with small subsets of it (i.e. fragments). I'm a firm believer that people will have a better time with GraphQL if they adopt Relay's bottom-up, fragment-oriented pattern, rather than a top-down, query-oriented pattern, which you often see in codebases by people who've never heard of Relay.
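
      As a tiny sketch of what bottom-up means (names are made up): each component declares a fragment for exactly the fields it renders, and the page query is composed from those fragments.

          // The avatar component owns the fields it needs...
          const avatarFragment = /* GraphQL */ `
            fragment Avatar_user on User {
              name
              avatarUrl
            }
          `;

          // ...and the page-level query is just composed from component fragments.
          const profilePageQuery = /* GraphQL */ `
            query ProfilePage($id: ID!) {
              user(id: $id) {
                ...Avatar_user
              }
            }
            ${avatarFragment}
          `;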

      • Aeolun 5 years ago

        Also by people that have heard of relay but already have an existing codebase. It’s not something that’s very simple to adopt out of hand.

    • wyattjoh 5 years ago

      Seconded. I feel that the pagination style that Relay offers is typically better than 99% of the custom pagination implementations out there. There's no reason why the cursor impl can't just do limit/skip under the hood (if that's what you want to do), but it frees you up to change that to cursor-based _easily_.

          {
            products(first: 3) {
              pageInfo {
                hasNextPage
                endCursor
              }
              edges {
                cursor
                node {
                  handle
                }
              }
            }
          }
mumblemumble 5 years ago

I feel like this is rather shallow, and, by focusing so heavily on just the transport protocol, misses a lot of more important details.

For starters, REST and "JSON over HTTP/1.1" are not necessarily synonyms. This description conflates them, when really there are three distinct ways to use JSON over HTTP/1.1: actual REST (including HATEOAS), the "OpenAPI style" (still resource-oriented, but without HATEOAS), and JSON-RPC. For most users, the relative merits of these three styles are going to be a much bigger deal than the question of whether or not to use JSON as the serialization format.

Similarly, for gRPC, you have a few questions: Do you want to do a resource-oriented API that can easily be reverse proxied into a JSON-over-HTTP1.1 API? If so then you gain the ability to access it from Web clients, but may have to limit your use of some of gRPC's most distinctive features. How much do you want to lean toward resource-orientation compared to RPC? gRPC has good support for mixing and matching the two, and making an intentional decision about how you do or do not want to mix them is again probably a much bigger deal in the long run than the simple fact of using protocol buffers over HTTP/2.

GraphQL gives clients a lot of flexibility, and that's great, but it also puts a lot of responsibility on the server. With GraphQL, clients get a lot of latitude to construct queries however they want, and the people constructing them won't have any knowledge about which kinds of querying patterns the server is prepared to handle efficiently. So there's a certain art to making sure you don't accidentally DOS attack yourself. Guarding against this with the other two API styles can be a bit more straightforward, because you can simply not create endpoints that translate into inefficient queries.

  • stevebmark 5 years ago

    This post is just a brief summary of each protocol, I don't understand how it made it to the front page.

    • applecrazy 5 years ago

      I think it’s because of the robust discussion in these comments.

      • lazysheepherd 5 years ago

        I almost never read articles in HN.

        If I see title "something something GraphQL" in HN, I do not think "ooh someone wrote an article about GraphQL".

        I rather see it as "folks in HN discussing about GraphQL today!"

        I can google and get tens or even hundreds of articles about GraphQL, but I won't get the "points and counterpoints" discussion that I get here.

        • applecrazy 5 years ago

          Wholeheartedly agree. The articles are often of average quality with a few gems here and there.

          What I derive value from (once you tune out the pedantry) is the discussion by people knowledgeable on the topic. As a computer science student who tries to keep up with best practices, corporate adoption of technologies, and general trends in the industry, this is something I can't really get anywhere else.

          I credit HN for giving me a balanced insight on things and indirect feedback on what's important to learn in order to make myself marketable when I graduate.

      • ZephyrBlu 5 years ago

        Story of HN. Come for the content, stay for the comments.

  • mkoubaa 5 years ago

    A counterpoint here is that if someone was seriously considering these three, they're likely choosing between HATEOAS, resource-oriented gRPC, and GraphQL.

TeeWEE 5 years ago

I think for people who haven't tried gRPC yet, this is for me the winning feature:

"Generates client and server code in your programming language. This can save engineering time from writing service calling code"

It saves around 30% of development time on features with lots of API calls. And it holds up better as the API grows, since there is a strict contract.

Human readability is over-rated for APIs.

  • adamkl 5 years ago

    Code generation and strong contracts are good (and C#/Java developers have been doing this forever with SOAP/XML), but they do place some serious restrictions on flexibility.

    I’m not sure how gRPC handles this, but adding an additional field to a SOAP interface meant regenerating code across all the clients else they would fail at runtime while deserializing payloads.

    A plus for GraphQL is that because each client request is a custom query, new fields added to the server have no impact on existing client code. Facebook famously said in one of their earlier talks on GraphQL that they didn’t version their API, and have never had a breaking change.

    Really, I don’t gRPC and GraphQL should even be compared since they support radically different use cases.

    • mattnewton 5 years ago

      > I’m not sure how gRPC handles this, but adding an additional field to a SOAP interface meant regenerating code across all the clients else they would fail at runtime while deserializing payloads.

      This is basically the reason every field in proto2 will be marked optional and proto3 is "optional" by default. IIRC the spec will just ignore these fields if they aren't set, or if they are present but it doesn't know how to use them (but won't delete them if the message needs to be forwarded). Of course this only works if you don't reserialize it. Edit: this is not true, see below.

      • jeffbee 5 years ago

        Even if a process re-serializes a message, unknown fields will be preserved, if using the official protobuf libraries (proto2, or proto3 version 3.5 and later). Only in 3.0 did they drop unknown fields, which was complete lunacy. That decision was reverted in proto 3.5.

        Some of the APIs (C++ for example) provide methods to access unknown fields, in case they were only mostly unknown.

      • malkia 5 years ago

        I do remember some debate around required/optional in proto2, and how the consensus was to not have required in proto3 - parties had good arguments on both sides, but I think ultimately the backward-forward compatibility argument won. With required, you can never retire a field, and older service/client would not work correctly. I haven't used proto2 since then, been using proto3 - but was not aware of what another poster here mentioned about proto 3.5 - so now have to read...

    • malkia 5 years ago

      Learned this at Google: for each service method, introduce individual Request and Response messages, even if you could reuse one. This way you can extend without breaking.

      Also never reuse old/deleted field (number), and be very careful if you change the type (better not).

      • fatnoah 5 years ago

        These were basic principles I put in place at a previous company that had a number of API-only customers. Request/Response messages. Point releases could only contain additive changes. Any changes to existing behaviors, removal of fields, or type changes required incrementing the API version, with support for current and previous major versions.

    • AlexITC 5 years ago

      Protobuf was designed to keep backwards/forwards compatibility; you can easily add fields to your messages without breaking old clients at the network layer (unless you intend to break them at the application layer).

  • mumblemumble 5 years ago

    Also, gRPC's human readability challenges are overblown. A naked protocol buffer datagram, divorced from all context, is difficult to interpret. But gRPC isn't just protocol buffers.

    There's a simple, universal two-step process to render it more-or-less a non-issue. First, enable the introspection API on all servers. This is another spot where I find gRPC beats Swagger at its own game: any server can be made self-describing, for free, with one line of code. Second, use tools that understand gRPC's introspection API. For example, grpcurl is a command-line tool that automatically translates protobuf messages to/from JSON.
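
    For example, with reflection enabled on the server, exploring and calling it with grpcurl looks something like this (the service and method names are made up):

        # List services via server reflection, then call a method with a JSON body.
        grpcurl -plaintext localhost:50051 list
        grpcurl -plaintext -d '{"id": 3}' localhost:50051 example.UserService/GetUser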

  • dtech 5 years ago

    There are solutions like that for GraphQL [1] and REST too. For REST, OpenAPI/Swagger has a very large ecosystem, but it does depend on the API author making one.

    [1] https://graphql-code-generator.com/

    • atraac 5 years ago

      In C# one can have decent documentation based on just a few attributes, so you don't even have to write that much to have that schema available to your clients.

    • TeeWEE 5 years ago

      We are trying openapi-generator, and the experience is that the generated code for server stubs is either nonexistent, tied to certain frameworks, or just not working.

      So we had to write our own code generator templates. It was a pain compared to GRPC.

      But yes, in theory it can be done.

    • mumblemumble 5 years ago

      I can't speak to GraphQL, but, when I was doing a detailed comparison, I found that OpenAPI's code generation facilities weren't even in the same league as gRPC's.

      Buckle up, this is going to be a long comparison. Also, disclaimer, this was OpenAPI 2 I was looking at. I don't know what has changed in 3.

      gRPC has its own dedicated specification language, and it's vastly superior to OpenAPI's JSON-based format. Not being JSON means you can include comments, which opens up the possibility of using .proto files as a one stop shop for completely documenting your protocols. And the gRPC code generators I played with even automatically incorporate these comments into the docstrings/javadoc/whatever of the generated client libraries, so people developing against gRPC APIs can even get the documentation in their editor's pop-up help.

      Speaking of editor support, using its own language means that the IDEs I tried (vscode and intellij) offer much better editor assistance for .proto files than they do for OpenAPI specs. And it's just more concise and readable.

      Finally, gRPC's proto files can import each other, which offers a great code reuse story. And it has a standard library for handling a lot of common patterns, which helps to eliminate a lot of uncertainty around things like, "How do we represent dates?"

      Next is the code generation itself. gRPC spits out a library that you import, and it's a single library for both clients and servers. The server-side stuff is typically an abstract class that you extend with your own implementation. I personally like that, since it helps keep a cleaner separation between "my code" and "generated code", and also makes life easier if you want more than one service publishing some of the same APIs.

      OpenAPI doesn't really have one way of doing it, because all of the code generators are community contributions (with varying levels of documentation), but the two most common ways to do it are to generate only client code, or to generate an entire server stub application whose implementation you fill in. Both are terrible, IMO. The first option means you need to manually ensure that the client and server remain 100% in sync, which eliminates one of the major potential benefits of using code generation in the first place. And the second makes API evolution more awkward, and renders the code generation all but useless for adding an API to an existing application.

      gRPC's core team rules the code generation with an iron fist, which is both a pro and a con. On the upside, it means that things tend to behave very consistently, and all the official code generators meet a very high standard for maturity. On the downside, they tend to all be coded to the gRPC core team's standards, which are very enterprisey, and designed to try and be as consistent as possible across target languages. Meaning they tend to feel awkward and unidiomatic for every single target platform. For example, they place a high premium on minimizing breaking changes, which means that the Java edition, which has been around for a long time, continues to have a very Java 7 feel to it. That seems to rub most people (including me) the wrong way nowadays.

      OpenAPI is much more, well, open. Some target platforms even have multiple code generators representing different people's vision for what the code should look like. Levels of completeness and documentation vary wildly. So, you're more likely to find a library that meets your own aesthetic standards, possibly at the cost of it being less-than-perfect from a technical perspective.

      For my part, I came away with the impression that, at least if you're already using Envoy, anyway, gRPC + gRPC-web may be the least-fuss and most maintainable way to get a REST-y (no HATEOAS) API, too.

      • onei 5 years ago

        I don't entirely disagree that gRPC tooling is nicer and more complete in some areas, but there's some misconceptions here.

        You can specify OpenAPI v2/3 as YAML and get comments that way. However, the idea is that you add descriptions to the properties, models, etc. It's almost self-documenting in v3, and looks about the same in v2, although I've used v2 less so can't be sure.

        I can't speak to editor support, but OpenAPI has an online editor that has linting and error checking. Not sure if that's available as a plugin somewhere, but it's likely a little more awkward if it is, by virtue of being a YAML or JSON file rather than a bespoke file extension.

        I've seen v3 share at least models across multiple files - it can definitely be done. String formats for dates and uuids are available, but likely not as rich as the protobuf ecosystem as you mention.

        And I wholeheartedly agree that the lack of consistent implementation is a problem in OpenAPI. I tried to use v3 for Rust recently and gave up due to its many rough edges for my use case. It's a shame; the client generation would have been a nice feature to get for free.

        • mumblemumble 5 years ago

          All fair points.

          I had forgotten about the YAML format; I probably skipped over it because I am not a fan of YAML. As far as the description features go, they're something, but the lack of ability to stick extra information just anywhere in the file for the JSON format severely hampers the story for high-level documentation. I'm not a fan of the "the whole is just the sum of the parts" approach to documentation; not every important thing to know can sensibly be attached to just one property or resource.

  • omginternets 5 years ago

    The flip side (IMHO, at least) is that simple build-chains are underrated.

    As much as I love a well-designed IDL (I'm a Cap'n Proto user, myself), the first thing I reach for is ReST. In most cases, it's sufficient, and in all cases it keeps builds simple and dependencies few.

    • sagichmal 5 years ago

      > The flip side (IMHO, at least), is that simple build-chains are underrated.

      Wow, yes! Yes!

      IDLs represent substantial complexity, and complexity always needs to be justified. Plain, "optimistically-schema'd" ;) REST, or even just JSON-over-HTTP, should be your default choice.

  • codegladiator 5 years ago

    Have you seen OpenAPI? Generating client and server code in your programming language for REST/GraphQL/gRPC is not new.

  • gravypod 5 years ago

    In my personal experience the other huge benefit is you now have a source of truth for documentation. Since your API definition is code you can leave comments in it and code review it all while having a language that is very approachable for anyone who is familiar with the C family of languages.

    For a newcomer having `message Thing {}` and `service ThingChanger {}` is very approachable because it maps directly into the beginner's native programming language. If they started out with Python/C++/Java you can say "It's like a class that lives on another computer" and they instantly get it.

  • andrew_ 5 years ago

    You don't get much more human readable than GraphQL syntax. There are now oodles of code generation tools available for GraphQL schemas which takes most of the heavy lifting out of the equation.

    • tmpz22 5 years ago

      Do the code generators create efficient relational queries? I don't know about everyone else but my production data is highly relational.

      • jrockway 5 years ago

        I use gqlgen for a Go backend and the combination of our schema design and the default codegen results in it opening one database connection for every row that ends up in the output. You can manually adjust it to not do that, but it doesn't seem like a good design to me.

        (It also does each of these fetches in a separate goroutine, leading me to believe that it's really designed to be a proxy in front of a bunch of microservices, not an API server for a relational database. Even in that case, I'm not convinced it's an entirely perfect design -- for a large resultset you're probably going to pop the circuit breaker on that backend when you make 1000 requests to it in parallel all at the exact same instant. Because our "microservice" was Postgres, we very quickly determined where to set our max database connection limit, because Postgres is particularly picky about not letting you open 1000 connections to it.

        I had the pleasure of reading the generated code and noticing the goroutine-per-slice-element design when our code wrapped the entire request in a database transaction. Transactions aren't thread-safe, so multiple goroutines would be consuming the bytes out of the network buffer in parallel, and this resulted in very obvious breakages as the protocol failed to be decoded. Fun stuff! I'll point out that if you are a Go library, you should not assume the code you're calling is thread-safe... but they did the opposite. Random un-asked-for parallelism is why I will always prefer dumb RPCs to a query language on top of a query language. Sometimes you really want to be explicit rather than implicit, even if being explicit is kind of boring.)

      • atom_arranger 5 years ago

        Depends on the implementation. I've been using PostGraphile which says "PostGraphile compiles a query tree of any depth into a single SQL statement, resulting in extremely efficient execution".

        https://www.graphile.org/postgraphile/

  • emodendroket 5 years ago

    So when are SOAP and WSDL coming back into vogue?

  • sa46 5 years ago

    I adore gRPC, but figuring out how to use it from browser JavaScript is painful. The official grpc-web [1] client requires Envoy on the server, which I don't want. The improbable-eng grpc-web [2] implementation has a native Go proxy you can integrate into a server, but seems riddled with caveats and feels a bit immature overall.

    Does grpc-web work well? Is there a way to skip the proxy layer and use protobufs directly if you use websockets?

    [1]: https://github.com/grpc/grpc-web [2]: https://github.com/improbable-eng/grpc-web

  • nwienert 5 years ago

    On the GraphQL side you can use gqless[0] (or the improved fork I helped sponsor, here[1]). It's by far the best DX I've had for any data fetching library: fully typed calls, no strings at all, no duplication of code or weird importing, no compiler, and it resolves the entire tree and creates a single fetch call.

    [0] https://github.com/gqless/gqless

    [1] https://github.com/PabloSzx/new_gqless

    • robin21 5 years ago

      Pretty cool! Graphql-code-generator is great, but this goes a whole lot further. I no longer have to even write queries.

      I guess the big advantage is that when you write a manual query you can still pull down more data than you need by accident. Whereas this approach only pulls down what you need.

  • karmakaze 5 years ago

Thrift supports many serializations, both binary and human-readable, that you can choose at runtime, say for debugging.

I never did use the feature, having gotten tired of using Thrift for other reasons (e.g. poor Java/Scala interoperability).

    • morelisp 5 years ago

      Wait, thrift interoperates poorly with Java? We are stuck using it in go because another (Java-heavy) team exposes their data via it, and the experience has been awful even for a simple service. So what is it good in?

      • karmakaze 5 years ago

        This was a long way back. Specifically, the Scrooge (or Finagle) generator for Scala and Java supported different versions of Thrift libraries. Maybe the problem was related to having a project with both Java and Scala and over the wire interop would have been fine--don't remember the exact details.

  • mattwad 5 years ago

    Not for me. I tried this with Javascript a couple years ago, and it was painful mapping to and from gRPC types everywhere. There's no "undefined", for example, if you have a union type. You have to come up with an "EMPTY" value. On top of that, we had all kinds of weird networking issues that we just weren't ready to tackle the same way we could with good ol' HTTP.

meowzero 5 years ago

Another con of GraphQL (and probably gRPC) is caching. You basically get it for free with REST.

REST can also return protobufs, with content type application/x-protobuf. Heck, it can return any Content-Type. It doesn't have to be confined to JSON.

gRPC needs to support the language you're using. It does support a lot of the popular languages now. But most languages have some sort of HTTP server or client to handle REST.

  • tlrobinson 5 years ago

    I think the benefits of HTTP caching are often exaggerated, especially for APIs and single page applications.

    Often I’ll want much more control over caching and cache invalidation than what you can do with HTTP caching.

    I’d be interested to see an analysis of major websites usage of HTTP caching on non-static (i.e. not images, JS, etc) resources. I bet it’s pretty minimal.

    • jayd16 5 years ago

      You can put a RESTful response on S3 (or even stub a whole service) but AFAIK you can't do that for gRPC or GraphQL.

  • dathinab 5 years ago

    GraphQL queries are often sent over an HTTP endpoint. This allows you to take advantage of caching where necessary (though enough apps don't cache HTTP either).

    For example, you may want to make sure large resource blobs (e.g. pictures) are cached.

    In this case you can decide not to put them in the GraphQL response, but instead include a REST URI for them and have an endpoint like `/blobs/<some-kind-of-uid>` or `/blobs/pictures/<id>` or similar.

    The same endpoints can also be used for pushing new resources (GraphQL creates an "empty" `/blobs/<id>` entry to which you can then push).

    While this entails an additional round-trip and is probably not the best fit for some cases, for others, like (not small) file up-/downloads, it is quite nice, as these are operations often explicitly triggered by a user, where the additional round-trip time doesn't matter at all. An additional benefit is that it can make things like pausing and resuming downloads easier.

  • ec109685 5 years ago

    We solve the cacheability part by supporting aliases for queries: we extended the GraphQL console to support saving a query under an alias.

    Also, with gzip, the size of json is not a big deal given redundant fields compress well.

    • pavel_lishin 5 years ago

      Cacheability isn't just about the transfer, it's also about decreasing server load in a lot of applications. An expensive query might return a few bytes of JSON, but may be something you want to avoid hitting repeatedly.

      • ec109685 5 years ago

        Sorry, I was addressing the two points from the comment above. I agree cacheability and transfer size are two separate aspects.

  • phaedryx 5 years ago

    I believe that GraphQL handles this with "persisted queries." Basically, you ask the server to "run standard query 'queryname'." Since it is a standard, predefined query you can cache the results.

    Apollo support link: https://www.apollographql.com/docs/apollo-server/performance...
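
    A bare-bones sketch of the idea (not Apollo's actual implementation): the server only executes queries it already knows by name, which makes responses cacheable like any other GET.

        import express from 'express';
        import { graphql } from 'graphql';
        import { schema } from './schema'; // placeholder

        // The only queries this server will ever execute:
        const persisted: Record<string, string> = {
          frontPage: '{ posts(first: 10) { title url } }',
        };

        const app = express();
        app.get('/graphql/:name', async (req, res) => {
          const source = persisted[req.params.name];
          if (!source) return res.status(404).json({ error: 'unknown query' });
          res.set('Cache-Control', 'public, max-age=60'); // now trivially cacheable
          res.json(await graphql({ schema, source }));
        });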

    • thinkingkong 5 years ago

      The browser supports built-in caching without requiring a specific library; additionally, the infrastructure of the web provides this as well.

    • Mandatum 5 years ago

      That sounds like RPC with extra steps...

  • cogman10 5 years ago

    What popular languages aren't supported by GRPC?

  • rhacker 5 years ago

    Well, it can actually be more advanced with GraphQL. GQL clients have the ability to invalidate caches if they know the same ID has been deleted or edited, and can in some cases even avoid a new fetch.

    That being said, some of those advanced use cases may be off by default in Apollo.

zedr 5 years ago

> Easily discoverable data, e.g. user ID 3 would be at /users/3. All of the CRUD (Create Read Update Delete) operations below can be applied to this path

Strictly speaking, that's not what REST considers "easily discoverable data". That endpoint would need to have been discovered by navigating the resource tree, starting from the root resource.

Roy Fielding (author of the original REST dissertation): "A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). (...) Instead, allow servers to instruct clients on how to construct appropriate URIs, such as is done in HTML forms and URI templates, by defining those instructions within media types and link relations. [Failure here implies that clients are assuming a resource structure due to out-of band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC’s functional coupling].

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). "[1]

1. https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...

  • arethuza 5 years ago

    You are quite correct, but by this stage the original definition of REST, including HATEOAS, has pretty much been abandoned by most people.

    Edit: Pretty much every REST API I see these days explains how to construct your URLs to do different things, rather than treating all URLs as opaque. Mind you, having tried to create a 'pure' HATEOAS REST API, I think I prefer the contemporary approach!

    • zedr 5 years ago

      I agree with your preference. I too lean towards a pragmatic approach to REST, which I've seen referred to as "RESTful", as in the popular book "RESTful Web APIs" by Richardson.

    • mkoubaa 5 years ago

      I don't understand why the original dissertation is treated like gospel

      • arethuza 5 years ago

        I don't think it's completely unreasonable to look at a definition like REST and be dogmatic about certain aspects like HATEOAS, which are arguably absolutely central to the original concept.

        However, in retrospect, it might have been an idea to give what developed from Fielding's original work a clearly different name.

  • smadge 5 years ago

    The web has a standard for identifying resources: URIs. One nice thing about URIs is they can be URLs. A single integer id doesn't even identify a user, since if I give you 3 you have no idea if it is users/3 or posts/3, or users/3 on Twitter or users/3 on Google, or the number of coffees I have drunk today.

hankchinaski 5 years ago

I don't know about you, but in my experience, unless you have Google-scale microservices and infrastructure, gRPC (with protocol buffers) is just tedious:

- need an extra step when doing protoc compilation of your models

- cannot easily inspect and debug your messages across your infrastructure without a proper protobuf decoder/encoder

If you only have Go microservices talking via RPC, there is GOB encoding, which is a slimmed-down version of protocol buffers; it's self-describing, CPU-efficient, and natively supported by the Go standard library, and therefore probably a better option, although not as space-efficient. If you talk with other non-Go services then a JSON or XML transport encoding will do the job too (JSON-RPC).

The GraphQL one is great as what is commonly known as a 'backend for frontend', but inside the backend. It makes designing an easy-to-use (and supposedly more efficient) API easier for the FE, but much less so for the backend, which takes on increased implementation complexity and maintenance.

Good old REST is admittedly not as flexible as RPC or GraphQL, but does the job for simpler and smaller APIs, albeit, anecdotally, I see it being used less and less.

  • jayd16 5 years ago

    > unless you have Google's size microservices and infrastructure

    The protobuf stuff can start to pay off as early as when you have two or more languages in the project.

    • treis 5 years ago

      Don't you get the same benefit by writing a Swagger spec?

      • jayd16 5 years ago

        Possibly. I don't usually find RESTful API gen to be quite as seamless. Value judgements aside, yes, you _could_ generate from Swagger, but with gRPC it's built in and consistent.

  • gravypod 5 years ago

    > unless you have Google's size microservices and infrastructure gRPC (with protocol buffers) is just tedious

    Having used gRPC in very small teams (<5 engineers touching backend stuff) I had a very different experience from yours.

    > need an extra step when doing protoc compilation of your models

    For us this was hidden by our build systems. In one company we used Gradle and then later Bazel. In both you can set it up so you plop a .proto into a folder and everything "works" with autocompletes and all.

    > cannot easily inspect and debug your messages across your infrastructure without a proper protobuf decoder/encoder

    There's a lot of tooling that has recently been developed that makes all of this much easier.

    - https://github.com/fullstorydev/grpcurl

    - https://github.com/uw-labs/bloomrpc

    - https://kreya.app/

    You can also use grpc-web as a reverse proxy to expose normal REST-like endpoints for debugging as well.

    > If you talk with other non-Go services then a JSON or XML transport encoding will do the job too (JSON rpc).

      The benefit of protos is they're a source of truth across multiple languages/projects, with well known ways to maintain backwards compatibility.

    You can even build tooling to automate very complex things:

    - Breaking Change Detector: https://docs.buf.build/breaking-usage/

    - Linting (Style Checking): https://docs.buf.build/lint-usage/

    There's many more things that can be done but you get the idea.

      On top of this you get something else that is way better: a relatively fast server that's configured and interfaced with the same way in every programming language. This has been a massive time sink in the past, where you have to investigate nginx/*cgi, sonic/flask/waitress/wsgi, rails, and hundreds of other things for every single language stack, each with their own gotchas. gRPC's ecosystem doesn't really have that pain point.

  • lokar 5 years ago

    I wish gRPC had the same ability as Stubby (what Google uses internally): logging RPCs (calls and replies, full msg or just the hdr) to a binary file, plus a nice set of tools to decode and analyze them.

throwaway4good 5 years ago

Am I the only one who simply does remote procedure calling over HTTP(S) via JSON? Not REST as in resource modelling, but simply sending a request serialized as a JSON object and getting a response back as a JSON object.

  • sneak 5 years ago

    I've done JSON-RPC at scale before and the one downside to it is that you have to write a custom caching proxy for readonly calls that understands your API.

    With REST you can just use a normal HTTP caching proxy for all the GETs under certain paths, off the shelf.

    Using a hybrid (JSON-RPC for writes and authenticated reads, REST for global reads) would have saved me a lot of time spent building and maintaining a JSON-RPC caching layer.

    There is benefit to a GET/POST split, and JSON-RPC forces even simple unauthenticated reads into a POST.

    The other issue with JSON-RPC is, well, JSON. It's not the worst, but it's also not the best. JSON has no great canonicalization, so if you want to do signed requests or responses you're going to end up putting a string of JSON (inner) into a key's value at some point. Doing that in protobuf seems less gross to me.

    • throwaway4good 5 years ago

      Personally I prefer to have explicit control over the caching mechanism rather than leaving it to network elements or browser caching.

      That is, explicitly cache the information in your JavaScript frontend, or have your backend explicitly cache. That way it is easy to understand, and you can also control under what circumstances a cache is invalidated.

      • sneak 5 years ago

        > Personally I prefer to have explicit control over the caching mechanism rather than leaving it to network elements or browser caching.

        I'm not talking about browser caching, I'm talking about the reverse proxy that fronts your ("backend") service to the internet. High traffic global/unauthenticated reads, especially those that never change, should get cached by the frontend (of the "backend", not the SPA) reverse proxy and not tie up appservers. (In our case, app servers were extremely fat, slow, and ridiculously slow to scale up.)

        • throwaway4good 5 years ago

          I am sure you have good reasons for your concrete design, but in the general case: why not simply build the caching into your backend services rather than having a proxy do it based on the specifics of the HTTP protocol? It would be simpler and far more powerful.

          • sneak 5 years ago

            Because the backend app servers were extremely fat (memory and disk intensive) and every request they served that could have been served from an upstream cache was a request that they weren't serving that actually required all of their resources to serve.

            Caching upstream on (vastly cheaper) instances permitted a huge cost savings for the same requests/sec.

          • mkranjec 5 years ago

            Proxy caching / offloading caching to an external service makes sense when you have a LOT of users scattered around the globe: let Akamai/Cloudflare take care of edge node caching and maintenance. In the end it saves you a lot of engineering time and infrastructure costs, not to mention user experience. YMMV, but it pays off in our current setup.

          • tatersolid 5 years ago

            Because if you use HTTP caching, you can use a CDN with 100s of global locations. Which is quite a bit more powerful than any custom solution.

    • hashkb 5 years ago

      If I'm going to neuter HTTP like that, I at least do RPC over websockets for realtime feel. And I still usually run it through the whole Rails controller stack so I don't drive myself insane.

  • danpalmer 5 years ago

    This is fine at a small scale. When you're one dev or a small team you can understand the whole system and you'll benefit from this simplicity.

    When you're many devs, many APIs, many resources, it really pays to have a consistent, well-defined way to do this. GraphQL is very close to what you've described, with some more defined standards. gRPC is close as well, except the serialisation format isn't JSON; it's something more optimised.

    As a team grows these sorts of standards emerge from the first-pass versions anyway. These just happen to be pre-defined ones that work well with other things that you could choose to use if you wanted to.

    • sagichmal 5 years ago

      It's fine at medium and large scale, too, as long as the service doesn't change its API very much, and/or it doesn't have too many consumers. It only breaks down when managing change becomes too difficult. IMO way way too many people opt in to the complexity of IDL-based protocols without good reason.

  • throwaway4good 5 years ago

    Just to be clear: I am not suggesting JSON-RPC, as there is no envelope and the name of the invoked procedure is in the HTTP request line.

    For example:

       POST /api/listPosts HTTP/1.1
       { userId: "banana", fromDate: 2342342342, toDate: 2343242 }
    
    Response:

       HTTP/1.1 200 OK
       [ { id: 32432, title: "Happy banana", userId: "banana" }, ... ]
    
    Or in case of an error:

       HTTP/1.1 500 Internal Server Error
       { type: "class name of exception raised server side", message: "Out of bananas" }
    
    The types can be specified with TypeScript if needed.
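
    For example (hypothetical types inferred from the payloads above):

        interface ListPostsRequest {
          userId: string;
          fromDate: number; // epoch millis, per the example above
          toDate: number;
        }

        interface Post {
          id: number;
          title: string;
          userId: string;
        }

        type ListPostsResponse = Post[];
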
    • throwaway4good 5 years ago

      The point is to get free of the resource-modelling paradigm, the meaning of the various HTTP methods, and potential caching. And also to avoid the silly overhead of JSON-RPC.

  • yeswecatan 5 years ago

    My team has started doing this for workflow orchestration. When our workflows (implemented in Cadence) need to perform some complex business logic (grab data from 3 sources and munge it, for example), we handle that with an RPC-style endpoint.

  • corytheboyd 5 years ago

    No, I have seen many such approaches. It draws undue criticism when the actual REST API starts to suffer due to people getting lazy, at which point they lump the RPC style calls into the blame.

  • fragile_frogs 5 years ago

    I am doing the same.

    GET /api/module/method?param1&param2

    or

    POST /api/module/method Body: Json{ param1, param2 }

  • robin21 5 years ago

    I do JSON-RPC over websockets.

bluejekyll 5 years ago

Additional GraphQL con: it requires some thought and planning to ensure data is cacheable in CDNs and other reverse proxies. This is generally simpler in REST because the APIs tend to be more single-use. In GraphQL you essentially have to predefine all the queries to achieve the same cacheability of responses, and then you identify those predefined queries, essentially, over REST.

sillyquiet 5 years ago

In my opinion, it makes very little sense to compare GraphQL to REST from a client perspective - if you are only going to be hitting a single API endpoint, use REST (or gRPC I guess). The overhead of GraphQL doesn't make it worth using at that scale.

The strength and real benefit of GraphQL comes in when you have to assemble a UI from multiple data sources and reconcile that into a negotiable schema between the server and the client.

  • fastball 5 years ago

    Though there are also solutions like Hasura where GraphQL makes sense at approximately any scale because it allows you to create an API from nothing in about 10 minutes.

hashkb 5 years ago

Too concise. This wouldn't be helpful for making a choice, or will even mislead. Right off the top: it's not necessary to write REST endpoints for each use case. Many REST APIs have filtering and joining, just like GQL.

Edit: claiming GQL solves over/underfetching without mentioning that you're usually still responsible for implementing it in resolvers (and it can be complex) is borderline dishonest.

  • parhamn 5 years ago

    I'd call it completely lacking, not concise. E.g. it's already conflating transport and serialization. But HN loves a good serializer debate, so this will be an active thread.

sshb 5 years ago

I've recently stumbled upon WebRPC https://github.com/webrpc/webrpc

It solves gRPC's inability to work nicely with web browsers.

thdxr 5 years ago

Highly recommend taking a look at the JSONAPI spec - https://jsonapi.org/

It directly addresses the cons mentioned in the article while retaining all the pros

  • sagichmal 5 years ago

    Tried it. Definitely not ready yet, and the scope may be large enough that it won't ever get there. Also, many of its design choices are fundamentally in tension with statically typed languages.

    I think you can probably formalize JSON API schemas in a useful way, but JSON API ain't it.

ryandvm 5 years ago

I am totally expecting the next fad in web development to be just exposing a raw SQL interface to the front-end...

  • js8 5 years ago

    Simplify things? No way! The next fad will be SQL over GraphQL.

  • robin21 5 years ago

    Jokes aside, isn’t this ultimately what we are all looking for?

    We have added so many layers and translations between our frontend and database.

    Graphql brought us closer and it starts to run into some of the security concerns already.

    What if someone just made this direct sql interface safe/restricted?

sudowing 5 years ago

The `versus` nature of this question was the driving force behind a project I built last year.

I've been in multiple shops where REST was the standard -- and while folks had interest in exploring GraphQL or gRPC, we could not justify pivoting away from REST to the larger team. Repeatedly faced with this `either-or`, I set out to build a generic app that would auto provision all 3 (specifically for data-access).

I posted in verbose detail about that project a few months ago, so here I'll just provide a summary: The project auto provisions REST, GraphQL & gRPC services that support CRUD operations to tables, views and materialized views of several popular databases (postgres, postgis, mysql, sqlite). The services support full CRUD (with validation), geoquery (bbox, radius, custom wkt polygon), complex_resources (aggregate & sub queries), middleware (access the query before db execution), permissions (table/view level CRUD configs), field redaction (enable query support -- without publication), schema migrations, auto generated openapi3/swagger docs, auto generated proto file.

I hope this helps anyone in a spot where this `versus` conversation pops up.

original promotional piece: https://news.ycombinator.com/item?id=25600934

docker implementation: https://github.com/sudowing/service-engine-template

youtube playlist: https://www.youtube.com/playlist?list=PLxiODQNSQfKOVmNZ1ZPXb...

  • joekrill 5 years ago

    What conclusions did you come to?

    • sudowing 5 years ago

      I can't say I reached any conclusions -- I can only offer that the handful of people who have given feedback found it feature-rich and easy to use.

ledauphin 5 years ago

A big thing not called out here, which both gRPC and GraphQL natively do but REST does not, is schema definition for the payloads.

It's massively useful to know exactly what 'type' a received payload is, as well as to get built-in, zero-boilerplate feedback if the payload you construct is invalid.

  • sagichmal 5 years ago

    Sure, but that utility also carries massive costs: the infrastructure and tooling required to work with those schema definitions, especially as they change over time. It's certainly not the case that the benefits always, or even usually, outweigh those costs.

    • ledauphin 5 years ago

      I disagree that tooling is required to work with them in the GraphQL case - you still end up getting and sending JSON in the vast majority of cases.

      With gRPC you're absolutely correct. Then again, I wasn't trying to make an argument for or against any of these technologies per se - just pointing out that this is a major part of the value provided that wasn't really called out in the article.

sneak 5 years ago

One thing that people seem to gloss over when comparing these is that you also need to compare the serialization. gRPC using protobuf means you get actual typed data, whereas typing in JSON (used by the other two) is a mess (usually worked around by jamming anything ambiguous-in-javascript, like floats, dates, times, etc., into strings).

If you're using typed languages on either the client or the server, having a serialization system that preserves those types is a very nice thing indeed.

  • dboreham 5 years ago

    > usually worked around by jamming anything ambiguous like floats, dates, times, et c into strings

    because nobody has ever done this with protobuf...

    btw what is the protobuf standard type for "date"?

    • teeray 5 years ago

      That would be a Timestamp, found in the "well-known types" [0]. You can use your own encoding (milliseconds since epoch, RFC3339 string, etc), but using Timestamp gets you some auto-generated encoding / decoding functions in the supported languages.

      [0] https://developers.google.com/protocol-buffers/docs/referenc...
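
      In Go, for instance, the well-known type ships with conversion helpers (a small sketch using the official timestamppb package):

          package main

          import (
              "fmt"
              "time"

              "google.golang.org/protobuf/types/known/timestamppb"
          )

          func main() {
              ts := timestamppb.New(time.Now()) // time.Time -> *timestamppb.Timestamp
              fmt.Println(ts.AsTime())          // and back to a UTC time.Time
          }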

      • jayd16 5 years ago

        A timestamp is not quite the same thing as a calendar date.

      • jeffbee 5 years ago

        Timestamp is fundamentally flawed and should only be used in applications without any kind of performance/efficiency concerns, or by people who really need a range of ten thousand years. The problem is that Timestamp should not have used variable-length integers for its fields. The fractional part of a point in time is uniformly distributed, so most timestamps are going to have either 4 or 5 bytes in their representation, meaning int32 is worse than fixed32 on average. The whole part of an epoch offset in seconds is also pretty large; it takes 5 bytes to represent the present time. Since you also have two field tags, Timestamp requires 11-12 bytes to represent the current time, and it's expensive to decode because it takes the slowest possible path through the varint decoder.

        Reasonable people can use a fixed64 field representing nanoseconds since the unix epoch, which will be very fast, takes 9 bytes including the field tag, and yields a range of 584 years, which isn't bad at all.
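
        A rough sketch of that arithmetic, hand-encoding both layouts with Go's protowire package (field numbers are illustrative):

            package main

            import (
                "fmt"
                "time"

                "google.golang.org/protobuf/encoding/protowire"
            )

            func main() {
                now := time.Now()

                // google.protobuf.Timestamp layout: two varint fields
                // (seconds = 1, nanos = 2).
                var ts []byte
                ts = protowire.AppendTag(ts, 1, protowire.VarintType)
                ts = protowire.AppendVarint(ts, uint64(now.Unix()))
                ts = protowire.AppendTag(ts, 2, protowire.VarintType)
                ts = protowire.AppendVarint(ts, uint64(now.Nanosecond()))

                // Alternative: a single fixed64 field holding unix nanos.
                var fx []byte
                fx = protowire.AppendTag(fx, 1, protowire.Fixed64Type)
                fx = protowire.AppendFixed64(fx, uint64(now.UnixNano()))

                fmt.Printf("Timestamp-style: %d bytes, fixed64: %d bytes\n", len(ts), len(fx))
            }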

    • sneak 5 years ago

      int64 representing unix epoch millis (in UTC) is what I usually use. You need to jump through an additional hoop to store timezone or offset.

      You can, of course, do the thing that JS always requires and put an ISO 8601 date in a string. This has the benefit of storing the offset data in the same field/var.

      JavaScript needs int64s as strings, I believe, because JS numbers can only represent integers exactly up to 53 bits.

claytongulick 5 years ago

Surprised no one has mentioned what (to me) is the killer feature of REST: JSON Patch [1].

It completely solves the PUT verb mutation issue and even allows event-sourced-like distributed architectures with readability (if your patches are idempotent).

I married it to mongoose [2] and added an extension called json-patch-rules to whitelist/blacklist operations [3] and my API life became a very happy place.

I've replaced hundreds of endpoints with json-patch APIs and some trivial middleware.

When you couple that stack with fast-json-patch [4] on the client, you just do a simple deep compare between a modified object and a cloned one to construct a patch doc.

This is the easiest and most elegant stack I've ever worked with.
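
For anyone who hasn't seen the format, a JSON Patch document is just an array of operations against paths in the target object (the paths here are made up):

    [
      { "op": "replace", "path": "/title", "value": "New title" },
      { "op": "add", "path": "/tags/-", "value": "urgent" },
      { "op": "remove", "path": "/draft" }
    ]

fast-json-patch's compare(original, modified) emits documents of exactly this shape from two object snapshots.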

[1] http://jsonpatch.com/

[2] https://www.npmjs.com/package/mongoose-patcher

[3] https://github.com/claytongulick/json-patch-rules

[4] https://www.npmjs.com/package/fast-json-patch

mdavidn 5 years ago

There are a variety of tricks to solve the over/under-fetching problems of REST. My default approach is JSON:API, which defines standard query parameters for clients to ask the server to return just a subset of fields or to return complete copies of referenced resources.

https://jsonapi.org/
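
For example, the include and sparse-fieldset parameters combine in a single request:

    GET /articles/1?include=author&fields[articles]=title,body&fields[people]=name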

jrsj 5 years ago

Big missing con for GraphQL here: optimization. Doing the right thing in simple cases is easy; in more complex cases you're looking at custom operations based on query introspection, which is an even bigger pain in the ass than using REST in the first place -- unless all of your data is in one database, or you're using GraphQL as a middleman between your clients and other backend services and have a single read-through cache (like Facebook's) that lets you act as if everything were in a single database.

speedgoose 5 years ago

> JSON objects are large and field names are repetitive

I used to write protocol buffer stuff for this reason. But I realized after some time that compressed JSON is almost as good, if not better depending on the data, and a lot simpler and nicer to use. You can consider pre-sharing a dictionary if you always compress the same tiny messages. Of course JSON + compression is a bit more CPU-intensive than protocol buffers, but that has no noticeable impact in most use cases.
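
A minimal sketch of the pre-shared-dictionary idea, using Go's standard compress/flate (the message shape and dictionary are made up):

    package main

    import (
        "bytes"
        "compress/flate"
        "fmt"
    )

    func main() {
        // Both sides agree on this dictionary out of band; it is seeded
        // with the field names that appear in every message.
        dict := []byte(`"id":"name":"email":"created_at":`)
        msg := []byte(`{"id":42,"name":"Ada","email":"ada@example.com","created_at":"2021-03-15T12:00:00Z"}`)

        var buf bytes.Buffer
        w, err := flate.NewWriterDict(&buf, flate.BestCompression, dict)
        if err != nil {
            panic(err)
        }
        w.Write(msg)
        w.Close()

        fmt.Printf("raw: %d bytes, compressed: %d bytes\n", len(msg), buf.Len())
    }

The reading side uses flate.NewReaderDict with the same dictionary bytes.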

  • ericbarrett 5 years ago

    I think your instinct to reach for the straightforward solution is good. gRPC has advantages, but it also comes with complexity since you have to bring all the tooling along. And the CPU burden of (de-)serializing JSON is a very different story than when Protobufs were developed in 2001.

thinkingkong 5 years ago

All these patterns are helpful because they're consistent. The GraphQL and gRPC systems are “good” because they're schema-driven, which makes automated tooling easier. That being said, the problem doesn't really exist much at smaller scales. Saving bytes on the wire is such a weird optimization at early to mid stages that, unless latency is your actual problem, it ends up delivering very little business value.

bigmattystyles 5 years ago

Maybe it's because I'm in the .NET world, but why is there never any love for OData? If you have SQL in the back, it's great!

  • littlecranky67 5 years ago

    OData had its momentum, but for a couple of years at least there has been no maintained JS OData library that is both bug-free and fully usable in modern environments.

    • bigmattystyles 5 years ago

      I can't disagree there, and for all the work MS is putting into it right now in .NET Core, I don't understand how they can have this big a blind spot.

      • pietromenna 5 years ago

        I agree with you that there is support for OData v2 and v4, but they are not exactly mainstream out there. I like OData v4 and I try to use it when possible.

nym3r0s 5 years ago

Apache Thrift is another great way of server-server & client-server communication. Pretty similar to protobuf but feels a bit more mature to me.

https://thrift.apache.org/

lokar 5 years ago

This is really just a comparison of the basic wire format. Full RPC systems like gRPC are much more.

  • sbayeta 5 years ago

    Could you please elaborate briefly? Thanks!

    • swyx 5 years ago

      I think they are talking about how it is very standard for gRPC systems to generate server and client code that makes them very easy to use. See this comment for more: https://news.ycombinator.com/item?id=26466902
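
      e.g. from a service definition like this (names are made up), protoc emits a typed client stub and a server interface for each target language:

          syntax = "proto3";

          message GetUserRequest { int64 id = 1; }
          message User { int64 id = 1; string name = 2; }

          service UserService {
            rpc GetUser (GetUserRequest) returns (User);
          }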

      • lokar 5 years ago

        Also, many others:

        - Standard authentication and identity (client and server)
        - Authorization support
        - Overload protection and flow control
        - Tracing
        - Standard logging
        - Request prioritization
        - Load balancing
        - Health checks and server status

_ZeD_ 5 years ago

... vs soap?

  • bluejekyll 5 years ago

    vs corba?

    Between SOAP and gRPC, gRPC is the better choice at this point. It's simpler to implement and has decent multi-language support.

ctvo 5 years ago

Bananas vs. Apples vs. Oranges

mixxit 5 years ago

I wish gRPC was bidirectional

For now I will stick to SignalR

chrismorgan 5 years ago

Not loading for me (PR_CONNECT_RESET_ERROR on Firefox, ERR_CONNECTION_RESET on Chrome).

https://web.archive.org/web/20210315144620/https://www.danha...
