Envoy: Modern reverse proxy with HTTP/1.1, HTTP/2, and gRPC support
envoyproxy.github.io
You can also find more information about Envoy on the Istio home page. Istio basically packages up Envoy for Kubernetes and Docker Swarm, and helps with the management of it.
The Istio home page probably does a better job of explaining "why?" than the envoy home page; the envoy home page is better at explaining "what?".
Thank you, I was wondering about the reason Istio was created. It also sheds some light on why the CNCF brought on both Istio and Envoy. Still wondering why both Envoy and Linkerd are on their page as service mesh and not just one. They have one project for every other category.
"why both Envoy and Linkerd are on their page as service mesh"
Linkerd is written in Scala. There is a class of people who avoid the JVM, sometimes for good reasons. For one, Envoy is more resource-efficient. See https://github.com/envoyproxy/envoy/issues/99.
The docs are pretty light on the gRPC-Web feature. Is there a sample project anywhere that demos how to do this from the browser?
I am converting my protos to TypeScript with https://github.com/improbable-eng/grpc-web and until recently also used their Go server library. That meant my gRPC servers were listening on two ports, one for gRPC and one for gRPC-Web. I have since replaced the gRPC-Web part with Envoy. It works fine together with the gRPC-Web TypeScript client. The gist should give you an idea of how to set up a small example: https://gist.github.com/codesuki/f0514368a30b483058007f5fe38...
"Every distributed solution is doomed to eventually reproduce Erlang, badly."
It isn't very clear to me yet what this does. Is it like nginx for containers or is it more than that?
It's mainly meant for communication between your microservices.
Envoy is a self contained process that is designed to run alongside every application server. All of the Envoys form a transparent communication mesh in which each application sends and receives messages to and from localhost and is unaware of the network topology.
and
Although Envoy is primarily designed as a service to service communication system, there is benefit in using the same software at the edge (observability, management, identical service discovery and load balancing algorithms, etc.). Envoy includes enough features to make it usable as an edge proxy for most modern web application use cases.
both from https://envoyproxy.github.io/envoy/intro/what_is_envoy.html
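To make the "talks to localhost" model concrete, here is a minimal sketch in Python of what a service call looks like from the application's point of view. The local egress port and the Host-header routing convention are illustrative assumptions for the sketch, not something Envoy fixes for you; they depend on how the sidecar's listener is configured.

    import requests

    # Without a mesh, service A would call service B directly by hostname:
    #   requests.get("http://service-b.internal:8080/v1/items")
    #
    # With an Envoy sidecar, the app only ever talks to localhost and lets the
    # proxy handle discovery, load balancing, retries, and stats. The port
    # (9001) and the Host header below are assumptions for this sketch.
    resp = requests.get(
        "http://127.0.0.1:9001/v1/items",
        headers={"Host": "service-b"},  # tells the sidecar which upstream service to route to
        timeout=2.0,
    )
    resp.raise_for_status()
    print(resp.json())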
I am using Docker Swarm and I just use an overlay network to communicate between services, with nginx on the edge. I am trying to understand where this fits into the picture. Any thoughts?
Envoy/Istio are designed to move logic out of your apps and into the middleware.
For example, say your app A makes an HTTP request to app B and app B times out. Ordinarily app A has to build in retry logic (with exponential backoff to avoid dogpiling; sketched in the code below). Fine if you have a single app, but if you have a dozen microservices, that's a lot of duplicated code.
The solution is to let a proxy handle it for you. Instead of A -> B, you get A -> Envoy -> B. Envoy can do things like retrying, name resolution (something more flexible than DNS that can, say, be used to do A/B tests where traffic to B actually gets routed to another instance of B that runs code from a different branch), load balancing, request/bandwidth throttling, circuit-breaking (failing requests when an overload "trips" the breaker), logging, profiling (measuring timings and making them available to, say, Prometheus), tracing (inserting HTTP headers to generate a path so if a request goes A -> B -> C, then C has a complete "stack trace" that can be used for logging), and so on.
Istio adds a layer of transparency, at least on Kubernetes. Instead of configuring app A to use a proxy, app A just talks to app B as though there's no proxy at all. In reality, Istio has injected some local network magic in the container to route the traffic through the proxy.
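For a sense of what gets moved out of the apps, this is roughly the retry/backoff boilerplate each service would otherwise carry. It is a hedged Python sketch; the function name, URL, and parameters are made up for illustration. With Envoy in the path, the app makes one plain request and the retry policy lives in the proxy's configuration instead.

    import random
    import time

    import requests

    def call_with_retries(url, attempts=4, base_delay=0.2):
        # What app A has to do on its own when app B times out: retry with
        # exponential backoff plus jitter so a recovering B isn't dogpiled.
        for attempt in range(attempts):
            try:
                resp = requests.get(url, timeout=1.0)
                resp.raise_for_status()
                return resp
            except requests.RequestException:
                if attempt == attempts - 1:
                    raise
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

    # With an Envoy sidecar in between, the application side collapses to a
    # single request to the local proxy, and retries, timeouts, and circuit
    # breaking are declared once in the proxy config rather than re-implemented
    # per service. (Port and Host header are illustrative, as in the earlier sketch.)
    # requests.get("http://127.0.0.1:9001/some/path", headers={"Host": "app-b"})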
Thanks a lot, you have explained this very well. Being able to explain something this well and so understandably means you really understand what is going on too.
This was an awesome explanation, I instantly got it. This should be on their homepage.
Thank you.
You need it for "web-scale" (tm). You probably don't need it.
But unlike a lot of "web-scale" fads, you can drop this in when you actually do need it; you don't have to worry about it now.
So glance at the istio.io home page, and when/if you need any of that, come back and take a closer look.
Here is a nice presentation that explains it in a bit more detail: https://www.youtube.com/watch?v=RVZX4CwKhGE
Interesting tags on Github: https://github.com/envoyproxy/envoy
Seems like it's the only cats-over-dogs project so far. But not for long ;)
Looks pretty neat. Our GRPC services are currently bundled with a JSON-REST proxy as well as a GRPC-WEB proxy in the service itself and then placed behind an NGINX k8s Ingress. This could be a really nice way to extract those proxies as a sidecar.
Last I looked, envoy is fantastically hard to build. Have fun downloading, building and installing all these yourself: https://envoyproxy.github.io/envoy/install/requirements.html
FWIW, I packaged Envoy for NixOS in an evening: https://github.com/NixOS/nixpkgs/blob/dab3272f47f13c2a7442e3...
That nets you:

    $ nix-build -A envoy

... and then you're done. Of course, we have a CI server that builds our packages regularly, so it's likely you wouldn't even need to build from source (though you certainly could, if desired).
Lyft already took care of that: https://hub.docker.com/r/lyft/envoy/
Would one use this container as a base? I've tried learning Docker, but I often end up just referring back to LXD when I get confused.
If I wanted to run a Python app, which serves gRPC, what would be the easiest path?
I currently use nghttp2 for gRPC proxy/routing
It is better to run components in separate containers instead of one container that runs everything together. You'll likely have a single proxy but multiple copies of the application. Also, you don't need to deal with getting Envoy and the Python application built together, or even care about how the Envoy image is built, since you can use the provided one.
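To illustrate that split, here is a minimal sketch of the Python container's side, assuming stubs generated with grpcio-tools from the stock gRPC "helloworld" example; the port number and the idea of fronting it with the lyft/envoy image as a sidecar are assumptions for the sketch, not requirements.

    from concurrent import futures

    import grpc

    # Generated by grpcio-tools from the standard gRPC "helloworld" example
    # protos; substitute your own service definitions.
    import helloworld_pb2
    import helloworld_pb2_grpc

    class Greeter(helloworld_pb2_grpc.GreeterServicer):
        def SayHello(self, request, context):
            return helloworld_pb2.HelloReply(message="Hello, " + request.name)

    def serve():
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
        # The app speaks plain gRPC on a local port; an Envoy sidecar container
        # in the same pod/task fronts it and can handle gRPC-Web or JSON
        # translation, TLS, retries, and stats.
        server.add_insecure_port("[::]:50051")
        server.start()
        server.wait_for_termination()

    if __name__ == "__main__":
        serve()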
A big list of libraries one has to install is kind of intimidating, yeah. But them's the breaks.
What seems particularly offensive to me is that it declares that it specifically requires GCC 4.9. I'm really not OK with pinning your brand new, impressive, world-changing open source cloud technology to a compiler release series that's 3.5 years old. It looks either staid, uncaring, or otherwise dull to have slipped so far behind the times, and it's intimidating to think the codebase is complex enough C or C++ that the compiler version really matters a whole lot.
It says GCC 4.9+, the + meaning "at least".