Experiments with h3 clients + Envoy

2 min read · Apr 13, 2021

I’ve been experimenting with HTTP/3 (h3) support in Envoy Proxy. I now have both upstream and downstream working.

Inspired by the paper referred to below, I set out to expand the number of h3 clients used for testing.

Thanks to @howardjohn at Google for helping with the initial h3 config, and thanks to @triplewy1 for helping me sort out the correct parameters to pass to proxygen.

Big thanks to the Envoy team, who have helped with configs, testing, ideas, etc. In particular @alyssawilk, @danzh2010, and Matt Klein.

Envoy

I’m building Envoy from source (main branch) on Linux with a limited set of extensions.

Clients

I’ve ended up building and testing 7 h3 clients:

  1. curl/curl (C, cloudflare/quiche + BoringSSL)
  2. hyperium/h3 (Rust, musl static)
  3. proxygen/hq (C++)
  4. mozilla/neqo (Rust, NSS)
  5. istio/quic-go (Go)
  6. cloudflare/quiche (Rust, musl static)
  7. quinn-rs/quinn (Rust, musl static)

Testing

hyperfine

I used the excellent hyperfine for testing. Please note that benchmarking is hard and this is in no way a proper benchmark. This is more for fun: learning how to build and use new h3 clients and working out how to configure h3 / QUIC for Envoy. Please take all results with a huge grain of salt.
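A hyperfine run comparing clients looks roughly like the following. This is a sketch: the client invocations and the URL are placeholders, and you would substitute the paths to your own builds of each client.

```shell
# Compare two h3 clients fetching the same local Envoy endpoint.
# --warmup discards initial runs; --min-runs controls sample size.
hyperfine --warmup 3 --min-runs 20 \
  'curl --http3 --insecure -s -o /dev/null https://localhost:443/local' \
  './neqo-client https://localhost:443/local'
```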

tl;dr: neqo is generally slightly quicker, followed closely by proxygen, quic-go, and h3 (not always in that order); then quinn, curl, and cloudflare/quiche. I’m surprised by cloudflare/quiche being so slow, however I believe it has not been optimized at this point.

h3spec

I’ve also tested using the excellent h3spec. We found one crashing bug using this test suite, which has subsequently been fixed.

45 examples, 13 failures. This suite has been great for catching crashes, but it should be noted that the goal is not to attain 100%, as there are a number of performance trade-offs to consider.
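Running h3spec against a local Envoy h3 listener is a one-liner (host and port here are placeholders for wherever your listener is bound):

```shell
# Point h3spec at the QUIC/UDP listener; it prints a pass/fail
# summary like "45 examples, 13 failures".
h3spec localhost 443
```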

Config

Downstream h3 with local direct responses + h2 upstream

The first config shows how to set up a TCP + UDP listener on the same port, JSON structured logging, an Envoy direct response on `/local`, and `alt-svc` headers on h2 responses.
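The downstream h3 side of such a config is sketched below. This is an illustrative fragment, not my exact config: field names follow the Envoy v3 API of the time, the certificate paths are placeholders, and the surrounding listener/route boilerplate is trimmed.

```yaml
# Sketch: UDP listener terminating h3/QUIC with a direct response on /local.
static_resources:
  listeners:
  - name: h3_listener
    address:
      socket_address: {protocol: UDP, address: 0.0.0.0, port_value: 443}
    udp_listener_config:
      quic_options: {}          # enables QUIC on this UDP listener
    filter_chains:
    - transport_socket:
        name: envoy.transport_sockets.quic
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.quic.v3.QuicDownstreamTransport
          downstream_tls_context:
            common_tls_context:
              tls_certificates:
              - certificate_chain: {filename: /path/to/cert.pem}
                private_key: {filename: /path/to/key.pem}
      filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: HTTP3
          stat_prefix: ingress_h3
          route_config:
            virtual_hosts:
            - name: local
              domains: ["*"]
              routes:
              - match: {path: /local}
                direct_response:
                  status: 200
                  body: {inline_string: "hello from Envoy\n"}
```

A matching TCP listener on the same port handles h1/h2 and advertises h3 via `alt-svc`.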

I use CUE for all of these configs, and these are exported to YAML. The process is started under the Fish shell with:

`<path to>/envoy --concurrency 1 --log-level debug --config-path (cue export downstream_httpbin_org.cue | psub)`

Note that for h3 to work today you’ll need to set `concurrency` to 1.

Downstream h1 with h3 upstream

This is a simpler config with a stock h1 listener that talks h3 to the upstream service.
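The upstream h3 half can be sketched as a cluster fragment like the one below. Again this is an assumption-laden sketch: the cluster name, SNI, and endpoint are placeholders, and only the h3-relevant fields are shown.

```yaml
# Sketch: a cluster that speaks h3/QUIC to its upstream.
clusters:
- name: h3_upstream
  connect_timeout: 5s
  type: LOGICAL_DNS
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http3_protocol_options: {}   # force HTTP/3 to the upstream
  transport_socket:
    name: envoy.transport_sockets.quic
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.quic.v3.QuicUpstreamTransport
      upstream_tls_context:
        sni: example.org
  load_assignment:
    cluster_name: h3_upstream
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: {address: example.org, port_value: 443}
```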

Future

  • Would be fun to test this with things like dynamic forward proxies
  • The testing above was done on an Envoy proxy with the runtime value `envoy.reloadable_features.prefer_quic_kernel_bpf_packet_routing: true` set, and Linux capabilities granted via `sudo setcap cap_bpf+ep <path to>/envoy`, on a kernel >= 5.8.x. However, as per the following issue, it is unclear what effect this has
