JSON to gRPC transcoding with Envoy
Note: this post was updated on 2021-06-02 to work with Envoy v3 config (Envoy version 1.18.3) and gRPC 1.38.0. Please email me if this post gets stale.
I’m a gRPC man now, as you might’ve noticed from the flood of posts about the tech lately.
So, continuing on from the last post about setting up Envoy to proxy gRPC-Web to gRPC, this post is a quick run-through of how to set up Envoy to transcode JSON requests to gRPC.
I’m doing this on a fresh Ubuntu 20.04 box on AWS. Start by updating your package lists:
sudo apt-get update
Create a basic gRPC service definition
// sample.proto
syntax = "proto3";

package sample;

import "google/api/annotations.proto";

service SampleRPC {
  rpc Save(SaveReq) returns (SaveRes) {
    option (google.api.http) = {
      post: "/save/{id}"
      body: "*"
    };
  }

  rpc Get(GetReq) returns (GetRes) {
    option (google.api.http) = {
      get: "/get/{id}"
    };
  }
}

message SaveReq {
  uint32 id = 1;
  string message = 2;
}

message SaveRes {
  bool success = 1;
  uint32 id = 2;
  string message = 3;
}

message GetReq {
  uint32 id = 1;
}

message GetRes {
  uint32 id = 1;
  string message = 2;
}
The key thing here is the google.api.http annotation: the {id} path parameter binds to the id field of the request message, and body: "*" maps the JSON request body onto the remaining fields. You can read the full rules for the annotation in the documentation.
Install Docker
The Docker docs cover this in detail, but here’s the quick version:
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# optional, allows running docker without sudo
sudo usermod -aG docker ubuntu
You need to log out and back in for the usermod command to take effect.
Grab the HTTP API annotation protos
These aren’t installed with gRPC, so we need to get them manually:
mkdir -p google/api
pushd google/api
wget https://raw.githubusercontent.com/googleapis/googleapis/master/google/api/annotations.proto
wget https://raw.githubusercontent.com/googleapis/googleapis/master/google/api/http.proto
popd
Protocol buffers and gRPC are pretty good at eating their own dog food, so they actually use protobufs to describe protobufs themselves.
Create a Docker container for gRPC code generation
I’ve found this is way easier than installing it locally.
# grpc.Dockerfile
FROM alpine:3 as base
FROM base as build
# build deps
RUN apk add --no-cache cmake git build-base linux-headers
# grpc
WORKDIR /deps
RUN git clone -b v1.38.0 https://github.com/grpc/grpc
WORKDIR /deps/grpc
RUN git submodule update -j 16 --init
WORKDIR /deps/grpc/build
RUN cmake -DgRPC_INSTALL=ON ..
RUN make -j 16
RUN make install
FROM base
RUN apk add --no-cache libstdc++
# copy binaries
COPY --from=build /usr/local/bin /usr/local/bin
# copy includes, needed for protobuf imports
COPY --from=build /usr/local/include /usr/local/include
docker build -t sample/grpc . -f grpc.Dockerfile
Compile the protos
Make a file called generate_protos.sh:
#!/bin/sh
# generate_protos.sh
protoc -I. \
  --include_imports --include_source_info \
  --descriptor_set_out sample.pb \
  --python_out=. \
  --grpc_python_out=. \
  --plugin=protoc-gen-grpc_python=$(which grpc_python_plugin) \
  google/api/annotations.proto \
  google/api/http.proto \
  sample.proto
The --descriptor_set_out flag produces a descriptor set: a protocol buffer that describes the input .proto files themselves, which Envoy will consume to read the annotations. --include_imports matters here too: it bundles the imported google/api protos into the set so Envoy can resolve them.
Now make it executable and run it:
chmod +x generate_protos.sh
docker run --rm -w /app -v $(pwd):/app sample/grpc ./generate_protos.sh
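Since the descriptor set is itself just a serialized protobuf (a FileDescriptorSet), you can sanity-check it from Python before handing it to Envoy. A quick sketch, assuming the protobuf package is installed (we install it below); inspect_pb.py is just my name for it:

# inspect_pb.py -- sanity-check the generated descriptor set
from google.protobuf import descriptor_pb2

fds = descriptor_pb2.FileDescriptorSet()
with open("sample.pb", "rb") as f:
    fds.ParseFromString(f.read())

# should list google/api/{annotations,http}.proto and sample.proto,
# with sample.proto exposing the SampleRPC service
for fd in fds.file:
    print(fd.name, [svc.name for svc in fd.service])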
Create a server
Let’s write a simple Python server:
# server.py
from concurrent import futures

import grpc

import sample_pb2, sample_pb2_grpc

store = {}

class Servicer(sample_pb2_grpc.SampleRPCServicer):
    def Save(self, request, context):
        store[request.id] = request.message
        return sample_pb2.SaveRes(
            success=True,
            id=request.id,
            message=store[request.id],
        )

    def Get(self, request, context):
        if request.id not in store:
            context.abort(grpc.StatusCode.NOT_FOUND, "Not found")
        return sample_pb2.GetRes(
            id=request.id,
            message=store[request.id],
        )

server = grpc.server(futures.ThreadPoolExecutor(1))
sample_pb2_grpc.add_SampleRPCServicer_to_server(Servicer(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
Install pip and the gRPC deps:
sudo apt-get install -y python3 python3-pip
pip3 install grpcio protobuf
You can now run the server with python3 server.py.
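Before putting Envoy in front of it, you can talk to the server directly over gRPC. A minimal client sketch (client.py is my name, not generated code):

# client.py -- hypothetical direct gRPC client for smoke-testing the server
import grpc

import sample_pb2, sample_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = sample_pb2_grpc.SampleRPCStub(channel)

print(stub.Save(sample_pb2.SaveReq(id=123, message="Hello")))
print(stub.Get(sample_pb2.GetReq(id=123)))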
Set up Envoy
Create an envoy.yaml:
# envoy.yaml
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 5000 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                access_log:
                  - name: envoy.access_loggers.stdout
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: sample_cluster
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_json_transcoder
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
                      proto_descriptor: "/tmp/envoy/proto.pb"
                      services: ["sample.SampleRPC"]
                      print_options:
                        add_whitespace: true
                        always_print_primitive_fields: true
                        always_print_enums_as_ints: false
                        preserve_proto_field_names: false
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: sample_cluster
      connect_timeout: 0.25s
      type: logical_dns
      load_assignment:
        cluster_name: sample_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address: { socket_address: { address: localhost, port_value: 50051 }}
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
Then a Dockerfile:
# Dockerfile
FROM envoyproxy/envoy:v1.18.3
COPY ./envoy.yaml /etc/envoy/envoy.yaml
COPY ./sample.pb /tmp/envoy/proto.pb
EXPOSE 5000
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml
Build the container:
docker build -t sample/envoy .
Run it:
docker run -d --net=host sample/envoy
Try it out
Now start the server:
python3 server.py
You can then try the API, assuming you’re running on localhost:
curl --data '{"message":"Transcoding"}' http://127.0.0.1:5000/save/123
You should get back some JSON, pretty-printed thanks to add_whitespace in the transcoder’s print_options:
{
"success": true,
"id": 123,
"message": "Transcoding"
}
Now to retrieve your message:
curl http://127.0.0.1:5000/get/123
Should return:
{
"id": 123,
"message": "Transcoding"
}
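The transcoder also maps gRPC status codes back to HTTP ones, so the NOT_FOUND abort in the server surfaces as a 404. A quick check using only the Python standard library (999 is just an arbitrary unsaved id, and check_404.py my name for the script):

# check_404.py -- verify an unknown id comes back as HTTP 404
import urllib.error
import urllib.request

try:
    urllib.request.urlopen("http://127.0.0.1:5000/get/999")
except urllib.error.HTTPError as e:
    print(e.code)  # expect 404: gRPC NOT_FOUND mapped by the transcoder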