Migrate from Nginx to Envoy Proxy

This article walks you through migrating from Nginx to Envoy Proxy. Any previous experience with and understanding of Nginx applies directly to Envoy Proxy.

Main Contents:

  • Configure the server configuration for Envoy Proxy
  • Configure Envoy Proxy to proxy traffic to an external service
  • Configure access logging and error logging

After completing this tutorial, you will understand the core features of Envoy Proxy and how to migrate existing Nginx configuration files to Envoy Proxy.

1. Core modules for Nginx and Envoy Proxy

Take a look at a complete example of an Nginx configuration file from the Nginx Wiki, which reads as follows:

$ cat nginx.conf

user  www www;
pid /var/run/nginx.pid;
worker_processes  2;

events {
  worker_connections   2000;
}

http {
  gzip on;
  gzip_min_length  1100;
  gzip_buffers     4 8k;
  gzip_types       text/plain;

  log_format main      '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '"$gzip_ratio"';

  log_format download  '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '"$http_range" "$sent_http_content_range"';

  upstream targetCluster {
    172.18.0.3:80;
    172.18.0.4:80;
  }

  server {
    listen        8080;
    server_name   one.example.com  www.one.example.com;

    access_log   /var/log/nginx.access_log  main;
    error_log    /var/log/nginx.error_log   info;

    location / {
      proxy_pass         http://targetCluster/;
      proxy_redirect     off;

      proxy_set_header   Host             $host;
      proxy_set_header   X-Real-IP        $remote_addr;
    }
  }
}

Nginx configuration is typically divided into three key elements:

  1. Configure server blocks, logging, and the gzip feature. These configurations are global and apply to all instances.
  2. Configure Nginx to accept requests on port 8080 for the domain one.example.com.
  3. Forward traffic from different URL paths to different target backends.

Not all Nginx configuration items are applicable to Envoy Proxy, and some of them can be ignored in Envoy. Envoy Proxy has four key components that can be used to match Nginx’s core configuration blocks:

  • Listeners: Listeners define how Envoy handles inbound requests. Currently, Envoy only supports TCP-based listeners. Once a connection is established, the request is passed to a set of filters for processing.
  • Filters: Filters form a chain that processes inbound and outbound traffic. Filters with specific functions can be added to the chain; for example, a gzip filter can compress data before it is sent to the client.
  • Routers: Routes forward traffic to specific target instances, which are defined as clusters in Envoy.
  • Clusters: A cluster defines the destination endpoints of traffic, along with optional configuration such as the load-balancing policy.

Next we’ll use these four key components to create an Envoy Proxy configuration file that matches the Nginx configuration defined earlier.
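As a rough structural sketch (placeholders only, not a runnable configuration), the four components map onto an Envoy YAML file like this:

```yaml
static_resources:
  listeners:          # 1. Listeners: where Envoy accepts inbound connections
  - address: ...
    filter_chains:    # 2. Filters: process traffic on each connection
    - filters: ...    # 3. Routes: live inside the HTTP filter's route_config
  clusters:           # 4. Clusters: the upstream endpoints traffic is sent to
  - ...
```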

2. Nginx configuration migration

The first part of the Nginx configuration file defines the operating characteristics of Nginx itself.

Worker processes and connections

The following configuration defines the number of worker processes and the maximum number of connections for Nginx; tuning these values is how Nginx scales to meet different workloads.

worker_processes  2;

events {
  worker_connections   2000;
}

Envoy Proxy manages worker processes and connections differently. By default, Envoy spawns one worker thread for each hardware thread in the system (you can control this with the --concurrency option). Each worker thread runs a non-blocking event loop that listens on every listener, accepts new connections, instantiates a filter stack for each connection, and handles all I/O for the lifetime of the connection. All further processing, including forwarding, happens within the worker thread.

All connection pools in Envoy are bound to worker threads. Although an HTTP/2 connection pool establishes only one connection to each upstream host at a time, with four workers each upstream host will have four HTTP/2 connections in a steady state. Envoy works this way so that each connection is handled entirely within a single worker thread, which lets almost all code be written lock-free, as if it were single-threaded. Having too many workers wastes memory, creates more idle connections, and lowers the connection-pool hit rate.

You can find more information on the Envoy Proxy blog.

The HTTP configuration

The next configuration block for Nginx is the HTTP block, which includes the resource’s media type (MIME Type), default timeout, and gZIP compression configuration. These functions are implemented in the Envoy Proxy through filters, which are discussed in detail below.
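As a sketch of how the nginx gzip settings above might carry over (field names follow the v2-era envoy.gzip filter API used elsewhere in this article and differ in newer Envoy versions), a gzip filter could be added ahead of the router in the HTTP filter chain:

```yaml
http_filters:
- name: envoy.gzip
  config:
    content_length: 1100    # like gzip_min_length: skip responses smaller than this
    content_type:
      - text/plain          # like gzip_types: only compress these media types
- name: envoy.router
```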

3. Server configuration migration

In the HTTP configuration block, the Nginx configuration specifies to listen on port 8080 and receive access requests to the domain names one.example.com and www.one.example.com.

 server {
    listen        8080;
    server_name   one.example.com  www.one.example.com;

This part of the configuration is managed by the Listener in the Envoy.

Envoy listener

The most important step in making an Envoy work properly is defining a listener. You first need to create a configuration file describing the Envoy’s running parameters.

The following configuration item creates a new listener and binds it to port 8080.

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }

There is no need to define server_name; the domain name will be handled by the filter.

4. Location configuration migration

When a request comes into Nginx, the location block defines how to process the traffic and where to forward it. In the configuration below, all traffic coming into the site is proxied to an upstream cluster named targetCluster. Upstream clusters specify the back-end instances that receive traffic; they are discussed in more detail in the next section.

location / {
    proxy_pass         http://targetCluster/;
    proxy_redirect     off;

    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
}

This part of the configuration is managed by filters in the Envoy.

Envoy filter

For static configuration files, filters define how incoming requests are handled. Here we will create a filter matching the server_names from the Nginx configuration in the previous section. When an inbound request matches the domains and routes defined in the filter, its traffic is forwarded to the specified cluster. The cluster is the equivalent of the upstream configuration in Nginx.

filter_chains:
- filters:
  - name: envoy.http_connection_manager
    config:
      codec_type: auto
      stat_prefix: ingress_http
      route_config:
        name: local_route
        virtual_hosts:
        - name: backend
          domains:
            - "one.example.com"
            - "www.one.example.com"
          routes:
          - match:
              prefix: "/"
            route:
              cluster: targetCluster
      http_filters:
      - name: envoy.router

envoy.http_connection_manager is Envoy’s built-in HTTP filter. Several other filters are built into Envoy as well, including Redis, Mongo, and TCP filters. See Envoy’s official documentation for the full list.

5. Migrate Proxy and Upstream configurations

In Nginx, the upstream configuration item defines the target service cluster used to receive traffic. The following upstream configuration item assigns two back-end instances:

upstream targetCluster {
  172.18.0.3:80;
  172.18.0.4:80;
}

This part of the configuration is managed by clusters in the Envoy.

Envoy cluster

Upstream configuration items are defined as clusters in Envoy. The hosts list in the cluster receives the traffic forwarded by the filters. Host access policies, such as timeouts, are also configured in the cluster, which enables finer-grained timeout control and load balancing.

clusters:
- name: targetCluster
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  hosts: [
    { socket_address: { address: 172.18.0.3, port_value: 80 }},
    { socket_address: { address: 172.18.0.4, port_value: 80 }}
  ]

When the STRICT_DNS service-discovery type is used, the specified DNS targets are resolved continuously and asynchronously. Each IP address returned in the DNS results is treated as an explicit host in the upstream cluster. This means that if a query returns three IP addresses, Envoy assumes the cluster has three hosts and load-balances across all three. If a host is removed from the DNS results, Envoy considers it gone and drains it from all existing connection pools. See Envoy’s official documentation for more details.
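Because STRICT_DNS re-resolves continuously, the cluster can also target a DNS name rather than fixed IPs; each address the name resolves to becomes a host. A sketch (the hostname backend.internal.example is hypothetical):

```yaml
clusters:
- name: targetCluster
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  hosts: [
    { socket_address: { address: backend.internal.example, port_value: 80 }}
  ]
```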

6. Log Configuration Migration

The final part of the configuration that needs to be migrated is application logging. Envoy Proxy does not persist logs to disk by default, but follows the cloud-native approach, where all application logs are written to stdout and stderr.

Access logging of user request information is optional and disabled by default. To enable access logging for HTTP requests, add an access_log configuration item to the envoy.http_connection_manager filter. The log path can be a device such as /dev/stdout or a file on disk, depending on your needs.

The following configuration items pass all access logs to stdout:

access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"

Copy this configuration into the envoy.http_connection_manager filter configuration. The complete filter configuration looks like this:

- name: envoy.http_connection_manager
  config:
    codec_type: auto
    stat_prefix: ingress_http
    access_log:
    - name: envoy.file_access_log
      config:
        path: "/dev/stdout"
    route_config:

By default, Envoy uses the following format string to output a detailed log of HTTP requests:

[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%"
%RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION%
%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%"
"%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n

The log output in this example looks like this:

[2018-11-23T04:51:00.281Z] "GET / HTTP/1.1" 200 - 0 58 4 1 "-" "curl/7.47.0" "f21ebd42-6770-4aa5-88d4-e56118165a7d" "one.example.com" "172.18.0.4:80"

You can customize log output by setting formatting fields, for example:

access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
    format: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\"\n"

You can also set the json_format field to output JSON logs, for example:

access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
    json_format: {"protocol": "%PROTOCOL%", "duration": "%DURATION%", "request_method": "%REQ(:METHOD)%"}

For more details on Envoy logging configuration, see www.envoyproxy.io/docs/envoy/… .

Logging is not the only way to gain visibility into Envoy Proxy in production; Envoy also has more advanced features built in, such as distributed tracing and metrics monitoring. You can find more details in the distributed tracing documentation.

The full Envoy configuration file looks like this:

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
                - "one.example.com"
                - "www.one.example.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: targetCluster
          http_filters:
          - name: envoy.router
  clusters:
  - name: targetCluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [
      { socket_address: { address: 172.18.0.3, port_value: 80 }},
      { socket_address: { address: 172.18.0.4, port_value: 80 }}
    ]

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9090 }

7. Start the Envoy Proxy

Now that you’ve converted all of the Nginx configuration into Envoy Proxy configuration, it’s time to launch an Envoy instance and test it.

Run as a normal user

At the top of the Nginx configuration file is the line user www www;, indicating that Nginx runs as a low-privileged user to improve security. Envoy takes a cloud-native approach to managing the process owner: when launching Envoy Proxy from a container, you specify a low-privileged user with a command-line parameter.

Start the Envoy Proxy

The following command launches Envoy Proxy in a container. It exposes the container on port 80 to listen for inbound requests, while the Envoy Proxy inside the container listens on port 8080. The --user parameter lets the process run as a low-privileged user.

$ docker run --name proxy1 -p 80:8080 --user 1000:1000 -v /root/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy

Test

With the proxy started, you are now ready to test access. The following curl command makes a request using a Host header that matches a domain defined in the Envoy configuration file:

$ curl -H "Host: one.example.com" localhost -i
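With no upstreams running yet, the response is a 503 along the lines of the following (exact headers vary by Envoy version; this is an illustration):

```text
HTTP/1.1 503 Service Unavailable
content-length: 19
content-type: text/plain
server: envoy

no healthy upstream
```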

The request returns a 503 error because the upstream cluster is not yet running and is unavailable, so Envoy Proxy cannot find a destination back end to handle the request. Start the corresponding HTTP services as follows:

$ docker run -d katacoda/docker-http-server
$ docker run -d katacoda/docker-http-server

Once these services are started, Envoy can successfully proxy traffic to the destination back end:

$ curl -H "Host: one.example.com" localhost -i

You should now see a successful response, and the logs show which container handled the request.

Additional HTTP response headers

If the request succeeds, you will see additional fields in the response headers showing how long (in milliseconds) the upstream host took to process the request. These fields are helpful when the client wants to distinguish request-processing time from network latency.

x-envoy-upstream-service-time: 0
server: envoy