To implement a fully functional API gateway, writing one from scratch is not a sensible approach, but it is still necessary to understand the underlying principles.

Compile using the source code

I plan to use APISIX as the base project, so the first step is to learn how to compile and install APISIX and get it running successfully.

The project address

Apache APISIX is a cloud-native API gateway. The Apache APISIX Dashboard is designed to make it as easy as possible for users to operate Apache APISIX through a front-end interface.

Installation steps

Reference: an earlier blog on installing and deploying APISIX as an API service gateway. That blog has problems with installing the visual interface, so check the official documentation instead. The official installation process covers APISIX itself plus a demo application, so I take a two-step approach: install APISIX first, then install its accompanying interface. The following is my own walkthrough.

Install apisix

Environment: CentOS 7 with OpenResty already installed

wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
yum install yum-utils
yum-config-manager --add-repo https://openresty.org/package/centos/openresty.repo
yum install -y etcd openresty curl git gcc luarocks lua-devel
systemctl start etcd
yum install -y https://github.com/apache/incubator-apisix/releases/download/1.1/apisix-1.1-0.el7.noarch.rpm

Start APISIX with apisix start, then check the process or confirm that it is listening on port 9080:

ps aux|grep apisix
netstat -lntp|grep 9080
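
As an extra sanity check that the gateway itself is answering, you can hit the proxy port directly. With no routes configured yet, APISIX should answer with its own 404 error body; the port used here is the default 9080 assumed above.

curl -i http://127.0.0.1:9080/
# A 404 generated by APISIX itself (e.g. "404 Route Not Found") confirms
# the gateway is up even though no route matches yet.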

Disabling firewalls

This avoids connectivity failures during testing; it is not recommended for production environments

systemctl stop firewalld.service

systemctl disable firewalld.service

setenforce 0

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Install the apisix-dashboard

To do this, you need to install the Go, Node.js, and Yarn environments.

Note:

  • The Go module proxy needs to be switched to a domestic (China) mirror
go env -w GOPROXY=https://goproxy.cn,direct
  • The Node.js version must be later than v10.14.2 (a quick tool-chain check is sketched below)
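
A quick sanity check of the tool chain before building (the versions printed will vary with your environment):

go version        # Go is required to build manager-api
node -v           # should report a version later than v10.14.2
yarn -v
go env GOPROXY    # confirm the proxy switch above took effect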

Follow the steps from the official apisix-dashboard installation tutorial:

git clone -b release/2.10.1 https://github.com/apache/apisix-dashboard.git && cd apisix-dashboard

If your server's connection to GitHub times out at this step, consider cloning the repository on a local machine and then uploading it to the server. Then run

make build

This process also connects to GitHub; if the connection times out, retry a few more times.

After a successful build, go to the output directory under the project root, where the generated results are located. First, the conf directory contains the web configuration, structured as follows:

conf:
  listen:
    # host: 127.0.0.1 # The address on which the 'Manager API' should listen
                          # The default value is 0.0.0.0, if want to specify, please enable it.
                          # This value accepts IPv4, IPv6, and hostname.
    port: 9000            # The port on which the `Manager API` should listen.

  # ssl:
  # host: 127.0.0.1 # The address on which the 'Manager API' should listen for HTTPS
                          # The default value is 0.0.0.0, if want to specify, please enable it.
  # port: 9001 # The port on which the `Manager API` should listen for HTTPS.
  # cert: "/tmp/cert/example.crt" # Path of your SSL cert.
  # key: "/tmp/cert/example.key" # Path of your SSL key.

  allow_list:             # If we don't set any IP list, then any IP access is allowed by default.
    - 127.0.0.1           # The rules are checked in sequence until the first match is found.
    - ::1                 # In this example, access is allowed only for IPv4 127.0.0.1 and IPv6 ::1.
                          # It also supports CIDR like 192.168.1.0/24 and 2001:0DB8::/32
  etcd:
    endpoints:            # supports defining multiple etcd host addresses for an etcd cluster
      - 127.0.0.1:2379
                          # yamllint disable rule:comments-indentation
                          # etcd basic auth info
    # username: "root" # ignore etcd username if not enable etcd auth
    # password: "123456" # ignore etcd password if not enable etcd auth
    mtls:
      key_file: ""          # Path of your self-signed client side key
      cert_file: ""         # Path of your self-signed client side cert
      ca_file: ""           # Path of your self-signed ca cert, the CA is used to sign callers' certificates
    # prefix: /apisix # apisix config's prefix in etcd, /apisix by default
  log:
    error_log:
      level: warn       # supports levels, lower to higher: debug, info, warn, error, panic, fatal
      file_path:
        logs/error.log  # supports relative path, absolute path, standard output
                        # such as: logs/error.log, /tmp/logs/error.log, /dev/stdout, /dev/stderr
                        # such as absolute path on Windows: winfile:///C:\error.log
    access_log:
      file_path:
        logs/access.log  # supports relative path, absolute path, standard output
                         # such as: logs/access.log, /tmp/logs/access.log, /dev/stdout, /dev/stderr
                         # such as absolute path on Windows: winfile:///C:\access.log
                         # log example: 2020-12-09T16:38:09.039+0800  INFO  filter/logging.go:46  /apisix/admin/routes/r1  {"host": "127.0.0.1:9000", "query": "asdfsafd=adf&a=a", "requestId": "3d50ecb8-758c-46d1-af5b-cd9d1c820156", "latency": 0, "remoteIP": "127.0.0.1", "method": "PUT", "errs": []}
  max_cpu: 0             # supports tweaking with the number of OS threads are going to be used for parallelism. Default value: 0 [will use max number of available cpu cores considering hyperthreading (if any)]. If the value is negative, is will not touch the existing parallelism profile.

authentication:
  secret:
    secret              # secret for jwt token generation.
                        # NOTE: Highly recommended to modify this value to protect `manager api`.
                        # if it's default value, when `manager api` start, it will generate a random string to replace it.
  expire_time: 3600     # jwt token expire time, in second
  users:                # yamllint enable rule:comments-indentation
    - username: admin   # username and password for login `manager api`
      password: admin
    - username: user
      password: user


A few points to note: allow_list determines which clients can access the presentation layer (0.0.0.0/0 means access from anywhere), and the login account and password are configured in the users section.

Using the command

./manager-api

to start the project. Then visit http://ip:9000/user/login?redirect=/ and log in with the default credentials admin/admin or user/user.
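
If you want manager-api to keep running after you close the shell, a minimal sketch is to run it in the background from the build output directory (the path is an assumption; adjust it to wherever make build placed the output):

cd apisix-dashboard/output
nohup ./manager-api > manager-api.out 2>&1 &
tail -f manager-api.out   # watch the startup log; Ctrl+C stops tailing, not the service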

Feature introduction

Official Usage Introduction

Dashboard – Import Grafana

Grafana is an open source time-series statistics and monitoring platform: a full-featured metrics dashboard and graph editor that supports many data sources, such as ElasticSearch, Graphite, InfluxDB, and OpenTSDB, and is known for its powerful interface editor.

I might consider pairing it with the ElasticSearch instance on the network security intelligent control platform. According to the Grafana website, the idea is to build various custom dashboards and then simply import them. The configuration described on the official website seems to require a bunch of nodes, but the dashboard only gives me a simple field for filling in an HTTP link. I do not know how to use it yet; it seems Grafana needs to be installed independently.

vim /etc/yum.repos.d/grafana.repo
# Put it inside
[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm-beta
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
# Save and execute
yum -y install grafana
systemctl enable grafana-server
systemctl start grafana-server

Take a look at the grafana-server systemd unit:

[Unit]
Description=Grafana instance
Documentation=http://docs.grafana.org
Wants=network-online.target
After=network-online.target
After=postgresql.service mariadb.service mysqld.service

[Service]
EnvironmentFile=/etc/sysconfig/grafana-server
User=grafana
Group=grafana
Type=notify
Restart=on-failure
WorkingDirectory=/usr/share/grafana
RuntimeDirectory=grafana
RuntimeDirectoryMode=0750
ExecStart=/usr/sbin/grafana-server                                                  \
                            --config=${CONF_FILE}                                   \
                            --pidfile=${PID_FILE_DIR}/grafana-server.pid            \
                            --packaging=rpm                                         \
                            cfg:default.paths.logs=${LOG_DIR}                       \
                            cfg:default.paths.data=${DATA_DIR}                      \
                            cfg:default.paths.plugins=${PLUGINS_DIR}                \
                            cfg:default.paths.provisioning=${PROVISIONING_CFG_DIR}  

LimitNOFILE=10000
TimeoutStopSec=20
CapabilityBoundingSet=
DeviceAllow=
LockPersonality=true
MemoryDenyWriteExecute=false
NoNewPrivileges=true
PrivateDevices=true
PrivateTmp=true
ProtectClock=true
ProtectControlGroups=true
ProtectHome=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectProc=invisible
ProtectSystem=full
RemoveIPC=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=true
RestrictRealtime=true
RestrictSUIDSGID=true
SystemCallArchitectures=native
UMask=0027

[Install]
WantedBy=multi-user.target

Importing data sources and dashboards takes only minutes in Grafana. I have no data visualization requirements at the moment and it is not a priority, so I will not pursue it further.

Route

A Route is the entry point of a request and defines the matching rules between client requests and services. A route can be associated with a Service or an Upstream; a Service can be associated with a group of routes, and a route can be associated with an Upstream object (a group of back-end service nodes). Each request that matches the route is therefore proxied by the gateway to the Upstream service bound to that route.
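
As a concrete illustration, here is a minimal sketch of creating a route through the Admin API with curl. It assumes an APISIX 2.x Admin API on 127.0.0.1:9080 with the default admin key (change it in production); the route id, URI, and upstream node are placeholders.

curl -X PUT http://127.0.0.1:9080/apisix/admin/routes/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{
  "uri": "/get",
  "methods": ["GET"],
  "upstream": {
    "type": "roundrobin",
    "nodes": { "httpbin.org:80": 1 }
  }
}'
# Requests matching /get on port 9080 are now proxied to the inline upstream:
curl -i http://127.0.0.1:9080/get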

Upstream

The upstream list contains the upstream services (that is, back-end services) that have been created; load balancing and health checks can be performed across the multiple target nodes of an upstream service.
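
A hedged sketch of defining a standalone upstream with two nodes and an active health check, then referencing it from a route by id (addresses, ids, and the health-check path are placeholders; same default admin key assumption as above):

curl -X PUT http://127.0.0.1:9080/apisix/admin/upstreams/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{
  "type": "roundrobin",
  "nodes": { "10.0.0.11:8080": 1, "10.0.0.12:8080": 1 },
  "checks": {
    "active": {
      "http_path": "/healthz",
      "healthy":   { "interval": 2, "successes": 1 },
      "unhealthy": { "interval": 1, "http_failures": 2 }
    }
  }
}'
# A route can then point at the upstream by id instead of inlining nodes:
curl -X PUT http://127.0.0.1:9080/apisix/admin/routes/2 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{ "uri": "/api/*", "upstream_id": 1 }'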

Service

A Service is a combination of the common plugin configuration and upstream target information shared by routes. A service can be associated with a group of upstream nodes and bound to multiple routes.
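
A minimal sketch of a service that bundles a shared plugin configuration with an upstream, and of a route reusing it via service_id (the limit-count settings and ids are illustrative; same admin key assumption as above):

curl -X PUT http://127.0.0.1:9080/apisix/admin/services/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{
  "upstream_id": 1,
  "plugins": {
    "limit-count": { "count": 100, "time_window": 60, "rejected_code": 429, "key": "remote_addr" }
  }
}'
# Every route bound to the service inherits its plugins and upstream:
curl -X PUT http://127.0.0.1:9080/apisix/admin/routes/3 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{ "uri": "/orders/*", "service_id": 1 }'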

Consumer

Consumers are the callers of routes, such as developers, end users, or API clients. When creating a consumer you need to bind at least one authentication-type plugin; the available plugins fall into authentication, security protection, traffic control, serverless, observability, and several other categories.
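
A hedged sketch of creating a consumer bound to the key-auth authentication plugin and protecting a route with it (username, key, and ids are placeholders):

curl -X PUT http://127.0.0.1:9080/apisix/admin/consumers \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{
  "username": "demo_user",
  "plugins": { "key-auth": { "key": "demo-user-key" } }
}'
# Enable key-auth on a route; callers must then present the consumer key:
curl -X PUT http://127.0.0.1:9080/apisix/admin/routes/4 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{ "uri": "/protected/*", "plugins": { "key-auth": {} }, "upstream_id": 1 }'
curl -i http://127.0.0.1:9080/protected/ping -H 'apikey: demo-user-key'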

Plugins

The plugin categories are as follows (see the official plugin documentation):

  • Authentication
    • authz-casbin
      • Casbin
        • A powerful and efficient open source access control framework whose access management mechanism supports a variety of access control models
        • The website address
      • The Lua-Casbin-based Apache APISIX plug-in supports powerful authorization based on a variety of access models
    • authz-keycloak
      • The authorization plug-in used with the Keycloak identity server. Keycloak is an identity server that conforms to OAuth/OIDC and UMA. Although it was developed with Keycloak, it should also work with any OAuth/OIDC and UMA-compatible identity provider
    • basic-auth
      • An authentication plugin that needs to be used together with a consumer. It adds basic authentication to a service or route.
      • The consumer then adds its key to the request header to validate its requests
    • hmac-auth
      • An authentication plugin that needs to be used together with a consumer. It adds HMAC authentication to a service or route
    • jwt-auth
      • An authentication plugin that needs to be used together with a consumer. It adds JWT authentication to a service or route
      • The consumer then validates its requests by adding its key to a query string parameter, request header, or cookie
    • key-auth
      • An authentication plugin that should be used together with a consumer
      • Adds key authentication (sometimes called API keys) to a service or route. Consumers then validate their requests by adding their key to a query string parameter or header
    • ldap-auth
      • An authentication plugin that can be used together with a consumer. It adds LDAP authentication to a service or route
    • openid-connect
      • The OAuth 2 / OpenID Connect (OIDC) plugin provides authentication and introspection for APISIX
    • wolf-rbac
      • An authentication and authorization (RBAC) plugin. It needs to be used together with a consumer, and wolf-rbac must be added to a service or route. The RBAC functionality is provided by Wolf
      • Wolf document
  • Security protection
    • api-breaker
      • The plug-in implements API circuit breakers to help us protect our upstream business services
    • consumer-restriction
      • Access restrictions are made based on the different objects selected by consumer-restriction
    • cors
      • CORS
    • fault-injection
      • Fault injection plug-in, which can be used with other plug-ins and is executed before other plug-ins. This abort property directly returns the user-specified HTTP code to the client and terminates subsequent plug-ins. The Delay property delays the request and execution of subsequent plug-ins
    • ip-restriction
      • ip-restriction restricts access to services or routes by whitelisting or blacklisting IP addresses. A single IP, multiple IPs, or a range in CIDR notation such as 10.10.10.0/24 can be used
    • referer-restriction
      • Restricts access to a service or route by whitelisting values of the Referer request header
    • request-validation
      • The plug-in validates the request before forwarding it to the upstream service. The validation plug-in uses JSON-schema to validate schemas. This plug-in can be used to validate header and body data.
    • ua-restriction
      • ua-restriction restricts access to services or routes through an allowlist and a denylist of the User-Agent header
    • uri-blocker
      • The plugin helps us intercept user requests, and we just need to specify block_rules
  • Traffic control
    • limit-conn
      • A plugin that limits the number of concurrent requests
    • limit-count
      • Limit the request rate by a fixed number of requests in a given time window
    • limit-req
      • Use the “leaky bucket” method to limit the request rate
    • traffic-split
      • The traffic-split plugin allows users to divide traffic between upstreams by percentage, step by step.
      • Note: due to shortcomings of the weighted round-robin algorithm (especially when the WRR state is reset), the ratio between upstreams may not be perfectly accurate
  • Serverless
    • azure-functions
      • A serverless plugin built into Apache APISIX for seamless integration with Azure Serverless Functions; it acts as a dynamic upstream, proxying all requests for a specific URI to the Microsoft Azure cloud, one of the most commonly used public cloud platforms in production environments. If enabled, this plugin terminates the ongoing request for that particular URI, initiates a new request to Azure FaaS (the new upstream) on behalf of the client with the appropriate authorization details, request headers, request body, and parameters set by the user (all three passed through from the original request), and returns the response body, status code, and headers to the original client that called through the APISIX proxy
    • serverless-post-function
      • Runs the specified functions at the end of the specified phase
    • serverless-pre-function
      • Runs the specified functions at the beginning of the specified phase
  • Observability
    • datadog
      • Two monitoring plug-ins built into Apache APISIX for seamless integration with Datadog, one of the most commonly used monitoring and observability platforms for cloud applications. When enabled, this plug-in supports multiple powerful types of metric capture for each request and response cycle, which basically reflects the behavior and health of the system
    • error-log-logger
      • A plugin that sends APISIX log data to a TCP server or Apache SkyWalking
      • This plugin provides the ability to send log data of a selected level to monitoring tools, other TCP servers, and SkyWalking over HTTP
    • http-logger
      • A plug-in that pushes log data requests to HTTP/HTTPS servers
    • kafka-logger
      • A plug-in used as a Kafka client driver for the ngx_lua nginx module
      • This plug-in provides the ability to push request log data as a JSON object to an external Kafka cluster
    • prometheus
      • The plugin exposes metrics at /apisix/prometheus/metrics
      • These metrics are served through a separate Prometheus server address, 127.0.0.1:9091 by default; you can change it in conf/config.yaml
    • request-id
      • Adds a unique ID (UUID) to each request that passes through the APISIX proxy, which can be used to trace API requests. If the header named by header_name already exists in the request, the plugin will not add a request ID
    • skywalking
      • SkyWalking uses its native Nginx LUA tracker to provide tracking, topology analysis, and metrics from a service and URI perspective
    • skywalking-logger
      • Plugins that push access log data to the server over HTTP
    • sls-logger
      • A plugin that pushes log data in RFC 5424 format to Alibaba Cloud Log Service
    • syslog
      • A plug-in that pushes log data requests to Syslog
    • tcp-logger
      • A plug-in that pushes log data requests to the TCP server
    • udp-logger
      • A plug-in that pushes log data requests to the UDP server
    • zipkin
      • A plug-in for OpenTracing
  • Other
    • batch-requests
      • Accepts multiple requests in a single call; APISIX sends them via HTTP pipelining and returns an aggregated response to the client, which can significantly improve performance when a client needs to access multiple APIs
    • client-control
      • Dynamically control the behavior of Nginx to handle client requests
    • ext-plugin-post-req
      • After executing most of the built-in Lua plug-ins, run specific external plug-ins in the plug-in runner
    • ext-plugin-pre-req
      • Before executing most of the built-in Lua plug-ins, run specific external plug-ins in the plug-in runner
    • grpc-transcode
      • Transcodes between HTTP(S) and gRPC
    • gzip
      • Dynamically set the gzip behavior of Nginx
    • proxy-cache
      • Provides the ability to cache upstream response data and can be used with other plug-ins. The plug-in supports disk-based caching and will support memory-based caching in the future
    • proxy-mirror
      • Provides the ability to mirror client requests
      • Note: the response returned by the mirrored request is ignored
    • proxy-rewrite
      • An upstream proxy information rewrite plug-in
    • real-ip
      • Dynamically changes the client IP and port that APISIX sees
    • response-rewrite
      • The response rewrite plugin rewrites the content returned by the upstream as well as by Apache APISIX itself

Most problems can be solved by using plugins properly; for example, proxy-mirror and proxy-cache can mirror client requests, which can then be used for additional analysis such as anti-crawler work and data security. A sketch of enabling proxy-mirror on a route follows.
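
A minimal sketch of enabling proxy-mirror on a route so that a copy of each client request is sent to a separate analysis service (the mirror host and ids are placeholders; the mirrored response is ignored, as noted above):

curl -X PUT http://127.0.0.1:9080/apisix/admin/routes/5 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{
  "uri": "/mirrored/*",
  "plugins": { "proxy-mirror": { "host": "http://127.0.0.1:9797" } },
  "upstream_id": 1
}'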

Certificate

Certificates are used by the gateway to handle encrypted requests; a certificate is associated with an SNI and bound to the host names configured in routes.
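
A hedged sketch of uploading a certificate through the Admin API of this 2.x line (the resource is named ssl here; newer releases rename it). The PEM contents must be embedded as JSON strings with newlines escaped as \n; the SNI and the elided PEM bodies are placeholders. Routes whose host matches the SNI can then be reached over HTTPS on the default port 9443.

curl -X PUT http://127.0.0.1:9080/apisix/admin/ssl/1 \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -d '
{
  "cert": "-----BEGIN CERTIFICATE-----\n...escaped PEM body...\n-----END CERTIFICATE-----",
  "key":  "-----BEGIN PRIVATE KEY-----\n...escaped PEM body...\n-----END PRIVATE KEY-----",
  "snis": ["test.example.com"]
}'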

Modification and customization

Front-end interface modification

The code structure of the front-end project is shown above. The web directory holds the front-end pages of the project. The entire front end is written in Vue and built with Yarn, so the project can also be run in debug mode (see the official debug documentation). It is worth noting that, in the last step on my machine, logging in did not seem to work with yarn start, only with the yarn start:e2e command.

The purpose of debug mode is to see the effect of changes without a full build, because Yarn detects file changes and recompiles in real time. The debugging workflow is:

# Enter the web directory; the front end runs on port 8000
yarn start:e2e
# Go back to the dashboard project root directory
# Start the API service, which runs on port 9000
make api-run

Then access the application on port 8000 and log in. To see real-time compilation in action, I modify defaultSettings.ts; the change triggers an immediate recompile, and at that point you can debug dynamically. The page changes accordingly (the "123" is something I just added).

This is just a simple example, and more changes will be shared in the next article.

APISIX capability layer changes

There are many different versions of APISIX on GitHub, so it is important to know how to switch branches there. As far as I can tell, the version linked here should be the most up to date, with more plugins than the stable release. My earlier source code was downloaded from Gitee, but that version turned out to be too old and did not match the GitHub code, so I went back to GitHub despite the slow download speed. In the APISIX code downloaded from GitHub, the core code lives in the apisix directory and consists almost entirely of Lua scripts. Its overall architecture is similar to that of Orange, which I analyzed earlier: extension functionality is built up from a plugin system, although the underlying implementation differs. The extensions we make on top of open source projects are essentially plugins, and to extend a plugin you need to understand its call flow and implementation. That part will be covered in the next article.