What is Kong
Kong is a next-generation API gateway platform for modern architectures (hybrid cloud, hybrid organizations). It is cloud native, high performance, easy to use, and extensible.
It fits scenarios such as API gateway, Kubernetes Ingress, and service mesh sidecar.
The main features are:
- Cloud native: platform-independent; Kong can run anywhere from bare metal to Kubernetes
- High performance: built on NGINX's non-blocking event model, so throughput and latency are excellent
- Plugins: many plugins are available out of the box, and an easily extensible custom plugin interface lets users develop their own plugins in Lua
- Circuit breaking: circuit breakers can be implemented through plugins to prevent cascading failures
- Logging: HTTP, TCP, and UDP requests and responses that pass through Kong can be recorded
- Access control: permission checks and IP whitelisting, building on OpenResty capabilities
- SSL: a specific SSL certificate can be set up for an underlying service or API
- Monitoring: Kong provides real-time monitoring plugins
- Authentication: supports HMAC, JWT, Basic Auth, OAuth 2.0, and other common schemes
- Rate limiting: plugins can throttle traffic on specific endpoints of a service to avoid overload (see the example after this list)
- REST API: configuration is managed through a REST Admin API, which replaces complex configuration files
- Health checks: active and passive checks; when a node becomes unavailable, the status propagates to all Kong nodes within 1-2 seconds
- Dynamic routing: Kong is built on OpenResty and Lua, so it inherits OpenResty's dynamic routing capabilities
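For example, here is a minimal sketch of enabling the bundled rate-limiting plugin on a service through the Admin API (the service name hello and the limit values are illustrative):

```bash
# Enable the bundled rate-limiting plugin on the "hello" service
# (service name and limits are illustrative)
curl -X POST http://localhost:8001/services/hello/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=100" \
  --data "config.policy=local"
```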
Why use Kong
The problems we need to solve right now
- Unified entry point: in our server-side microservices, permission verification, IP restriction, rate limiting, and so on are implemented independently inside each service. There is no unified entry point, which makes centralized management difficult.
- Ease of use and extensibility: the server stack is mainly LNMP and is gradually being migrated to a Spring Boot / Spring Cloud microservice stack. Because this is a gradual (gray-release) migration, we need a proxy that is simple to operate and easy to manage.
- Continuous integration and release: for consumer-facing (2C) products, users are not online all the time while the product keeps iterating, so services must be releasable at any time with automated hot deployment. Since a Spring Boot service takes 15-30 seconds to start, we need Kong's blue-green release capability.
Kong addresses all of these problems. The solutions are as follows:
- Unified gateway: it acts as a unified gateway for microservices, centralizing rate limiting, permission verification, logging, and IP restriction
- Ease of use and extensibility: provides RESTful Admin API operations and dashboard management tools
```bash
# Create an upstream named "hello"
curl -X POST http://localhost:8001/upstreams --data "name=hello"

# Add two load-balancing targets to the "hello" upstream
curl -X POST http://localhost:8001/upstreams/hello/targets --data "target=localhost:8080" --data "weight=100"
curl -X POST http://localhost:8001/upstreams/hello/targets --data "target=localhost:8081" --data "weight=100"

# Configure a service that points at the upstream
curl -X POST http://localhost:8001/services --data "name=hello" --data "host=hello"

# Configure a route for the service
# (use the service id returned by the previous call)
curl -X POST http://localhost:8001/routes --data "paths[]=/hello" --data "service.id={service-id}"
```
Dashboard Tool Screenshot
- Continuous integration and release: with GitLab CI/CD, every push to the repository triggers packaging, test runs, and a blue-green deployment automatically (a sketch of the traffic switch follows)
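One way to do the blue-green switch is through Kong's upstream targets via the Admin API; the sketch below assumes the hello upstream from the earlier example, with ports and weights purely illustrative (our real pipeline drives equivalent calls from GitLab CI/CD):

```bash
# "Blue" (current) instance is serving all traffic on the "hello" upstream
curl -X POST http://localhost:8001/upstreams/hello/targets \
  --data "target=localhost:8080" --data "weight=100"

# Deploy the "green" build on another port and register it with weight 0 (no traffic yet)
curl -X POST http://localhost:8001/upstreams/hello/targets \
  --data "target=localhost:8081" --data "weight=0"

# Once the green instance is verified, shift traffic to it and drain the blue one
curl -X POST http://localhost:8001/upstreams/hello/targets \
  --data "target=localhost:8081" --data "weight=100"
curl -X POST http://localhost:8001/upstreams/hello/targets \
  --data "target=localhost:8080" --data "weight=0"
```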
How do we use Kong
Our current Kong architecture diagram:
Kong cluster
- All nodes connect to the same datastore (database)
- PostgreSQL master/slave replication ensures database high availability; use a version later than 9.6
- All nodes periodically run a sync task to keep their data consistent
- Configuration option: db_update_frequency (default: 5 seconds)
- Every db_update_frequency seconds, each node polls the database for updates; if there are changes, the local cache is cleared
- If the PostgreSQL database fails, nodes can keep serving traffic with the data they already have
- Each node has a local cache, and the cache expiration time is configurable
- db_cache_ttl (default: 0 s); see the kong.conf sketch below
- When the datastore fails, services can still be served from the local cache
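As a reference, a minimal sketch of how the two settings above might be written into kong.conf (the path and values are illustrative; adjust them to your deployment):

```bash
# Illustrative: append the sync/cache options discussed above to kong.conf
cat >> /etc/kong/kong.conf <<'EOF'
# poll the datastore for changes every 5 seconds
db_update_frequency = 5
# 0 means cached entities do not expire on their own
db_cache_ttl = 0
EOF
```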
- Dynamic scaling is supported (see the architecture diagram); the Kong cluster sits behind LVS
- To add a node: sync the configuration from the other nodes, verify that it works, then add it to the LVS real-server (RS) list so it starts serving traffic
- To remove a node: delete it from the LVS RS list and it stops serving traffic
- To change a node's configuration: remove the Kong RS from LVS first, restart it after the change, verify that it works, then put it back into the LVS RS list
- gRPC is supported
```
# /etc/kong/kong.conf
proxy_listen = 0.0.0.0:8000, 0.0.0.0:8443 ssl, 0.0.0.0:9080 http2
```
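With http2 enabled on a proxy listener as above, a gRPC backend can be exposed roughly like this (service name, host, port, and path are illustrative, and this assumes a Kong version with gRPC support):

```bash
# Register a gRPC backend as a service
curl -X POST http://localhost:8001/services \
  --data "name=grpc-demo" \
  --data "protocol=grpc" \
  --data "host=localhost" \
  --data "port=50051"

# Route gRPC calls to it (gRPC routes match on paths/hosts/headers)
curl -X POST http://localhost:8001/services/grpc-demo/routes \
  --data "protocols[]=grpc" \
  --data "paths[]=/demo.DemoService"
```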
- Service monitoring
- Prometheus, Grafana, and Kong fit together naturally (a sketch of enabling the plugin is below)
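A minimal sketch of turning on the bundled Prometheus plugin globally through the Admin API, assuming a Kong version that exposes metrics on the Admin API's /metrics endpoint for Prometheus to scrape:

```bash
# Enable the bundled Prometheus plugin for all services
curl -X POST http://localhost:8001/plugins --data "name=prometheus"

# Metrics endpoint that Prometheus scrapes
curl http://localhost:8001/metrics
```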
Things to note
- The kong/templates/nginx_kong.lua template file should be kept identical across all nodes
- Kong's logs are rotated with logrotate (a sketch is below)
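A minimal logrotate rule for Kong's NGINX logs might look like the sketch below; it assumes the default prefix /usr/local/kong, and the paths, schedule, and retention are all illustrative:

```bash
# Illustrative logrotate rule; adjust paths and policy to your deployment
cat > /etc/logrotate.d/kong <<'EOF'
/usr/local/kong/logs/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    postrotate
        # ask Kong's nginx master process to reopen its log files
        kill -USR1 $(cat /usr/local/kong/pids/nginx.pid)
    endscript
}
EOF
```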
Kong plugins
- Kong ships with many built-in plugins for authentication, rate limiting, logging, and so on, and you can also write custom plugins. Once a plugin is loaded successfully, it can be added and used from this interface
- Different plugins take different parameters; once the parameters are set, the plugin is enabled
For example, the built-in key-auth plugin handles API authentication: after a key is configured, only requests that pass authentication get through (a sketch of configuring it via the Admin API follows)
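A minimal sketch of wiring up key-auth through the Admin API (the service name hello, consumer name, and key are illustrative):

```bash
# Enable key-auth on the "hello" service
curl -X POST http://localhost:8001/services/hello/plugins --data "name=key-auth"

# Create a consumer and issue it an API key
curl -X POST http://localhost:8001/consumers --data "username=demo-user"
curl -X POST http://localhost:8001/consumers/demo-user/key-auth --data "key=my-secret-key"

# Requests must now carry the key, e.g. in the apikey header
curl http://localhost:8000/hello -H "apikey: my-secret-key"
```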
- Custom development and deployment: plugins are developed according to business needs
The basic flow
- Download the template
The simple (base) version:
```
simple-plugin
├── handler.lua
└── schema.lua
```
The complete (advanced) version:
```
complete-plugin
├── api.lua
├── daos.lua
├── handler.lua
├── migrations
│   ├── cassandra.lua
│   └── postgres.lua
└── schema.lua
```
The required files are handler.lua and schema.lua
- Modify the handler.lua file. It defines a set of functions in which the custom logic is written; Kong calls the corresponding function at each phase of the request lifecycle
```lua
local BasePlugin = require "kong.plugins.base_plugin"

-- The actual logic is implemented in these modules
local access = require "kong.plugins.my-custom-plugin.access"
local body_filter = require "kong.plugins.my-custom-plugin.body_filter"

local CustomHandler = BasePlugin:extend()

function CustomHandler:new()
  CustomHandler.super.new(self, "my-custom-plugin")
end

function CustomHandler:access(config)
  CustomHandler.super.access(self)
  -- Execute any function from the module loaded in `access`,
  -- for example, `execute()`, passing it the plugin's configuration.
  access.execute(config)
end

function CustomHandler:body_filter(config)
  CustomHandler.super.body_filter(self)
  -- Execute any function from the module loaded in `body_filter`,
  -- for example, `execute()`, passing it the plugin's configuration.
  body_filter.execute(config)
end

return CustomHandler
```
- Modify the schema.lua file, which mainly declares the plugin's configuration parameters and their validation
```lua
return {
  no_consumer = true, -- this plugin will only be applied to Services or Routes
  fields = {
    -- Describe your plugin's configuration schema here.
  },
  self_check = function(schema, plugin_t, dao, is_updating)
    -- perform any custom verification
    return true
  end
}
```
- Configure and deploy
The plugin files live in:
```
/data/kong/plugins/simple-plugin/
```
Configure kong.conf:
```
# the trailing ";;" keeps Lua's default search path
lua_package_path = /data/?.lua;;
plugins = bundled,simple-plugin
```
If the plugin's Lua code has no errors, you can see it loaded in the dashboard; you can also verify it through the Admin API, as sketched below
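A quick command-line check (the plugin name simple-plugin and service name hello are illustrative):

```bash
# Verify that this node knows about the custom plugin
curl -s http://localhost:8001/ | grep -o "simple-plugin"

# Then enable it on a service, just like any bundled plugin
curl -X POST http://localhost:8001/services/hello/plugins --data "name=simple-plugin"
```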
Problems encountered in production
- Upstream response buffering errors required adjusting the proxy buffer configuration
```
nginx_proxy_proxy_buffer_size = 128k
nginx_proxy_proxy_buffers = 4 256k
nginx_proxy_proxy_busy_buffers_size = 256k
```
- Pay attention to these configuration properties
- The strip_path property
If set to true, the matching path prefix from paths is stripped before the request is proxied upstream. For example:
```
Route: {"paths": ["/service"], "strip_path": true, "service": {"id": "..."}}

Request:  GET /service/path/to/resource HTTP/1.1
          Host: ...
Proxied:  GET /path/to/resource HTTP/1.1

Route: {"paths": ["/version/\d+/service"], "strip_path": true, "service": {"id": "..."}}

Request:  GET /version/1/service/path/to/resource HTTP/1.1
Proxied:  GET /path/to/resource HTTP/1.1
```
- The preserve_host property
If set to true, the client's original Host header is kept when the request is proxied upstream; if set to false, the Host header is replaced with the service's host. For example:
```
preserve_host = true
Request:  GET / HTTP/1.1
          Host: service.com
Proxied:  GET / HTTP/1.1
          Host: service.com

preserve_host = false
Request:  GET / HTTP/1.1
          Host: service.com
Proxied:  GET / HTTP/1.1
          Host: <my-service-host.com>
```
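Both properties can be set when the route is created; a minimal sketch via the Admin API (the path and the service id placeholder are illustrative):

```bash
# Create a route that keeps the client's Host header and does not strip the path prefix
curl -X POST http://localhost:8001/routes \
  --data "paths[]=/service" \
  --data "strip_path=false" \
  --data "preserve_host=true" \
  --data "service.id={service-id}"
```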
Conclusion
Kong is a mature open-source gateway product that performs well in terms of performance, ease of use, and extensibility.
Kong gives service governance wings: service degradation, circuit breaking, and traffic scheduling can be done more elegantly, conveniently, and intelligently.
It lets us move freely across a complex architecture spanning a self-built Kubernetes private cloud, Hulk cloud, Alibaba Cloud, and more.