0x00 environment

I installed Apisix 1.5 from source on macOS. etcd and other services are deployed with docker-compose. Apisix has Docker deployment tutorials, so environment deployment is not covered here.

Apisix v1.5 download: github.com/apache/apis…

The debugging code I used during the analysis: github.com/tzssangglas… It divides the startup process into several steps, prepares the debugging environment, makes step-by-step debugging easier, and clarifies the startup flow.

0x01 start

Apisix is installed at /usr/local/Cellar/apisix/apache-apisix-1.5, and the startup command is ./bin/apisix restart

The startup command corresponds to the /usr/local/Cellar/apisix/apache-apisix-1.5/bin/apisix script, whose first line of code is

#!/usr/bin/env lua

On Unix-like operating systems, a script's first line can start with #! (this is called a shebang, or pound bang), followed by the absolute path of the interpreter used to run the script. Here, however, it is obviously not followed by an absolute path.

env is an executable that tells the operating system to look up the interpreter in the directories of the current $PATH environment variable.

# check the PATH environment variable
echo $PATH
# /usr/local/opt/[email protected]/bin:/usr/local/Cellar/[email protected]/1.15.8.3/openresty/luajit/bin:/usr/local/Cellar/apisix/apache-apisix-1.5/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public:/Applications/Wireshark.app/Contents/MacOS

env lua: find an interpreter named lua using the $PATH configured in the current environment.

# check the location of the lua command
which lua
# /usr/local/Cellar/apisix/apache-apisix-1.5

As you can see, lua is located under one of the $PATH directories. If you delete the lua interpreter in this path and no other path contains one, starting Apisix fails with an error:

env: lua: No such file or directory

The main point of this approach is to make the script work on different systems, in different environments, and under different user permissions, as long as the interpreter's location is properly included in the $PATH environment variable.
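The lookup that env performs can be reproduced by hand. A minimal sketch, using sh as a stand-in interpreter since lua may not be installed everywhere:

```shell
# Write a script whose shebang defers interpreter lookup to env.
cat > /tmp/demo-shebang.sh <<'EOF'
#!/usr/bin/env sh
echo "resolved interpreter via PATH"
EOF
chmod +x /tmp/demo-shebang.sh

# env walks the directories in $PATH left to right and execs the first
# matching executable, so the script is not pinned to one absolute path.
command -v sh        # where the interpreter was found
/tmp/demo-shebang.sh
```

If sh were later removed from every $PATH directory, running the script would fail with the same kind of error as the env: lua one above.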

0x02 start

The function corresponding to the start directive

function _M.start(...)
    -- check running
    local pid_path = apisix_home .. "/logs/nginx.pid"
    local pid, err = read_file(pid_path)  -- read /usr/local/apisix/logs/nginx.pid
    ...
end

The apisix_home referred to in the above code is defined elsewhere in the script. It holds the current apisix installation directory, and determining this value is a bit tedious: with different systems and different installation methods, the installation directory cannot be known in advance, so a separate piece of code is dedicated to computing it.

Apisix_home: determine the apisix installation directory

local apisix_home = "/usr/local/apisix"  -- default installation path on Linux

/usr/local/apisix is the default installation directory in a Linux environment, but not always: the user may have installed it elsewhere. For example, mine is not there.

So apisix does some processing on apisix_home: it calls a function that executes the pwd command to get the absolute path of the script, as follows.

-- my startup command is ./bin/apisix start
-- if the start command begins with './'
if script_path:sub(1, 2) == './' then
    -- run the pwd command to get the current path
    apisix_home = trim(excute_cmd("pwd"))
    if not apisix_home then
        error("failed to fetch current path")
    end

    -- check whether the installation is under the root directory
    if string.match(apisix_home, '^/[root][^/]+') then
        is_root_path = true
    end
end

The excute_cmd function is as follows

-- note: the return value of `excute_cmd` has a newline character at the end;
-- it is recommended to use the `trim` function to process the return value
local function excute_cmd(cmd)
    -- io.popen executes the command (with the same result as running it
    -- directly in a shell, except that the output is read back through a
    -- file handle); here the command is pwd
    local t, err = io.popen(cmd)  -- the returned t is actually a file handle
    if not t then
        return nil, "failed to execute command: " .. cmd .. ", error info:" .. err
    end

    local data = t:read("*all")   -- read the whole output
    t:close()
    return data   -- on my machine, running pwd from the location of the apisix
                  -- script returns "/usr/local/Cellar/apisix/apache-apisix-1.5/bin\n"
                  -- the trailing \n is a newline character
end

I set up a Lua script debugging environment with IDEA and the EmmyLua plugin, and stepped through the excute_cmd function, as follows.

io.popen() description:

  • io.popen([prog [, mode]])
  • Explanation: starts the program prog in a separate process and returns a file handle for prog. The function can be used to run a command (program) and returns a file descriptor associated with it, usually carrying the program's output. The open mode is specified by the parameter mode, whose value can be "r" (read) or "w" (write); the default is read.
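The trailing newline that excute_cmd's callers have to trim can also be seen from the shell; $(...) command substitution strips it, which is exactly what apisix's trim helper does:

```shell
raw=$(pwd; printf x)   # append a sentinel so the trailing \n survives capture
raw=${raw%x}           # drop the sentinel; raw still ends with "\n"
trimmed=$(pwd)         # plain $(...) already strips the trailing newline

printf '%s' "$raw"     | wc -c   # one byte longer than trimmed
printf '%s' "$trimmed" | wc -c
```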

After the above processing, apisix_home now holds the correct apisix installation directory. In my environment, apisix_home = /usr/local/Cellar/apisix/apache-apisix-1.5. Continue with the start function.

Read_file: reads nginx.pid and checks whether apisix is running
function _M.start(...)
    -- check running
    local pid_path = apisix_home .. "/logs/nginx.pid"
    local pid, err = read_file(pid_path)  -- read /usr/local/apisix/logs/nginx.pid
    -- if the nginx process id exists
    if pid then
        local hd = io.popen("lsof -p " .. pid)
        local res = hd:read("*a")
        if res and res ~= "" then
            print("APISIX is running...")
            return nil
        end
    end

    -- start apisix
    init(...)
    -- start etcd
    init_etcd(...)

    local cmd = openresty_args
    -- print(cmd)
    os.execute(cmd)
end

nginx.pid stores the process id of the current nginx master; if that pid exists and the process is alive, apisix has already been started. Let's look at nginx.pid.

Line 4 essentially reads nginx.pid from the apisix runtime directory.

Here the file handle's read function is used with the parameters "*a" and "*all"; comparing the debug results, the two should mean the same thing: read the entire file starting from the current position. If the position is already at the end of the file, or the file is empty, the call returns an empty string.

“*all” is written for lua5.1, see www.lua.org/pil/21.1.ht…

“*a” is written for lua5.2, refer to www.lua.org/manual/5.2/…

At line 13, it checks whether nginx is running; if it is, the function prints a message and returns.
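The running check can be sketched in shell; kill -0 is used here as a portable stand-in for the lsof probe, and the pid file is a temporary one for the demo:

```shell
pid_path=/tmp/demo-nginx.pid
echo $$ > "$pid_path"             # pretend our own shell is the nginx master

pid=$(cat "$pid_path" 2>/dev/null)
# kill -0 sends no signal; it only tests whether the process exists
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    echo "APISIX is running..."   # same early-return message as _M.start
fi
```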

Init: start apisix

From the start function we can see that init is called; the three dots (...) in the parameter list indicate that the function accepts a variable number of arguments.

local function init()
    -- check whether the apisix installation directory is under the root path
    if is_root_path then
        print('Warning! Running apisix under /root is only suitable for development environments'
              .. ' and it is dangerous to do so. It is recommended to run APISIX in a directory'
              .. ' other than /root.')
    end

    -- read_yaml_conf
    -- read the configuration file
    local yaml_conf, err = read_yaml_conf()
    if not yaml_conf then
        error("failed to read local yaml config of apisix: " .. err)
    end
    -- print("etcd: ", yaml_conf.etcd.host)
    ...
end

Start with the read_yaml_conf() function

local function read_yaml_conf()
    -- load the profile module
    local profile = require("apisix.core.profile")
    -- assign apisix_home to the profile module so that it knows
    -- where apisix is installed
    profile.apisix_home = apisix_home .. "/"
    -- the yaml_path function of the profile module concatenates
    -- apisix_home .. "conf/" .. "config" .. ".yaml"
    -- in my case, the result of this concatenation is the string
    -- "/usr/local/Cellar/apisix/apache-apisix-1.5/conf/config.yaml"
    -- that is, where the apisix configuration file is located
    local local_conf_path = profile:yaml_path("config")
    -- read the configuration file
    local ymal_conf, err = read_file(local_conf_path)
    if not ymal_conf then
        return nil, err
    end
    -- parse the yaml configuration file with the tinyyaml module
    return yaml.parse(ymal_conf)
end

The yaml configuration is parsed into table values. The stack information of yaml_conf is shown below; you can see that the yaml has been converted into table form.
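The path concatenation performed by profile:yaml_path boils down to string glue; a sketch with the apisix_home value from my environment:

```shell
apisix_home=/usr/local/Cellar/apisix/apache-apisix-1.5
name=config

# profile:yaml_path("config") => apisix_home .. "/" .. "conf/" .. name .. ".yaml"
local_conf_path="${apisix_home}/conf/${name}.yaml"
echo "$local_conf_path"
```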

Continuing down:

local function init()
    ...
    -- get the openresty build information
    local or_ver = excute_cmd("openresty -V 2>&1")
    local with_module_status = true
    -- check whether the http_stub_status_module module was compiled in
    if or_ver and not or_ver:find("http_stub_status_module", 1, true) then
        io.stderr:write("'http_stub_status_module' module is missing in "
                        .. "your openresty, please check it out. Without this "
                        .. "module, there will be fewer monitoring indicators.\n")
        with_module_status = false
    end
    ...
end

This section is equivalent to running openresty -V 2>&1 | grep http_stub_status_module. It mainly checks whether openresty was compiled with the http_stub_status_module module; if not, there will be fewer monitoring indicators.
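The check amounts to a substring search over the build banner. A sketch with a captured banner string, so it runs even where openresty is not installed (the banner text here is illustrative, not a real openresty -V output):

```shell
# stand-in for: or_ver=$(openresty -V 2>&1)
or_ver='configure arguments: --with-http_stub_status_module --with-http_v2_module'

if printf '%s' "$or_ver" | grep -q 'http_stub_status_module'; then
    echo "stub_status module compiled in"
else
    echo "'http_stub_status_module' module is missing" >&2
fi
```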

The following section prepares the system context. It mainly validates and converts the yaml_conf parameters and gathers information about the runtime environment, finally producing a complete sys_conf in table form.

local function init()
    ...
    -- used by template.render
    -- prepare the context information
    local sys_conf = {
        lua_path = pkg_path_org,
        lua_cpath = pkg_cpath_org,
        -- get information about the current system
        os_name = trim(excute_cmd("uname")),
        apisix_lua_home = apisix_home,
        with_module_status = with_module_status,
        error_log = {level = "warn"},
    }

    -- a series of checks
    if not yaml_conf.apisix then
        error("failed to read `apisix` field from yaml file")
    end

    if not yaml_conf.nginx_config then
        error("failed to read `nginx_config` field from yaml file")
    end
    -- check whether it is a 32-bit machine
    if is_32bit_arch() then
        -- worker_rlimit_core: nginx directive that sets the maximum size of
        -- core files for worker processes, used to raise the limit without
        -- restarting the main process
        sys_conf["worker_rlimit_core"] = "4G"
    else
        sys_conf["worker_rlimit_core"] = "16G"
    end
    -- transfer the configuration in yaml_conf to sys_conf
    for k, v in pairs(yaml_conf.apisix) do
        sys_conf[k] = v
    end
    for k, v in pairs(yaml_conf.nginx_config) do
        sys_conf[k] = v
    end
    -- parameter verification & tuning
    local wrn = sys_conf["worker_rlimit_nofile"]
    local wc = sys_conf["event"]["worker_connections"]
    if not wrn or wrn <= wc then
        -- ensure the number of fds is slightly larger than the number of conn
        sys_conf["worker_rlimit_nofile"] = wc + 128
    end
    -- whether dev mode is enabled
    if sys_conf["enable_dev_mode"] == true then
        -- if dev mode is enabled, use a single worker
        sys_conf["worker_processes"] = 1
        sys_conf["enable_reuseport"] = false
    else
        sys_conf["worker_processes"] = "auto"
    end
    -- whether an external DNS resolver is configured
    local dns_resolver = sys_conf["dns_resolver"]
    if not dns_resolver or #dns_resolver == 0 then
        -- if no DNS resolver is configured, use the local one
        local dns_addrs, err = local_dns_resolver("/etc/resolv.conf")
        if not dns_addrs then
            error("failed to import local DNS: " .. err)
        end

        if #dns_addrs == 0 then
            error("local DNS is empty")
        end
        sys_conf["dns_resolver"] = dns_addrs
    end
    -- at this point, sys_conf is almost ready
    ...
end

This part is not complicated, though it looks like a lot of code. Part of it is parameter verification, part of it is to get the server information at run time.

This is basically all around preparing sys_conf.

yaml_conf contributes the user-configured part of sys_conf, such as whether dev mode is enabled.

sys_conf combines yaml_conf with the operating system environment, the OpenResty build information, the apisix installation location, the Lua path variables, and so on, to define the behavior of nginx; in other words, it is what gets rendered into the nginx.conf configuration file.
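The fd-limit adjustment in the code above is simple arithmetic; a sketch using the worker_connections value from the rendered nginx.conf shown later, with worker_rlimit_nofile treated as not configured:

```shell
worker_connections=10620
worker_rlimit_nofile=0   # 0 stands for "not configured" in this sketch

# ensure the number of fds is slightly larger than the number of connections
if [ "$worker_rlimit_nofile" -le "$worker_connections" ]; then
    worker_rlimit_nofile=$((worker_connections + 128))
fi
echo "$worker_rlimit_nofile"   # 10748
```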

The final sys_conf stack information is shown below

To convert sys_conf into nginx.conf, apisix uses a library called lua-resty-template; its usage can be found at github.com/bungle/lua-…
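What template.compile and conf_render do is substitute sys_conf values into placeholders in the template text; lua-resty-template's {* expr *} syntax outputs the expression's value. A toy rendering with sed, just to show the idea (this is not the real engine):

```shell
ngx_tpl='worker_processes {*worker_processes*};
worker_rlimit_core {*worker_rlimit_core*};'

# "render" the template by substituting the sys_conf values into it
printf '%s\n' "$ngx_tpl" \
    | sed -e 's/{\*worker_processes\*}/auto/' \
          -e 's/{\*worker_rlimit_core\*}/16G/'
```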

local function init()
    ...
    -- get the template rendering engine; ngx_tpl is the nginx.conf template
    -- defined in the code, too long to show here
    local conf_render = template.compile(ngx_tpl)
    -- render the template with sys_conf to get the nginx.conf content
    local ngxconf = conf_render(sys_conf)

    -- write ngxconf to a file named nginx.conf
    local ok, err = write_file(apisix_home .. "/conf/nginx.conf", ngxconf)
    if not ok then
        error("failed to update nginx.conf: " .. err)
    end

    -- get the openresty version number
    local op_ver = get_openresty_version()
    -- in my environment, op_ver = 1.15.8.3
    if op_ver == nil then
        io.stderr:write("can not find openresty\n")
        return
    end

    local need_ver = "1.15.8"
    -- verify the openresty version: split op_ver and need_ver into arrays
    -- and compare them element by element
    if not check_or_version(op_ver, need_ver) then
        io.stderr:write("openresty version must >=", need_ver, " current ", op_ver, "\n")
        return
    end
end

The get_openresty_version and check_or_version functions are straightforward, so they are not expanded here.
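The described behavior of check_or_version (split both version strings on dots and compare field by field) can be sketched in shell. This is my reconstruction, not the original Lua code, and only the first three fields are compared:

```shell
# Reconstruction of the comparison: split on '.', compare numerically.
check_or_version() {
    v1=$1; v2=$2
    IFS='.'; set -- $v1; a1=${1:-0}; a2=${2:-0}; a3=${3:-0}
    IFS='.'; set -- $v2; b1=${1:-0}; b2=${2:-0}; b3=${3:-0}
    unset IFS
    for pair in "$a1 $b1" "$a2 $b2" "$a3 $b3"; do
        set -- $pair
        [ "$1" -gt "$2" ] && return 0   # strictly newer: ok
        [ "$1" -lt "$2" ] && return 1   # older: fail
    done
    return 0                            # equal: ok
}

check_or_version 1.15.8.3 1.15.8 && echo "version ok"
```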

ngxconf, rendered by the template engine, is a string value with the following contents.

The content is then written out, producing the following nginx.conf file:

# Configuration File - Nginx Server Configs
# This is a read-only file, do not try to modify it.

master_process on;

worker_processes auto;

error_log logs/error.log warn;
pid logs/nginx.pid;

worker_rlimit_nofile 20480;

events {
    accept_mutex off;
    worker_connections 10620;
}

worker_rlimit_core  16G;

worker_shutdown_timeout 240s;

env APISIX_PROFILE;


http {
    lua_package_path  "$prefix/deps/share/lua/5.1/?.lua;$prefix/deps/share/lua/5.1/?/init.lua;/usr/local/Cellar/apisix/apache-apisix-1.5/?.lua;/usr/local/Cellar/apisix/apache-apisix-1.5/?/init.lua;;./?.lua;/usr/local/Cellar/[email protected]/1.15.8.3/openresty/luajit/share/luajit-2.1.0-beta3/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/usr/local/Cellar/[email protected]/1.15.8.3/openresty/luajit/share/lua/5.1/?.lua;/usr/local/Cellar/[email protected]/1.15.8.3/openresty/luajit/share/lua/5.1/?/init.lua;/Users/tuzhengsong/Library/Application Support/JetBrains/IntelliJIdea2020.2/plugins/intellij-emmylua/classes/debugger/mobdebug/?.lua;/Users/tuzhengsong/IdeaProjects/apisix-learning/src/?.lua;/Users/tuzhengsong/IdeaProjects/apisix-learning/?.lua;;/usr/local/Cellar/apisix/apache-apisix-1.5/deps/share/lua/5.1/?.lua;/usr/local/Cellar/apisix/apache-apisix-1.5/deps/share/lua/5.1/?/?.lua;/usr/local/Cellar/apisix/apache-apisix-1.5/deps/share/lua/5.1/?.lua;";
    lua_package_cpath "$prefix/deps/lib64/lua/5.1/?.so;$prefix/deps/lib/lua/5.1/?.so;;./?.so;/usr/local/lib/lua/5.1/?.so;/usr/local/Cellar/[email protected]/1.15.8.3/openresty/luajit/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so;";

    lua_shared_dict plugin-limit-req     10m;
    lua_shared_dict plugin-limit-count   10m;
    lua_shared_dict prometheus-metrics   10m;
    lua_shared_dict plugin-limit-conn    10m;
    lua_shared_dict upstream-healthcheck 10m;
    lua_shared_dict worker-events        10m;
    lua_shared_dict lrucache-lock        10m;
    lua_shared_dict skywalking-tracing-buffer    100m;


    # for openid-connect plugin
    lua_shared_dict discovery             1m; # cache for discovery metadata documents
    lua_shared_dict jwks                  1m; # cache for JWKs
    lua_shared_dict introspection        10m; # cache for JWT verification results

    # for custom shared dict

    # for proxy cache
    proxy_cache_path /tmp/disk_cache_one levels=1:2 keys_zone=disk_cache_one:50m inactive=1d max_size=1G;

    # for proxy cache
    map $upstream_cache_zone $upstream_cache_zone_info {
        disk_cache_one /tmp/disk_cache_one,1:2;
    }

    lua_ssl_verify_depth 5;
    ssl_session_timeout 86400;

    underscores_in_headers on;

    lua_socket_log_errors off;

    resolver 192.168.1.1 192.168.0.1 valid=30;
    resolver_timeout 5;

    lua_http10_buffering off;

    lua_regex_match_limit 100000;
    lua_regex_cache_max_entries 8192;

    log_format main '$remote_addr - $remote_user [$time_local] $http_host "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_status $upstream_response_time';

    access_log logs/access.log main buffer=16384 flush=3;
    open_file_cache  max=1000 inactive=60;
    client_max_body_size 0;
    keepalive_timeout 60s;
    client_header_timeout 60s;
    client_body_timeout 60s;
    send_timeout 10s;

    server_tokens off;
    more_set_headers 'Server: APISIX web server';

    include mime.types;
    charset utf-8;

    real_ip_header X-Real-IP;

    set_real_ip_from 127.0.0.1;
    set_real_ip_from unix:;

    upstream apisix_backend {
        server 0.0.0.1;
        balancer_by_lua_block {
            apisix.http_balancer_phase()
        }

        keepalive 320;
    }

    init_by_lua_block {
        require "resty.core"
        apisix = require("apisix")
        local dns_resolver = {"192.168.1.1", "192.168.0.1"}
        local args = {
            dns_resolver = dns_resolver,
        }
        apisix.http_init(args)
    }

    init_worker_by_lua_block {
        apisix.http_init_worker()
    }


    server {
        listen 9080 reuseport;
        listen 9443 ssl http2 reuseport;


        listen [::]:9080 reuseport;
        listen [::]:9443 ssl http2 reuseport;

        ssl_certificate      cert/apisix.crt;
        ssl_certificate_key  cert/apisix.key;
        ssl_session_cache    shared:SSL:20m;
        ssl_session_timeout 10m;

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers on;

        location = /apisix/nginx_status {
            allow 127.0.0.0/24;
            deny all;
            access_log off;
            stub_status;
        }

        location /apisix/admin {
                allow 127.0.0.0/24;
                deny all;

            content_by_lua_block {
                apisix.http_admin()
            }
        }

        location /apisix/dashboard {
                allow 127.0.0.0/24;
                deny all;

            alias dashboard/;

            try_files $uri $uri/index.html /index.html =404;
        }

        ssl_certificate_by_lua_block {
            apisix.http_ssl_phase()
        }

        location / {
            set $upstream_mirror_host        '';
            set $upstream_scheme             'http';
            set $upstream_host               $host;
            set $upstream_upgrade            '';
            set $upstream_connection         '';
            set $upstream_uri                '';

            access_by_lua_block {
                apisix.http_access_phase()
            }

            proxy_http_version 1.1;
            proxy_set_header   Host              $upstream_host;
            proxy_set_header   Upgrade           $upstream_upgrade;
            proxy_set_header   Connection        $upstream_connection;
            proxy_set_header   X-Real-IP         $remote_addr;
            proxy_pass_header  Server;
            proxy_pass_header  Date;

            ### the following x-forwarded-* headers is to send to upstream server

            set $var_x_forwarded_for        $remote_addr;
            set $var_x_forwarded_proto      $scheme;
            set $var_x_forwarded_host       $host;
            set $var_x_forwarded_port       $server_port;

            if ($http_x_forwarded_for != "") {
                set $var_x_forwarded_for "${http_x_forwarded_for}, ${realip_remote_addr}";
            }
            if ($http_x_forwarded_proto != "") {
                set $var_x_forwarded_proto $http_x_forwarded_proto;
            }
            if ($http_x_forwarded_host != "") {
                set $var_x_forwarded_host $http_x_forwarded_host;
            }
            if ($http_x_forwarded_port != "") {
                set $var_x_forwarded_port $http_x_forwarded_port;
            }

            proxy_set_header   X-Forwarded-For      $var_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto    $var_x_forwarded_proto;
            proxy_set_header   X-Forwarded-Host     $var_x_forwarded_host;
            proxy_set_header   X-Forwarded-Port     $var_x_forwarded_port;

            ### the following configuration is to cache response content from upstream server

            set $upstream_cache_zone            off;
            set $upstream_cache_key             '';
            set $upstream_cache_bypass          '';
            set $upstream_no_cache              '';
            set $upstream_hdr_expires           '';
            set $upstream_hdr_cache_control     '';

            proxy_cache                         $upstream_cache_zone;
            proxy_cache_valid                   any 10s;
            proxy_cache_min_uses                1;
            proxy_cache_methods                 GET HEAD;
            proxy_cache_lock_timeout            5s;
            proxy_cache_use_stale               off;
            proxy_cache_key                     $upstream_cache_key;
            proxy_no_cache                      $upstream_no_cache;
            proxy_cache_bypass                  $upstream_cache_bypass;

            proxy_hide_header                   Cache-Control;
            proxy_hide_header                   Expires;
            add_header      Cache-Control       $upstream_hdr_cache_control;
            add_header      Expires             $upstream_hdr_expires;
            add_header      Apisix-Cache-Status $upstream_cache_status always;

            proxy_pass      $upstream_scheme://apisix_backend$upstream_uri;
            mirror          /proxy_mirror;

            header_filter_by_lua_block {
                apisix.http_header_filter_phase()
            }

            body_filter_by_lua_block {
                apisix.http_body_filter_phase()
            }

            log_by_lua_block {
                apisix.http_log_phase()
            }
        }

        location @grpc_pass {

            access_by_lua_block {
                apisix.grpc_access_phase()
            }

            grpc_set_header   Content-Type application/grpc;
            grpc_socket_keepalive on;
            grpc_pass         grpc://apisix_backend;

            header_filter_by_lua_block {
                apisix.http_header_filter_phase()
            }

            body_filter_by_lua_block {
                apisix.http_body_filter_phase()
            }

            log_by_lua_block {
                apisix.http_log_phase()
            }
        }

        location = /proxy_mirror {
            internal;

            if ($upstream_mirror_host = "") {
                return 200;
            }

            proxy_pass $upstream_mirror_host$request_uri;
        }
    }
}

At this point, the startup process is not complete, just the required nginx.conf configuration file is generated.

Init_etcd: start etcd

This section reads the etcd-related settings in yaml_conf.

local function init_etcd(show_output)
    -- read yaml_conf
    local yaml_conf, err = read_yaml_conf()
    -- a series of parameter checks
    if not yaml_conf then
        error("failed to read local yaml config of apisix: " .. err)
    end

    if not yaml_conf.apisix then
        error("failed to read `apisix` field from yaml file when init etcd")
    end

    if yaml_conf.apisix.config_center ~= "etcd" then
        return true
    end

    if not yaml_conf.etcd then
        error("failed to read `etcd` field from yaml file when init etcd")
    end
    -- finally get the etcd-related parameters
    local etcd_conf = yaml_conf.etcd
    ...
end

Etcd_conf configuration stack information is as follows

Continuing:

local function init_etcd(show_output)
    ...
    local timeout = etcd_conf.timeout or 3
    local uri
    -- convert the old single-etcd configuration to the multi-etcd form
    if type(yaml_conf.etcd.host) == "string" then
        yaml_conf.etcd.host = {yaml_conf.etcd.host}
    end
    -- get the addresses of the configured etcd cluster
    local host_count = #(yaml_conf.etcd.host)

    -- check whether the etcd v2 protocol is enabled
    for index, host in ipairs(yaml_conf.etcd.host) do
        uri = host .. "/v2/keys"
        -- concatenate the cmd command; in my environment it is
        -- curl -i -m 60 -o /dev/null -s -w %{http_code} http://127.0.0.1:2379/v2/keys
        -- which simply fetches the HTTP status code with curl
        local cmd = "curl -i -m " .. timeout * 2 .. " -o /dev/null -s -w %{http_code} " .. uri
        -- execute the command
        local res = excute_cmd(cmd)
        -- verify the returned result
        if res == "404" then
            -- io.stderr: standard error output
            io.stderr:write(string.format("failed: please make sure that you have enabled the v2 protocol of etcd on %s.\n", host))
            return
        end
    end
    ...
end

To verify whether the etcd v2 protocol is enabled, apisix runs curl against each configured etcd host and checks the returned status code.

This is equivalent to executing curl -i -m 60 -o /dev/null -s -w %{http_code} http://127.0.0.1:2379/v2/keys

The result is as follows

Continuing:

local function init_etcd(show_output)
    ...
    local etcd_ok = false
    -- iterate over the etcd cluster host addresses
    for index, host in ipairs(yaml_conf.etcd.host) do

        local is_success = true
        -- host --> http://127.0.0.1:2379
        -- etcd_conf.prefix --> /apisix
        uri = host .. "/v2/keys" .. (etcd_conf.prefix or "")
        -- prepare to create some directories in etcd
        for _, dir_name in ipairs({"/routes", "/upstreams", "/services",
                                   "/plugins", "/consumers", "/node_status",
                                   "/ssl", "/global_rules", "/stream_routes",
                                   "/proto"}) do
            -- concatenate the cmd command
            local cmd = "curl " .. uri .. dir_name .. "?prev_exist=false -X PUT -d dir=true "
                        .. "--connect-timeout " .. timeout
                        .. " --max-time " .. timeout * 2 .. " --retry 1 2>&1"
            -- cmd example: curl http://127.0.0.1:2379/v2/keys/apisix/routes?prev_exist=false -X PUT -d dir=true --connect-timeout 30 --max-time 60 --retry 1 2>&1
            local res = excute_cmd(cmd)
            -- check the curl result: neither "createdIndex" (keyword in the
            -- response on successful creation) nor "index" (keyword returned
            -- on repeated creation) was matched in the result
            if not res:find("index", 1, true)
                    and not res:find("createdIndex", 1, true) then
                -- reaching here means the resource (e.g. the /routes directory)
                -- could not be created in this loop: the result is neither a
                -- success response nor a duplicate-creation response
                is_success = false
                -- if this was the last etcd host, raise an error, because there
                -- is no other host left to create the resource on
                if (index == host_count) then
                    error(cmd .. "\n" .. res)
                end
                -- otherwise, leave the inner loop and try the next host
                break
            end
            -- whether to print cmd and res
            if show_output then
                print(cmd)
                print(res)
            end
        end
        -- if all resources were created successfully on this host, set etcd_ok
        -- and leave the (host) loop
        if is_success then
            etcd_ok = true
            break
        end
    end

    -- multiple etcd hosts may be configured, but as long as initialization
    -- succeeds on one of them, it is complete
    if not etcd_ok then
        error("none of the configured etcd works well")
    end
end

The cmd command is concatenated as:

curl http://127.0.0.1:2379/v2/keys/apisix/routes?prev_exist=false -X PUT -d dir=true --connect-timeout 30 --max-time 60 --retry 1 2>&1

The following parameters in this command are interpreted by the etcd v2 API:

  • prev_exist: checks whether the key exists: if prev_exist is true, it is an update request; if prev_exist is false, it is a create request
  • dir=true: creates a directory
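Assembling the per-directory command can be sketched in shell; uri and timeout are the values from my environment, and nothing is actually sent to etcd here:

```shell
uri=http://127.0.0.1:2379/v2/keys/apisix
timeout=30

for dir_name in /routes /upstreams /services /plugins /consumers \
                /node_status /ssl /global_rules /stream_routes /proto; do
    cmd="curl ${uri}${dir_name}?prev_exist=false -X PUT -d dir=true --connect-timeout ${timeout} --max-time $((timeout * 2)) --retry 1 2>&1"
    echo "$cmd"
done
```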

cmd creates a directory resource through the API; the etcd configuration involved is shown below.

The effect of cmd is as follows. On successful creation it returns:

{"action":"set","node":{"key":"/apiseven/routes","dir":true,"modifiedIndex":91,"createdIndex":91}}

If the same directory has already been created, it returns:

{"errorCode":102,"message":"Not a file","cause":"/apiseven/upstreams","index":89}

Therefore, when checking the result of cmd, both "index" and "createdIndex" are matched; if either one is found, the directory was created successfully (or already exists).
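That success check is plain substring matching on the curl output; a sketch over the two sample responses above:

```shell
created='{"action":"set","node":{"key":"/apiseven/routes","dir":true,"modifiedIndex":91,"createdIndex":91}}'
duplicate='{"errorCode":102,"message":"Not a file","cause":"/apiseven/upstreams","index":89}'

for res in "$created" "$duplicate"; do
    # mirrors res:find("index") / res:find("createdIndex") in init_etcd:
    # the success response matches via "createdIndex", the duplicate
    # response via the lowercase "index"
    if printf '%s' "$res" | grep -q -e 'index' -e 'createdIndex'; then
        echo "directory ok"
    else
        echo "creation failed"
    fi
done
```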

The logic of this two-layer loop is somewhat convoluted. It means that all directory resources must be created successfully on some host of the configured etcd cluster; even if they are not all created through the same host, etcd guarantees eventual consistency as long as each one is created successfully.

At this point, two important steps of the apisix startup process are complete:

  • creating nginx.conf
  • initializing the directory resources in etcd

Openresty_args: start openresty

The last part of the start function looks like this:

local openresty_args = [[openresty -p ]] .. apisix_home .. [[ -c ]] .. apisix_home .. [[/conf/nginx.conf]]

function _M.start(...)

    local cmd = openresty_args
    -- print(cmd)
    os.execute(cmd)
end

It’s time to start OpenResty; apisix is an application built on top of OpenResty.

The concatenated cmd command is:

openresty -p /usr/local/Cellar/apisix/apache-apisix-1.5 -c /usr/local/Cellar/apisix/apache-apisix-1.5/conf/nginx.conf

  • -p specifies the prefix (project) directory
  • -c specifies the configuration file

0x03 end

If there are mistakes, feel free to point them out; let’s discuss and learn together.