Nuster is a high-performance caching server based on HAProxy

Github.com/jiangwenyua…

Introduction

Nuster is a high-performance caching server based on HAProxy. It is fully compatible with HAProxy and uses HAProxy's ACL functionality to provide very fine-grained caching rules, such as:

  • Cache when the request address is xyz
  • Cache when request parameter X is Y
  • Cache when response header X is Y
  • Cache when the request rate exceeds a threshold
  • etc.
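For instance, a rule of the second kind ("cache when request parameter X is Y") can be sketched with a standard HAProxy ACL. The backend, parameter, and values below are hypothetical:

```
backend app
    mode http
    filter cache

    # cache for 60 seconds when query parameter "type" equals "hot"
    acl isHot url_param(type) hot
    cache-rule hot ttl 60 if isHot
```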

Performance

Very fast: roughly three times as fast as Nginx in single-process mode, twice as fast as Nginx in multi-process mode, and three times as fast as Varnish.

See the benchmark

Installation

make TARGET=linux2628
make install

See the HAProxy README for details.

Usage

Add cache on to the global section, then add filter cache and cache-rule to a backend or listen section.
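A minimal sketch of that layout (the proxy name, server address, and sizes are placeholders):

```
global
    cache on data-size 100m
defaults
    mode http
backend app
    filter cache on
    cache-rule all ttl 60
    server s1 127.0.0.1:8080
```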

Directives

cache

syntax: cache on|off [data-size size]

default: none

context: global

Controls whether caching is enabled. data-size sets the amount of memory used for cached data; the units m, M, g, and G can be used. The default is 1MB, which is also the minimum. Only the HTTP content counts toward this limit, not the memory overhead of cache management.
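For example, to enable caching with a 2 GB data store (the size here is chosen arbitrarily):

```
global
    cache on data-size 2g
```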

filter cache

syntax: filter cache [on|off]

default: on

context: backend, listen

Defines a cache filter, which is required in order to add cache-rules. It can be added to multiple proxies, and caching can be enabled or disabled per proxy. If multiple filters are defined, the cache filter must be placed last.
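For example, if a compression filter is also defined, the cache filter comes after it (a sketch; the proxy and server names are placeholders):

```
backend app
    mode http
    filter compression
    filter cache on
    cache-rule all
    server s1 127.0.0.1:8080
```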

cache-rule

syntax: cache-rule name [key KEY] [ttl TTL] [code CODE] [if|unless condition]

default: none

context: backend, listen

Defines cache rules. Multiple rules can be defined, but order matters: evaluation stops at the first match.

acl pathA path /a.html
filter cache
cache-rule all ttl 3600
cache-rule path01 ttl 60 if pathA

Rule path01 is never executed because rule all matches every request first.

name

Defines a name for the rule.

key KEY

Define key, which consists of the following keywords:

  • method: http method, GET/POST…
  • scheme: http or https
  • host: the host in the request
  • path: the URL path of the request
  • query: the whole query string of the request
  • header_NAME: the value of header NAME
  • cookie_NAME: the value of cookie NAME
  • param_NAME: the value of query NAME
  • body: the body of the request

The default key is method.scheme.host.path.query.body

Example

GET http://www.example.com/q?name=X&type=Y

http header:

GET /q?name=X&type=Y HTTP/1.1
Host: www.example.com
ASDF: Z
Cookie: logged_in=yes; user=nuster;

Will get:

  • method: GET
  • scheme: http
  • host: www.example.com
  • path: /q
  • query: name=X&type=Y
  • header_ASDF: Z
  • cookie_user: nuster
  • param_type: Y
  • body: (empty)

So the default key produces GEThttpwww.example.com/qname=X&type=Y, and the key method.scheme.host.path.header_ASDF.cookie_user.param_type produces GEThttpwww.example.com/qZnusterY.

If the key of a request can be found in the cache, the cache contents are returned.

ttl TTL

Defines the expiration time of the key; the units d, h, m, and s can be used. The default is 3600 seconds. Set it to 0 if you do not want the key to expire.
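For example (rule names and ACLs here are illustrative):

```
acl isStatic path_beg /static
cache-rule static ttl 1d if isStatic
cache-rule api ttl 30s
cache-rule forever ttl 0
```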

code CODE1,CODE2…

By default, only responses with status code 200 are cached. To cache other codes, list them explicitly; all caches responses with any status code.

cache-rule only200
cache-rule 200and404 code 200,404
cache-rule all code all

if|unless condition

Refer to section 7 of the HAProxy configuration documentation, Using ACLs and fetching samples, to define ACL conditions.
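A sketch of conditioned rules using both if and unless (the ACL name and match are illustrative):

```
acl isMobile hdr_sub(User-Agent) -i mobile
cache-rule mobilePage ttl 60 if isMobile
cache-rule otherPage ttl 600 unless isMobile
```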

FAQ

How to debug?

Add debug to the global section, or start haproxy with -d.

Cache-related debugging information starts with [CACHE]

How do I cache POST requests?

Add option http-buffer-request

If you define a custom key, include the body keyword.

The request body may be incomplete; see the option http-buffer-request section of the HAProxy configuration documentation for details.

You can also set up a separate backend for POST requests.

Example

global
    cache on data-size 100m
    #daemon
    ## to debug cache
    #debug
defaults
    retries 3
    option redispatch
    timeout client  30s
    timeout connect 30s
    timeout server  30s
frontend web1
    bind *:8080
    mode http
    acl pathPost path /search
    use_backend app1a if pathPost
    default_backend app1b
backend app1a
    balance roundrobin
    # mode must be http
    mode http

    # http-buffer-request must be enabled to cache post request
    option http-buffer-request

    acl pathPost path /search

    # enable cache for this proxy
    filter cache

    # cache /search for 120 seconds, only for POST/PUT
    cache-rule rpost ttl 120 if pathPost

    server s1 10.0.0.10:8080
backend app1b
    balance roundrobin
    mode http

    filter cache on

    # cache /a.jpg, never expire
    acl pathA path /a.jpg
    cache-rule r1 ttl 0 if pathA

    # cache /mypage, key contains cookie[userId], so it will be cached per user
    acl pathB path /mypage
    cache-rule r2 key method.scheme.host.path.query.cookie_userId ttl 60 if pathB

    # cache /a.html if response's header[cache] is yes
    http-request set-var(txn.pathC) path
    acl pathC var(txn.pathC) -m str /a.html
    acl resHdrCache1 res.hdr(cache) yes
    cache-rule r3 if pathC resHdrCache1

    # cache /heavy for 100 seconds if be_conn greater than 100
    acl heavypage path /heavy
    acl tooFast be_conn ge 100
    cache-rule heavy ttl 100 if heavypage tooFast 

    # cache all if response's header[asdf] is fdsa
    acl resHdrCache2 res.hdr(asdf)  fdsa
    cache-rule resCache ttl 0 if resHdrCache2

    server s1 10.0.0.10:8080
frontend web2
    bind *:8081
    mode http
    default_backend app2
backend app2
    balance     roundrobin
    mode http

    # disable cache on this proxy
    filter cache off
    cache-rule all

    server s2 10.0.0.11:8080
listen web3
    bind *:8082
    mode http

    filter cache
    cache-rule everything

    server s3 10.0.0.12:8080