The previous articles in this Keycloak series all covered single-node deployments. In a real production environment, however, Keycloak, as the underlying authentication service, has to run multiple instances in a cluster to ensure high availability. This article describes how to quickly deploy a highly available Keycloak cluster in production using Docker.
Problems to solve when deploying a Keycloak cluster
By default, data such as users, roles, and sessions is not synchronized between multiple Keycloak instances, so two problems have to be solved:
- Sharing or synchronizing realm, client, user, and role data between the Keycloak instances
- Sharing or synchronizing sessions between the Keycloak instances in the cluster
Keycloak Instance data sharing
Keycloak uses the embedded H2 database as its data store by default, so data sharing is achieved simply by replacing H2 with an external shared database. Keycloak supports MySQL, PostgreSQL, Oracle, SQL Server, and other databases.
Keycloak Session synchronization
To synchronize sessions between multiple Keycloak instances, you need to configure automatic discovery for the instances. There are several ways to do this; see the official Keycloak blog post Keycloak Cluster Setup. In practice, JDBC_PING is the most commonly recommended discovery mechanism.
Steps for deploying the Keycloak cluster with Docker
Build a Docker image that supports automatic discovery
The official Keycloak Docker image supports an external shared database, but it does not include an automatic discovery mechanism, so we need to build on top of the official image to add automatic discovery.
FROM jboss/keycloak:latest
ADD cli/TCPPING.cli /opt/jboss/tools/cli/jgroups/discovery/
ADD cli/JDBC_PING.cli /opt/jboss/tools/cli/jgroups/discovery/
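With the Dockerfile and the two CLI scripts in place, the image can be built locally; a minimal sketch, tagging it with the image name used in the run commands later in this article:
docker build -t vigoz/keycloak-ha:10.0.0-postgres .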
The key file here is JDBC_PING.cli. The official blog provides only the MySQL version:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=${env.CACHE_OWNERS:2})
/subsystem=jgroups/stack=tcp:remove()
/subsystem=jgroups/stack=tcp:add()
/subsystem=jgroups/stack=tcp/transport=TCP:add(socket-binding="jgroups-tcp")
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING:add()
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=datasource_jndi_name:add(value=java:jboss/datasources/KeycloakDS)
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=initialize_sql:add(value="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, ping_data varbinary(5000) DEFAULT NULL, PRIMARY KEY (own_addr, cluster_name)) ENGINE=InnoDB DEFAULT CHARSET=utf8")
/subsystem=jgroups/stack=tcp/protocol=MERGE3:add()
/subsystem=jgroups/stack=tcp/protocol=FD_SOCK:add(socket-binding="jgroups-tcp-fd")
/subsystem=jgroups/stack=tcp/protocol=FD:add()
/subsystem=jgroups/stack=tcp/protocol=VERIFY_SUSPECT:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.NAKACK2:add()
/subsystem=jgroups/stack=tcp/protocol=UNICAST3:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.STABLE:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.GMS:add()
/subsystem=jgroups/stack=tcp/protocol=pbcast.GMS/property=max_join_attempts:add(value=5)
/subsystem=jgroups/stack=tcp/protocol=MFC:add()
/subsystem=jgroups/stack=tcp/protocol=FRAG3:add()
/subsystem=jgroups/stack=udp:remove()
/subsystem=jgroups/channel=ee:write-attribute(name=stack, value=tcp)
/socket-binding-group=standard-sockets/socket-binding=jgroups-mping:remove()
run-batch
try
:resolve-expression(expression=${env.JGROUPS_DISCOVERY_EXTERNAL_IP})
/subsystem=jgroups/stack=tcp/transport=TCP/property=external_addr/:add(value=${env.JGROUPS_DISCOVERY_EXTERNAL_IP})
catch
echo "JGROUPS_DISCOVERY_EXTERNAL_IP maybe not set."
end-try
stop-embedded-server
If you want to use another database, replace the initialize_sql statement accordingly. For PostgreSQL, for example, use the following initialize_sql:
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=initialize_sql:add(value="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP, ping_data bytea DEFAULT NULL, PRIMARY KEY(own_addr, cluster_name))")
The vigoz/keycloak-ha image supports both MySQL and PostgreSQL.
Deploying a database
Deploying the database that Keycloak uses to store its data, such as MySQL or PostgreSQL, is not the focus of this article, so only a brief sketch is given below.
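For local testing, a minimal example of running PostgreSQL with Docker could look like the following; the database name, user, and password are assumptions chosen to match the Keycloak commands that follow, and this setup is not hardened for production:
docker run -d --name keycloak-db --restart=always \
-p 5432:5432 \
-e POSTGRES_DB=keycloak \
-e POSTGRES_USER=keycloak \
-e POSTGRES_PASSWORD=password \
-v /data/keycloak-db:/var/lib/postgresql/data \
postgres:12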
Run multiple instances of the built image
This section uses PostgreSQL as the example database.
Run keycloak1
docker run -d --name keycloak1 --restart=always \
-p 8080:8080 \
-p 8443:8443 \
-p 8009:8009 \
-p 9990:9990 \
-p 7600:7600 \
-p 57600:57600 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
-e DB_VENDOR=postgres \
-e DB_ADDR=localhost \
-e DB_PORT=5432 \
-e DB_DATABASE=keycloak \
-e DB_SCHEMA=public \
-e DB_USER=keycloak \
-e DB_PASSWORD=password \
-e JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING \
-e JGROUPS_DISCOVERY_EXTERNAL_IP=172.31.72.101 \
-e PROXY_ADDRESS_FORWARDING=true \
-e KEYCLOAK_FRONTEND_URL=https://your-domain/auth \
vigoz/keycloak-ha:10.0.0-postgres
Run keycloak2
docker run -d --name keycloak2 --restart=always \
-p 8080:8080 \
-p 8443:8443 \
-p 8009:8009 \
-p 9990:9990 \
-p 7600:7600 \
-p 57600:57600 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
-e DB_VENDOR=postgres \
-e DB_ADDR=localhost \
-e DB_PORT=5432 \
-e DB_DATABASE=keycloak \
-e DB_SCHEMA=public \
-e DB_USER=keycloak \
-e DB_PASSWORD=password \
-e JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING \
-e JGROUPS_DISCOVERY_EXTERNAL_IP=172.31.72.102 \
-e PROXY_ADDRESS_FORWARDING=true \
-e KEYCLOAK_FRONTEND_URL=https://your-domain/auth \
vigoz/keycloak-ha:10.0.0-postgres
Description of Docker environment variables
KEYCLOAK_USER: indicates the name of the Keycloak administrator
KEYCLOAK_PASSWORD: indicates the password of the Keycloak administrator
DB_VENDOR: specifies the database type to use
DB_ADDR: specifies the database address
DB_PORT: indicates the database port
DB_DATABASE: specifies the name of the database library
DB_SCHEMA: database Schema
DB_USER: specifies the database user name
DB_PASSWORD: specifies the database password
JGROUPS_DISCOVERY_PROTOCOL: indicates the automatic discovery protocol. JDBC_PING is recommended
JGROUPS_DISCOVERY_EXTERNAL_IP: the externally reachable IP address of the instance. Set it to the IP address of the host running the container so that the instances can communicate with each other
PROXY_ADDRESS_FORWARDING: proxy address forwarding. Set this to true when a load balancer or reverse proxy sits in front of Keycloak
KEYCLOAK_FRONTEND_URL: the base URL Keycloak uses for frontend requests, i.e. the externally visible address of the cluster behind the load balancer
After the above steps, you have a Keycloak cluster of two instances running.
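To verify that the two instances have actually formed a cluster, you can look for the Infinispan/JGroups cluster view in the logs, or inspect the JGROUPSPING table that JDBC_PING maintains in the shared database. A sketch, assuming the container names used above (keycloak1 and the hypothetical keycloak-db):
# Look for the cluster view reported by Infinispan/JGroups at startup
docker logs keycloak1 2>&1 | grep -i "cluster view"
# Check which node addresses JDBC_PING has registered in the shared database
docker exec -it keycloak-db psql -U keycloak -d keycloak -c "SELECT own_addr, cluster_name FROM JGROUPSPING;"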
Configuring Load Balancing
As a final step, you need to configure a load balancer in front of Keycloak to provide a unified external entry point that forwards traffic to the Keycloak instances.
A reference nginx configuration looks like this:
upstream keycloak {
    server 172.31.72.101:8080;
    server 172.31.72.102:8080;
}

server {
    listen 443 ssl;
    server_name your-domain;

    ssl_certificate /path/to/your-domain.cer;
    ssl_certificate_key /path/to/your-domain.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://keycloak;
        # Forward the client address, host, and protocol so that Keycloak
        # (started with PROXY_ADDRESS_FORWARDING=true) sees the original request
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name your-domain;

    if ($host = your-domain) {
        return 301 https://$host$request_uri;
    }

    return 404;
}
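Once nginx has been reloaded, a quick way to confirm that the cluster is reachable through the load balancer is to request Keycloak's OpenID Connect discovery document (replace your-domain with your real domain):
curl -s https://your-domain/auth/realms/master/.well-known/openid-configuration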
Conclusion
Deploying a highly available Keycloak cluster with Docker is simple and efficient: once the database and load balancer are configured, the cluster is ready for production use.
Dockerfile address: keycloak-ha
Docker image address: keycloak-ha