The Spring Boot project wires in the WebSocket code. The code is as follows:
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.server.standard.ServerEndpointExporter;

/**
 * Enable WebSocket support.
 */
@Configuration
public class WebSocketConfig {

    /**
     * Scans and registers all endpoints annotated with @ServerEndpoint.
     */
    @Bean
    public ServerEndpointExporter serverEndpointExporter() {
        return new ServerEndpointExporter();
    }
}
```

```java
package com.zntech.cpms.script.provider.config;

import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

import javax.websocket.OnClose;
import javax.websocket.OnError;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * WebSocket server. Multi-instance: each WebSocket connection corresponds to one instance.
 * The @ServerEndpoint value is the URI pattern that the URL entered by the client is mapped to.
 */
@Component
@ServerEndpoint("/{name}")
@Slf4j
public class WebSocketServe {

    /** Number of current online connections; AtomicInteger makes updates thread-safe. */
    private static AtomicInteger onlineCount = new AtomicInteger(0);

    /** {uri : WebSocketServe} - one entry per connected client. */
    private static ConcurrentHashMap<String, WebSocketServe> webSocketServerMap = new ConcurrentHashMap<>();

    /** The connection session with a client, through which data is sent to that client. */
    private Session session;

    /** Client name (the message sender). */
    private String name;

    /** Connection URI. */
    private String uri;

    /**
     * Triggered when the connection is successfully established; binds the path parameter.
     *
     * @param session the connection session with a client
     * @param name    the client name from the URI path
     */
    @OnOpen
    public void onOpen(Session session, @PathParam("name") String name) throws IOException {
        this.session = session;
        this.name = name;
        this.uri = session.getRequestURI().toString();
        WebSocketServe old = webSocketServerMap.get(uri);
        if (old != null) {
            // If the same client is already online, push the original connection offline.
            old.session.getBasicRemote().sendText(uri + " repeated connection; the old one was kicked out");
            // Closing the session makes the container invoke the old instance's @OnClose.
            old.session.close();
        }
        webSocketServerMap.put(uri, this); // save the connection for this URI
        addOnlineCount();                  // number of online connections plus one
    }

    /**
     * Triggered when the connection is closed.
     */
    @OnClose
    public void onClose() {
        webSocketServerMap.remove(uri); // delete the connection for this URI
        reduceOnlineCount();            // number of online connections minus one
    }

    /**
     * Triggered when a client message is received.
     */
    @OnMessage
    public void onMessage(String message) {
        log.info("received message: " + message);
    }

    @OnError
    public void onError(Session session, Throwable error) {
        log.info("{}: communication error, connection closed", name);
        webSocketServerMap.remove(uri);
    }

    /** Gets the number of online connections. */
    public static int getOnlineCount() {
        return onlineCount.get();
    }

    /** Atomically increments the number of online connections. */
    public static void addOnlineCount() {
        onlineCount.getAndIncrement();
    }

    /** Atomically decrements the number of online connections. */
    public static void reduceOnlineCount() {
        onlineCount.getAndDecrement();
    }
}
```
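The online count above relies on `AtomicInteger` rather than a plain `int` because `onOpen` and `onClose` fire on different container threads. A minimal standalone sketch of that pattern (the class name `OnlineCounterDemo` is hypothetical, not from the project) shows why `getAndIncrement`/`getAndDecrement` stay consistent under concurrent open/close events:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class OnlineCounterDemo {

    /** Simulates 'threads' connections opening and closing concurrently; returns the final count. */
    public static int run(int threads) {
        AtomicInteger onlineCount = new AtomicInteger(0);
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                onlineCount.getAndIncrement(); // what addOnlineCount() does in onOpen
                onlineCount.getAndDecrement(); // what reduceOnlineCount() does in onClose
                done.countDown();
            }).start();
        }
        try {
            done.await(); // wait until every simulated connection has opened and closed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return onlineCount.get();
    }

    public static void main(String[] args) {
        // Every open is matched by a close, so the count returns to 0.
        System.out.println(run(100)); // prints 0
    }
}
```

With a plain `int`, interleaved `count++`/`count--` from many threads could lose updates and the counter would drift.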
I use a Gradle project, so don't forget to import the WebSocket dependency via Gradle:
```groovy
implementation("org.springframework.boot:spring-boot-starter-websocket:2.3.0.RELEASE") {
    exclude module: "spring-boot-starter-tomcat"
}
```
For a Maven project, the dependency is:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-websocket</artifactId>
    <version>${websocket.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```
At first, the project was deployed to the server and could only sustain about 10,000 (1W) long connections. Checking with the top command showed plenty of idle CPU and free memory, so the bottleneck was not the CPU; network bandwidth and other metrics also looked fine. The default embedded container for a Spring Boot project is Tomcat, which supports 10,000 connections by default; beyond that, new connections are refused. Since Tomcat was the problem, there were two options: 1. Increase Tomcat's connection limit. 2. Switch the container to Jetty. For hundreds of thousands of long connections, Jetty is clearly the better embedded container. To switch, simply exclude Tomcat from the dependency and add Jetty's. The dependency declarations are attached below.
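For reference, option 1 (keeping Tomcat and raising its limits) would look roughly like this in `application.properties`. This is a sketch, not tuned values; the property names are Spring Boot's standard `server.tomcat.*` settings:

```properties
# Raise Tomcat's connection ceiling (the default is 10000)
server.tomcat.max-connections=100000
# Queue length for connection attempts that arrive when all connections are in use
server.tomcat.accept-count=1000
```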
Gradle project dependencies:
```groovy
implementation("org.springframework.boot:spring-boot-starter-web") {
    exclude group: 'org.springframework.boot', module: 'spring-boot-starter-tomcat'
}
implementation 'org.springframework.boot:spring-boot-starter-jetty:2.3.1.RELEASE'
implementation("org.springframework.boot:spring-boot-starter-websocket:2.3.0.RELEASE") {
    exclude module: "spring-boot-starter-tomcat"
}
```
Maven project dependencies:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-websocket</artifactId>
    <version>${websocket.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
    <version>2.3.1.RELEASE</version>
</dependency>
```
There is a pitfall here: when introducing the WebSocket dependency, you must also exclude its Tomcat dependency, otherwise the embedded container is still Tomcat. With this code on the Linux server, it should have supported tens of thousands of long connections. However, testing showed that only about 16,000 (1.6W+) long connections could be established, while the server's memory and CPU were clearly idle. So what was the problem? With the container replaced, the number of long connections should now depend only on the server configuration, so the framework was unlikely to be the issue. Could it be a limit in Linux itself capping it around 16,000 connections? `ulimit -a` shows the various resource limits in Linux.
The output included `max locked memory (kbytes, -l) 16384`, and 16384 is suspiciously close to the observed number of long connections. Running `ulimit -l unlimited` sets this parameter to unlimited. Testing the number of long connections again, it now reached 50,000+ (5W+). Because of limits in the testing tools, no further stress test was run; based on these preliminary observations, 100,000 long connections should be supported. The configuration to permanently raise max locked memory is attached below.
1) Remove the limits on the maximum number of open files and locked memory in Linux:

```shell
vi /etc/security/limits.conf
# add the following lines
root soft nofile 1048576
root hard nofile 1048576
*    soft nofile 1048576
*    hard nofile 1048576
root soft memlock 102400
root hard memlock 102400
*    soft memlock 102400
*    hard memlock 102400
```
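Entries in limits.conf are applied at session start (via PAM), so they take effect on the next login. A quick read-only check to verify the new limits afterwards (a sketch; the exact values printed depend on your session):

```shell
# per-process open-file limit, which also caps how many sockets one process can hold
ulimit -n
# max locked memory in KB
ulimit -l
# the kernel-wide ceiling on open files must also be high enough
cat /proc/sys/fs/file-max
```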