Normally we run on a Tomcat server, which schedules requests with epoll plus a pool of worker threads: each thread handles one request at a time, and the resulting writes are persisted by Redis into RDB snapshots, producing multiple RDB files. Our company uses the Jetty container instead, where a single thread sends a write to Redis for every incoming request; since Redis itself is single-threaded middleware, the continuous stream of writes keeps growing the RDB snapshot and, with it, the memory footprint.

Error:

HTTP ERROR 500
Problem accessing /settle/. Reason:

    Server Error
Caused by:
org.springframework.dao.InvalidDataAccessApiUsageException: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.; nested exception is redis.clients.jedis.exceptions.JedisDataException: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
	at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:44)
	at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:36)
	at org.springframework.data.redis.PassThroughExceptionTranslationStrategy.translate(PassThroughExceptionTranslationStrategy.java:37)
	at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:37)
	at org.springframework.data.redis.connection.jedis.JedisConnection.convertJedisAccessException(JedisConnection.java:210)
	at org.springframework.data.redis.connection.jedis.JedisConnection.hMSet(JedisConnection.java:2652)
	at org.springframework.data.redis.core.DefaultHashOperations$7.doInRedis(DefaultHashOperations.java:134)
	at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:191)
	at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:153)
	at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:86)
	at org.springframework.data.redis.core.DefaultHashOperations.putAll(DefaultHashOperations.java:131)
	at org.springframework.data.redis.core.DefaultBoundHashOperations.putAll(DefaultBoundHashOperations.java:85)
	at org.springframework.session.data.redis.RedisOperationsSessionRepository$RedisSession.saveDelta(RedisOperationsSessionRepository.java:778)
	at org.springframework.session.data.redis.RedisOperationsSessionRepository$RedisSession.access$000(RedisOperationsSessionRepository.java:670)
	at org.springframework.session.data.redis.RedisOperationsSessionRepository.save(RedisOperationsSessionRepository.java:388)
	at org.springframework.session.data.redis.RedisOperationsSessionRepository.save(RedisOperationsSessionRepository.java:245)
	at org.springframework.session.web.http.SessionRepositoryFilter$SessionRepositoryRequestWrapper.commitSession(SessionRepositoryFilter.java:244)
	at org.springframework.session.web.http.SessionRepositoryFilter$SessionRepositoryRequestWrapper.access$100(SessionRepositoryFilter.java:214)
	at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:167)
	at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1631)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:549)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:568)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1111)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:478)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
	at org.eclipse.jetty.server.Server.handle(Server.java:462)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:279)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:232)
	at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:534)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
	at java.lang.Thread.run(Thread.java:748)
Caused by: redis.clients.jedis.exceptions.JedisDataException: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
	at redis.clients.jedis.Protocol.processError(Protocol.java:127)
	at redis.clients.jedis.Protocol.process(Protocol.java:161)
	at redis.clients.jedis.Protocol.read(Protocol.java:215)
	at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:340)
	at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:239)
	at redis.clients.jedis.BinaryJedis.hmset(BinaryJedis.java:886)
	at org.springframework.data.redis.connection.jedis.JedisConnection.hMSet(JedisConnection.java:2650)
	... 35 more
Caused by:
redis.clients.jedis.exceptions.JedisDataException: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
	at redis.clients.jedis.Protocol.processError(Protocol.java:127)
	at redis.clients.jedis.Protocol.process(Protocol.java:161)
	at redis.clients.jedis.Protocol.read(Protocol.java:215)
	at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:340)
	at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:239)
	at redis.clients.jedis.BinaryJedis.hmset(BinaryJedis.java:886)
	at org.springframework.data.redis.connection.jedis.JedisConnection.hMSet(JedisConnection.java:2650)
	at org.springframework.data.redis.core.DefaultHashOperations$7.doInRedis(DefaultHashOperations.java:134)
	at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:191)
	at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:153)
	at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:86)
	at org.springframework.data.redis.core.DefaultHashOperations.putAll(DefaultHashOperations.java:131)
	at org.springframework.data.redis.core.DefaultBoundHashOperations.putAll(DefaultBoundHashOperations.java:85)
	at org.springframework.session.data.redis.RedisOperationsSessionRepository$RedisSession.saveDelta(RedisOperationsSessionRepository.java:778)
	at org.springframework.session.data.redis.RedisOperationsSessionRepository$RedisSession.access$000(RedisOperationsSessionRepository.java:670)
	at org.springframework.session.data.redis.RedisOperationsSessionRepository.save(RedisOperationsSessionRepository.java:388)
	at org.springframework.session.data.redis.RedisOperationsSessionRepository.save(RedisOperationsSessionRepository.java:245)
	at org.springframework.session.web.http.SessionRepositoryFilter$SessionRepositoryRequestWrapper.commitSession(SessionRepositoryFilter.java:244)
	at org.springframework.session.web.http.SessionRepositoryFilter$SessionRepositoryRequestWrapper.access$100(SessionRepositoryFilter.java:214)
	at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:167)
	at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1631)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:549)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:568)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1111)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:478)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
	at org.eclipse.jetty.server.Server.handle(Server.java:462)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:279)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:232)
	at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:534)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
	at java.lang.Thread.run(Thread.java:748)

Scenario: the embedded Jetty setup adds a context for the web application

            Server server = new Server(port);
            System.out.println("Begin start jetty...");
            HandlerCollection hc = new HandlerCollection();
            WebAppContext webapp = new WebAppContext();
            webapp.setContextPath(contentPath);
            File webappdir = new File(new File(webprjdir), "src/main/webapp");
            //System.out.println(webappdir.getCanonicalPath());
            if (!webappdir.exists()) {
                throw new RuntimeException("Directory " + webprjdir + "/src/main/webapp does not exist");
            }
            // decode URL-encoded spaces that may appear in the path
            webapp.setWar(webappdir.getAbsolutePath().replace("%20", " "));
            InputStream fis = RunAppHelper.class.getResourceAsStream("/jettyetc/webdefault.xml");
            File tempfie = new File("webdefault.xml");

            FileOutputStream fos = new FileOutputStream(tempfie);
            IOUtils.copy(fis, fos);
            fos.close();

            String deffile = tempfie.getAbsolutePath().replace("%20", " ");
            webapp.setDefaultsDescriptor(deffile);
            webapp.setInitParameter("org.eclipse.jetty.servlet.Default.dirAllowed", "false");
            webapp.setInitParameter("org.eclipse.jetty.servlet.Default.useFileMappedBuffer", "false");
            hc.addHandler(webapp);
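The snippet above stops after adding the WebAppContext to the HandlerCollection. For completeness, the rest of the startup would look roughly like the sketch below, reusing the same server, hc and port variables; this continuation is an assumption rather than the project's exact code.

            // Sketch: attach the handler collection and start the embedded server.
            server.setHandler(hc);
            server.start();
            System.out.println("Jetty started on port " + port);
            server.join(); // block the launcher thread until the server stops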

Following Spring’s interception flow, the request is eventually handled by our handler servlet, which extends HttpServlet:

public class RemoteInvokeServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {

        ServiceContext ctx = new ServiceContext();
        //        Map<String, Object> result = new HashMap<>();
        ProxyResponse result = null;
        TraceLogger traceLogger = TraceLogger.get();
        traceLogger.initialize();
        try {
            ServiceContext.set(ctx);
            ProxyRequest obj = (ProxyRequest) SerializeUtils.hessianDeserialize(req.getInputStream());

            ctx.setCodePrefix(obj.getCodePrefix());
            ctx.setSid(obj.getSid());
            ctx.setUserId(obj.getUserId());
            ctx.setOrgId(obj.getOrgId());
            ctx.setProject(obj.getProject());

            // resolve the target service class and method by name
            Class<?> port = Class.forName(obj.getClassName());
            Method method = port.getMethod(obj.getMethodName(), obj.getParamTypes());
            logger.debug("Begin invoke service " + obj.getClassName() + "." + obj.getMethodName()
                + ", args:" + ArrayUtils.toString(obj.getArgs(), "[]"));
            Object target = springctx.getBean(port);
            Object robj;
            traceLogger.action(port.getSimpleName() + "-" + method.getName());
            traceLogger.requestId(UUID.randomUUID().toString());
            logger.debug("=== begin request processing ===");
            RequestWrapper requestWrapper = new RequestWrapper(req);
            requestWrapper.initialize();
            logRequest(requestWrapper, traceLogger);
            robj = method.invoke(target, obj.getArgs());

            result = ProxyResponse.success(robj, ServiceContext.get().getDatas());
        } catch (Exception e) {
            result = ProxyResponse.error(getErrorMessage(e));
            logger.error("Invoke remote service error", e);
        } finally {
            logResponse(resp, traceLogger);
            logger.debug("=== finish request processing ==="); traceLogger.cleanup(); // Remove the context servicecontext.remove (); SerializeUtils.hessianSerialize(result, resp.getOutputStream()); }}}Copy the code

We found that the request passes through the filter lifecycle, where Spring Session’s SessionRepositoryFilter persists the session to Redis, before the remote call is made.
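In the stack trace the session write reaches Redis through DelegatingFilterProxy and SessionRepositoryFilter. The article does not show how that filter is registered in this project; purely as an illustration, registering it programmatically on the WebAppContext from the earlier snippet could look roughly like this (springSessionRepositoryFilter is Spring Session's conventional bean name, assumed here).

            // Illustration only (needs org.eclipse.jetty.servlet.FilterHolder,
            // org.springframework.web.filter.DelegatingFilterProxy,
            // javax.servlet.DispatcherType and java.util.EnumSet on the classpath):
            // delegate every request to the springSessionRepositoryFilter bean so the
            // session is loaded before, and committed to Redis after, the servlet runs.
            FilterHolder sessionFilter = new FilterHolder(
                    new DelegatingFilterProxy("springSessionRepositoryFilter"));
            webapp.addFilter(sessionFilter, "/*", EnumSet.of(DispatcherType.REQUEST));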

AOF rewrite mechanism

AOF persistence works by appending every write command to a log file, so the file accumulates more and more redundant entries over time. To cope with this, Redis adds a rewrite mechanism: when the AOF file grows beyond a configured threshold, Redis compacts its contents.
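The thresholds that trigger an automatic rewrite are auto-aof-rewrite-percentage and auto-aof-rewrite-min-size in redis.conf. Below is a small Jedis sketch for inspecting them on a running instance and forcing a rewrite by hand; the host and port are placeholders, and the exact shape of the CONFIG GET reply depends on the Jedis version.

import redis.clients.jedis.Jedis;

public class AofRewriteCheck {
    public static void main(String[] args) {
        // Placeholder host/port; point this at the real Redis instance.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // The two settings that control when Redis rewrites the AOF file automatically.
            System.out.println(jedis.configGet("auto-aof-rewrite-percentage"));
            System.out.println(jedis.configGet("auto-aof-rewrite-min-size"));

            // Force a background rewrite now, compacting the AOF file.
            System.out.println(jedis.bgrewriteaof());
        }
    }
}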

The solution

In the server logs we observed child processes being forked constantly. This is because Redis’s AOF rewrite forks a new process that takes the data currently in memory and writes it back out to a temporary file; the old file is never read. Finally, the temporary file replaces the old AOF file.

Reading the description of this property in the configuration file clears things up. Because of the interval our AOF rewrite needs, turning off appendfsync only turns off flushing writes to the file; Redis still forks a new process to write the file. Even though nothing is being written at that moment, the earlier file backups pile up in memory and the asynchronous flush ends up exceeding the threshold.
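To confirm which side of persistence is actually failing, the persistence section of INFO reports the last RDB bgsave status and whether an AOF rewrite is in progress. A hedged sketch follows, again with placeholder connection details; the commented-out CONFIG SET is only a stop-gap that lets writes through again and does not fix the underlying disk or memory problem.

import redis.clients.jedis.Jedis;

public class RedisPersistenceCheck {
    public static void main(String[] args) {
        // Placeholder host/port; point this at the real Redis instance.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Look for rdb_last_bgsave_status, aof_last_write_status and
            // aof_rewrite_in_progress in the output.
            System.out.println(jedis.info("persistence"));

            // Stop-gap only: let writes through even if the background save fails.
            // System.out.println(jedis.configSet("stop-writes-on-bgsave-error", "no"));
        }
    }
}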