Snowball adopted Spring Boot early, and our team was one of the first to use it (v1.2.4, released in mid-2015). Since then Spring Boot has become the de facto base framework across the company, but different teams run different versions, with 1.2, 1.3, and 1.5 all in production, which has also caused some middleware version-dependency problems.

Our version is the oldest, four releases behind the latest. In fact, we planned the upgrade very early and attempted it once around the 1.5 era, but it stalled because of the scope of the changes and how busy the business was at the time. Spring Boot 2.0 was released in March, and April turned out to be the perfect window to move all of our back-end modules to the 2.0 baseline.

In this post I will share some experiences and problems from the upgrade process, hoping to help you upgrade more smoothly and to nudge you toward a unified baseline when the right opportunity comes.

| a brief introduction to the new features

The biggest new feature in Spring Boot 2.0 is Reactive Spring & WebFlux, which comes from the underlying upgrade to Spring Framework 5.0.

Reactive Programming (RP) uses asynchronous, message-driven techniques to build responsive, resilient systems. RP is not a new idea and will not be introduced in detail here. WebFlux is a new reactive web programming model built on Reactive Streams, positioned as a counterpart to the Spring MVC module, and it provides WebClient, a reactive HTTP client.
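
A minimal sketch of the WebClient style (the endpoint and class name are hypothetical, not from the original article):

```java
// Reactive HTTP call with WebClient: the request is only sent on subscription.
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class WebClientSketch {
    public static void main(String[] args) {
        WebClient client = WebClient.create("https://example.com");

        Mono<String> quote = client.get()
                .uri("/fund/{code}", "000961")   // hypothetical endpoint
                .retrieve()
                .bodyToMono(String.class);

        // Blocking here only to keep the demo simple; in a reactive pipeline
        // you would compose or return the Mono instead of calling block().
        System.out.println(quote.block());
    }
}
```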

Under the RP model it is important that the whole pipeline stays asynchronous; if any callback blocks, the entire system is affected. Most Java libraries today are still blocking, however. Spring now ships a full reactive stack, but the blocking JDBC model is still not covered (Alibaba reportedly works around this internally with the JVM-level Wisp coroutine technology and a complete RP framework built on RxJava). We hope Spring Data Reactive can solve this problem as soon as possible.
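
Until then, the usual workaround is to isolate blocking calls on a dedicated scheduler. A sketch (not from the original article; the DAO method is a stand-in):

```java
// Bridge a blocking JDBC/DAO call into a reactive pipeline without stalling
// the event-loop threads.
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class BlockingJdbcBridge {

    public Mono<String> findName(long id) {
        return Mono.fromCallable(() -> findNameByIdBlocking(id))
                // Run the blocking call on its own elastic scheduler
                // (Reactor 3.1 era API) so reactive threads stay free.
                .subscribeOn(Schedulers.elastic());
    }

    private String findNameByIdBlocking(long id) {
        return "user-" + id;  // placeholder for a real JDBC query
    }
}
```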

There are other new features as well, such as HTTP/2 support, Actuator enhancements, and Quartz support; more are listed in the official documentation.

Besides the new features, there are some other interesting changes that perhaps hint at a technical direction:

  • Lettuce instead of Jedis

  • Caffeine instead of Guava

  • HikariCP instead of the Tomcat JDBC pool

Besides Jedis, there are also Lettuce and Redisson, both of which are Netty-based and support reactive APIs. My guess is that Spring chose Lettuce because it is simpler than Redisson.

Guava Cache is a very common component. Caffeine adopts a more efficient eviction algorithm and optimizes memory usage; the official benchmarks claim roughly a 10x improvement over Guava, which is probably why Spring dropped it. Caffeine also provides a Guava adapter that allows users to migrate almost seamlessly, and our team is currently making that switch.
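
A sketch of what the migration looks like with Caffeine's optional Guava adapter module (assumed usage, not our production code):

```java
// Build a Guava-compatible cache backed by Caffeine; code written against
// Guava's Cache interface keeps working, only the construction site changes.
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.guava.CaffeinatedGuava;
import com.google.common.cache.Cache;

import java.util.concurrent.TimeUnit;

public class CacheMigrationSketch {
    public static void main(String[] args) {
        Cache<String, String> cache = CaffeinatedGuava.build(
                Caffeine.newBuilder()
                        .maximumSize(10_000)
                        .expireAfterWrite(10, TimeUnit.MINUTES));

        cache.put("symbol", "000961");
        System.out.println(cache.getIfPresent("symbol"));
    }
}
```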

HikariCP has long been billed as the fastest connection pool, and making it the new Spring Boot default amounts to an official endorsement. The Snowball team has stuck with Druid for connection pooling, mainly because of its comprehensive monitoring, its plugins, and the extensive battle-testing it has received at many of the large domestic companies.
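
For teams in the same position, a sketch of how to keep a non-default pool under Boot 2.0 (assumed configuration, not Snowball's actual setup; the property prefix is hypothetical):

```java
// Declaring our own DataSource bean keeps Druid in place even though
// Spring Boot 2.0 would otherwise auto-configure HikariCP.
import com.alibaba.druid.pool.DruidDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DruidDataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.druid")  // hypothetical prefix
    public DruidDataSource dataSource() {
        // url, username, password, maxActive, ... under the prefix are bound
        // onto the pool via its setters.
        return new DruidDataSource();
    }
}
```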

| the upgrade process

For versions as old as ours, the official recommendation is to upgrade to 1.5 first as a stepping stone to 2.0. We took the aggressive route instead and jumped straight to 2.0, which resulted in a large number of changes across dependency upgrades, configuration changes, and code-level changes. We were surprised to see 250+ files changed when the release PR was submitted.

A few of the official "starter" packages helped reduce the difficulty of managing these dependencies:

  • spring-boot-starter-json

  • spring-boot-starter-test

  • spring-boot-starter-cache

These official starters cover common modules such as Jackson for JSON, JUnit and Mockito for testing, and the common CacheManager implementations; a sketch of the cache starter in use follows.
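
A minimal sketch (hypothetical service and cache name, not from the original article) of what spring-boot-starter-cache wires up:

```java
// With @EnableCaching and the starter on the classpath, Spring Boot
// auto-configures a CacheManager (a simple ConcurrentMap-based one by
// default, or Caffeine/Redis/etc. if such a library is present).
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
class CachingConfig {
}

@Service
class FundService {

    @Cacheable("fundNames")  // hypothetical cache name
    public String findFundName(String code) {
        // Expensive lookup; the result is cached per code after the first call.
        return "fund-" + code;
    }
}
```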

There are also many changes at the configuration level: a number of configuration keys were restructured, and the spring-boot-properties-migrator module is provided to assist migration. Binding is looser (dashes, camelCase, and underscores all bind), but the recommended canonical form in .properties and .yml files is lowercase with dashes (a standard I would recommend for all configuration).
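
A minimal sketch of the relaxed binding rules (the prefix and property names are hypothetical):

```java
// The canonical form in .properties/.yml is lowercase-with-dashes, but the
// same value binds onto a camelCase field of a @ConfigurationProperties bean.
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "snowball.quote")  // hypothetical prefix
public class QuoteProperties {

    // Binds from snowball.quote.refresh-interval-ms (recommended form), as
    // well as snowball.quote.refreshIntervalMs or the environment variable
    // SNOWBALL_QUOTE_REFRESHINTERVALMS.
    private long refreshIntervalMs = 3000;

    public long getRefreshIntervalMs() {
        return refreshIntervalMs;
    }

    public void setRefreshIntervalMs(long refreshIntervalMs) {
        this.refreshIntervalMs = refreshIntervalMs;
    }
}
```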

Changes of this kind are numerous but easy to spot, because they fail to compile or fail to start. The scariest part of the upgrade process is the hidden risks that are easy to overlook.

| path matching: suffix matching disabled by default

Previously you could append a suffix to a path and it would still resolve correctly; for example, /fund/000961.xyz would resolve to /fund/000961. In 2.0, suffix pattern matching is disabled by default, which is also Spring's official recommendation.

In fact, we knew about this change during the upgrade, but our team has always used suffix-free REST-style paths, so we did not pay much attention to it. To my surprise, another team that consumes our open interface had adopted its own path convention and appended a .json suffix. That had worked reliably until now, when all of their calls broke at once. This was the only production incident caused by the upgrade; fortunately it was found and fixed quickly and did not have a major impact.

The solution is to turn suffix matching on:
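
A sketch of one way to do this (the original snippet is not reproduced here, and the class name is hypothetical):

```java
// Re-enable suffix pattern matching so /fund/000961.json still maps to the
// /fund/000961 handler.
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.PathMatchConfigurer;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class PathMatchConfig implements WebMvcConfigurer {

    @Override
    public void configurePathMatch(PathMatchConfigurer configurer) {
        // Restores the pre-2.0 behaviour; consider leaving it off for new APIs.
        configurer.setUseSuffixPatternMatch(true);
    }
}
```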

| a new Filter for HTTP PUT parameters

During testing after the upgrade, we found that some interfaces failed even though the parameters looked correct; it turned out the PUT parameters were being duplicated.

The HttpPutFormContentFilter handles form body parameters for PUT and PATCH requests (this support was introduced after version 1.5 and is enabled by default).

Because browser forms only support GET and POST, early Spring did not read the form body of PUT requests. The proper fix is to convert the method implicitly on the server side, but that approach was not taken back then; instead the client switched to passing the value as a query parameter. That alone was not a big problem, except that the duplicate parameter was never removed from the body. In the new version the two values are bound to the same parameter name and arrive as a string array (numeric parameters happened not to be affected).

The solution is to turn the Filter off temporarily, wait until the client is fixed, and turn it back on once the problematic older client versions are no longer supported:
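
A sketch of the kind of change involved (assumed, not the original snippet; the class name is hypothetical):

```java
// Prevent the auto-configured HttpPutFormContentFilter from being registered,
// so PUT bodies are no longer merged into request parameters. A property
// under spring.mvc.formcontent.* should have a similar effect in the 2.0 line.
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.filter.HttpPutFormContentFilter;

@Configuration
public class DisablePutFormFilterConfig {

    @Bean
    public FilterRegistrationBean<HttpPutFormContentFilter> disablePutFormFilter(
            HttpPutFormContentFilter filter) {
        // Wrapping the auto-configured filter bean and disabling its
        // registration removes it from the servlet filter chain.
        FilterRegistrationBean<HttpPutFormContentFilter> registration =
                new FilterRegistrationBean<>(filter);
        registration.setEnabled(false);
        return registration;
    }
}
```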

| Swagger does not yet support 2.0

After everything was upgraded, I found that Swagger no longer worked. I went to the official repo looking for a solution and found that someone had already asked; the official response was that 2.0 😱 was not supported for the time being. Fortunately, in other threads some people reported getting it to work, though without giving a clear recipe, so I had to dig a little, and in the end it was solved.

The solution is to remove the old Security configuration and add some resource mappings:
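
A sketch of the kind of resource mappings that did the trick (assumed, not the exact original snippet; the class name is hypothetical):

```java
// Map the springfox swagger-ui assets explicitly, since they were no longer
// served automatically after the upgrade.
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class SwaggerResourceConfig implements WebMvcConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("swagger-ui.html")
                .addResourceLocations("classpath:/META-INF/resources/");
        registry.addResourceHandler("/webjars/**")
                .addResourceLocations("classpath:/META-INF/resources/webjars/");
    }
}
```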

| summary

The upgrade covered more than ten subsystem modules and took about 3 weeks from development -> testing -> canary (gray) release -> production. Apart from the minor incident described above, the process went much more smoothly than expected. The experience boils down to the following:

  • Be familiar with the system's code and design

  • Don’t just read the upgrade guide, read the official documentation

  • Allow plenty of time and proceed carefully

Some people are reluctant to upgrade because of the risk, feeling that what they have is good enough. In my view, though, reasonable upgrades are necessary and the benefits are obvious; with the right approach most of the risks are manageable. The longer you wait and the bigger the version gap, the higher the risk, so an appropriate upgrade cadence is itself an effective way to reduce risk.

| there's another thing

Snowball's engineering team is recruiting Java engineers, operations/DevOps engineers, test development engineers, and algorithm engineers. If you are interested, follow the original post link for the specific positions and requirements; we are waiting for you.