Abstract: In the November 2016 Issue of Technology Radar, ThoughtWorks gave microservices a high rating. At the same time, more and more organizations are embracing microservices as a necessary part of their architecture evolution. But in an organization with many legacy systems, it’s not easy to break down what were once monolithic systems into microservices.
Credit: Justin Kenneth Rowley. You can find the original photo at flickr.
The microservices style of architecture highlights rising abstractions in the developer world because of containerization and the emphasis on low coupling, offering a high level of operational isolation. Developers can think of a container as a self-contained process and the PaaS as the common deployment target, using the microservices architecture as the common style. Decoupling the architecture allows the same for teams, cutting down on coordination cost among silos. Its attractiveness to both developers and DevOps has made this the de facto standard for new development in many organizations.
In the November 2016 Issue of Technology Radar, ThoughtWorks gave microservices a high rating. At the same time, more and more organizations are embracing microservices as a necessary part of their architecture evolution. But in an organization with many legacy systems, it’s not easy to break down what were once monolithic systems into microservices. This paper discusses how to use Dubbo framework to realize the migration from single system to microservice based on the principle of microservice transformation of legacy system.
1. Principle requirements
The idea of microservicing a standard three-tier monolithic system is, in short, to transform what used to be local calls between services within a single process into distributed calls across processes. Although this is not the whole of the microservice transformation, it directly determines whether the system keeps the same business capabilities before and after the transformation, and how much the transformation costs.
1.1 Suitable framework
In the field of microservices there are many technology stacks, but only two broad schools, RPC and RESTful, whose most influential representatives are Dubbo and Spring Cloud respectively. They have similar capabilities but very different implementations. This article does not attempt a framework selection process, nor a comprehensive comparison of the relative merits of the two; the principles discussed in this chapter are independent of any concrete implementation and should apply to any microservice framework. Readers can replace every mention of Dubbo in this article with Spring Cloud without affecting the final result; only the implementation details would change. Whatever the final decision, there is no right or wrong answer: the key is to choose the framework that best fits the current situation of the organization.
1.2 Easily expose services as remote interfaces
In a single system, calls between services are completed in the same process; Microservices, on the other hand, divide independent business modules into different application systems, and each application system can be deployed and run as an independent process. Therefore, the transformation of microservices requires the transformation of in-process method invocation into interprocess communication. There are many ways to implement interprocess communication, but obviously the network-based call is the most common and easy to implement. Therefore, whether local services can be easily exposed as network services determines whether the exposure process can be implemented quickly, and the simpler the exposure process is, the lower the risk of inconsistency between the exposed interface and the previous interface.
1.3 Easy generation of remote service invocation proxy
When a service is exposed as a remote interface, the local implementation within the process ceases to exist. To simplify matters for callers, it is essential to generate local proxies for remote services that encapsulate the details of the underlying network interaction. Moreover, the remote service proxy should be indistinguishable, both in usage and in behavior, from the original local implementation.
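As a rough illustration of what such proxy generation involves (this is not Dubbo's actual mechanism; RemoteInvoker and the greeting logic are hypothetical stand-ins for the network layer), a JDK dynamic proxy can turn any service interface into a local proxy:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    // Hypothetical stand-in for the layer that would perform the real network call.
    public interface RemoteInvoker {
        Object invoke(String methodName, Object[] args) throws Exception;
    }

    // A sample service interface, mirroring the shape of HelloService later in the text.
    public interface HelloService {
        String sayHello(String userId);
    }

    // Wraps any service interface in a local proxy that delegates every call to the invoker.
    @SuppressWarnings("unchecked")
    public static <T> T createProxy(Class<T> serviceInterface, RemoteInvoker invoker) {
        InvocationHandler handler =
                (Object proxy, Method method, Object[] args) -> invoker.invoke(method.getName(), args);
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(), new Class<?>[]{serviceInterface}, handler);
    }

    public static void main(String[] args) {
        // Simulate the remote side with an in-process invoker.
        HelloService proxy = createProxy(HelloService.class,
                (name, callArgs) -> "Hello, " + callArgs[0] + "! (via " + name + ")");
        System.out.println(proxy.sayHello("tom")); // Hello, tom! (via sayHello)
    }
}
```

The caller holds an ordinary HelloService reference and never sees the delegation, which is exactly the property a microservice framework's generated proxy must have.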
1.4 Keep the original interface unchanged or backward compatible
During the microservice transformation process, it is important to ensure that the interface is unchanged or backward compatible so that it does not have a huge impact on the caller. In practice, it is possible that we can only control the modified system and not access or modify the caller system. If the interface changes significantly, the maintainers of the calling system will find it difficult to accept: there will be unpredictable risks and impacts to their work, as well as additional work to accommodate the new interface.
1.5 Keep the original dependency injection relationship unchanged
In legacy systems developed on the basis of Spring, services are often related to each other in a dependency injection manner. With the microservice transformation, the injected service implementation becomes a local proxy, and in order to minimize code changes, it is a good idea to automatically switch the injected implementation class to a local proxy.
1.6 Keep the functions or side effects of the original code unchanged
This may seem obvious, but it is essential. The transformed system maintains the same business capabilities as the original system if and only if the transformed code preserves the behavior, and even the side effects, of the original code. During modification we tend to pay close attention to the primary effects but often overlook side effects. For example, when a method is called within a Java process, object parameters are passed as references, which means values modified inside the method body are "returned" to the caller. The following example makes this easier to understand:
public void innerMethod(Map<String, String> map) {
    map.put("key", "new");
}

public void outerMethod() {
    Map<String, String> map = new HashMap<>();
    map.put("key", "old");
    System.out.println(map); // {key=old}
    this.innerMethod(map);
    System.out.println(map); // {key=new}
}
This code runs without a problem in a single process, because both methods share the same memory space: changes made to the map by innerMethod are directly visible in outerMethod. But suppose innerMethod and outerMethod run in two separate processes. Process memory is isolated, so innerMethod's modification must be actively sent back before outerMethod can observe it; simply changing the contents of the parameter does not send the data back.
A side effect here means a modification, made inside a method body, to the contents of a passed-in parameter that has a perceptible effect on the external context. Side effects are obviously unfriendly and should be avoided, but since this is a legacy system we cannot guarantee that no code is written this way, so for better compatibility we still need to preserve the effects of such side effects during the microservice transformation.
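The difference can be simulated in a single JVM: serializing and deserializing a parameter, as an RPC framework does before sending it over the wire, produces a copy, so a mutation on the "remote" side never reaches the caller's object. A minimal sketch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public class SideEffectDemo {

    // Simulates sending a parameter across the wire: serialize, then deserialize a copy.
    @SuppressWarnings("unchecked")
    public static Map<String, String> roundTrip(Map<String, String> map) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bos);
        out.writeObject(map);
        out.flush();
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (Map<String, String>) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> original = new HashMap<>();
        original.put("key", "old");
        Map<String, String> remoteCopy = roundTrip(original);
        remoteCopy.put("key", "new");   // the "remote" side mutates its copy
        System.out.println(original);   // {key=old} -- the caller never sees the change
    }
}
```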
1.7 Change the internal code of the legacy system as little as possible (preferably none)
In most cases, not all legacy system code can be smoothed: for example, if the methods mentioned above have side effects, or if the incoming and outgoing parameters are non-serializable objects (without implementing the Serializable interface), etc. We can’t guarantee that we won’t make any changes to the code on legacy systems, but we should at least keep those changes to a minimum and try to work around them — such as adding code rather than changing it — so that the code is at least backward compatible.
1.8 Good fault tolerance
Unlike in-process invocation, cross-process network communication is unreliable and may fail for various reasons. Remote method invocation therefore needs more fault tolerance than the original local calls. When a remote call fails it can be retried, recovered, or degraded; otherwise untreated failures propagate up the call chain and cause cascading failures throughout the system.
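Dubbo ships with cluster fault-tolerance strategies (failover retries, mock-based degradation, and so on); as a hedged sketch of the underlying idea only, a retry-then-fallback wrapper might look like this:

```java
import java.util.function.Supplier;

public class FailoverDemo {

    // Tries the remote call up to maxAttempts times, then degrades to a fallback value.
    public static <T> T callWithRetry(Supplier<T> remoteCall, int maxAttempts, T fallback) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return remoteCall.get();
            } catch (RuntimeException e) {
                // In a real system: log, back off, perhaps trip a circuit breaker.
            }
        }
        return fallback; // degrade instead of letting the failure bubble up the call chain
    }

    public static void main(String[] args) {
        String greeting = callWithRetry(() -> {
            throw new RuntimeException("network error"); // simulate an unreachable provider
        }, 3, "Hello, guest!");
        System.out.println(greeting); // Hello, guest!
    }
}
```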
1.9 The transformation result is pluggable
The microservice transformation of a legacy system cannot be guaranteed to succeed in one shot; it requires continuous attempts and improvements. This means the original code and the modified code must coexist for a period of time, and the system must be able to switch seamlessly between the original mode and the microservice mode through simple configuration: try the microservice mode first, switch quickly back to the original mode (manually or automatically) if problems occur, and proceed step by step until the microservice mode is stable.
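As an illustration of one way to achieve this pluggability in a Spring-based system (a sketch, not the project's actual configuration; the `example.service` package and profile names are hypothetical), the implementation behind a single bean id can be selected by profile, so callers never notice which mode is active:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:dubbo="http://code.alibabatech.com/schema/dubbo"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://code.alibabatech.com/schema/dubbo
           http://code.alibabatech.com/schema/dubbo/dubbo.xsd">

    <!-- Original mode: the in-process implementation -->
    <beans profile="local">
        <bean id="userService" class="example.service.DefaultUserService"/>
    </beans>

    <!-- Microservice mode: a Dubbo remote reference under the same bean id -->
    <beans profile="remote">
        <dubbo:reference id="userService" interface="example.service.UserService"/>
    </beans>
</beans>
```

Switching modes is then a matter of activating a different Spring profile at startup, with no code change on the caller's side.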
1.10 More
Of course, the requirements of a microservice transformation go far beyond the points above. They also include configuration management, service registration and discovery, load balancing, gateways, rate limiting and degradation, scalability, monitoring, and distributed transactions. Most of these needs, however, only arise after the microservice split is finished and the system's complexity and traffic have grown past a certain point, so they are not the focus of this paper. That does not mean they are unimportant: without them, microservices cannot function properly, smoothly, or at scale.
2. Simulate a monolithic system
2.1 System Overview
We need to build a monolithic system with a three-tier architecture to emulate a legacy system: a simple Spring Boot application called hello-dubbo. All of the source code covered in this article can be viewed and downloaded on GitHub.
First, the system has a model, User, and a DAO that manages it; access to the User model is exposed to the upper layer through UserService. In addition, a HelloService calls UserService and returns a greeting. Above that, the Controller exposes the RESTful interfaces externally. Finally, the Spring Boot Application ties everything together into a complete application.
2.2 Modular Split
Generally speaking, a monolithic system with a three-tier architecture keeps its Controller, Service, and DAO in a single module. To carry out a microservice transformation, the system must first be split. The split is made at the Service layer: everything above it becomes one submodule (called hello-web), which provides the RESTful interfaces externally; everything from the Service layer down becomes another submodule (called hello-core), which includes the Service, DAO, and model, and hello-web depends on hello-core. To better embody the spirit of contract-oriented programming, we can split hello-core further: all interfaces and models are separated into hello-api, and hello-core depends on hello-api. The module relationship after splitting is as follows:
hello-dubbo
|- hello-web (contains the Application and the Controller)
|- hello-core (contains the Service and DAO implementations)
|- hello-api (contains the Service and DAO interfaces and the model)
2.3 Core code analysis
2.3.1 User
public class User implements Serializable {
private String id;
private String name;
private Date createdTime;
public String getId() {
return this.id;
}
public void setId(String id) {
this.id = id;
}
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public Date getCreatedTime() {
return this.createdTime;
}
public void setCreatedTime(Date createdTime) {
this.createdTime = createdTime;
}
@Override
public String toString() {
    // Format is illustrative; see the repository for the exact implementation
    return this.name + " (" + this.createdTime + ")";
}}
The User model is a standard POJO that implements the Serializable interface (because model data is transferred over the network, it must support serialization and deserialization). The default toString method is overridden for console output.
2.3.2 UserRepository
public interface UserRepository {
User getById(String id);
void create(User user);
}
The UserRepository interface is the DAO that accesses the User model and, for simplicity, contains only two methods: getById and create.
2.3.3 InMemoryUserRepository
@Repository
public class InMemoryUserRepository implements UserRepository {
    private static final Map<String, User> STORE = new HashMap<>();
    static {
        // Register a default user (the values here are illustrative; see the repository)
        User user = new User();
        user.setId("tom");
        user.setName("Tom Sawyer");
        STORE.put(user.getId(), user);
    }
@Override
public User getById(String id) {
return STORE.get(id);
}
@Override
public void create(User user) {
STORE.put(user.getId(), user);
}}
InMemoryUserRepository is an implementation class of the UserRepository interface. It uses a Map object, STORE, to hold data and adds a default user through a static initializer block. The getById method retrieves user data from the STORE by the id parameter, while the create method simply puts the passed User object into the STORE. Since all of these operations happen purely in memory, the class is called InMemoryUserRepository.
2.3.4 UserService
public interface UserService {
User getById(String id);
void create(User user);
}
Its methods correspond one-to-one with those of UserRepository, exposing the access interface to the layer above.
2.3.5 DefaultUserService
@Service("userService")
public class DefaultUserService implements UserService {
    private static final Logger LOGGER = LoggerFactory.getLogger(DefaultUserService.class);
    @Autowired
    private UserRepository userRepository;
    @Override
    public User getById(String id) {
        User user = this.userRepository.getById(id);
        LOGGER.info("Get user: {}", user); // log so the call can be observed (message text illustrative)
        return user;
    }
    @Override
    public void create(User user) {
        user.setCreatedTime(new Date()); // side effect: set the creation time before storing
        this.userRepository.create(user);
        LOGGER.info("Create user: {}", user); // log so the call can be observed (message text illustrative)
    }}
DefaultUserService is the default implementation of the UserService interface and is declared as a service via the @Service annotation with the service ID userService (which will be needed later). The service injects a UserRepository instance. The getById method retrieves data from the userRepository by ID, while the create method stores the passed user parameter through userRepository.create, setting the object's creation time before storing it. Clearly, according to the side effects described in Section 1.6, setting the creation time of the User object is a side-effect operation that needs to be preserved after the microservice transformation. So that the system's behavior can be observed, both methods write to the log.
2.3.6 HelloService
public interface HelloService {
String sayHello(String userId);
}
The HelloService interface provides only one method, sayHello, which returns a greeting to the user based on the userId passed in.
2.3.7 DefaultHelloService
@Service("helloService")
public class DefaultHelloService implements HelloService {
    @Autowired
    private UserService userService;
    @Override
    public String sayHello(String userId) {
        // Greeting format is illustrative; see the repository for the exact message
        User user = this.userService.getById(userId);
        return String.format("Hello, %s!", user);
    }}
DefaultHelloService is the default implementation of the HelloService interface and is declared as a service with the service ID helloService via the @Service annotation (again, this name will be used later in the transformation). The class injects a UserService instance. The sayHello method retrieves the user's information through the userService based on the userId parameter and returns a formatted message.
2.3.8 Application
@SpringBootApplication
public class Application {
public static void main(String[] args) throws Exception {
SpringApplication.run(Application.class, args);
}}
The Application type is the entry point for Spring Boot applications. For details, refer to the official Spring Boot documentation.
2.3.9 Controller
@RestController
public class Controller {
    @Autowired
    private HelloService helloService;
    @Autowired
    private UserService userService;
    @RequestMapping("/hello/{userId}")
    public String sayHello(@PathVariable("userId") String userId) {
        return this.helloService.sayHello(userId);
    }
    @RequestMapping(path = "/create", method = RequestMethod.POST)
    public String createUser(@RequestParam("userId") String userId, @RequestParam("name") String name) {
        // Body abridged in the original; sketch: build a User, store it, return its string form
        User user = new User();
        user.setId(userId);
        user.setName(name);
        this.userService.create(user);
        return user.toString();
    }}
The Controller type is a standard Spring MVC Controller, which I won’t discuss in detail here. The only thing to note is that this type injects objects of type HelloService and UserService, and calls the relevant methods of those two objects in the sayHello and createUser methods.
2.4 Packaged Operation
The hello-dubbo project consists of three submodules, hello-api, hello-core, and hello-web, managed with Maven. All the POM files involved so far are fairly simple; to save space they are not listed here. Interested readers can study them in the GitHub repository.
The Hello-Dubbo project is very straightforward to package and run:
Compile, package, and install
Execute the command in the project root directory
$ mvn clean install
Run
Execute the command in the hello-web directory
$ mvn spring-boot:run
The test results are as follows. Note the date and time in parentheses each time they are printed.
Go back to the console of the Hello-Web system and check the log output. The time should be the same as above.
3. Begin the transformation
3.1 Transformation Objectives
In the previous chapter we successfully built a simulated monolithic system that provides two RESTful interfaces. The goal of this chapter is to split that monolithic system into two independently running microservice systems. The modular split described in Section 2.2 is an important preparatory step, because the description below assumes that the hello-web, hello-core, and hello-api modules keep the same capabilities as set out in the previous chapter. Following the requirement of Section 1.7 to "change the internal code of the legacy system as little as possible (preferably none)," the code in these three modules will not be extensively modified, only slightly adjusted to fit the new microservice environment.
The specific goals to be achieved are as follows:
The first microservice system:
hello-web (contains the Application and the Controller)
|- hello-service-reference (contains the Dubbo service reference configuration)
|- hello-api (contains the Service and DAO interfaces and the model)
The second microservice system:
hello-service-provider (contains the Dubbo service export configuration)
|- hello-core (contains the Service and DAO implementations)
|- hello-api (contains the Service and DAO interfaces and the model)
hello-web, as before, is the end-user-facing system that provides the Web service. It now contains only the Application, the Controller, the Service and DAO interfaces, and the model, so it has no business capability of its own: it must rely on the hello-service-reference module to remotely invoke the hello-service-provider system to complete its services. The hello-service-provider system, in turn, needs to expose the remote interfaces invoked through the hello-service-reference module and implement the business logic defined by the Service and DAO interfaces.
This chapter mainly introduces how to construct the hello-service-provider and hello-service-reference modules and their roles in the microservice transformation.
3.2 Exposing remote Services
Spring Boot and Dubbo can be combined via starter packages such as spring-boot-starter-dubbo, which makes things easier. However, for simplicity and generality, this article configures Dubbo the classic Spring way.
First, we need to create a new module called hello-service-provider that exposes the remote service interface. Thanks to Dubbo’s powerful service exposure and integration capabilities, the module doesn’t need to write any code, just add some configuration.
Note: Specific usage and configuration instructions for Dubbo are not the focus of this article. Please refer to the official documentation.
3.2.1 Adding the dubbo-services.xml file
The dubbo-services.xml configuration is the key to this module: Dubbo automatically exposes the remote services based on this file. It is a standard Spring-style configuration file that introduces the Dubbo namespace, and it needs to be placed in the src/main/resources/META-INF/spring directory so that Maven automatically adds it to the classpath when packaging.
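For reference, a typical Dubbo service-export configuration looks roughly like the following sketch (the application name, registry address, and package names are assumptions consistent with the rest of the article; the actual file is in the repository):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:dubbo="http://code.alibabatech.com/schema/dubbo"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://code.alibabatech.com/schema/dubbo
           http://code.alibabatech.com/schema/dubbo/dubbo.xsd">

    <dubbo:application name="hello-service-provider"/>
    <!-- Multicast registry for local development; production would use ZooKeeper or similar -->
    <dubbo:registry address="multicast://224.5.6.7:1234"/>
    <dubbo:protocol name="dubbo" port="20880"/>

    <!-- Expose the Spring beans declared via @Service in hello-core as remote services -->
    <dubbo:service interface="net.tangrui.demo.dubbo.hello.service.UserService" ref="userService"/>
    <dubbo:service interface="net.tangrui.demo.dubbo.hello.service.HelloService" ref="helloService"/>
</beans>
```

Note how the ref attributes point at the userService and helloService bean IDs declared earlier with @Service, which is why those IDs matter.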
3.2.2 Adding a POM File
The use and configuration of Maven is also not the focus of this article, but this module uses several Maven plugins, whose roles are described here.
3.2.3 Adding the assembly.xml file
The main function of the Assembly plugin is to repackage the project so that the packaging format and contents can be customized. For this project, it needs to generate a compressed package that contains all the JARs, configuration files, startup scripts, and so on needed to run the service. The Assembly plugin requires an assembly.xml file describing the packaging process, placed in the src/main/assembly directory. For details on how to configure assembly.xml, refer to the official documentation.
3.2.4 Adding the logback.xml file
Since logback is specified as the log output component in the POM file, a logback.xml file is also required to configure it. This file must be placed in the src/main/resources directory; see the code repository for the file itself and the official documentation for configuration details.
3.2.5 Packaging
Because the packaging configuration has already been defined in the POM file, run the following command in the hello-service-provider directory:
$ mvn clean package
Upon successful execution, a file named hello-service-provider-0.1.0-SNAPSHOT-assembly.tar.gz is generated. The contents of the compressed package are shown in the figure below:
3.2.6 Running
After the configuration is complete, you can run the following command to start the service:
$ MAVEN_OPTS="-Djava.net.preferIPv4Stack=true" mvn exec:java
Note: on macOS, where the multicast mechanism is used for service registration and discovery, the -Djava.net.preferIPv4Stack=true parameter must be added, otherwise an exception is thrown.
You can run the following command to check whether the service is running properly:
$ netstat -antl | grep 20880
If information similar to the following is displayed, the system is running properly.
In a formal environment, decompress the package generated in the previous step and run the script in the bin directory.
3.2.7 Summary
Exposing remote services in this way has several advantages:
Remote service exposure is done with Dubbo without concern for the underlying implementation details
There is no intrusion into the original system, and the existing system can continue to start and run in the original way
The exposure process is pluggable
The Dubbo service can coexist with the original service during development and operation
You don’t have to write any code
3.3 Referencing remote Services
3.3.1 Adding a Service Reference
In the same way as the hello-service-provider module, in order not to intrude on the original system, we create another module called hello-service-reference. The module has only one configuration file, dubbo-references.xml, in the src/main/resources/META-INF/spring directory. The file is very straightforward:
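Its contents are roughly as follows (a sketch; the application name, registry address, and package names are assumptions consistent with the provider configuration; the actual file is in the repository):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:dubbo="http://code.alibabatech.com/schema/dubbo"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://code.alibabatech.com/schema/dubbo
           http://code.alibabatech.com/schema/dubbo/dubbo.xsd">

    <dubbo:application name="hello-web"/>
    <dubbo:registry address="multicast://224.5.6.7:1234"/>

    <!-- Local proxies for the remote services; bean ids match the original @Service names -->
    <dubbo:reference id="userService" interface="net.tangrui.demo.dubbo.hello.service.UserService"/>
    <dubbo:reference id="helloService" interface="net.tangrui.demo.dubbo.hello.service.HelloService"/>
</beans>
```

Because the bean IDs match the names originally given via @Service, the Controller's @Autowired fields can be satisfied by these proxies without any code change.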
Different from the Hello-service-provider module, however, this module only needs to be packaged into a JAR. The POM file contains the following contents:
To summarize: our legacy system was divided into three modules, hello-web, hello-core, and hello-api. After the microservice split, hello-core and hello-api are stripped out and joined with the new hello-service-provider module to form an independently runnable hello-service-provider system, so it needs to be packaged as a complete application. For hello-web to invoke the services provided by hello-core, it can no longer depend on hello-core directly; instead it depends on the hello-service-reference module created here. hello-service-reference therefore acts as a dependency library whose purpose is to remotely invoke the services exposed by hello-service-provider and provide the local proxies.
The hello-web module's dependencies change accordingly: hello-web used to depend directly on hello-core and indirectly on hello-api via hello-core; now it depends directly on hello-service-reference and indirectly on hello-api through it. The dependency relationships before and after the transformation are as follows:
3.3.2 Starting the Service
In a test environment, you only need to run the following command. Before doing so, start the hello-service-provider service.
$ MAVEN_OPTS="-Djava.net.preferIPv4Stack=true" mvn spring-boot:run
Oops! The system does not work as expected and will throw the following exception:
This means that the helloService field of class net.tangrui.demo.dubbo.hello.web.Controller requires a bean of type net.tangrui.demo.dubbo.hello.service.HelloService, but none was found. The relevant code snippet is as follows:
@RestController
public class Controller {
    @Autowired
    private HelloService helloService;
    @Autowired
    private UserService userService;
    // …
}
Obviously, neither helloService nor userService can be injected. Why?
The reason, of course, lies in the changed dependencies of the hello-web module. hello-web originally depended on hello-core, which declares the HelloService and UserService beans (via the @Service annotation), and the Controller binds them automatically via @Autowired. Now that hello-core has been replaced with hello-service-reference, whose configuration file declares two references to the remote services, injection should still work. But clearly it does not.
Think back: when we started the hello-service-provider module with mvn exec:java, the com.alibaba.dubbo.container.Main class was specified as the entry point, which the log confirms (it prints a lot of output tagged [Dubbo]). We see nothing of the kind in this run, indicating that Dubbo was not started at all. It comes down to Spring Boot, which requires some configuration to load and start Dubbo correctly.
There are many ways to get Spring Boot to support Dubbo, such as the spring-boot-starter-dubbo starter package mentioned earlier, but for simplicity and generality we'll stick with the classic approach.
The module failed to start Dubbo because its new dependency, hello-service-reference, contains only one file, dubbo-references.xml, and Spring Boot does not load this file. With that in mind, we just need Spring Boot to load the file and we're done. Spring Boot provides this capability. It is not entirely intrusion-free, but the change is acceptable: replace the annotations on the Application class (why this works is beyond the scope of this article; please Google).
@Configuration
@EnableAutoConfiguration
@ComponentScan
@ImportResource("classpath:META-INF/spring/dubbo-references.xml")
public class Application {
public static void main(String[] args) throws Exception {
SpringApplication.run(Application.class, args);
}}
The main change is to replace the @SpringBootApplication annotation with the @Configuration, @EnableAutoConfiguration, @ComponentScan, and @ImportResource annotations. It's not hard to see that the last one, @ImportResource, is what we need.
Then try again and everything will be fine.
But how do we verify that the result really comes from the hello-service-provider service? If you go back to the console of the Hello-service-provider service, you will see something like this:
Then you can be sure that the system split has been successfully implemented. Try creating the user interface again:
$ curl -X POST 'http://127.0.0.1:8080/create?userId=huckleberry&name=Huckleberry%20Finn'
Wait, what! The createdTime field has no value at all.
3.4 Maintain side effects
Let's review the side effect mentioned in Section 1.6. In the DefaultUserService.create method, we set the creation time on the incoming user object; this is the side-effect operation we need to focus on.
Let's start with the monolithic system. It runs in a single Java virtual machine, where all objects share one memory space and can reference each other. At runtime, the Controller.createUser method takes the user's input and encapsulates it as a User object, which is then passed to the UserService.create method (in practice, DefaultUserService.create), where the object's createdTime field is set. Because Java passes object references, changes made to the User object inside create are visible to the caller: the Controller.createUser method sees the modified object, so when it is returned to the user the createdTime is present.
Now the microservice case. The system runs in two separate virtual machines whose memory is isolated from each other. The starting point is again the createUser method of the hello-web system's Controller: it takes the user's input and encapsulates a User object. But instead of calling DefaultUserService directly, it calls a local proxy with the same interface, which serializes the User object and transmits it over the network to the hello-service-provider system. That system deserializes the data, producing an identical copy of the original object, and processes the copy in UserService.create (which calls the DefaultUserService implementation). The createdTime set on this copy stays in the hello-service-provider system's memory; it is never passed back, so the hello-web system cannot read it, and the field appears empty to the caller. Remember that DefaultUserService.create writes to the log: going back to the hello-service-provider console, you can see log output confirming that within that system the createdTime field does indeed have a value.
There is only one way to make this side effect felt by the Hello-Web system in another virtual machine, and that is to send back the changed data.
3.4.1 Adding a return value to a method
The easiest way to do this is simply to modify the service interface and return the changed data.
First, modify the create method of the UserService interface to add the return value:
public interface UserService {
    // …
    // The create method now returns the modified User
    User create(User user);
}
Then modify the corresponding method in the implementation class to return the changed User object:
@Service("userService")
public class DefaultUserService implements UserService {
    // …
    @Override
    public User create(User user) {
        user.setCreatedTime(new Date()); // the original side effect
        this.userRepository.create(user);
        return user; // send the modified object back to the caller
    }}
Finally, modify the caller implementation to receive the return value:
@RestController
public class Controller {
    // …
    @RequestMapping(path = "/create", method = RequestMethod.POST)
    public String createUser(@RequestParam("userId") String userId, @RequestParam("name") String name) {
        User user = new User();
        user.setId(userId);
        user.setName(name);
        // Receive the returned object, which carries the createdTime set by the remote service
        user = this.userService.create(user);
        return user.toString();
    }}
Compile, run, and test (figure below), and as expected, the create time in parentheses is back. It works in the same way as described at the beginning of this section, but in reverse. I won’t expand it in detail here and leave it to you to think for yourself.
This modification approach has one advantage and several disadvantages:
Advantage: it is simple and easy to understand.
Disadvantages:
The system interface changes, and the new interface is incompatible with the original one (contrary to the principle of Section 1.4, "Keep the original interface unchanged or backward compatible").
It inevitably changes the internal code of the legacy system (contrary to the principle of Section 1.7, "Change the internal code of the legacy system as little as possible (preferably none)").
The modification is not pluggable (contrary to the principle of Section 1.9, "The transformation result is pluggable").
This is a simple change, but it does more good than harm, and unless we can fully control the system, the risk of this change increases dramatically as the system complexity increases.
3.4.2 Adding a new method
If we cannot avoid changing the interface, we should at least make the changed interface backward compatible with the original one. One way to ensure backward compatibility is to leave the old method untouched and add a new one. The process is as follows:
First, add a new method __rpc_create to the UserService interface. The name may look odd, but it has two advantages. First, it will not clash with an existing method, because Java naming conventions discourage identifiers like this, so legacy code is unlikely to contain one. Second, keeping the original method name after the __rpc_ prefix makes the correspondence with the original method obvious, which helps readability. The following is an example:
public interface UserService {
    ...
    void create(User user);
    // Add a method with a return value
    User __rpc_create(User user);
}
Then, implement the new method in the implementation class:
@Service("userService")
public class DefaultUserService implements UserService {
    ...
    @Override
    public void create(User user) {
        // ... original creation logic, unchanged ...
    }

    @Override
    public User __rpc_create(User user) {
        this.create(user);
        return user;
    }
}
In __rpc_create, the user argument is passed to the create method as an object reference, so any changes the create method makes are visible to __rpc_create, which simply returns the same object. The business logic is therefore exactly the same as before.
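Within a single process this delegation works because both methods see the same object. A minimal sketch (again with a stand-in User class, not the demo project’s code):

```java
import java.util.Date;

// Stand-in for the demo project's entity class.
class User {
    Date createdTime;
}

public class SameProcessDemo {
    // The original method: mutates its argument as a side effect.
    static void create(User user) {
        user.createdTime = new Date();
    }

    // The new method: receives the same object reference, so the side
    // effect is visible here and can be returned to a remote caller.
    static User __rpc_create(User user) {
        create(user);
        return user;
    }

    public static void main(String[] args) {
        User user = new User();
        User returned = __rpc_create(user);
        System.out.println(returned == user);         // prints "true"
        System.out.println(user.createdTime != null); // prints "true"
    }
}
```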
Third, add local stubs to the service reference side (see the official documentation for the concept and usage of local stubs).
Add a class UserServiceStub to the hello-service-reference module.
public class UserServiceStub implements UserService {

    private UserService userService;

    // Required constructor: the local proxy is passed in by the framework.
    public UserServiceStub(UserService userService) {
        this.userService = userService;
    }

    @Override
    public User getById(String id) {
        return this.userService.getById(id);
    }

    @Override
    public void create(User user) {
        User newUser = this.__rpc_create(user);
        user.setCreatedTime(newUser.getCreatedTime());
    }

    @Override
    public User __rpc_create(User user) {
        return this.userService.__rpc_create(user);
    }
}
This class is the local stub. Simply put, before a call reaches the local proxy’s method, the caller first calls the corresponding method in the local stub, so the local stub must implement the same interface as the service provider and service reference. The constructor in the local stub is required, and its signature is fixed by convention: the local proxy is passed in as the parameter. The getById and __rpc_create methods simply delegate to the local proxy, so the method we need to focus on is create. First, create calls the __rpc_create method in the local stub, which reaches the corresponding method of the service provider through the local proxy and receives the return value newUser, whose createdTime field has been set. All we then need to do is read the createdTime value from the newUser object and set it on the user parameter, recreating the side effect. The user parameter is then “passed back” to the caller of the create method carrying the newly set createdTime value.
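The whole round trip through the stub can be simulated without Dubbo. In this sketch (stand-in types, not the demo project’s code), the anonymous proxy plays the role of the Dubbo local proxy, returning a modified copy just as a real remote call would, and the stub copies the side effect back onto the caller’s object:

```java
import java.util.Date;

// Stand-in for the demo project's entity class.
class User {
    Date createdTime;
}

// Stand-in for the service interface with the added method.
interface UserService {
    void create(User user);
    User __rpc_create(User user);
}

public class StubFlowDemo {
    // Wrap the local proxy in a stub, as Dubbo does via the stub attribute.
    static UserService stubOver(UserService proxy) {
        return new UserService() {
            @Override
            public void create(User user) {
                User newUser = __rpc_create(user);      // goes through the proxy
                user.createdTime = newUser.createdTime; // replay the side effect
            }
            @Override
            public User __rpc_create(User user) {
                return proxy.__rpc_create(user);        // plain delegation
            }
        };
    }

    public static void main(String[] args) {
        // Stand-in for the local proxy: like a real remote call, the
        // caller's object is untouched and a changed copy comes back.
        UserService proxy = new UserService() {
            @Override
            public void create(User user) { /* side effect lost remotely */ }
            @Override
            public User __rpc_create(User user) {
                User copy = new User();
                copy.createdTime = new Date();
                return copy;
            }
        };

        User user = new User();
        stubOver(proxy).create(user);
        System.out.println(user.createdTime != null); // prints "true"
    }
}
```

The caller only ever sees the create method, which is why no caller code needs to change once the stub is configured.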
Finally, modify the corresponding reference configuration in the dubbo-references.xml file to enable the local stub:
<dubbo:reference
    interface="net.tangrui.demo.dubbo.hello.service.UserService"
    version="1.0"
    stub="net.tangrui.demo.dubbo.hello.service.stub.UserServiceStub" />
Given how the local stub works, we do not need to modify any code or configuration in the calling hello-web module. Compile, run, and test, and we get the result we want.
This approach is an improvement over the first one, but it still has significant drawbacks:
Advantages:
- Backward compatibility of the interface is maintained.
- The local stub means no caller code has to be modified.
- The result of the transformation can be made pluggable through configuration.
Disadvantages:
- The implementation is complex, especially the local stub; if the legacy code makes extensive changes to the contents of the passed parameters, reproducing the side effect can be time-consuming and error-prone.
- It is harder to understand.
4. Summary
At this point, the task of transforming the legacy system into a microservice system is complete, and the result essentially satisfies the ten transformation principles and requirements proposed at the beginning of the article (give yourselves a round of applause). I hope you have found it helpful. Although the sample project was tailor-made for the narrative, the ideas and methods in this article were explored and summarized from real practice; the pitfalls, the problems encountered, the solutions, and the difficulties of the transformation have all been presented to you.
Microservices are no longer a new technology, but the burden of legacy systems remains an important factor limiting their adoption. I hope this article gives you some inspiration to better embrace the changes microservices bring in your future work.