Estimated reading time: about 12.4 minutes.


Source: http://suo.im/4wqRi7 | Author: Yang Minghan

Prologue



Front-end/back-end separation has become the industry standard for Internet project development. Using Nginx + Tomcat (optionally with a NodeJS layer in between) decouples the two sides effectively, and this separation lays a solid foundation for large-scale distributed architecture, elastic computing architecture, microservice architecture, and multi-terminal services (multiple clients such as browsers, in-car terminals, Android, and iOS).


This step is a necessary part of a system architecture's evolution from ape to human.


The core idea is that the front-end HTML page calls the back-end RESTful API through Ajax and exchanges JSON data with it.
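As a minimal sketch of what such an interface can look like on the Java side (assuming Spring MVC 4.3+; the UserController and User class below are hypothetical examples, not from the original article), the back end simply returns an object that gets serialized to JSON, and the HTML page fetches it with Ajax:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical example: a RESTful endpoint that returns JSON instead of forwarding to a JSP.
@RestController
@RequestMapping("/api/users")
public class UserController {

    // GET /api/users/42 -> {"id":42,"name":"demo"} (serialized by Jackson).
    @GetMapping("/{id}")
    public User findById(@PathVariable Long id) {
        return new User(id, "demo");
    }

    public static class User {
        private final Long id;
        private final String name;

        public User(Long id, String name) {
            this.id = id;
            this.name = name;
        }

        public Long getId() { return id; }
        public String getName() { return name; }
    }
}
```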
Terminology:


In Internet architecture, a web server generally refers to servers such as Nginx and Apache; they can only serve static resources.


An application server generally refers to servers such as Tomcat, Jetty, and Resin; they can handle both dynamic and static resources, but they serve static resources less efficiently than a web server.


Generally, only the web server is exposed to the Internet; the application server is reachable only from the intranet.
Specialization (separating the developers)


In most older JavaWeb projects, Java programmers worked on both the front end (Ajax, jQuery, JS, HTML, CSS, etc.) and the back end (Java, MySQL, Oracle, etc.).


As the industry developed, many small and medium-sized companies gradually drew a clearer and clearer line between the front end and the back end: front-end engineers do only front-end work, and back-end engineers do only back-end work.


As the saying goes, every trade has its specialists: if one person does everything, in the end they are truly good at nothing.


Large and medium-sized companies need specialists, and small companies need generalists; but for individual career development, I recommend specializing.


For back-end Java engineers:


Focus on Java fundamentals, design patterns, JVM principles, Spring + Spring MVC principles and source code, Linux, MySQL transaction isolation and locking mechanisms, MongoDB, HTTP/TCP, multi-threading, distributed architecture (Dubbo, Dubbox, Spring Cloud), elastic computing architecture, microservice architecture (Spring Boot + ZooKeeper + Docker + Jenkins), Java performance optimization, related project management, and so on.


The back end pursues the "three highs" (high concurrency, high availability, high performance), plus security, storage, business logic, and so on.


For front-end engineers:


Focus on HTML5, CSS3, jQuery, AngularJS, Bootstrap, ReactJS, VueJS, Webpack, Less/Sass, Gulp, NodeJS, Google V8, JavaScript multi-threading, modularization, aspect-oriented programming, design patterns, browser compatibility, performance optimization, and more.


The front-end is about page performance, speed, compatibility, user experience, and so on.


With specialization, your core competitiveness keeps growing; as the saying goes, life gives back what you put into it.


Development on both ends is getting deeper and deeper; if you think you can do it all, in the end you will master none of it.


By splitting into front-end and back-end teams, engineers on each side can focus on their own field and be managed independently, and together they form an excellent full-stack team.


The primitive era (coupling everywhere)


For a long time, our JavaWeb projects used back-end framework stacks such as Spring MVC/Struts + Spring + Spring JDBC/Hibernate/MyBatis.


Most projects have three layers on the Java back end: a control layer (Controller/Action), a business layer (Service/Manage), and a persistence layer (DAO).


The control layer is responsible for receiving parameters, calling relevant business layers, encapsulating data, and routing & rendering to JSP pages.


The JSP page then displays the back-end data using various tags (JSTL/EL/Struts tags, etc.) or handwritten Java expressions (<%= %>); this is the classic MVC approach.
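For contrast, here is a minimal sketch of that traditional flow (hypothetical names, assuming Spring MVC): the controller stuffs data into the model and returns a JSP view name, and the page is rendered on the server.

```java
import java.util.Arrays;
import java.util.List;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

// Hypothetical example of the old, coupled style: the back end controls routing and rendering.
@Controller
public class OrderPageController {

    // Stands in for the Service/DAO layers.
    private final OrderService orderService = new OrderService();

    @RequestMapping("/orders")
    public String listOrders(Model model) {
        // The controller receives the request, calls the business layer,
        // and puts the data into the model...
        model.addAttribute("orders", orderService.findAll());
        // ...then returns a view name that the view resolver maps to something like
        // /WEB-INF/views/orderList.jsp, which renders the HTML on the server.
        return "orderList";
    }

    static class OrderService {
        List<String> findAll() {
            return Arrays.asList("order-1", "order-2");
        }
    }
}
```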


Now look at the workflow: requirements are finalized, the code is written, testing is done, and then what? Ready to go live?


You use tools such as Maven or Eclipse to package your code into a WAR file, then deploy that WAR to the web container in your production environment (Tomcat/JBoss/WebLogic/WebSphere/Jetty/Resin), right?


Once deployment is done, you start your web container and begin serving requests; at that point, after configuring the domain name, DNS, and so on, your site (assuming it is a website) becomes accessible.


Is all of your front-end and back-end code in that WAR package, including your JS, CSS, images, and all kinds of third-party libraries? Right?


Ok, now enter your web domain name (www.xxx.com) in your browser. What happens? (This question is also asked at many companies.)


I will only cover the essentials here; readers whose fundamentals are shaky, please look up the details yourselves.


The browser resolves the domain name via DNS to find your server's public IP and sends an HTTP request to it. After the TCP three-way handshake (HTTP rides on top of TCP/IP), data starts flowing over TCP; your server receives the request, starts handling it, reads the parameters, and returns a response to the browser. The browser then uses the Content-Type header to parse the returned content and presents it to the user.


Now consider this: suppose your home page contains 100 images. What looks like a single HTTP request from the user is actually not. On the first visit the browser has no cache, so for those 100 images it has to issue 100 additional HTTP requests (someone will bring up HTTP keep-alive versus short connections; that is not discussed here). Your server receives all of these requests and has to spend memory creating sockets for the TCP transfers, consuming your server's computing resources.


Here is the key point: in this situation the pressure on your server is enormous, because every request on the page goes to this one server. One user is fine, but what if 10,000 people access it concurrently (leave server clusters aside for now and assume a single instance)? How many TCP connections can your server hold? How much bandwidth do you have? How much memory does your server have? Is your disk high-performance? How much I/O can it handle? How much memory have you allocated to the web container? Will it go down?


That is why the larger a web application becomes, the more it needs to be decoupled.


In theory you can put your database + application service + message queue + cache + user-uploaded files + logs and everything else on a single server, skip service governance, skip performance monitoring, skip alerting, and just let it be a mess.


But that is putting all your eggs in one basket, and it is very dangerous: if one unstable sub-application exhausts the server's memory, your entire site goes down.


If an outage happens right when your business is in a period of explosive growth, then congratulations: the business's success is now blocked by its technology, you will likely lose a large number of users, and the consequences are unthinkable.


Note: technology must stay ahead of the business; otherwise you will miss the best window for growth.


In addition, your applications are all coupled together like a monolith. When the server comes under load, the instinct is to cluster servers behind a load balancer, but then you are scaling the whole monolith horizontally, and the performance gain per added server keeps shrinking.


There is no need to horizontally scale features or modules that are already lightly loaded. In this article's example, if your performance bottleneck is not in the front end, why horizontally scale the front end?


And at release time: I clearly changed only the back-end code, so why do I have to redeploy the front end as well?


(Reference: Architecture Exploration: Lightweight Microservice Architecture, by Huang Yong.) A proper Internet architecture should be split apart: a web server cluster, an application server cluster, a file server cluster, a database server cluster, a message queue cluster, a cache cluster, and so on.


JSP pain points


Most older JavaWeb projects used JSP as the page layer to present data to users. Traffic was low, so performance requirements were not demanding; but in today's big-data era, the performance requirements for Internet projects keep rising.


The old front-back coupled architecture is no longer enough, so we need a decoupled approach that can dramatically increase our capacity.


1. Dynamic and static resources are coupled together, so the server is under heavy pressure: it has to handle every kind of HTTP request, including CSS, JS, images, and so on.


Once the server has a problem, the front end and back end go down together, and the user experience is terrible.


2. After the UI designer finishes the mockups, the front-end engineer only slices them into HTML, and the Java engineer then has to convert the HTML into JSP pages. The error rate is high (pages often contain a lot of JS code), and fixing problems requires both sides to cooperate, which is inefficient.


3. JSP must run inside a Java web server (Tomcat, Jetty, Resin, etc.) and cannot be served by Nginx and the like (a single Nginx instance is said to handle up to 50,000 concurrent HTTP connections, an advantage worth exploiting), so performance cannot improve.


4. The first request for a JSP requires the web server to compile it into a servlet, so the first access is slow.


5. Every JSP request goes through the servlet and writes the HTML page to the output stream, which is less efficient than serving static HTML directly (yes, every single time).


6. JSP pages are full of tags and expressions; front-end engineers are overwhelmed when modifying them and run into plenty of pain points.


7. If a JSP contains a lot of content, the page responds slowly because it is loaded synchronously.


8. Front-end engineers are forced to use a Java IDE (such as Eclipse) and configure all kinds of back-end development environments. Has anyone considered how the front-end engineers feel?


Based on the pain points above, we should shift the project's development weight toward the front end and achieve true front-back decoupling!


Development workflow


The old way was:
1. The product manager/team lead/customer raises requirements
2. The UI designer produces design mockups
3. Front-end engineers build the HTML pages
4. Back-end engineers convert the HTML pages into JSP pages (a strong front-back dependency: the back end has to wait for the front-end HTML to be finished before converting it to JSP, and if the HTML changes later it is even more painful and inefficient)
5. Integration problems appear
6. The front end reworks
7. The back end reworks
8. A second round of integration
9. Integration succeeds
10. Delivery


The new approach is:
1. The product manager/team lead/customer raises requirements
2. The UI designer produces design mockups
3. The front end and back end agree on the interfaces, data formats, and parameters
4. The front end and back end develop in parallel (there is no strong dependency, so parallel development is possible; if requirements change, as long as the interfaces and parameters stay the same, neither side needs to modify much code, and development efficiency is high)
5. The front end and back end are integrated
6. The front-end pages are fine-tuned
7. Integration succeeds
8. Delivery


Request flow


The old way was:
1. The client sends a request
2. A servlet or controller on the server receives the request (the back end controls routing and page rendering, and most of the project's development weight sits in the back end)
3. The Service and DAO code is invoked to complete the business logic
4. The result is forwarded to a JSP
5. The JSP renders the dynamic content into the page


The new approach is:
1. The browser sends a request
2. The request goes directly to an HTML page (the front end controls routing and page rendering, and the project's development weight shifts toward the front end)
3. The HTML page calls the server-side interfaces for data (via Ajax and the like; the back end returns data in JSON format, which has replaced XML thanks to its simplicity and efficiency)
4. The data is filled into the HTML, dynamic effects are displayed, and the page parses and manipulates the DOM.
(Interested readers can open a large site such as Alibaba, press F12, and watch the HTTP traffic as the page refreshes: it is mostly separate requests for back-end data transferred as JSON, rather than one big HTTP request that returns the entire page, dynamic and static content included.)


To summarize the request path in the new approach:


A large number of concurrent browser requests -> web server cluster (Nginx) -> application server cluster (Tomcat) -> file/database/cache/message-queue server clusters.


At the same time, the system can be divided into modules and smaller clusters by business domain, preparing the ground for later architecture upgrades.


The advantages of front-back separation


1. True front-end/back-end decoupling is achieved, with Nginx as the front-end server.


The front-end/web server hosts static resources such as CSS, JS, and images (you can even put CSS, JS, images, and the like on a dedicated file server such as Alibaba Cloud OSS and accelerate them with a CDN). The front-end server is responsible for page references, navigation, and routing, and the front-end pages call the back-end interfaces asynchronously; the back-end/application server runs Tomcat (think of it as a data provider). Overall response speed improves.


(Using front-end engineering tools and frameworks such as NodeJS, React, React Router, Redux, and Webpack.)


2. When a bug is found, it is quick to determine whose problem it is, and there is no finger-pointing.


Page logic, navigation errors, browser compatibility issues, script errors, page styling, and so on are the front-end engineer's responsibility. Problems such as wrong interface data, failed data submission, and response timeouts are handled by the back-end engineers.


The two sides do not step on each other, and the front end and back end live together as one happy family.


3. Under heavy concurrency, the front-end and back-end servers can be scaled horizontally at the same time. For example, a single Taobao home page needs a cluster of 2,000+ front-end servers to withstand hundreds of millions of page views per day.


(At Alibaba's tech summit I heard that they write their own web container; even at 100,000 concurrent HTTP connections per instance, 2,000 instances means 200 million concurrent HTTP connections, and they can keep scaling out based on the predicted traffic peak. Terrifying, and that is just one home page...)


4. The concurrency/load pressure on the back-end servers drops: all HTTP requests except API calls are shifted to the front-end Nginx.


And, except for the first page load, the browser serves a large share of resources from its local cache.


5. Even if the back-end service temporarily times out or goes down, the front-end pages remain accessible; the data simply fails to load.


6. You may also need a WeChat-based light application, and then your interfaces can be shared entirely. If you also have a mobile app, a bit of refactoring lets you reuse many of the same interfaces and improve efficiency. (Multi-terminal applications.)


7. Pages that display a lot of content are fine, because they load asynchronously.


8. Nginx supports hot deployment of pages without restarting the server, so front-end releases are more seamless.


9. Code maintainability & readability improve (front-back coupled code is quite laborious to read).


10. Development efficiency improves, because the front end and back end can be developed in parallel instead of being strongly dependent on each other as before.


11. Certificates are deployed on Nginx: the external network uses HTTPS with only ports 443 and 80 open and all others closed (to defeat port scanning), while the internal network uses HTTP, giving both performance and security.


12. Large amounts of front-end component code can be reused; componentization improves development efficiency. Extract it out!


Points to note


1. All front-end and back-end engineers must attend the requirements meeting, and interface documents must be written. Back-end engineers must write test cases (two dimensions); service-layer test cases are written with JUnit, as in the sketch below. PS: can the front end do unit tests too?
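As a minimal sketch of such a service-layer test case (assuming JUnit 4 and Mockito; OrderService and OrderDao are hypothetical stand-ins, not from the article):

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OrderServiceTest {

    @Test
    public void totalAmountSumsAllOrderItems() {
        // Mock the DAO so the test exercises only the business logic.
        OrderDao dao = mock(OrderDao.class);
        when(dao.findItemPrices(42L)).thenReturn(new double[] {10.0, 15.5});

        OrderService service = new OrderService(dao);

        assertEquals(25.5, service.totalAmount(42L), 0.001);
    }

    // Hypothetical collaborators, only to keep the sketch self-contained.
    public interface OrderDao {
        double[] findItemPrices(long orderId);
    }

    public static class OrderService {
        private final OrderDao dao;

        public OrderService(OrderDao dao) {
            this.dao = dao;
        }

        public double totalAmount(long orderId) {
            double sum = 0;
            for (double price : dao.findItemPrices(orderId)) {
                sum += price;
            }
            return sum;
        }
    }
}
```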


2. The "interfaces" above do not mean Java interfaces; they refer to the request-handling methods in your Controller.


3. The front-end team's workload increases, the back-end team's workload decreases, and performance and scalability improve.


4. Some front-end frameworks are needed to handle things like page nesting, pagination, and navigation control (the front-end frameworks mentioned above).


5. If your project is small, or purely an intranet project, feel free to skip all of this architecture; but if it is an Internet-facing project, well, hehe.


6. In the past, some teams used template engines such as Velocity/FreeMarker to generate static pages; whether to do that is a matter of opinion.


7. The point of this article is that JSP has been phased out of large Internet-facing JavaWeb projects, not that JSP no longer needs to be learned. Students and newcomers should still firmly master JSP, servlets, and the rest of the JavaWeb fundamentals; what do you think frameworks like Spring MVC are built on?


8. If a page needs permission checks or similar validation, that data can also be fetched from the interfaces via Ajax.


9. For logic that can be done at both the front end and the back end, I suggest putting it at the front end. Why?


Because logic needs computing resources to run. If it runs on the back end, it consumes the server's CPU, memory, bandwidth, and other computing resources, and remember that server-side computing resources are limited. If it runs on the front end, it uses the client's computing resources, so your server load drops (which matters under high concurrency).


Things like data validation, however, must be done on both the front end and the back end (see the sketch below)!
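On the back-end side, for example, this repeated validation can be done declaratively. A minimal sketch, assuming Spring MVC 4.3+ with a JSR-303 Bean Validation implementation on the classpath (RegisterController and RegisterRequest are hypothetical names):

```java
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical example: even though the page already validates these fields in JS,
// the server validates them again before the business logic runs.
@RestController
public class RegisterController {

    @PostMapping("/api/register")
    public String register(@Valid @RequestBody RegisterRequest request) {
        // Reaching this point means the constraints below passed;
        // otherwise Spring rejects the request before this method is called.
        return "ok";
    }

    public static class RegisterRequest {
        @NotNull
        @Size(min = 4, max = 32)
        private String username;

        @NotNull
        @Size(min = 8, max = 64)
        private String password;

        public String getUsername() { return username; }
        public void setUsername(String username) { this.username = username; }
        public String getPassword() { return password; }
        public void setPassword(String password) { this.password = password; }
    }
}
```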


10. The front end needs a mechanism for handling back-end request timeouts and back-end outages, and should present them to users in a friendly way.


Further reading


1. In fact, static resources such as JS, CSS, and images can be moved to a file service such as Alibaba Cloud OSS (on an ordinary server and operating system, I/O develops serious performance problems once stored files reach the PB level or a single directory holds 30,000 to 50,000 files). With a CDN (nationwide edge-node acceleration) in front of OSS, your pages fly no matter where in the country the user is, and your Nginx load drops further.


2. If you want to adopt a lightweight microservice architecture, use a NodeJS layer as the gateway. Another benefit of NodeJS is SEO: Nginx only returns static resources to the browser, and domestic search-engine crawlers only grab that static content without executing the JS, so pure client-side rendering gets poor search-engine support.


Also, because Nginx does not assemble or render pages, the static page has to be sent to the browser and rendered there, which increases the browser's rendering burden.


Requests from the browser are dispatched by Nginx: page URL requests go to NodeJS for page assembly and rendering, while API requests go straight to the back-end servers for a response.


3. If you run into cross-origin problems, Spring 4's CORS support solves them nicely (see the sketch below); with an Nginx reverse proxy you generally have no cross-origin problem at all, unless the front-end and back-end services sit on two different domains.
The JSONP approach is obsolete.
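A minimal sketch of that global CORS configuration, assuming Spring MVC 4.2+ (the mapping and the allowed origin are illustrative assumptions):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

// Hypothetical example: allow the separately deployed front-end domain
// to call the /api/** interfaces from the browser.
@Configuration
@EnableWebMvc
public class CorsConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")
                .allowedOrigins("https://www.front-end-domain.example")
                .allowedMethods("GET", "POST", "PUT", "DELETE")
                .allowCredentials(true);
    }
}
```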


4. If you want to support multiple applications, drop the native Tomcat session mechanism, use tokens instead, back them with a cache (since this is a distributed system), and implement single sign-on. For the security side of tokens, look up JWT. A rough sketch of the token idea follows below.
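A minimal sketch of the token idea using only JDK crypto (the secret, payload layout, and class name are illustrative assumptions; a real system would use a proper JWT library as the article suggests, and typically also check tokens against a shared cache):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical example: a signed, self-contained token replaces the Tomcat session,
// so any application server (or application) can verify it without shared session state.
public class TokenDemo {

    private static final String SECRET = "change-me-demo-secret"; // illustrative only

    // Issue a token of the form base64(payload) + "." + base64(hmac(payload)).
    public static String issue(String userId, long expiresAtMillis) throws Exception {
        String payload = userId + ":" + expiresAtMillis;
        String body = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        return body + "." + sign(body);
    }

    // Verify the signature and the expiry time; a real implementation should also
    // use a constant-time comparison for the signature.
    public static boolean verify(String token) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 2 || !sign(parts[0]).equals(parts[1])) {
            return false;
        }
        String payload = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        long expiresAt = Long.parseLong(payload.substring(payload.lastIndexOf(':') + 1));
        return System.currentTimeMillis() < expiresAt;
    }

    private static String sign(String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
    }
}
```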


5. The front-end project can add mock tests (constructing fake objects that simulate the back end, so it can be developed and tested independently), while the back end needs detailed test cases to guarantee service availability and stability.


Conclusion


Front-back separation is not just a development model; it is an architectural pattern (the front-back separation architecture).


Do not think that front-back separation simply means splitting the front-end code from the back-end code. The front-end and back-end projects are two separate projects, hosted on two different servers and deployed independently: two projects, two code repositories, and different developers.


Front-end and back-end engineers agree on the interaction interfaces so they can develop in parallel, and after development each side deploys independently. The front end sends HTTP requests via Ajax to call the back end's RESTful APIs; the front end focuses only on page styling and on parsing & rendering the dynamic data, while the back end focuses on the concrete business logic.



END

The growth path of programmers

Though the road is long, the journey is sure to come

This article was originally published on the WeChat public account of the same name, "The Growth Path of Programmers".
