Welcome to the Futu Web development team
It's late at night, just right for an April Fool's Day confession to the girl you love. Haha…
First of all, the editor sends everyone best wishes.
This weekend I had planned to publish a few articles that have been waiting to be sorted out, but I felt a bit off all weekend and never got to it. Probably too many kebabs on Saturday night.
A recent piece on Module & Babel is still not ready, so let's warm up with a Node.js article instead. A lot of what's described here is still in use on the Futu front end today, and it will help you understand how the Futu front end works.
The good news is that the Futu front-end stack (jQuery, angular.js) is gradually being replaced by Vue. The Node.js access layer is also in progress, and external services will be available soon. I believe the Vue.js and Node.js partners are ready to move.
Yes, the editor is one of the promoters of Futu's Node.js services.
Working on the front end at Futu means writing not only front-end code but also Node.js services; don't read too much into that.
But let’s get back to how we learned to use Node.js on the job.
Note: the Futu front-end framework has since been migrated to Vue.js, but that doesn't affect the reading below.
The original article comes from futu WEB blog: original link
The article
By chance, I had the opportunity to cross the browser divide and experience Node.js for real.
First of all, let me say: "It is a great honor to land the first Node.js project after two months of hard work." The whole project went relatively smoothly.
The story is simple: Node.js does an access layer.
The reason
Front-end technology evolves by the day, and front-end engineering is now inseparable from Node.js. Most projects today use a front-end/back-end separation: the back end provides interfaces, and the front end renders the data those interfaces return. But front-end logic keeps getting more complex, and the range of scenarios keeps growing, so it is worth asking whether that architecture suits every application. The rise of the "big front end" is one attempt at an answer: use a Node.js access layer to cover the various scenarios.
Technological innovation is a must, for individuals and teams alike. That was exactly our team's problem, so someone had to take the first step, and I was lucky enough to be that someone.
Getting started
No matter how good a technology is, applying it deserves careful thought. But, you know, you can't think of everything in advance. So what to do? Find a pilot project. A well-running online project obviously can't be rebuilt, and manpower was tight, so it had to be a new project. As it happened, the company needed one, and it was to be built with front-end/back-end separation. And then one day…
The group leader said, "Isn't the team doing technology selection? Is it feasible to use Node.js as the access layer for this project?"
After careful consideration, I replied, "Yes, no problem." (Inwardly: screw it, agree first and figure it out later. 😄)
To borrow my boss's words: "If a technology never lands in production, all the talk is for nothing."
Background: the team has always been very interested in Node.js, me included; I had been reading and researching the Node.js source code. The infrastructure group had also been investigating Node.js frameworks, aiming to build an integrated project framework suited to team development.
So I believe it: chance favors the prepared mind.
And so my Node.js journey began.
All things are difficult before they are easy
I run Node.js commands practically every day, write various npm packages, even build some projects of my own. But actually developing a production project with Node.js brings pressure, because the project's technical structure makes me responsible for far more. Normally I just write front-end logic and do front-end engineering; under this structure, I had to learn and apply things I wasn't familiar with.
I’ve outlined some broad directions:
- 1. What is the overall architecture of the Node.js access layer?
- 2. What front-end technology should be used?
- 3. How should front-end engineering be done?
- 4. How does the project run in different environments (typically development, test, and production)?
- 5. What about front-end automation?
- 6. Unit testing?
- 7. Coding style?
- 8. How does Node.js connect to the server?
- 9. What should I do for logging, reporting, login service access, permission verification, etc.?
- 10. How to launch the project?
- 11. How to ensure the service stability once it is online?
- 12. How do I debug problems?
And there may be many, many more problems to deal with; you can already see it. I felt I knew only the tip of the iceberg. However beautifully I wrote code, the feeling of competence faded. What was required was no longer pure coding ability but a view of the big picture, a change in perspective.
Anyway, start a new Git repository and start working on it.
How to get an appropriate project architecture
This really is a problem: whether the architecture is reasonable affects whether later coding can proceed quickly, and it also affects later feature iteration and maintenance.
So the question is: do I design first or code first?
I chose to code first and then refactor.
Background: as mentioned above, the infrastructure group already had a simple Node.js integration framework. It was incomplete, but simple enough, which meant I had no trouble refactoring my own project architecture out of it.
You might think, well, design up front, right?
My focus was different: code first, get the project running, then refactor toward the architecture that fits the project.
To answer the question of coding or designing first, I borrow a passage from Refactoring:
"Refactoring changes the role of up-front design. Without refactoring, there is a lot of pressure to get the design right up front, because any later modification to it will be very costly; so you put more attention into the up-front design to avoid future changes. With refactoring, the emphasis shifts: you can still design up front, but you no longer have to find the one right solution, only a reasonable one for now." — from Refactoring: Improving the Design of Existing Code
"How hard is it to refactor a simple solution into a flexible one? The answer: quite easy." — from Refactoring: Improving the Design of Existing Code
I recommend Refactoring: Improving the Design of Existing Code.
So I focused on coding first, and looked for the right architecture once the whole project demo was up and running. A reasonable architecture simply puts code where it belongs. Code rots, like a room that gets messy if it is never tidied; refactoring is picking the code up, rearranging it, and putting it back where it belongs.
Technical framework selection considerations
The choice of technical framework will affect the overall architecture, coding, productivity, and maintenance costs of the project.
First, let me say: whatever the front-end or back-end framework, the choice is a team matter, not a personal one. After all, this is not a personal project; the program must remain maintainable even when I'm not around.
Node.js back end
Koa2. Why not a framework like Koa 1.x or Express, and why didn't the team develop its own?
Node.js v8 LTS is almost here, Koa has been upgraded to Koa2, and there is no need to use Express, which is showing its age. Koa2 has proven itself over the past two years. At this stage there is no need for the team to spend significant manpower on a home-grown framework; instead, it can change its thinking and build an integrated framework suited to team projects on top of Koa2.
With that in mind, using Koa2 as the main framework is the most appropriate choice for the infrastructure team at this stage, especially since Node.js v7.6+ natively supports async/await syntax.
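To show why that matters, here is a minimal Koa2 sketch (not our project code, just an illustration of async/await middleware; the local port matches the one used in the nginx section below):

const Koa = require('koa');
const app = new Koa();

// a logging middleware: with async/await, "around" logic reads top to bottom
app.use(async (ctx, next) => {
  const start = Date.now();
  await next(); // wait for the downstream middleware to finish
  console.log(`${ctx.method} ${ctx.url} - ${Date.now() - start}ms`);
});

app.use(async (ctx) => {
  ctx.body = 'Hello from the access layer';
});

app.listen(6666);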
The front-end framework
The jQuery dynasty is slowly crumbling; the era of angular.js, React, and Vue has arrived. Once again based on the team's current situation, angular.js v1.x was the most advantageous choice.
I'm not knocking the other frameworks here; it all comes down to the team's situation and whether the current framework helps me develop efficiently. If one day angular.js no longer suits the project's development needs, I won't hesitate to raise the issue.
For example: when the project needs faster page rendering, consider server-side rendering; when the server is under pressure, consider separating the front and back ends. When isomorphic rendering is the most appropriate approach, React and Vue are good choices.
Frameworks are not right or wrong, only appropriate.
webpack2, the hot favorite of the moment, was also my first choice. Why not webpack3…
In fact I did test webpack3 on this project. My acceptance criterion was that the compressed output be smaller than what I get now. It wasn't, so the change didn't get merged.
Gulp handles the workflow, no problem there. But why use gulp when webpack2 is in play? Why both?
There is no absolute opposition between these two components. Here they complement each other.
The overall front-end framework: Angular.js v1.x + Webpack2 + gulp.
Babel is used to compile front-end code.
(Figure: the main frameworks used in the project.)
Front-end engineering
The overall architecture and the choice of front-end frameworks inevitably shape the front-end engineering: where the front-end code lives, how webpack packaging works, where the build output goes, how much gulp has to do and how annoying it is. All of these questions test the project's architecture, and this is one more reason I code first and then refactor to fine-tune it. Specify the whole architecture in advance, and your later code will contort itself to fit it; the result is easy to imagine, and it won't be satisfying.
Which brings us to the first question.
Where does the angular.js source code I write live?
To address this, I made a recommendation to the infrastructure group early in the Node.js development: front-end source code should not live in the server's static resource directory. Only packaged files belong there, unless a file needs to be directly accessible.
This means I need a directory for the front-end source code, and the most logical location is alongside the server directory.
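As a sketch, the layout looks roughly like this (directory names are illustrative, not our exact repository):

project/
├── client/             # front-end source: angular.js code, styles, templates
├── server/             # Node.js access layer (Koa2)
│   └── index.js        # service entry point
├── static/             # webpack output only; served as static resources
├── gulpfile.js
└── webpack.config.js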
webpack
webpack compiles and packages the source and writes the output to the static resource directory. I handed the whole compile-and-bundle job to webpack, including extracting common files, versioning, minification, and template file injection.
How do I do version control
Versioning is mostly done in two ways: by file name or by hash parameter.
File-name versioning generates a file with a different name on every build, which makes it easy to run multiple versions online at once.
Hash-parameter versioning means there is never more than one copy of a file online for a given feature, so a full gray (canary) release is not possible.
One problem with file-name versioning is that the names of the packaged .js and .css files are not knowable in advance, so the script and stylesheet paths cannot be hard-coded into the HTML template. When building with webpack, I therefore specify the template file and let webpack's template-injection plugin write the generated .js and .css paths into it.
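A sketch of what that looks like in a webpack2-era config, assuming html-webpack-plugin handles the template injection (the entry and file names here are illustrative):

// webpack.config.js
const path = require('path');
const webpack = require('webpack');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: { app: './client/index.js' },
  output: {
    path: path.resolve(__dirname, 'static'),
    // the hash makes the final name unknowable in advance, hence the injection plugin
    filename: '[name].[chunkhash:8].js'
  },
  plugins: [
    // extract shared modules into a common chunk
    new webpack.optimize.CommonsChunkPlugin({ name: 'common' }),
    // inject the generated script/link tags into the HTML template
    new HtmlWebpackPlugin({ template: './client/index.html' })
  ]
};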
Another option: save the hash that webpack reports when a build finishes; this too enables file-name-based versioning.
Gulp workflow
Gulp works well with webpack, and webpack is the most important node in the gulp task stream. Everything to do with packaging and compilation is left to webpack; gulp only has to make sure the front-end tasks execute correctly, including when to run the webpack build and what to do once it's done.
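A common shape for that glue, as a sketch (assuming webpack is driven from a gulp task):

const gulp = require('gulp');
const webpack = require('webpack');
const webpackConfig = require('./webpack.config.js');

// gulp decides *when* to build; webpack decides *how*
gulp.task('build', (done) => {
  webpack(webpackConfig, (err, stats) => {
    if (err) return done(err);
    console.log(stats.toString({ colors: true }));
    done(); // downstream tasks (zipping the output, etc.) can depend on 'build'
  });
});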
Front-end automation
Automation here may differ from what automation means elsewhere; in this context it mainly refers to packaging and compiling the front-end code automatically. Plenty of other processes in a project can be automated too. Jenkins is wired into the project, mainly to run the front-end packaging and compilation automatically and then, with the zip command, pack everything webpack produced into a .zip file, because the built files are not stored in the repository.
It's okay to be confused here. First of all: why not check the webpack output into the Git repository?
The reason is simple: whenever any file in the Git repository changes, committing it produces a new revision, and webpack is bound to change files every time it builds. If the build output lived in the repository, those files would have to be committed to take effect. That is, after I push my code, Jenkins builds it, and the build output would have to be committed back; every push would therefore produce two commit records (one from me, one from Jenkins after the automatic build). To stop Jenkins from committing files after each build, all that's needed is to keep the webpack output out of the repository.
But the problem isn't quite that simple: if the webpack output isn't in the repository, what do we release? The solution is to use the zip command to pack all the webpack output into a ${commitId}.zip archive, where commitId=$(git rev-parse HEAD), so each archive maps to the exact commit it was built from.
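In shell terms, the Jenkins packaging step is roughly this (a sketch; the output directory name is an assumption):

commitId=$(git rev-parse HEAD)
# archive the webpack output under the commit it was built from
zip -r "${commitId}.zip" static/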
Why are there two packaging steps?
The first is the webpack build: the front-end code has to be compiled and bundled. The second is file packaging for release, a consequence of keeping the webpack output out of the repository.
This does mean the team must be able to set up Jenkins and have experience with it. The tool helps the team enormously: packaging ahead of time and caching the files beats packaging at release time, because packaging problems are discovered early and can be remedied in time, rather than derailing the release schedule or the normal operation of the live project.
Git repositories support hooks, so you can add trigger events to the repository and let Jenkins run the build automatically.
If one day I need to write unit tests, I can try to have Jenkins run them for me automatically. Did I just answer the unit-testing question? Hahaha…
The front-end issues are mostly resolved, and now it’s on the server side.
Node.js server running environment configuration
Writing a project is one thing; running it is easy. My project entry file is server/index.js, so to start the service, run the following command:
node server/index.js
But sometimes circumstances are not as simple as I imagined. Because the project must run in different environments, it needs a different configuration file per environment, which means starting the Node.js service with different parameters. So while coding, I had to make everything tied to the execution environment as configurable as possible.
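One simple way to do that (a sketch; the config file layout is hypothetical) is to key the configuration off NODE_ENV:

// server/config/index.js
const env = process.env.NODE_ENV || 'development';

// picks up config/development.js, config/test.js, or config/production.js
module.exports = require(`./${env}`);

Started as NODE_ENV=production node server/index.js, the service picks up the production settings; the env/env_production blocks in the pm2 configuration below do the same job.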
Node.js Access layer services and permission verification
The real worry for a newcomer is how to make requests to the real back-end servers from Node.js, how to handle the site's login-service authentication, and how to tell whether a logged-in user has permission to access a page.
Access the HTTP service
Requests are initiated with the http module's request method. Honestly, at the beginning I was at a loss about how to call back-end services from the access layer; as a pure front-ender I had never needed to think about it. Looking back now: some things seem terribly complicated until you actually do them, and then it feels like the old line of verse, "just when the mountains and rivers make you doubt there is a path, willow shade and bright blossoms reveal another village."
Node.js access layer request back-end service simple code implementation:
const http = require('http');

exports.example = async (ctx) => {
  const options = {
    hostname: 'www.test.com',
    port: 80,
    method: 'GET',
    // forward the user's token from the cookie to the back-end service
    path: `/api/getuser?token=${ctx.cookies.get('token')}`
  };
  const getData = function () {
    return new Promise((resolve, reject) => {
      const request = http.request(options, (res) => {
        let data = '';
        console.log('status:', res.statusCode, res.headers);
        res.on('data', (chunk) => {
          data += chunk;
        });
        res.on('end', () => {
          console.log('server callback data:', data);
          resolve(data);
        });
      });
      request.on('error', (e) => {
        reject(e);
      });
      request.end();
    });
  };
  ctx.body = await getData();
};
I didn't consider HTTPS here, because HTTPS is built on top of SSL/TLS, which requires a private key, a public key, and a CA certificate. You can issue a CA certificate yourself, but it has to be installed on each machine to take effect. If you are interested in self-issuing CA certificates for HTTPS, see this article: HTTPS Self-issuing CA Certificates.
The back-end servers (PHP/Java…) then only need to verify, based on whether the request parameters are valid, that the caller has permission to use the function. Examples abound, such as any third-party service integration.
Even something as small as a Number check
You may not even know how to validate the simplest parameter. It comes down to the JavaScript language and a front-end way of thinking. That's what happened to me at first: writing the code felt weird.
This is a simple example. To check whether a value of type Number is valid in the front end, I usually do this:
num = typeof num === 'number' && num === num && num !== Infinity ? num : 0;
This kind of logic is perfectly fine on the front end, but writing it in the Node.js access layer feels awkward. So I shifted my thinking:
num = Number.isFinite(num) ? num : 0;
Even something as small as parameter validation deserves serious thought. It was time to change mindset and reach for native JavaScript instead of hand-rolling everything.
Permission verification
I don't want every user to have access to the project, not even every user who is already logged in. That's the problem to solve.
This is where permission management becomes extremely important, and the best approach is to turn permission-related functionality into a service.
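As a sketch of what that service call looks like in Koa2 middleware (authService and its verify method are hypothetical names for the permission-service client):

// permission-check middleware; authService is a hypothetical client module
const authService = require('./services/auth');

module.exports = function checkPermission() {
  return async (ctx, next) => {
    const token = ctx.cookies.get('token');
    // ask the permission service whether this logged-in user may access this path
    if (!token || !(await authService.verify(token, ctx.path))) {
      ctx.status = 403;
      ctx.body = 'Forbidden';
      return;
    }
    await next();
  };
};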
The mission feels like it has only just begun!
Project deployment online
Frankly, I had no experience deploying or operating a project. But one thing was certain: once live, the project's availability had to be guaranteed. The service can't be allowed to die over some minor problem and wait for a manual restart; and if the server loses power, nobody should have to start the service by hand after the reboot. All of these problems had to be solved.
pm2
Once efficient development is complete, the project's real mission begins: ensuring the service runs stably online with high availability. This is done with the help of other components, and pm2 process management is a good solution.
npm install -g pm2
After installation, you can configure pm2 for the project. For example:
// test.config.js
'use strict';

module.exports = {
  apps: [{
    name: 'test',
    script: './server/index.js',
    cwd: '/',
    instances: 1,
    watch: ['server'],
    env: {
      'NODE_ENV': 'development'
    },
    env_production: {
      'NODE_ENV': 'production'
    },
    exec_mode: 'cluster',
    source_map_support: true,
    max_memory_restart: '1G',
    // log locations
    error_file: '/data/logs/pm2/test_error.log',
    out_file: '/data/logs/pm2/test_access.log',
    listen_timeout: 8000,
    kill_timeout: 2000,
    restart_delay: 10000,
    // maximum number of restarts
    max_restarts: 10
  }]
};
It can then be started with a command:
pm2 start test.config.js
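To launch with the env_production settings from the configuration above, pm2 accepts an --env flag:

pm2 start test.config.js --env production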
nginx
Nginx, pronounced "engine x", is a very lightweight, high-performance HTTP and reverse-proxy server written by a Russian developer. An nginx configuration is also essential here: there is only one port 80, so I need nginx to forward requests.
Take the following example:
upstream test_upstream {
    server 127.0.0.1:6666;
    keepalive 64;
}

server {
    listen 80;
    server_name www.test.com;
    client_max_body_size 10M;
    index index.html index.htm;
    error_log /data/nginx/log/error_www.test.com.log;
    access_log /data/nginx/log/access_www.test.com.log combined;

    location / {
        proxy_store off;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Remote-Host $remote_addr;
        proxy_set_header X-Nginx-Proxy true;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_pass http://test_upstream/;
        proxy_read_timeout 60s;
    }
}
The project listens on port 6666 on this machine, but I can't ask people to visit www.test.com with a port number tacked on the end. This is where nginx comes into play: a domain accessed without a port defaults to port 80, and nginx reverse-proxies those requests to port 6666.
One thing to note: the client_max_body_size setting directly limits how much data a POST request can carry.
Logging, reporting, and operations
The health of the project is reflected in its logs and reports. I only have to look at the logs and the dashboards each day to know how the project is running. Without these aids I'd be flying blind and unable to do anything.
Coding style
For coding style, the project follows ESLint rules and uses the latest async/await and import syntax.
Debugging code
Node.js now supports debugging Node.js code directly in Chrome: just add the --inspect parameter when starting the project.
node --inspect server/index.js
Copy the debugger URL that Node.js prints on startup and open it in Chrome; click Start, then visit the page, and click Stop when you need to pause and analyze the code.
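(If you lose that URL, typing chrome://inspect into Chrome lists the local Node.js processes started with --inspect and lets you attach DevTools directly.)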
Conclusion
As a beginner, all I can say is that Node.js takes to the access layer like a duck to water, and the key is having the opportunity. Without a Node.js access layer, front-end engineering is still entirely possible, but server-side isomorphic rendering is not, unless the back-end colleagues cooperate. With a Node.js access layer, the front end can handle some tricky problems far more easily, and the back-end services are shielded from direct attack, because the Node.js access layer stands in front of them.