Preface
Project demo: Server disabled…
Source code: server | front end
Why a sketchpad: V2EX
As front-end developers, we are always consciously or unconsciously touching Node.js, skimming its documentation, glancing at frameworks, yet when it comes to actually putting it to good use at work, we are often left with the feeling that our knowledge exists only on paper. So about a week ago I decided to give it a real try and pull together the bits and pieces I had picked up, and I ended up rewriting an old sketchpad demo and adding a server side to it.
Technology stack
- [Vue + Vuex + Vue Router] page rendering + shared state + routing
- [axios] Promise-based HTTP requests
- [stylus] CSS preprocessing
- [Element UI] UI component library
- [webpack] bundles all of the above
- [Koa 2 & koa-generator] Node.js framework and its scaffolding
- [MongoDB & Mongoose] database and the library used to operate it
- [node-canvas] server-side copy of the drawing data
- [socket.io] real-time push
- [PM2] Node service deployment
- [nginx] serves static resources (over HTTPS) and proxies requests
- [letsencrypt] free HTTPS certificates
webpack makes the list because this project, as one module of luwuer.com, has to be bundled independently with webpack.
node-canvas
Installation
node-canvas is the hardest-to-install dependency I have run into so far, so much so that I have no desire to install it on Windows at all. It relies on a number of packages that do not exist on a clean system by default, and you can find plenty of issues labeled Installation Help on its GitHub. On a clean CentOS 7 install, the following dependencies have to be in place before installing it. Note that the command given in the npm documentation is missing cairo.
```bash
# CentOS prerequisites
sudo yum install gcc-c++ cairo cairo-devel pango-devel libjpeg-turbo-devel giflib-devel

# install node-canvas itself
yarn add canvas -D
```
There is also an obscure pitfall: if, with the prerequisites in place, the install still hangs at the package-fetching step (without reporting any error), you need to update npm separately.
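The usual way to do that is to reinstall npm globally:

```bash
npm install -g npm
```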
Usage example
The basic usage is easy to pick up from the reference documentation. In the example below, we first take the pixel data and build an ImageData from it, then draw the historical data onto the canvas with putImageData.
```js
const {
  createCanvas,
  createImageData
} = require('canvas')

const canvas = createCanvas(canvasWidth, canvasHeight)
const ctx = canvas.getContext('2d')

// initialization
const init = callback => {
  Dot.queryDots().then(data => {
    let imgData = createImageData(
      Uint8ClampedArray.from(data),
      canvasWidth,
      canvasHeight
    )

    // disable image smoothing
    ctx.mozImageSmoothingEnabled = false
    ctx.webkitImageSmoothingEnabled = false
    ctx.msImageSmoothingEnabled = false
    ctx.imageSmoothingEnabled = false

    ctx.putImageData(imgData, 0, 0, 0, 0, canvasWidth, canvasHeight)
    successLog('canvas render complete!')

    callback()
  })
}
```
Socket.io
In this project's design, push is needed in two places: one is other users' pixel placements, the other is chat messages sent by every user.
client
```js
// socket.io init
// transports: [ 'websocket' ]
window.socket = io.connect(window.location.origin.replace(/https/, 'wss'))

// receive the full image
window.socket.on('dataUrl', data => {
  this.imageObject.src = data.url
  this.loadInfo.push('Rendering image...')
  this.init()
})

// receive pixels drawn by other users
window.socket.on('newDot', data => {
  this.saveDot(
    {
      x: data.index % this.width,
      y: Math.floor(data.index / this.width),
      color: data.color
    },
    false
  )
})

// receive the chat messages pushed to everyone
window.socket.on('newChat', data => {
  if (this.msgs.length === 50) {
    this.msgs.shift()
  }
  this.msgs.push(data)
})
```
server/bin/www
```js
let http = require('http')
let io = require('socket.io')

let server = http.createServer(app.callback())
let ws = io.listen(server)
server.listen(port)

ws.on('connection', socket => {
  // every client that connects joins the chatroom so we can broadcast below
  socket.join('chatroom')

  socket.emit('dataUrl', {
    url: cv.getDataUrl()
  })

  socket.on('saveDot', async data => {
    // push to the other users, i.e. broadcast
    socket.broadcast.to('chatroom').emit('newDot', data)
    saveDotHandle(data)
  })

  socket.on('newChat', async data => {
    // push to all users
    ws.sockets.emit('newChat', data)
    newChatHandle(data)
  })
})
```
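saveDotHandle and newChatHandle are the persistence hooks. As a rough sketch of what saveDotHandle might look like, here is a version that keeps both copies of the data in sync, the in-memory canvas and the database; the payload shape, the ctx context and the Dot model are assumptions based on the rest of this post rather than the project's exact code:

```js
// Sketch: persist one pixel and keep the server-side canvas in sync.
// Assumes data looks like { index, color: { r, g, b } }, that `ctx` is the
// node-canvas 2d context from the init example, and that `Dot` is the
// mongoose model shown in the appendix.
const saveDotHandle = async data => {
  const { index, color } = data
  const x = index % canvasWidth
  const y = Math.floor(index / canvasWidth)

  // update the in-memory copy so the next dataUrl push reflects the change
  ctx.fillStyle = `rgb(${color.r}, ${color.g}, ${color.b})`
  ctx.fillRect(x, y, 1, 1)

  // update the database copy
  await Dot.updateOne({ index }, { r: color.r, g: color.g, b: color.b })
}
```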
letsencrypt
Applying for a certificate
```bash
# get the program
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

# generate the certificate automatically (you will be asked to confirm twice during the process);
# certificates are written to /etc/letsencrypt/live/{the first domain passed in}, here /etc/letsencrypt/live/www.luwuer.com/
./letsencrypt-auto certonly --standalone --email [email protected] -d www.luwuer.com -d luwuer.com
```
Automatic renewal
Edit the scheduled tasks:

```bash
crontab -e
```

Add a job that re-applies every two months (the certificate expires after three months):

```
# runs at 00:00 on the 1st of every second month
0 0 1 */2 * cd /root/certificate/letsencrypt && ./letsencrypt-auto certonly --renew
```
nginx
```bash
yum install -y nginx
```
/etc/nginx/config.d/https.conf
```nginx
server {
    listen 443 ssl http2 default_server;

    add_header Strict-Transport-Security "max-age=6307200; preload";
    # add_header Strict-Transport-Security "max-age=6307200; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    # disable MIME sniffing in IE9, Chrome and Safari
    add_header X-Content-Type-Options nosniff;

    # SSL certificate
    ssl_certificate /etc/letsencrypt/live/www.luwuer.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.luwuer.com/privkey.pem;

    # OCSP Stapling certificate
    ssl_trusted_certificate /etc/letsencrypt/live/www.luwuer.com/chain.pem;
    ssl_stapling_verify on;
    ssl_stapling on;
    # DNS resolver
    resolver 8.8.8.8 8.8.4.4 valid=300s;

    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # CloudFlare's Internet facing SSL cipher configuration
    ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;

    # $1 = 'blog.' || 'img.' || '' || 'www.' ; $2 = 'luwuer.com'
    server_name ~^(\w+\.)?(luwuer\.com)$;
    set $pre $1;
    if ($pre = 'www.') {
        set $pre '';
    }
    set $next $2;
    root /root/apps/$pre$next;

    location / {
        try_files $uri $uri/ /index.html;
        index index.html;
    }

    location ^~ /api/ {
        proxy_pass http://43.226.147.135:3000/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # socket proxy configuration
    location /socket.io/ {
        proxy_pass http://43.226.147.135:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # location /weibo/ {
    #     proxy_pass https://api.weibo.com/;
    # }

    include /etc/nginx/utils/cache.conf;
}

server {
    listen 80;
    server_name www.luwuer.com;
    rewrite ^(.*)$ https://$server_name$request_uri;
}
```
Appendix
Database storage structure thinking process
The canvas is { width: 1024px, height: 512px }, which means there are 1024 * 512 = 524,288 pixels, or 524,288 * 4 = 2,097,152 color values. The smallest way to store this without compression is to drop the A from RGBA: an array of length 524,288 * 3 = 1,572,864, which occupies roughly 1.5 MB when assigned to a variable (figure from Chrome's Memory panel). For storing that, I first grouped the candidate structures into two families:
- Structure 1: store each pixel as its own document, which means 524,288 documents
  - 1.1 colors stored as RGB (RGBA with the alpha channel dropped)
  - 1.2 colors stored in hexadecimal
- Structure 2: the entire canvas stored as a single document
Although structure 2 looks a bit silly, I did consider it seriously at first; at that point it wasn't yet clear to me that the most time-consuming part of fetching the data is not the query but the IO.
Later I tested structures 1.1 and 1.2 and rejected structure 2 outright, because the tests showed IO accounting for more than 98% of the total time, and structure 2, with everything in a single document, clearly could not gain a decisive advantage there.
- 1.1
  - storage size: 10 MB
  - fetching all of the data: 8000+ ms
  - of that, the query itself is only about 20 ms (estimated by comparing findOne with find); the rest is IO
- 1.2
  - storage size: 10 MB
  - fetching all of the data: 7500+ ms
  - again, the full-table query accounts for only a small share; the rest of the time is IO
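For reference, the findOne-versus-find comparison used to separate query cost from IO can be reproduced with something of this shape (a sketch, assuming a Dot mongoose model like the one defined further down, not the original benchmark code):

```js
// Rough timing sketch: a single-document findOne approximates pure
// query/index cost, while find() on the whole collection adds the IO
// of shipping every document back to Node.
const compareQueryCost = async () => {
  console.time('findOne')
  await Dot.findOne({ index: 0 })
  console.timeEnd('findOne')

  console.time('find (full collection)')
  await Dot.find({})
  console.timeEnd('find (full collection)')
}
```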
Unless the data can be fetched at the millisecond level, structure 2 is a death sentence, because with it a single pixel change means writing the entire image's data back.
To be honest, this result was hard for me to accept. I asked several people I know why the back-end performance was so poor and whether there was a way around it, but nothing came of it. To make matters worse, the test was run on my desktop with an i7 CPU; when I moved the test environment to a single-core server, the time to fetch the full table grew roughly tenfold. Fortunately, if you keep turning a problem over long enough, even if sometimes you are just staring at it blankly, a few flashes of inspiration do come. The key one here: the data only needs to be read from the database into memory once, when the service starts, and whenever a pixel changes, the database and the in-memory copy can be updated simultaneously. With that, development could continue. In the end I chose structure 1.1, for reasons related to "data transfer" below.
```js
const mongoose = require('mongoose')

let schema = new mongoose.Schema({
  index: {
    type: Number,
    index: true
  },
  r: Number,
  g: Number,
  b: Number
}, {
  collection: 'dots'
})
```
Replacing x & y with a single index, and dropping the a from RGBA and restoring it in code, noticeably reduces the collection's actual storage size.
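To connect this schema back to the Dot.queryDots() call in the node-canvas example, here is a minimal sketch of what such a helper might look like; the model name, the sort by index and the constant alpha of 255 are assumptions rather than the project's actual code:

```js
const mongoose = require('mongoose')
const Dot = mongoose.model('Dot', schema)

// Read every pixel ordered by index and expand it into the flat RGBA
// array that createImageData() expects (alpha was dropped in storage,
// so it is restored as a constant 255 here).
const queryDots = async () => {
  const dots = await Dot.find({}).sort({ index: 1 }).lean()
  const rgba = new Uint8ClampedArray(1024 * 512 * 4) // canvas is 1024 x 512

  for (const dot of dots) {
    const offset = dot.index * 4
    rgba[offset] = dot.r
    rgba[offset + 1] = dot.g
    rgba[offset + 2] = dot.b
    rgba[offset + 3] = 255
  }
  return rgba
}
```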
In fact, a very strange problem came up during testing. On the single-core budget server, if I fetched all of the data at once and stored it in an Array, the application would crash partway through without any error message. At first I thought the crash came from the CPU being pinned for too long, so I even rented another server, meaning to take a friend's suggestion and go "distributed". Some time later, paging through the data, I found that the program always crashed suddenly after fetching a bit over 200,000 records (a fixed number), which cleared the CPU of suspicion.
PS: it's lucky I had no prior experience with distributed systems, otherwise I might have gone all the way down that road and still be blaming the CPU today.
Data transmission thinking process
As mentioned above, a color array of length 1,572,864 occupies about 1.5 MB of memory, and I assume the transferred data would be roughly the same size. My first thought was to compress this data myself (not gzip), but since I couldn't manage it, I came up with an alternative. To avoid the heavy IO of reading the database, I was already keeping a copy of the data in memory; I realized I could assemble that copy into an ImageData (structure 1.1 makes this much cheaper on the CPU) and then draw it onto a canvas with ctx.putImageData. That is key point number two: keep a copy of the data drawn on a server-side canvas.
After that it becomes easy: with canvas.toDataURL(), or fs.writeFile('{path}', canvas.toBuffer('image/jpeg')), the data can be pushed to the client as an image, and the image format's own algorithms compress the data for us, so there is no need to do it by hand. The compression ratio is in fact considerable: early on, when the drawing board is mostly filled with repeated colors, the 1.5 MB of data can compress to less than 10 KB, and I estimate it should stay within about 300 KB later on.
Since a data URL is more convenient, that is what I use here to transfer the image data.
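For completeness, a minimal sketch of the cv.getDataUrl() helper used in server/bin/www, assuming the server-side canvas from the node-canvas example above; the synchronous toDataURL() defaults to a PNG data URL, while a JPEG could be written out with canvas.toBuffer('image/jpeg') instead:

```js
// cv.js (sketch): the server-side copy of the drawing board.
const { createCanvas } = require('canvas')

const canvasWidth = 1024
const canvasHeight = 512

const canvas = createCanvas(canvasWidth, canvasHeight)
const ctx = canvas.getContext('2d')

// Serialize the current canvas so it can be pushed to a newly connected
// client over socket.io; toDataURL() without arguments returns a PNG data URL.
const getDataUrl = () => canvas.toDataURL()

module.exports = { canvas, ctx, getDataUrl }
```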
Work record
- Day 1: refactored the front end of the pixel drawing board and fixed the lag when zooming in on a large image
- Day 2: worked on the back-end logic; constrained by database IO, tried several storage structures, none of which performed well
- Day 3: kept studying the problem and finally decided to keep a synchronized canvas on the server instead of only writing to the database; didn't finish, because I slept through the afternoon
- Day 4: the 1-core / 1 GB server crashed when fetching 500 K records from the database; after talking it over with friends I found the real problem and came up with a solution (at one point I had even set up an extra server and its environment, which was abandoned once the problem was solved)
- Day 5: added the announcement, user, chat, and pixel-history query features
- Day 6/7: fought the socket.io over HTTPS problem; after staying up all night two days in a row it finally turned out to be a CDN acceleration issue, and I nearly spiralled into the sky
The "real problem" mentioned on Day 4 appears to be some Node.js limit on variable size or on the number of objects, because it disappeared after I converted the 500 K Array of objects into a 2 M Array of numbers. If you know the exact reason, please get in touch.
This record was copied from my diary. Day 6/7 really were the hardest two days, and in fact the code had been fine from the very beginning. Over two days of repeated testing I suspected the CDN twice, mostly because there was nothing else left to suspect. The first time, I pointed the domain name straight at the server IP, but the test still failed, so I restored the acceleration. The second time was at five o'clock in the morning of day 7; my head was throbbing and I felt awful, so I simply turned the CDN off, figuring that if the test still failed I would remove the HTTPS certificate from the CDN and fall back to plain HTTP. Only then did I discover that even after pinging the domain to confirm the DNS record had changed (about ten minutes after modifying the resolution), the domain would still intermittently resolve to the CDN (I have no idea why; Aliyun DNS). That is most likely why the first test failed, and after a longer wait it no longer happened. After solving the problem I deliberately re-enabled the CDN acceleration to test it, but I couldn't figure out which configuration was causing the issue, so in the end the acceleration was never restored.