Preface

I have lurked on Juejin (Nuggets) for a long time: browsing all kinds of technical articles, quietly upvoting, occasionally wandering into the Pins feed to interact in the comments. I always wanted to write something, but what? That question is worth thinking about, and I chewed on it for several years without an answer, especially after reading other people's articles. I always felt I had nothing worth writing, so I never dared to start.

Recently, though, I have come to understand that I do not need to produce something profound; writing an article simply to record my own learning is enough. Input should lead to output, and an idea should lead to action. As long as I start, everything will gradually get better.

Much of this article is carried over from the official documentation (feel free to call me a porter).

While writing I found there was too much content, so I split two pieces out into separate articles. That way this one does not take forever to scroll through. Hahaha

And then there’s the code address.

Project Description:

Client:

Tech stack: native WeChat mini program. Description: mainly product display; the pages are roughly home page + list page + detail page.

Back end:

Tech stack: Node.js + Koa + MongoDB (MongoDB Node Driver). Description: accepts data from the admin panel and saves it to the database.

Admin panel:

Tech stack: Vue + Element UI + Vue Router. Description: the main functions are banner upload, product category creation, product list, and product details.

Renderings

  • A combined screenshot of the admin panel and the mini program.

Business flow chart

Agonizing over the tech choices

I had heard plenty of database and Node framework names, but when it came to actually choosing which to use, it was genuinely confusing… Comparisons of MySQL vs. MongoDB are all over the web, so I won't go into detail here (the same goes for Koa vs. Express); below are my rough and simple reasons for the choices.

Database selection

MySQL? Or MongoDB?

Choice: MongoDB

MySQL is a relational database; MongoDB is a document database. I chose MongoDB because images are stored in the database. (Don't ask me why I put images in the database.)

Node.js framework selection

Koa? Or Express?

Koa, developed by the team behind Express, has more modern syntax and is small and lean. Since the project is small, Koa is sufficient.
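Koa's small size comes largely from its core being little more than middleware composition, the so-called "onion model". Here is a minimal sketch of that idea in plain functions; it is purely illustrative and not Koa's actual implementation:

```javascript
// A toy version of Koa-style middleware composition (the "onion model").
// Illustrative only: this is not Koa's real implementation.
function compose(middleware) {
  return function (ctx) {
    function dispatch(i) {
      const fn = middleware[i];
      if (!fn) return Promise.resolve();
      // Each middleware receives ctx and a `next` that runs the rest of the chain
      return Promise.resolve(fn(ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}

// Usage: code before `await next()` runs on the way in, code after on the way out
const order = [];
const handler = compose([
  async (ctx, next) => { order.push('a-in'); await next(); order.push('a-out'); },
  async (ctx, next) => { order.push('b-in'); await next(); order.push('b-out'); },
]);
handler({}).then(() => console.log(order.join(' -> ')));
// a-in -> b-in -> b-out -> a-out
```

Each request flows inward through the middleware, then back out in reverse order, which is exactly how Koa runs its middleware stack.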

Node.js database tool: mongodb vs. Mongoose?

Mongoose is essentially a wrapper around mongodb, much like the relationship between Koa and Express. Because I wanted to learn the basic operations, I chose the official package, i.e. the MongoDB Node Driver (alas, I paid a bit of a price for that later).

Official documentation of the MongoDB Node Driver

All right, we've got everything we need, so let's start operating.

Operating… operating on what????

Node.js? The database? Node.js operating the database? Or Node.js listening for front-end requests?

I felt like I had a lot to do but no clear thread for any of it, so I sorted out the steps.

MongoDB concept analysis

| SQL term/concept | MongoDB term/concept | Explanation |
| --- | --- | --- |
| database | database | database |
| table | collection | table / collection |
| row | document | data record / document |
| column | field | data field |
| index | index | index |
| table joins | — | joins; not supported by MongoDB |
| primary key | primary key | primary key; MongoDB automatically sets the `_id` field as the primary key |
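To make the mapping concrete, here is how a couple of typical SQL statements translate into MongoDB driver calls, plus a toy in-memory filter showing what the query condition selects. The collection name `users` is made up for illustration:

```javascript
// SQL:     SELECT * FROM users WHERE age > 18
// MongoDB: db.collection('users').find({ age: { $gt: 18 } })
// SQL:     INSERT INTO users (name, age) VALUES ('Tom', 20)
// MongoDB: db.collection('users').insertOne({ name: 'Tom', age: 20 })

// A toy in-memory illustration of what { age: { $gt: 18 } } selects:
const users = [
  { name: 'Tom', age: 20 },
  { name: 'Amy', age: 16 },
];
const adults = users.filter((u) => u.age > 18);
console.log(adults); // [ { name: 'Tom', age: 20 } ]
```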

Download and install MongoDB (Windows)

For details, see the first article split out, a front-end newbie's road to databases: downloading and installing MongoDB and visualization tools.

Node.js operates the database

The second article split out:

Starting from 0: NodeJs to MongoDB

Node.js listens for front-end requests and returns content

Rather than creating a separate Node project, I do this directly in the Node.js project that operates the database.

  • Install the required dependency packages

```shell
npm i koa koa-bodyparser koa-router
```
  • Create a new damo.js file

```js
const Koa = require('koa');
const bodyParser = require('koa-bodyparser');
const app = new Koa();
app.use(bodyParser()); // parse the request body

const router = require('koa-router')();
router.post('/management-system/user/addUser', async (ctx) => {
  console.log(ctx); // the request context
  console.log(ctx.request.body, 'request parameters - user information');
  ctx.status = 200;
  ctx.body = { code: 200, data: { msg: 'Request successful!' } };
});

app.use(router.routes());
app.listen(3031);
console.log('Listening on 3031...');
```
  • The front-end request effect is shown below

The first stage is almost complete and everything is going well.

The second stage was just create, read, update, and delete for user information, with no extra steps, and it also went fairly smoothly.

This works as far as basic process logic goes, but the mongodb driver places no restriction on the types of the data written in, nothing like the Mongoose type restrictions below.

```js
let StudentSchema = mongoose.Schema({
    name: String,
    age: Number
})
```
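Incidentally, the native driver route is not entirely without options here: MongoDB itself (3.6+) supports collection-level validation via `$jsonSchema`. Below is a sketch of a validator roughly equivalent to the Mongoose schema above; the collection name `students` is assumed for illustration:

```javascript
// A $jsonSchema validator roughly equivalent to the Mongoose schema above.
// Pass it when creating the collection, e.g.:
//   db.createCollection('students', { validator: studentValidator })
const studentValidator = {
  $jsonSchema: {
    bsonType: 'object',
    required: ['name', 'age'],
    properties: {
      name: { bsonType: 'string' },          // name must be a string
      age: { bsonType: ['int', 'double'] },  // age must be numeric
    },
  },
};
console.log(studentValidator.$jsonSchema.required); // [ 'name', 'age' ]
```

With this in place, inserts that violate the schema are rejected by the server itself, no Mongoose required.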

That all went well, but now for something that didn't: image uploads.

There were too many places to get stuck. Every time I got stuck I asked myself why I chose the mongodb driver. Wouldn't Mongoose have been nicer? Simple operations, and plenty of FAQ answers just a search away. Why am I stuck here…

Although I got stuck many times, I still wrestled with whether to switch to Mongoose. After going back and forth several times I decided not to switch; I would solve problems as I met them.

Major problems encountered:

  1. How does Node get the file stream
  2. How do I convert the obtained file stream into a form acceptable to the database and upload it to the server
  3. How do I get images out of the database and into a form acceptable to the front end
  4. Asynchronous problem

1. How does Node get file streams

The request data is available in ctx.request.body, but the uploaded file's data stream is not in ctx.request.body; it is in ctx.request.files. (Note: koa-bodyparser alone does not parse multipart form data; middleware such as koa-body with multipart enabled is typically what populates ctx.request.files.)

```js
router.post('/management-system/user/addUser', async (ctx) => {
  console.log(ctx); // the request context
  console.log(ctx.request.body, 'request parameters - user information');
  console.log(ctx.request.files, '------ the file data is here');
});
```

With the first problem solved, let’s look at the second problem.

2. How to convert the obtained file stream to a form acceptable to the database and upload it to the server

The image upload examples found online or in the official docs all look like the example below: Node reads a local image into the database, then reads the image back out of the database and stores it locally.

GridFS is the officially recommended way to store large files; it automatically splits data into chunks for storage.

Mongodb driver GridFS API documentation

Thecodebarbarian.com/mongodb-gri…

Upload and download can be implemented by following the documentation:

```js
const { MongoClient, GridFSBucket } = require('mongodb');
var fs = require('fs');
const assert = require('assert');

const URL = 'mongodb://127.0.0.1:27017';
const MYDATA = 'MANAGEMENT_BUCKET';
const MYCOLLECTION = 'Bucket';

const client = new MongoClient(URL, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  retryWrites: true,
});

async function run() {
  try {
    await client.connect();
    const database = client.db(MYDATA);
    database.collection(MYCOLLECTION);
    const bucket = new GridFSBucket(database);
    // console.log('db===============', bucket);

    // Upload: write the local image 111.png into the database
    // fs.createReadStream('./111.png')
    //   .pipe(bucket.openUploadStream('111.png'))
    //   .on('error', function (error) {
    //     assert.ifError(error);
    //   })
    //   .on('finish', function () {
    //     console.log('done!');
    //     process.exit(0);
    //   });

    // Download: read 111.png from the database and write it out as output.png
    bucket.openDownloadStreamByName('111.png')
      .pipe(fs.createWriteStream('./output.png'))
      .on('error', function (error) {
        assert.ifError(error);
      })
      .on('finish', function (e) {
        console.log('done!', e);
        process.exit(0);
      });
  } finally {
    // Ensures that the client will close when you finish/error
    // await client.close();
  }
}
run().catch(console.dir);
```

But that's not what I want: what Node receives is a file stream, not a local image that the method above can read, and as a newbie I couldn't find a way to convert it. It took a long time to find the solution below. One line of code:

const { path, name } = ctx.request.files.files;

  • A sketch of the general image upload flow (the figure omits the specific network request process)

  • File upload

```js
router.post('/file/uploadFiles', async (ctx) => {
  await client.connect();
  const database = client.db(MYDATA);
  database.collection(MYCOLLECTION);
  const bucket = new GridFSBucket(database);
  // path is the temporary path of the uploaded file; name is its file name
  const { path, name } = ctx.request.files.files;
  fs.createReadStream(path)
    .pipe(bucket.openUploadStream(name))
    .on('error', function (error) {
      assert.ifError(error);
    })
    .on('finish', function (e) {
      console.log('done! ------', e);
    });
});
```

After an image is uploaded successfully, the database contains two new collections: one for file information and one for file chunks (by default fs.files and fs.chunks). Internally, GridFS splits the file into chunks and links them by id.

That’s done.

3. How to take the image out of the database and convert it into a form acceptable to the front end

The solution is also a line of code.

const base64 = data.toString('base64');

Ah, it's dead simple once you know it, and nearly impossible when you don't…
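That one-liner is just Node's Buffer API. A self-contained sketch of the round trip, using a made-up byte buffer in place of real image data:

```javascript
// Encode bytes to base64 and back with Node's Buffer
const bytes = Buffer.from('pretend this is image data');
const base64 = bytes.toString('base64');

// Prefix with a data-URL header so an <img>/<image> src can render it directly
const img = 'data:image/png;base64,' + base64;
console.log(img.startsWith('data:image/png;base64,')); // true

// Decoding restores the original bytes
const decoded = Buffer.from(base64, 'base64');
console.log(decoded.toString()); // pretend this is image data
```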

```js
router.post('/file/downloadFiles', async (ctx) => {
  await client.connect();
  const database = client.db(MYDATA);
  database.collection(MYCOLLECTION);
  const bucket = new GridFSBucket(database);
  // 'download.jpg' is the name of a file already uploaded to the database;
  // the image could also be fetched by its id
  const result = await bucket
    .openDownloadStreamByName('download.jpg')
    .on('data', (data) => {
      const base64 = data.toString('base64');
      let img = 'data:image/png;base64,' + base64;
      console.log(img, 'I got the base64, la la la!');
    });
});
```

At this stage the big problems were basically solved, and I could write code happily.

Happy for a second…

The front end couldn't get the base64 image, and the next problem arrived immediately.

4. Asynchronous problems

  • File download (an asynchrony problem; the execution order was not as expected)
  • Symptom: before the image has been processed into base64, the request body ctx is already returned to the front end, so the front end cannot get the expected content.
```js
router.post('/file/downloadFiles', async (ctx) => {
  await client.connect();
  const database = client.db(MYDATA);
  database.collection(MYCOLLECTION);
  const bucket = new GridFSBucket(database);
  const result = await bucket
    .openDownloadStreamByName('download.jpg')
    .on('data', (data) => {
      const base64 = data.toString('base64');
      let img = 'data:image/png;base64,' + base64;
      ctx.status = 200;
      ctx.body = { code: 200, data: img };
      console.log('execute after', ctx.status);
      return Promise.resolve('====pppp');
    });
  console.log('execute first', ctx.status); // ctx.status is 404 here
  return result;
});
```

The code prints in this order:

execute first 404

execute after 200

That is the opposite of what we want.

  • The solution

```js
// Convert the GridFSBucketReadStream fetched from the database into base64
async function GridFSBucketReadStreamToBase64(database) {
  return new Promise(async function (resolve, reject) {
    const bucket = new GridFSBucket(database);
    await bucket
      .openDownloadStreamByName('download.jpg')
      .on('data', (data) => {
        // the base64 is available here
        let base64Img = 'data:image/png;base64,' + data.toString('base64');
        resolve(base64Img);
      });
  });
}

router.post('/file/downloadFiles', async (ctx) => {
  await client.connect();
  const database = client.db(MYDATA);
  database.collection(MYCOLLECTION);
  const bucket = new GridFSBucket(database);
  let base64 = await GridFSBucketReadStreamToBase64(database);
  console.log(ctx.status, 'ctx.status = runs first', base64);
  ctx.status = 200;
  ctx.body = { code: 200, data: base64 };
});
```

The single-image asynchrony problem is resolved. But there is also the multi-image case, for example requesting 3 banner images at once.

  • Banner image storage approach
  1. Images are stored as file streams in the MANAGEMENT_BUCKET database, and the file names are stored by type in the Banner collection.

  2. First query all the image names in the Banner collection, then loop over the names to fetch each converted image.

While fetching the image streams I ran into an asynchronous-loop problem.

The problems and solutions in the reference article below match mine exactly.

Asynchronous loop reference articles: zhuanlan.zhihu.com/p/70785259

The resolved code in the banner_controller.js file:

```js
let bannerList = [];
const promiseArr = bannerIdList.map(async (item) => {
  const base64 = await GridFSBucketReadStreamToBase64(bucket, item['_id']);
  console.log(base64.length);
  bannerList.push({ img: base64, id: item['_id'] });
});
await Promise.all(promiseArr);
console.log(bannerList);
```
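One detail of the map + Promise.all pattern: pushing results inside the callbacks means the list's order depends on completion order, not on the input order. Returning the value from map and using Promise.all's resolved array keeps the original order. A self-contained sketch with a fake async fetcher (all names here are made up):

```javascript
// Fake async fetch: items with bigger delays finish later
function fakeFetch(id, delayMs) {
  return new Promise((resolve) => setTimeout(() => resolve('img-' + id), delayMs));
}

async function loadBanners(ids) {
  // map fires all requests in parallel; Promise.all resolves in input order
  const promiseArr = ids.map((id, i) => fakeFetch(id, (ids.length - i) * 10));
  return Promise.all(promiseArr);
}

loadBanners([1, 2, 3]).then((list) => console.log(list));
// [ 'img-1', 'img-2', 'img-3' ]  (input order preserved, even though 3 finished first)
```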

At this point, you can really have fun writing code. The rest of the process was quick, with all the code written in about two or three days.

The project address

The code has not been refactored; if you care about code cleanliness, please go easy on me.

Github.com/MangoSeven/…

Conclusion

This article dragged on for nearly a week and is finally finished. There is no advanced technology in it; most of it is basic content, and it is perhaps best read as a newbie's journey through an entire project.

I looked back at the initial mini program folder: July 2019. At the time I just wanted to write a simple mini program as a product demo, and it took about a week from prototype to first draft.

Later I found that a simple mini program could not solve the problem of having many images. I wavered for a while between cloud development and learning Node.js, and chose Node.js, but then did nothing with it and shelved the project for a long time. At the start of this year a friend told me his idea and we hit it off: I would write the front end, and he would write the back end.

Everything seemed to go smoothly: requirement diagrams, requirement walkthroughs, individual work plans. The mini program and the admin panel were both developed, and victory seemed in sight. But that was just the beginning. It took more than half a month to set up the remote joint-debugging environment before I could even request his server, and about another month before we finished debugging the first login interface.

Well… it was also the last interface.

Half a year passed in a daze.

From that day on, it was like a thorn stuck in my throat that I could neither swallow nor spit out. Very uncomfortable. It was not until this year that I decided to pull the thorn out, and this article was born.

Now that we’ve started, let’s finish. Thank you very much for your patience in reading this article.