In daily development there are many requirements involving image and file uploads, so how do we implement them with Koa?

Preface

As mentioned in previous lectures, Koa is a middleware-based framework, so much of the functionality we need comes from installing the corresponding middleware library. For file uploads there are many plugins:

  1. koa-body
  2. koa-bodyparser
  3. busboy
  4. koa-multer
  5. .

koa-body is recommended! Let’s examine it.

koa-body

Previously, Koa2 projects used koa-bodyparser for POST requests and koa-multer for image uploads. That combination works, but koa-multer and koa-route (note: not koa-router) are incompatible.

koa-body combines the two, so it can replace them.

Basic use of koa-body

When using koa-body in Koa2, I register it globally rather than at the route level, because POST requests and file upload requests appear in many places, so there is no need to restrict it to individual routes.

Install the dependency

npm i koa-body -D

app.js

const Koa = require('koa');
const path = require('path');
const koaBody = require('koa-body');

const app = new Koa();
app.use(koaBody({
  multipart: true, // support multipart/file uploads
  encoding: 'gzip',
  formidable: {
    uploadDir: path.join(__dirname, 'public/upload/'), // directory where uploaded files are written
    keepExtensions: true, // keep file extensions
    onFileBegin: (name, file) => { // hook fired before each file is written
      // console.log(`name: ${name}`);
      // console.log(file);
    }
  }
}));

Useful parameters

npm/koa-body

Obtain information about uploaded files

router.post('/',async (ctx)=>{
  console.log(ctx.request.files);
  console.log(ctx.request.body);
  ctx.body = JSON.stringify(ctx.request.files);
});

The result fields

  • size – file size
  • path – the path where the uploaded file was written
  • name – the file’s original name
  • type – the file’s MIME type
  • lastModifiedDate – last modified time
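
For reference, a single entry in ctx.request.files looks roughly like this. The values below are made up, and the exact shape depends on the formidable version bundled with koa-body:

// Hypothetical example of one uploaded file object (formidable v1-style fields)
const exampleFile = {
  size: 80044,                                       // size – file size in bytes
  path: '/project/public/upload/upload_abc123.jpg',  // path – where the file was written
  name: 'avatar.jpg',                                // name – original file name
  type: 'image/jpeg',                                // type – MIME type
  lastModifiedDate: '2020-01-01T00:00:00.000Z'       // lastModifiedDate – last modified time
}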

Rich man’s Choice

Why this title? Because many enterprise projects choose not to store image files on their own servers. Why?

  1. It takes up space and wastes resources
  2. Slow access
  3. Security is low
  4. To be added…

Instead, they usually choose the object storage (OSS) services of Alibaba Cloud, Tencent Cloud, Qiniu Cloud, and the like.

Each platform provides its own SDK along with plenty of examples, which makes integration painless, though it hides the details we want to study.

Let’s take a simple example

var OSS = require('ali-oss')

var client = new OSS({
  region: ' ',
  accessKeyId: ' ',
  accessKeySecret: ' ',
  bucket: ' '
})

const uploadSDK = async (obj) => {
  var fileName = obj.files.file.name  // original file name
  var localFile = obj.files.file.path // temp path written by koa-body
  var result
  try {
    // put(objectName, localFile) uploads the local file to the bucket
    result = await client.put(fileName, localFile)
    console.log(result.url)
  } catch (e) {
    console.log(e)
  }
  return result.url
}
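A minimal sketch of wiring this helper into a Koa route. The /upload path and the router setup here are illustrative assumptions, not part of the original example:

const Router = require('koa-router')
const router = new Router()

// Hypothetical route: forward the file that koa-body saved locally on to OSS
router.post('/upload', async (ctx) => {
  const url = await uploadSDK(ctx.request) // uploadSDK reads ctx.request.files.file
  ctx.body = { url }                       // return the OSS URL to the client
})

app.use(router.routes())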

Other functions are shown in the figure.

Koa implements image upload

  • fs – the file system module
  • path – the path utilities module

Single file upload

var fs = require('fs')
var path = require('path')

// Upload a single file
const uploadStatic = async (obj) => {
  const file = obj.files.file
  // Create a readable stream from the temp file koa-body wrote
  const reader = fs.createReadStream(file.path)
  let filePath = path.join(__dirname, '../static/upload/') + `/${file.name}`
  // Create a writable stream to the target location
  const upStream = fs.createWriteStream(filePath)
  // Pipe the readable stream into the writable stream
  reader.pipe(upStream)
  return "Upload successful!"
}

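A usage sketch with a hypothetical route path; the whole ctx.request is passed in because uploadStatic reads obj.files.file:

// Hypothetical route using the helper above
router.post('/uploadfile', async (ctx) => {
  // koa-body has already written the temp file; uploadStatic copies it into static/upload
  ctx.body = await uploadStatic(ctx.request)
})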

Multi-file upload

var fs = require('fs')
var path = require('path')

// Upload multiple files
const uploadStatics = async (obj) => {
  const files = obj.files.file
  for (let file of files) {
    // Create a readable stream for each temp file
    const reader = fs.createReadStream(file.path)
    let filePath = path.join(__dirname, '../static/upload/') + `/${file.name}`
    // Create a writable stream to the target location
    const upStream = fs.createWriteStream(filePath)
    // Pipe the readable stream into the writable stream
    reader.pipe(upStream)
  }
  return "Upload successful!"
}

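One caveat: with koa-body, ctx.request.files.file is a single object when only one file is selected and an array when several are, so a more defensive handler might normalize it first (my own addition, not from the original code):

// Hypothetical guard so the multi-file loop above also handles a single upload
let files = ctx.request.files.file
if (!Array.isArray(files)) {
  files = [files]
}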

Extension

Upload large files and obtain upload progress

When it comes to large files, we can’t use the method above. Why? Uploading a large file takes a long time, and a page refresh or a poor network connection can easily cause the upload to fail. How can we avoid this problem?

The combination of sharding and concurrency can divide a large file into multiple pieces and upload them concurrently, greatly increasing the upload speed. When a network problem causes a transmission error, only the failed shard, not the entire file, needs to be retransmitted. In addition, sharded transmission can track the upload progress closer to real time.

Take a Vue project as an example

Encapsulate a Vue component based on WebUploader

WebUploader: a simple, modern file upload component based on HTML5, with Flash as a fallback


<vue-upload
        ref="uploader"
        url="xxxxxx"
        uploadButton="#filePicker"
        multiple
        @fileChange="fileChange"
        @progress="onProgress"
        @success="onSuccess"
></vue-upload>

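Inside such a wrapper, sharding is switched on through WebUploader’s own options, roughly like the sketch below. These are WebUploader’s documented options, not the author’s actual component code, so treat the values as placeholders:

// Hypothetical WebUploader setup with sharding enabled
const uploader = WebUploader.create({
  server: 'xxxxxx',           // the upload URL (the component's `url` prop)
  pick: '#filePicker',        // the upload button (the `uploadButton` prop)
  chunked: true,              // split large files into shards
  chunkSize: 5 * 1024 * 1024, // 5 MB per shard
  threads: 3                  // number of shards uploaded concurrently
})

// Forward progress and success as the component's `progress` / `success` events
uploader.on('uploadProgress', (file, percentage) => { /* this.$emit('progress', file, percentage) */ })
uploader.on('uploadSuccess', (file, response) => { /* this.$emit('success', file, response) */ })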

The principle and process of sharding

When we upload a large file, the plugin shards it, so there are multiple Ajax requests:

  1. Multiple upload requests, the shard requests, which split the large file into many small pieces and send them to the server one by one
  2. After all shards have been uploaded, a merge request is sent to the server so it can combine the shard files into a single file

Principle:

  • Step 1: Hash the file with MD5. This has two benefits: the hash uniquely identifies the file (paving the way for instant and resumable uploads), and the backend can use it to verify the file’s integrity

  • Step 2: With the MD5 value, check whether the file has already been uploaded. If it has, there is no need to upload it again; in other words, the upload completes instantly

  • Step 3: Slice the file. If the file is 500 MB and the slice size is defined as 50 MB, the whole file is split into 10 uploads

  • Step 4: Request an endpoint on the backend that returns which chunks of the file have already been uploaded. Why this request? Cloud storage services support resumable uploads: if a transfer is interrupted halfway for whatever reason and started again later, the server should remember which chunks it already received so the client can skip them and upload only the rest. There are many ways to implement resumable uploads, but sending a separate request for the list of uploaded chunks is the most efficient here

  • Step 5: POST the chunks that have not been uploaded yet

  • Step 6: When all chunks have been uploaded successfully, notify the server to merge them. At this point the upload is complete! (A minimal browser-side sketch of steps 1 and 3 follows.)
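
To make steps 1 and 3 concrete, here is a minimal browser-side sketch that hashes a file with the spark-md5 library and slices it with File.prototype.slice. It is purely illustrative; WebUploader handles this internally:

// Hypothetical helper: compute the file's MD5 (step 1) and cut it into shards (step 3)
// Assumes the spark-md5 library is loaded as SparkMD5
function hashAndSlice(file, chunkSize = 50 * 1024 * 1024) {
  return new Promise((resolve, reject) => {
    const chunks = []
    for (let start = 0; start < file.size; start += chunkSize) {
      chunks.push(file.slice(start, start + chunkSize)) // one shard
    }
    const reader = new FileReader()
    reader.onload = (e) => {
      const spark = new SparkMD5.ArrayBuffer()
      spark.append(e.target.result)
      resolve({ md5: spark.end(), chunks }) // md5 uniquely identifies the file
    }
    reader.onerror = reject
    reader.readAsArrayBuffer(file) // hash the whole file in one pass (fine for a sketch)
  })
}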

Sharding

The upload request sends the following parameters:

The guid in the first field (Content-Disposition) and the access_token in the second field are formData values that we pass to the server through the WebUploader configuration. The next few fields describe the file itself: id, name, type, and size. chunks is the total number of chunks and chunk is the index of the current chunk; in the figure they are 12 and 9, so when you see an upload request whose chunk is 11, it is the last upload request.
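
On the Koa side, the shard and merge requests described in points 1 and 2 above could be handled roughly as follows. This is only a sketch: the /upload and /merge routes, the chunks directory, and the response shapes are my own assumptions for illustration, not the article’s actual backend.

// Hypothetical shard-receiving route: guid, chunk and chunks come from the formData above
router.post('/upload', async (ctx) => {
  const { guid, chunk } = ctx.request.body // shard metadata
  const file = ctx.request.files.file      // the shard written by koa-body
  const chunkDir = path.join(__dirname, `../static/chunks/${guid}`)
  if (!fs.existsSync(chunkDir)) fs.mkdirSync(chunkDir, { recursive: true })
  // Store each shard under its index so it can be merged in order later
  fs.copyFileSync(file.path, path.join(chunkDir, `${chunk}`))
  ctx.body = { ok: true }
})

// Hypothetical merge route: concatenate shards 0..chunks-1 into the final file
router.post('/merge', async (ctx) => {
  const { guid, name, chunks } = ctx.request.body
  const chunkDir = path.join(__dirname, `../static/chunks/${guid}`)
  const target = path.join(__dirname, `../static/upload/${name}`)
  for (let i = 0; i < chunks; i++) {
    fs.appendFileSync(target, fs.readFileSync(path.join(chunkDir, `${i}`)))
  }
  ctx.body = { ok: true, url: `/static/upload/${name}` }
})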