In everyday applications it is rarely necessary to upload files of hundreds or thousands of megabytes from the browser. But when a special scenario does require uploading a large file, how do we do it, and how can we optimize it?

The following examples use Vue.js on the front end and Node.js on the back end.

Before we start, there are a couple of things to think about.

The core problem is that a long upload time invites uncontrollable accidents:

  • Network fluctuations cannot be controlled
  • The user may want to pause the upload and resume it later

Let’s start with a simple upload

Front-end code:

<!-- App.vue page template -->
<template>
 <div>
   <input type="file" @change="uploadFile">
 </div>
</template>
// axios/index.js - shared axios instance
import Axios from 'axios'

const Server = Axios.create({
 baseURL: '/api'
})

export default Server

// main.js
import Vue from 'vue'
import App from './App.vue'

import Axios from './axios'

Vue.config.productionTip = false
Vue.prototype.$http = Axios

new Vue({
 render: h => h(App),
}).$mount('#app')
// App.vue js
<script>
export default {
  methods: {
    // input change event listener
    uploadFile(e) {
      const file = e.target.files[0]
      this.sendFile(file)
    },
    // File upload method
    sendFile(file) {
      let formdata = new FormData()
      formdata.append("file", file)

      this.$http({
        url: "/upload/file".method: "post".data: formdata,
        headers: { "Content-Type": "multipart/form-data" }
      }).then(({ data }) = > {
        console.log(data, 'upload/file')
      })
    },
  }
}
</script>

nodeJs:

const Koa = require('koa')
const router = require('koa-router')() // Koa routing module
const koaBody = require('koa-body') // Parse the plugin for file uploads
const fs = require('fs') // nodeJs built-in file module
const path = require('path') // nodeJs built-in path module

const uploadPath = path.join(__dirname, 'public/uploads') // Define the file upload directory

// If the upload directory does not exist yet, create it
if (!fs.existsSync(uploadPath)) {
  fs.mkdirSync(uploadPath)
}

const app = new Koa() // instantiate the app

// Some custom global request handling
app.use(async (ctx, next) => {
 console.log(`Process ${ctx.request.method} ${ctx.request.url}. `);

 if (ctx.request.method === 'OPTIONS') {
   ctx.status = 200
 }

 try {
   await next();
 } catch (err) {
   ctx.status = err.statusCode || err.status || 500
   ctx.body = {
     code: 500,
     msg: err.message
   }
 }
})

// Load file upload middleware
app.use(koaBody({
  multipart: true,
  formidable: {
    // keepExtensions: true, // keep the file extension
    uploadDir: uploadPath, // store uploaded files here instead of the system temp directory
    maxFileSize: 10000 * 1024 * 1024 // maximum upload size (the default is 20M)
  }
}))

// File upload handling
function uploadFn(ctx) {
 return new Promise((resolve, reject) => {
   const { name, path: _path } = ctx.request.files.file // Get the uploaded file information
   const filePath = path.join(uploadPath, name) // Recombine the file name

   // Change the name and address of the temporary file
    fs.rename(_path, filePath, (err) => {
     if (err) {
       return reject(err)
     }
     resolve(name)
   })
 })
}

// File upload interface
router.post('/api/upload/file', async function uploadFile(ctx) {
  await uploadFn(ctx).then((name) => {
    ctx.body = {
      code: 0,
      url: 'http://localhost:3000/uploads/' + name,
      msg: 'File uploaded successfully'
    }
  }).catch(err => {
    ctx.body = {
      code: -1,
      msg: 'File upload failed'
    }
  })
})

app.use(router.routes())

// Start the service with port 3000
app.listen(3000)

This is a simple file upload: the front end selects a file with an input element, sends the entire file to the back end in one Ajax request, and the back end receives it and saves it to disk.

For small files this is fine, but for large files it runs into the problems described above.

Optimization

Overall optimization idea:

  • Uploading small files works without issues; it is large files that bring unpredictable problems and a poor experience, and we can control neither the user’s network nor the user’s intentions. So we change the approach: the front end splits the large file into small chunks and uploads them one by one, and once every chunk has been uploaded, it notifies the back end to merge the chunks back into the original file. If the network drops or the user pauses during the upload, then on the next attempt the chunks that were already uploaded are skipped and only the remaining chunks are sent; a rough sketch of this flow follows below.
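
Here is a high-level roadmap of that front-end flow. Every function named here is implemented step by step in the rest of this article, and details (such as turning the server's response into a list of already-uploaded chunk indices) are glossed over, so treat it as a map rather than working code on its own:

// Roadmap only: the helpers below are implemented later in this article
async function uploadLargeFile(file) {
  const fileMd5 = await createFileMd5(file)               // 1. compute a unique id for the file
  const uploaded = await getUploadedChunks(fileMd5)       // 2. ask the server which chunks it already has
  const chunkArr = await cutBlob(fileMd5, file, uploaded) // 3. split the file, skipping uploaded chunks
  sendRequest(chunkArr, 5, chunkMerge)                    // 4. upload chunks concurrently, then ask the server to merge
}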

But here comes a new problem:

  • How do we give each file a unique identity, so that chunks from different files are never mixed up or corrupted
  • When a large file is split into many chunks and later merged back together, the merge order must match the split order
  • Before uploading, how do we know which chunks have already been uploaded and which have not

Before we try to solve these problems, we have one more optimization to speed up uploads: asynchronous concurrency control.

When you hear the word concurrency, the first thing that comes to mind is probably a back-end server responding to a large number of requests at the same time. But browsers can issue requests concurrently too: with HTTP/1.1 a browser opens several parallel TCP connections to the same origin, and HTTP/2 multiplexes many requests over a single connection. Since today's browsers all speak HTTP/1.1 or later, we can take advantage of this mechanism and process a large batch of front-end requests concurrently.

A large file upload that has been split into dozens or hundreds of small chunk requests fits this scenario perfectly: adding concurrent processing accelerates the upload of the whole file.

Asynchronous concurrency control

Let’s get familiar with the front-end code implementation of asynchronous concurrency control for requests:

/**
 * Asynchronous concurrency control
 * @param {Array} arr asynchronous task queue
 * @param {Number} max maximum number of tasks allowed to run at the same time
 * @param {Function} callback called after all tasks are completed
 */
function sendRequest(arr, max = 5, callback) {
  let i = 0 // array index
  let fetchArr = [] // requests currently in flight

  let toFetch = () => {
    // If every task has been started, stop recursing and let the last batch finish
    if (i === arr.length) {
      return Promise.resolve()
    }

    // Start the next asynchronous task
    let it = fetch(arr[i++])
    // When it settles, remove it from the in-flight queue
    it.then(() => {
      fetchArr.splice(fetchArr.indexOf(it), 1)
    })
    // Track the new task
    fetchArr.push(it)

    let p = Promise.resolve()
    // If the in-flight queue is full, wait for one task to finish before adding another
    if (fetchArr.length >= max) {
      p = Promise.race(fetchArr)
    }

    // Recurse
    return p.then(() => toFetch())
  }

  toFetch().then(() =>
    // When the last batch completes, run the callback
    Promise.all(fetchArr).then(() => {
      callback()
    })
  )
}

The principle of asynchronous concurrency control: create a task queue whose capacity can be configured, and monitor each asynchronous task with a Promise. As long as the queue is not full, simply push a new task into it and give it a completion callback: when the task finishes, remove it from the queue and add the next one, so the queue stays full at all times. When all pending tasks have been processed, run the final callback.

With the maximum queue size set to 5, at most five requests are executing at the same time.
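
As a quick usage sketch (the URLs here are placeholders for illustration, not real endpoints): start twenty requests with at most five in flight at any time, then log when everything has finished.

// Usage sketch for sendRequest above; the URLs are illustrative only
const urls = Array.from({ length: 20 }, (_, i) => '/api/mock/task?index=' + i)

sendRequest(urls, 5, () => {
  console.log('all tasks finished')
})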

Unique file ID

We need to make sure each file has a unique ID, so that the back end can tell which chunks belong to which file and nothing gets mixed up or corrupted. To generate a unique ID for each file we can use an existing library: SparkMD5.
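
As a quick aside, assuming the spark-md5 package is installed, its simplest form hashes a plain string directly; for File objects we go through FileReader and SparkMD5.ArrayBuffer, as shown below.

import SparkMD5 from 'spark-md5'

// Hashing a plain string is a one-liner; identical content always yields the same hash
console.log(SparkMD5.hash('hello world')) // e.g. "5eb63bbbe01eeed093cb22bb8f5acdc3"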

After generating the file ID with SparkMD5, the server creates a folder named after that ID to hold all of the file's chunk files.
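
For example, assuming a hypothetical file hash of e10adc3949ba59abbe56e057f20f883e and the chunk naming used later in this article, the server-side upload directory might look like this while chunks arrive:

public/uploads/
  e10adc3949ba59abbe56e057f20f883e/        <- folder named after the file hash
    e10adc3949ba59abbe56e057f20f883e-0     <- chunk files, suffixed with their index
    e10adc3949ba59abbe56e057f20f883e-1
    e10adc3949ba59abbe56e057f20f883e-2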

import SparkMD5 from 'spark-md5'

export default {
  methods: {
    uploadFile(e) {
      const file = e.target.files[0]
      this.createFileMd5(file).then(fileMd5 => {
        // fileMd5 is the unique id of the file; as long as the file content does not change, this id does not change
        console.log(fileMd5, 'md5')
      })
    },
    createFileMd5(file) {
      return new Promise((resolve, reject) => {
        const spark = new SparkMD5.ArrayBuffer()
        const reader = new FileReader()
        reader.readAsArrayBuffer(file)

        reader.addEventListener('loadend', () => {
          const content = reader.result
          spark.append(content)
          const hash = spark.end()
          resolve(hash)
        })

        reader.addEventListener('error', function _error(err) {
          reject(err)
        })
      })
    }
  }
}

File splitting

The first step is to split the large file into smaller chunks, which are then uploaded to the back end one by one.

// File splitting
cutBlob(fileHash, file) {
  const chunkArr = [] // cache array for all chunks
  const blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice
  const chunkNums = Math.ceil(file.size / this.chunkSize) // total number of chunks

  return new Promise((resolve, reject) => {
    let startIndex = ''
    let endIndex = ''
    let contentItem = ''

    for (let i = 0; i < chunkNums; i++) {
      startIndex = i * this.chunkSize // start of the chunk
      endIndex = (i + 1) * this.chunkSize // end of the chunk
      endIndex > file.size && (endIndex = file.size)

      // Cut the file
      contentItem = blobSlice.call(file, startIndex, endIndex)

      chunkArr.push({
        index: i, // sequence index of this chunk, sent to the back end to determine the merge order
        chunk: contentItem // content of this chunk
      })
    }

    this.fileInfo = {
      hash: fileHash,
      total: chunkNums,
      name: file.name,
      size: file.size
    }
    resolve(chunkArr)
  })
},

Order of merging small files

After the front end cuts the large file into chunks, it adds an increasing index parameter to each chunk upload request. The back end uses that index as a suffix of the chunk file name when saving the data, and later merges the chunks in the order of those suffixes.
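
Note that chunk file names on disk are strings, so if the back end ever derives the merge order from the directory listing itself (rather than rebuilding each name from its index, as the merge code later in this article does), the names must be sorted numerically by their suffix; a plain string sort would place hash-10 before hash-2. A minimal sketch, with an illustrative helper name:

// Hypothetical helper: sort chunk file names like "<hash>-0", "<hash>-1", ...
// by their numeric index suffix instead of lexicographically
function sortChunkNames(fileNames) {
  return fileNames
    .slice() // copy, to avoid mutating the caller's array
    .sort((a, b) => Number(a.split('-')[1]) - Number(b.split('-')[1]))
}

// sortChunkNames(['abc-10', 'abc-2', 'abc-0']) => ['abc-0', 'abc-2', 'abc-10']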

// Chunk upload endpoint
router.post('/api/upload/snippet', async function snippet(ctx) {
  const { index, hash } = ctx.request.body

  // Directory that holds this file's chunks
  const chunksPath = path.join(uploadPath, hash, '/')

  if (!fs.existsSync(chunksPath)) {
    fs.mkdirSync(chunksPath)
  }

  // Chunk file name: the index suffix determines the merge order later
  const chunksFileName = chunksPath + hash + '-' + index

  await uploadFn(ctx, chunksFileName).then(name => {
    ctx.body = {
      code: 0,
      msg: 'Chunk upload completed'
    }
  }).catch(err => {
    console.log(err)
    ctx.body = {
      code: -1,
      msg: 'Chunk upload failed'
    }
  })
})

Resumable uploads

When the user pauses the upload or the network drops, and the upload is started again later, the chunks that have already been uploaded should not be uploaded a second time. So before each upload starts, we send a request asking the server which chunks of this file it has already received; it responds with the uploaded chunk file names (whose suffixes are the chunk indices). The front end can then filter those chunks out before uploading.

// Front-end JS
// Ask which chunks of this file have already been uploaded
getUploadedChunks(hash) {
  return this.$http({
    url: "/upload/checkSnippet",
    method: "post",
    data: { hash }
  })
}

// nodeJs
// Check which chunk files have already been uploaded
router.post('/api/upload/checkSnippet', function snippet(ctx) {
  const { hash } = ctx.request.body

  // Directory that holds this file's chunks
  const chunksPath = path.join(uploadPath, hash, '/')

  let chunksFiles = []

  if (fs.existsSync(chunksPath)) {
    // Read the chunk files already on disk
    chunksFiles = fs.readdirSync(chunksPath)
  }

  ctx.body = {
    code: 0,
    data: chunksFiles,
    msg: 'Query successful'
  }
})
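
With the list returned by /api/upload/checkSnippet, the front end can turn the chunk file names into numeric indices and skip those chunks while slicing. A small sketch of that step (the helper name is illustrative; the complete code below inlines the same logic):

// Hypothetical helper: convert chunk file names such as ["<hash>-0", "<hash>-3"]
// into a Set of numeric indices, so the slicing loop can do `if (uploaded.has(i)) continue`
function toUploadedIndexSet(chunkFileNames) {
  return new Set(chunkFileNames.map(name => Number(name.split('-')[1])))
}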

Merge files

When all chunks have been uploaded, the back end needs to be notified to merge them on the server.

// Front-end JS
// Request the merge
chunkMerge(data) {
  this.$http({
    url: "/upload/merge",
    method: "post",
    data,
  }).then(res => {
    console.log(res.data)
  })
}

// nodeJs
// Delete a folder and all the files inside it
function deleteFiles(dirpath) {
  if (fs.existsSync(dirpath)) {
    fs.readdir(dirpath, (err, files) => {
      if (err) throw err
      while (files.length) {
        fs.unlinkSync(dirpath + files.shift())
      }
      fs.rmdir(dirpath, () => {})
    })
  }
}

/**
 * Asynchronously merge the chunk files
 * @param {String} dirPath chunk folder
 * @param {String} filePath target file
 * @param {String} hash file hash
 * @param {Number} total total number of chunks
 * @returns {Promise}
 */
function mergeFile(dirPath, filePath, hash, total) {
  return new Promise((resolve, reject) => {
    fs.readdir(dirPath, (err, files) => {
      if (err) {
        return reject(err)
      }
      if (files.length !== total || !files.length) {
        return reject('Upload failed, the number of chunks does not match')
      }

      // Create a writable stream for the target file
      const fileWriteStream = fs.createWriteStream(filePath)
      function merge(i) {
        return new Promise((res, rej) => {
          // Merge complete
          if (i === files.length) {
            fs.rmdir(dirPath, (err) => {
              console.log(err, 'rmdir')
            })
            return res()
          }
          const chunkpath = dirPath + hash + '-' + i
          fs.readFile(chunkpath, (err, data) => {
            if (err) return rej(err)

            // Append this chunk to the target file
            fs.appendFile(filePath, data, () => {
              // Delete the chunk file
              fs.unlink(chunkpath, () => {
                // Merge the next chunk recursively
                res(merge(i + 1))
              })
            })
          })
        })
      }
      merge(0).then(() => {
        // Normally the stream does not need to be closed manually, but merging some files (e.g. compressed files) does not close the writable stream automatically
        resolve(fileWriteStream.close())
      })
    })
  })
}

/**
 * File merge endpoint
 * 1. check whether the chunk folder for this hash exists
 * 2. check whether the number of files in the folder equals total
 * 3. merge the chunks
 * 4. delete the chunk files
 */
router.post('/api/upload/merge', async function uploadFile(ctx) {
  const { total, hash, name } = ctx.request.body
  const dirPath = path.join(uploadPath, hash, '/')
  const filePath = path.join(uploadPath, name) // the merged file

  // If the file already exists, it has been uploaded successfully before
  if (fs.existsSync(filePath)) {
    deleteFiles(dirPath) // remove the temporary chunk folder
    ctx.body = {
      code: 0,
      url: 'http://localhost:3000/uploads/' + name,
      msg: 'File uploaded successfully'
    }
  // If the chunk folder does not exist either, the upload failed
  } else if (!fs.existsSync(dirPath)) {
    ctx.body = {
      code: -1,
      msg: 'File upload failed'
    }
  } else {
    // Merge the chunks
    await mergeFile(dirPath, filePath, hash, total).then(() => {
      ctx.body = {
        code: 0,
        url: 'http://localhost:3000/uploads/' + name,
        msg: 'File uploaded successfully'
      }
    }).catch(err => {
      ctx.body = {
        code: -1,
        msg: err
      }
    })
  }
})

Advanced optimization

The larger the file, the longer it takes to compute its hash, which is a drawback. This step can be optimized further:

  • Use a Web Worker: open a separate thread to compute the hash independently, and post the result back to the main thread when the calculation finishes.
    // worker-loader needs to be installed in the vue project
    npm install worker-loader -D

    // App.vue js
    import Worker from './hash.worker.js'

    createFileMd5(file) {
      return new Promise((resolve) => {
        const worker = new Worker()

        worker.postMessage({file, chunkSize: this.chunkSize})

        worker.onmessage = event => {
          resolve(event.data)
        }
      })
    }

    // hash.worker.js
    import SparkMD5 from 'spark-md5'

    onmessage = function(event) {
      getFileHash(event.data)
    }

    function getFileHash({file, chunkSize}) {
      console.log(file, chunkSize)
      const spark = new SparkMD5.ArrayBuffer()
      const reader = new FileReader()
      reader.readAsArrayBuffer(file)

      reader.addEventListener('loadend', () => {
        const content = reader.result
        spark.append(content)

        const hash = spark.end()
        postMessage(hash)
      })

      reader.addEventListener('error', function _error(err) {
        postMessage(err)
      })
    }

  • Use a sampled hash. Sampling is much faster but loses some precision: in theory two different files could produce the same hash.
    • One sampling rule: after the large file is split into chunks, take only a small sample of each chunk, for example its first 10 bytes, middle 10 bytes and last 10 bytes (the rule can be customized), and hash the combination. This approach is very fast.
    createFileMd5(file) {
      return new Promise((resolve, reject) => {
        const spark = new SparkMD5.ArrayBuffer()
        const reader = new FileReader()
        reader.readAsArrayBuffer(file)

        reader.addEventListener('loadend', () => {
          console.time('sampled hash')
          const content = reader.result
          // Sampled hash
          // Rule used here: take the first 10 bytes of every half chunk size
          let i = 0

          while (this.chunkSize / 2 * (i + 1) + 10 < file.size) {
            spark.append(content.slice(this.chunkSize / 2 * i, this.chunkSize / 2 * i + 10))
            i++
          }

          const hash = spark.end()
          console.timeEnd('sampled hash')
          resolve(hash)
        })

        reader.addEventListener('error', function _error(err) {
          reject(err)
        })
      })
    }

    // Times measured with a 9M file and a chunk size of 100K:
    // hashing the full content: 101.6240234375 ms
    // sampled hash: 2.216796875 ms

The complete code

<!-- App.vue page template -->
<template>
  <div>
    <input type="file" @change="uploadFile">
  </div>
</template>
// Front-end JS processing
// App.vue js
<script>
import Worker from './hash.worker.js'

export default {
  data() {
    return {
      fileInfo: null,
      chunkSize: 100 * 1024 // chunk size
    }
  },
  methods: {
    // input change event listener
    uploadFile(e) {
      const file = e.target.files[0]
      // Use chunked upload only when the file is at least 5 times larger than the chunk size
      if (file.size / this.chunkSize < 5) {
        this.sendFile(file)
        return
      }
      this.createFileMd5(file).then(async fileMd5 => {
        // Ask the server which chunks of this file have already been uploaded
        let {data} = await this.getUploadedChunks(fileMd5)
        let uploaded = data.data.length ? data.data.map(v => v.split('-')[1] - 0) : []
        // Split the file
        const chunkArr = await this.cutBlob(fileMd5, file, uploaded)
        // Start uploading
        this.sendRequest(chunkArr, 5, this.chunkMerge)
      })
    },
    createFileMd5(file) {
      return new Promise((resolve) => {
        const worker = new Worker()

        worker.postMessage({file, chunkSize: this.chunkSize})

        worker.onmessage = event => {
          resolve(event.data)
        }
      })
    },
    // File splitting
    cutBlob(fileHash, file, uploaded) {
      const chunkArr = [] // cache array for all chunks
      const blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice
      const chunkNums = Math.ceil(file.size / this.chunkSize) // total number of chunks

      return new Promise(resolve => {
        let startIndex = ''
        let endIndex = ''
        let contentItem = ''

        for (let i = 0; i < chunkNums; i++) {
          // Skip chunks that have already been uploaded
          if (uploaded.includes(i)) continue

          startIndex = i * this.chunkSize // start of the chunk
          endIndex = (i + 1) * this.chunkSize // end of the chunk
          endIndex > file.size && (endIndex = file.size)

          // Cut the file
          contentItem = blobSlice.call(file, startIndex, endIndex)

          chunkArr.push({
            index: i,
            chunk: contentItem
          })
        }
        this.fileInfo = {
          hash: fileHash,
          total: chunkNums,
          name: file.name,
          size: file.size
        }
        resolve(chunkArr)
      })
    },
    // Concurrent request handling
    sendRequest(arr, max = 6, callback) {
      let fetchArr = []

      let toFetch = () => {
        if (!arr.length) {
          return Promise.resolve()
        }

        const chunkItem = arr.shift()

        const it = this.sendChunk(chunkItem)
        it.then(() => {
          // On success, remove the task from the in-flight queue
          fetchArr.splice(fetchArr.indexOf(it), 1)
        }, err => {
          // On failure, put the chunk back into the queue
          arr.unshift(chunkItem)
          console.log(err)
        })
        fetchArr.push(it)

        let p = Promise.resolve()
        if (fetchArr.length >= max) {
          p = Promise.race(fetchArr)
        }

        return p.then(() => toFetch())
      }

      toFetch().then(() => {
        Promise.all(fetchArr).then(() => {
          callback()
        })
      }, err => {
        console.log(err)
      })
    },
    // Ask which chunks have already been uploaded
    getUploadedChunks(hash) {
      return this.$http({
        url: "/upload/checkSnippet",
        method: "post",
        data: { hash }
      })
    },
    // Upload a single chunk
    sendChunk(item) {
      if (!item) return
      let formdata = new FormData()
      formdata.append("file", item.chunk)
      formdata.append("index", item.index)
      formdata.append("hash", this.fileInfo.hash)
      // formdata.append("name", this.fileInfo.name)

      return this.$http({
        url: "/upload/snippet",
        method: "post",
        data: formdata,
        headers: { "Content-Type": "multipart/form-data" }
      })
    },
    // Whole-file upload method
    sendFile(file) {
      let formdata = new FormData()
      formdata.append("file", file)

      this.$http({
        url: "/upload/file",
        method: "post",
        data: formdata,
        headers: { "Content-Type": "multipart/form-data" }
      }).then(({ data }) => {
        console.log(data, 'upload/file')
      })
    },
    // Request the merge
    chunkMerge() {
      this.$http({
        url: "/upload/merge",
        method: "post",
        data: this.fileInfo,
      }).then(res => {
        console.log(res.data)
      })
    }
  }
}
</script>

// hash.worker.js
import SparkMD5 from 'spark-md5'

onmessage = function(event) {
  getFileHash(event.data)
}

function getFileHash({file, chunkSize}) {
  const spark = new SparkMD5.ArrayBuffer()
  const reader = new FileReader()
  reader.readAsArrayBuffer(file)

  reader.addEventListener('loadend', () => {
    const content = reader.result
    // Sampled hash
    // Rule: take the first 10 bytes of every half chunk size
    let i = 0

    while (chunkSize / 2 * (i + 1) + 10 < file.size) {
      spark.append(content.slice(chunkSize / 2 * i, chunkSize / 2 * i + 10))
      i++
    }

    const hash = spark.end()
    postMessage(hash)
  })

  reader.addEventListener('error', function _error(err) {
    postMessage(err)
  })
}

NodeJs processing:

const Koa = require('koa')
const router = require('koa-router')() // Koa routing module
const koaBody = require('koa-body') // Parse the plugin for file uploads
const fs = require('fs') // nodeJs built-in file module
const path = require('path') // nodeJs built-in path module

const uploadPath = path.join(__dirname, 'public/uploads') // Define the file upload directory

// If the upload directory does not exist yet, create it
if (!fs.existsSync(uploadPath)) {
  fs.mkdirSync(uploadPath)
}

const app = new Koa() // instantiate the app

// Some custom global request handling
app.use(async (ctx, next) => {
  console.log(`Process ${ctx.request.method} ${ctx.request.url}. `);

  if (ctx.request.method === 'OPTIONS') {
    ctx.status = 200
  }

  try {
    await next();
  } catch (err) {
    ctx.status = err.statusCode || err.status || 500
    ctx.body = {
      code: 500,
      msg: err.message
    }
  }
})

// Load file upload middleware
app.use(koaBody({
  multipart: true,
  formidable: {
    // keepExtensions: true, // keep the file extension
    uploadDir: uploadPath, // store uploaded files here instead of the system temp directory
    maxFileSize: 10000 * 1024 * 1024 // maximum upload size (the default is 20M)
  }
}))

// File upload handling
function uploadFn(ctx, destPath) {
  return new Promise((resolve, reject) => {
    const { name, path: _path } = ctx.request.files.file // Get the uploaded file information
    const filePath = destPath || path.join(uploadPath, name) // Recombine the file name

    // Change the name and address of the temporary file
    fs.rename(_path, filePath, (err) => {
      if (err) {
        return reject(err)
      }
      resolve(filePath)
    })
  })
}

// Check which chunk files have already been uploaded
router.post('/api/upload/checkSnippet', function snippet(ctx) {
  const { hash } = ctx.request.body

  // Directory that holds this file's chunks
  const chunksPath = path.join(uploadPath, hash, '/')

  let chunksFiles = []

  if (fs.existsSync(chunksPath)) {
    // Read the chunk files already on disk
    chunksFiles = fs.readdirSync(chunksPath)
  }

  ctx.body = {
    code: 0,
    data: chunksFiles,
    msg: 'Query successful'
  }
})

// Chunk upload endpoint
router.post('/api/upload/snippet', async function snippet(ctx) {
  const { index, hash } = ctx.request.body

  // Directory that holds this file's chunks
  const chunksPath = path.join(uploadPath, hash, '/')

  if (!fs.existsSync(chunksPath)) {
    fs.mkdirSync(chunksPath)
  }

  // Chunk file name: the index suffix determines the merge order later
  const chunksFileName = chunksPath + hash + '-' + index

  await uploadFn(ctx, chunksFileName).then(name => {
    ctx.body = {
      code: 0,
      msg: 'Chunk upload completed',
      data: name
    }
  }).catch(err => {
    ctx.body = {
      code: -1,
      msg: 'Chunk upload failed',
      data: err
    }
  })
})

// Whole-file upload endpoint
router.post('/api/upload/file', async function uploadFile(ctx) {
  await uploadFn(ctx).then((name) => {
    ctx.body = {
      code: 0,
      // uploadFn resolves with the full file path here, so keep only the file name
      url: 'http://localhost:3000/uploads/' + path.basename(name),
      msg: 'File uploaded successfully'
    }
  }).catch(err => {
    ctx.body = {
      code: -1,
      msg: 'File upload failed'
    }
  })
})

// Delete a folder and all the files inside it
function deleteFiles(dirpath) {
  if (fs.existsSync(dirpath)) {
    fs.readdir(dirpath, (err, files) => {
      if (err) throw err
      // Delete the files
      while (files.length) {
        fs.unlinkSync(dirpath + files.shift())
      }
      // Delete the directory
      fs.rmdir(dirpath, () => {})
    })
  }
}

/**
 * Asynchronously merge the chunk files
 * @param {String} dirPath chunk folder
 * @param {String} filePath target file
 * @param {String} hash file hash
 * @param {Number} total total number of chunks
 * @returns {Promise}
 */
function mergeFile(dirPath, filePath, hash, total) {
  return new Promise((resolve, reject) => {
    fs.readdir(dirPath, (err, files) => {
      if (err) {
        return reject(err)
      }
      if (files.length !== total || !files.length) {
        return reject('Upload failed, the number of chunks does not match')
      }

      // Create a writable stream for the target file
      const fileWriteStream = fs.createWriteStream(filePath)
      function merge(i) {
        return new Promise((res, rej) => {
          // Merge complete
          if (i === files.length) {
            fs.rmdir(dirPath, (err) => {
              console.log(err, 'rmdir')
            })
            return res()
          }
          const chunkpath = dirPath + hash + '-' + i
          fs.readFile(chunkpath, (err, data) => {
            if (err) return rej(err)

            // Append this chunk to the target file
            fs.appendFile(filePath, data, () => {
              // Delete the chunk file
              fs.unlink(chunkpath, () => {
                // Merge the next chunk recursively
                res(merge(i + 1))
              })
            })
          })
        })
      }
      merge(0).then(() => {
        // Normally the stream does not need to be closed manually, but merging some files (e.g. compressed files) does not close the writable stream automatically
        resolve(fileWriteStream.close())
      })
    })
  })
}

/**
 * File merge endpoint
 * 1. check whether the chunk folder for this hash exists
 * 2. check whether the number of files in the folder equals total
 * 3. merge the chunks
 * 4. delete the chunk files
 */
router.post('/api/upload/merge', async function uploadFile(ctx) {
  const { total, hash, name } = ctx.request.body
  const dirPath = path.join(uploadPath, hash, '/')
  const filePath = path.join(uploadPath, name) // the merged file

  // If the file already exists, it has been uploaded successfully before
  if (fs.existsSync(filePath)) {
    // Delete all temporary chunk files
    deleteFiles(dirPath)
    ctx.body = {
      code: 0,
      url: 'http://localhost:3000/uploads/' + name,
      msg: 'File uploaded successfully'
    }
  // If the chunk folder does not exist either, the upload failed
  } else if (!fs.existsSync(dirPath)) {
    ctx.body = {
      code: -1,
      msg: 'File upload failed'
    }
  } else {
    // Start merging
    await mergeFile(dirPath, filePath, hash, total).then(() => {
      ctx.body = {
        code: 0,
        url: 'http://localhost:3000/uploads/' + name,
        msg: 'File uploaded successfully'
      }
    }).catch(err => {
      ctx.body = {
        code: -1,
        msg: err
      }
    })
  }
})

app.use(router.routes())

// Start the service with port 3000
app.listen(3000)

GitHub:github.com/554246839/f…