Resumable upload (breakpoint continuation)

What is resumable upload?

Resumable upload means that when an upload or download is interrupted by the network or some other cause, the transfer can continue from the parts that were already sent or received, instead of starting over from scratch. This saves the user time and bandwidth.

Overall approach

1. Select and parse the file

Start with an input selection file and an upload button

<div>
    <input type="file" ref={InputRef} />
    <Button type="primary" onClick={uploadFile}>Upload</Button>
</div>

Select the file and click Upload

async function uploadFile() {
    const file = InputRef.current.files[0];   // Get the selected file
    const buffer = await fileParse(file);
}

// Parse the file into an ArrayBuffer
function fileParse(file: any) {
    return new Promise((resolve) => {
        const fileRead = new FileReader();
        fileRead.readAsArrayBuffer(file);
        fileRead.onload = (ev: any) => {
            resolve(ev.target.result);
        };
    });
}


2. Generate an MD5 value

The file's MD5 value is generated from its content. It uniquely identifies the file, and it is the basis for resuming the upload later.

The spark-md5 package is used to generate the MD5 hash:

import SparkMD5 from 'spark-md5';

function makeMd5(buffer) {
    const spark = new SparkMD5.ArrayBuffer();
    spark.append(buffer);
    const hash = spark.end();
    return hash;
}

3. Split the file into chunks

Based on the size chosen for each chunk, calculate how many chunks the current file should be split into.

The chunks are saved in an array.

async function createChunkFile(file: any) {
    if (!file) return;
    const suffix = /\.([0-9a-zA-Z]+)$/i.exec(file.name)[1];  // Get the filename extension
    const list = [];
    const count = Math.ceil(file.size / SIZE);  // SIZE is the chosen size of each chunk
    const partSize = file.size / count;  // The actual size of each chunk

    let cur = 0;
    for (let i = 0; i < count; i++) {
        let item = {
            chunk: file.slice(cur, cur + partSize),
            filename: `${hash}_${i}.${suffix}`,  // Each chunk is named <MD5>_<index>
        };
        cur += partSize;
        list.push(item);
    }
}
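The slicing arithmetic can be checked in isolation. This sketch uses a hypothetical helper, `chunkRanges`, which is not part of the component; it just returns the `[start, end)` pairs that `file.slice(start, end)` would be called with:

```javascript
// Given a file size and a target chunk size, return the [start, end) ranges
// that each call to file.slice would receive
function chunkRanges(fileSize, chunkSize) {
    const count = Math.ceil(fileSize / chunkSize);  // number of chunks
    const partSize = fileSize / count;              // actual size of each chunk
    const ranges = [];
    let cur = 0;
    for (let i = 0; i < count; i++) {
        ranges.push([cur, cur + partSize]);
        cur += partSize;
    }
    return ranges;
}

// A 6-byte file with 4-byte chunks becomes 2 chunks of 3 bytes each
console.log(chunkRanges(6, 4));  // [[0, 3], [3, 6]]
```

Note that `Math.ceil` plus `fileSize / count` spreads the data evenly, so the last chunk is never a tiny remainder.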

4. Determine the upload index value

So far we have a list of chunks, but we can’t start uploading yet.

Part of the file may already have been uploaded, in which case only the remaining chunks need to be sent.

Therefore, the MD5 value is used to ask the server which chunk the upload should resume from.

async function getLoadingFiles() {
    axios.get(`/file/uploaded/count?hash=${hash}`)
        .then((res: any) => {
            if (res.code === 1) {
                const count = res.data.count;
                uploadFn(count);  // Start uploading from this index
            }
        });
}

5. Upload files

You are now ready to upload your files

async function uploadFn(startIndex: number = 0) {
    if (list.length === 0) return;
    const requestList: any[] = [];
    list.forEach((item: any, index: number) => {
        const fn = () => {
            let formData = new FormData();
            formData.append('chunk', item.chunk);
            formData.append('filename', item.filename);
            return axios
                .post('/file/upload', formData, {
                    headers: { 'Content-Type': 'multipart/form-data' },
                })
                .then((res: any) => {
                    const data = res.data;
                    if (res.code === 1) {
                        setUploadedIndex(index);
                        setUploadProgress((data.index + 1) * 100 / partList.current.length);
                    }
                })
                .catch(function () {
                    setAbort(true);
                    message.error('Upload failed');
                });
        };
        requestList.push(fn);
    });
    uploadSend(startIndex, requestList);
}

// Upload the chunks one by one
async function uploadSend(index: number, requestList: any) {
    if (abortRef.current) return;
    if (index >= requestList.length) {
        uploadComplete();  // All chunks uploaded
        return;
    }
    await requestList[index]();
    uploadSend(++index, requestList);
}
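uploadSend chains the requests recursively: each chunk starts only after the previous one resolves, and the abort flag is checked between chunks. The same pattern in isolation, with dummy tasks (`runSequentially` is a hypothetical stand-in for uploadSend, not part of the component):

```javascript
// Run an array of promise-returning tasks one at a time, starting at startIndex.
// Stops early if abortFlag.current becomes true between tasks, returning the
// index to resume from; returns tasks.length when everything finished.
async function runSequentially(startIndex, tasks, abortFlag) {
    for (let i = startIndex; i < tasks.length; i++) {
        if (abortFlag.current) return i;  // paused: resume later from index i
        await tasks[i]();
    }
    return tasks.length;  // all tasks done
}
```

Resuming is then just calling the function again with the returned index, which is exactly what the pause/play button does with `uploadFn(uploadedIndex + 1)`.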

6. The upload is complete

When every chunk in the list has been uploaded, the back end needs to be notified to merge the chunks into the original file.

// Upload complete
async function uploadComplete() {
    let result: any = await axios.get(`/file/upload/finish?hash=${hash.current}`);
    if (result.code === 1) {
        message.success('Upload successful');
    }
}

That completes the front end. Here is the full component code:

import { useState, useRef, useEffect } from 'react';
import { Button, message, Progress } from 'antd';
import SparkMD5 from 'spark-md5';
import { formatFileSizeUnit } from '@/common';
import axios from '@/axios';
import { PlayCircleFilled, PauseCircleFilled } from '@ant-design/icons';
import './uploadingPanel.less';

const SIZE = 1024 * 1024 * 2;  // Size of each chunk: 2 MB

const UploadingPanel = (props: any) => {
    const [abort, setAbort] = useState<boolean>(false);               // Whether the upload is paused
    const [uploadProgress, setUploadProgress] = useState<number>(0);  // Upload progress
    const [uploadedIndex, setUploadedIndex] = useState(0);            // Index of the last uploaded chunk
    const [infoMsg, setInfoMsg] = useState<string>('');               // Status message
    const [fileInfo, setFileInfo] = useState<any>({});                // Selected file info
    const InputRef = useRef<any>(null);
    const hash = useRef<any>(null);       // MD5 hash of the file
    const partList = useRef<any>([]);     // List of chunks
    const abortRef = useRef<any>(false);  // Pause flag read inside the async upload loop

    async function uploadFile() {
        setUploadProgress(0);
        setInfoMsg('Parsing file');
        const file = InputRef.current.files[0];
        await createChunkFile(file);
    }

    function fileParse(file: any) {
        return new Promise((resolve) => {
            const fileRead = new FileReader();
            fileRead.readAsArrayBuffer(file);
            fileRead.onload = (ev: any) => {
                resolve(ev.target.result);
            };
        });
    }

    async function createChunkFile(file: any) {
        if (!file) return;
        const buffer = await fileParse(file);
        const spark = new SparkMD5.ArrayBuffer();
        spark.append(buffer);
        hash.current = spark.end();
        const suffix = /\.([0-9a-zA-Z]+)$/i.exec(file.name)[1];
        const list = [];
        const count = Math.ceil(file.size / SIZE);
        const partSize = file.size / count;
        let cur = 0;
        for (let i = 0; i < count; i++) {
            let item = {
                chunk: file.slice(cur, cur + partSize),
                filename: `${hash.current}_${i}.${suffix}`,
            };
            cur += partSize;
            list.push(item);
        }
        partList.current = list;
        getLoadingFiles();
    }

    async function getLoadingFiles() {
        axios.get(`/file/uploaded/count?hash=${hash.current}`)
            .then((res: any) => {
                if (res.code === 1) {
                    const count = res.data.count;
                    setInfoMsg('Uploading file');
                    setUploadProgress(Number((count * 100 / partList.current.length).toFixed(2)));
                    uploadFn(count);
                }
            });
    }

    async function uploadFn(startIndex: number = 0) {
        if (partList.current.length === 0) return;
        abortRef.current = false;
        const requestList: any[] = [];
        partList.current.forEach((item: any, index: number) => {
            const fn = () => {
                let formData = new FormData();
                formData.append('chunk', item.chunk);
                formData.append('filename', item.filename);
                return axios
                    .post('/file/upload', formData, {
                        headers: { 'Content-Type': 'multipart/form-data' },
                    })
                    .then((res: any) => {
                        const data = res.data;
                        if (res.code === 1) {
                            setUploadedIndex(index);
                            setUploadProgress((data.index + 1) * 100 / partList.current.length);
                        }
                    })
                    .catch(function () {
                        setAbort(true);
                        message.error('Upload failed');
                    });
            };
            requestList.push(fn);
        });
        uploadSend(startIndex, requestList);
    }

    // Upload the chunks one by one
    async function uploadSend(index: number, requestList: any) {
        if (abortRef.current) return;
        if (index >= requestList.length) {
            uploadComplete();
            return;
        }
        requestList[index] ? await requestList[index]() : setInfoMsg('');
        uploadSend(++index, requestList);
    }

    // Notify the back end to merge the chunks
    async function uploadComplete() {
        let result: any = await axios.get(`/file/upload/finish?hash=${hash.current}`);
        if (result.code === 1) {
            message.success('Upload successful');
            setInfoMsg('Upload complete');
        }
    }

    function changeFile() {
        const file = InputRef.current.files[0];
        setFileInfo(file || {});
    }

    return (
        <div className="uploading-panel">
            <input type="file" onChange={changeFile} ref={InputRef} />
            {infoMsg && (<span>[{infoMsg}]</span>)}
            {fileInfo.size && (<span>{formatFileSizeUnit(fileInfo.size)}</span>)}
            <Button type="primary" onClick={uploadFile}>Upload</Button>
            <Button
                type="primary"
                shape="circle"
                icon={abort ? <PlayCircleFilled /> : <PauseCircleFilled />}
                onClick={() => {
                    abortRef.current = !abort;
                    abort && uploadFn(uploadedIndex + 1);
                    setAbort(!abort);
                }}
            />
            <div>
                <Progress percent={uploadProgress} status="active" />
            </div>
        </div>
    );
};

export default UploadingPanel;

Interface implementation

Now let's look at the back-end implementation, which uses Koa.

1. Query the number of uploaded files

During a chunked upload, the front end sends chunks in index order, and the next chunk is sent only after the previous one succeeds.

So the server only needs to count the chunks already stored for this hash and return that number; the front end then resumes from the corresponding index.

router.get('/uploaded/count', async (ctx, next) => {
  const {
    hash
  } = ctx.query;
  const filePath = `${uploadDir}${hash}`;
  const fileList = (fs.existsSync(filePath) && fs.readdirSync(filePath)) || [];
  ctx.body = {
    code: 1,
    data: {
      count: fileList.length
    }
  };
})

2. Receive uploaded files

router.post('/upload', async (ctx, next) => {
  const file = ctx.request.files.chunk;  // The uploaded chunk
  const {
    filename,
  } = ctx.request.body;
  const reader = fs.createReadStream(file.path);
  const [hash, suffix] = filename.split('_');
  const folder = uploadDir + hash;
  !fs.existsSync(folder) && fs.mkdirSync(folder);
  const filePath = `${folder}/${filename}`;  // Store the chunk under its original name
  const upStream = fs.createWriteStream(filePath);
  reader.pipe(upStream);
  ctx.body = await new Promise((resolve, reject) => {
    reader.on('error', () => {
      reject({
        code: 0,
        message: 'Upload failed',
      });
    });
    reader.on('close', () => {
      resolve({
        code: 1,
        message: 'Upload successful',
        data: {
          hash,
          index: Number(suffix.split('.')[0]),
        },
      });
    });
  });
})

3. Merge files

router.get('/upload/finish', async (ctx, next) => {
  const {
    hash
  } = ctx.query;
  const filePath = `${uploadDir}${hash}`;
  const fileList = fs.readdirSync(filePath);
  let suffix = '';
  fileList.sort((a, b) => {
      let reg = /_(\d+)/;
      return reg.exec(a)[1] - reg.exec(b)[1];
    }).forEach((item) => {
      suffix = /\.([0-9a-zA-Z]+)$/.exec(item)[1] || null;
      fs.appendFileSync(`${path.join(__dirname, '../public/file/')}${hash}.${suffix}`, fs.readFileSync(`${filePath}/${item}`));
      fs.existsSync(`${filePath}/${item}`) && fs.unlinkSync(`${filePath}/${item}`);
    });
  fs.existsSync(filePath) && fs.rmdirSync(filePath);  // Remove the now-empty chunk folder
  ctx.body = {
    code: 1,
    msg: 'Upload successful'
  };
})
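One subtlety in the merge: the chunks must be concatenated in index order, and a plain lexicographic sort would put `hash_10` before `hash_2`, which is why the route sorts with a numeric regex. The ordering in isolation (a sketch; `sortChunks` is an extracted helper, not a separate function in the route):

```javascript
// Sort chunk filenames by their numeric index, not lexicographically
function sortChunks(fileList) {
    const reg = /_(\d+)/;
    return [...fileList].sort((a, b) => reg.exec(a)[1] - reg.exec(b)[1]);
}

// Index order 0, 2, 10 — a plain .sort() would give 0, 10, 2
console.log(sortChunks(['abc_10.png', 'abc_2.png', 'abc_0.png']));
```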

The resumable upload feature is now fully implemented: the upload can be paused and resumed manually while in progress.

Even after refreshing the page, re-selecting the unfinished file continues the upload from where it left off.

Welcome to visit my personal blog: www.dengzhanyong.com (posts appear there first).