When I saw an interview question that asked for an implementation of resumable upload (breakpoint continuation), I could sketch the approach in my head, but I didn't fully understand it. Since quite a lot of knowledge is involved, I spent some time implementing a simple version with React and NodeJS, and summarized the approach and the knowledge points used.
In summary, the knowledge points used are as follows:
- Slice the uploaded file using FileReader
- The MD5 algorithm is used to obtain the unique identifier of the file
- Display upload progress with XHR
- Compare file sizes to calculate the starting position of the resumed upload
- Customize the file saving method to ensure that the unfinished file can be saved even in case of abnormal termination
The demo can be downloaded here, and it is recommended to test the breakpoint continuation feature with Chrome's built-in network throttling.
The front-end part
React is used for front-end rendering, axios for network requests, and js-md5 for obtaining a unique identifier for a file.
First, the breakpoint
The breakpoint is based on segmenting the file: on the Web side, the file can be read as an ArrayBuffer with the FileReader class, and segments can be cut out with the slice method on the prototype chain.
```js
const reader = new FileReader()
reader.readAsArrayBuffer(uploadedFile)
```
In addition, since the uploaded file is not submitted through an HTML form, uploads on the JS side need the FormData class to encapsulate the upload data.
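As a minimal sketch of these two pieces together (the chunk size, field name, and /upload endpoint are assumptions here, not the demo's exact values), each segment can be cut straight from the File with Blob's slice method and wrapped in FormData:

```js
// assumed chunk size; the real demo may use a different value
const CHUNK_SIZE = 1024 * 1024 // 1 MB per slice

async function uploadInChunks(file, startByte = 0) {
  for (let offset = startByte; offset < file.size; offset += CHUNK_SIZE) {
    // slice one segment out of the File (a Blob) without copying the rest
    const chunk = file.slice(offset, offset + CHUNK_SIZE)
    const formData = new FormData()
    formData.append('file', chunk, file.name)
    // hypothetical endpoint, handled by the Koa middleware shown later
    await axios.post('/upload', formData)
  }
}
```

The startByte parameter is what makes resuming possible: on a fresh upload it is 0, and on a resume it is the size the server already has.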
Then, the continuation
The premise of resuming is being able to tell whether a re-uploaded file is the same as the previous one; in other words, a unique identifier for the file must be obtained first. The MD5 algorithm meets this requirement, so I used the third-party js-md5.
In addition, we need to know where to resume from. I provide an API on the back end that looks up the size of the partially uploaded file by the file's md5 value; the front end calls it before re-uploading and calculates the resume start position by comparison.
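A sketch of that pre-upload check might look like this (it assumes the temp file on the server is keyed by the file's md5, matching the /get-tmp-file-size route shown later):

```js
import md5 from 'js-md5'

// read the file into an ArrayBuffer so js-md5 can hash it
function readAsArrayBuffer(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.onload = () => resolve(reader.result)
    reader.onerror = reject
    reader.readAsArrayBuffer(file)
  })
}

async function getResumeOffset(file) {
  const buffer = await readAsArrayBuffer(file)
  const hash = md5(buffer)
  // ask the back end how many bytes of this file it already has
  const { data } = await axios.get('/get-tmp-file-size', {
    params: { name: hash },
  })
  return { hash, offset: data.size }
}
```

The returned offset can then be fed into the chunked-upload sketch above as startByte.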
The implementation of the back end is described below.
Upload progress
At first I wanted to simply use fetch to handle requests, but I soon found that fetch itself was not designed to expose upload progress; at the underlying level this is only available through XHR, so I introduced the XHR-based axios to handle uploads.
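A sketch of what that looks like, reusing the formData from the slicing sketch above (onUploadProgress is axios's hook over XHR's underlying progress event):

```js
axios.post('/upload', formData, {
  onUploadProgress: event => {
    // loaded/total only cover the current request, so add the resume
    // offset when displaying overall progress for a resumed upload
    const percent = Math.round((event.loaded / event.total) * 100)
    console.log(`uploaded ${percent}%`)
  },
})
```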
The back-end part
The back end uses Koa plus the formidable package to handle upload requests. Although formidable is a bit dated and multer is more often recommended online, I compared the two and settled on formidable because it meets the requirements and is well documented.
Custom formidable middleware
For more flexible control over the data-processing part, I implemented a custom Koa middleware by referring to koa-formidable.
```js
import fs from 'fs'
import formidable from 'formidable'

const koaMiddleware = opt => {
  const tempFileDir = `./upload/tmp/`
  // create the temp directory on first run
  if (!fs.existsSync(tempFileDir)) {
    fs.mkdirSync(tempFileDir, { recursive: true })
  }
  return async function (ctx, next) {
    const form = new formidable.IncomingForm()
    // copy caller options onto the form instance
    for (const key in opt) {
      form[key] = opt[key]
    }
    await new Promise((resolve, reject) => {
      form.parse(ctx.req, (err, fields, files) => {
        if (err) {
          reject(err)
        } else {
          ctx.request.body = fields
          ctx.request.files = files
          resolve()
        }
      })
    })
    await next()
  }
}

export default koaMiddleware
```
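For context, here is how such a middleware might be mounted on the upload route (the route path and the option passed are assumptions for this sketch, not the demo's exact code):

```js
import Router from 'koa-router'
import koaMiddleware from './koaMiddleware'

const router = new Router()

// parse the multipart body before the route handler runs
router.post('/upload', koaMiddleware({ keepExtensions: true }), ctx => {
  // fields and files were attached to ctx.request by the middleware
  ctx.body = { files: Object.keys(ctx.request.files || {}) }
})
```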
Accepting segmented data
Formidable itself provides a set of events to handle file uploads, such as fileBegin, file, aborted, end, and so on, but these are not sufficient for special situations when storing segmented data, such as network outages or exceptions. Saving unfinished files is arguably where the real value of breakpoint continuation lies in real-world scenarios, so a more fine-grained receive handler, onPart, is needed.
In the custom onPart handler, the core of saving the file is NodeJS's createWriteStream, using flags: 'a' so that the file is created when it does not exist and appended to when it does.
In addition, the write stream's end method is called on aborted, so that the contents uploaded so far are safely saved in the event of a network exception.
```js
form.onPart = part => {
  const tempFilePath = `${tempFileDir}${part.filename}`
  // append mode: create the file if missing, otherwise add to it
  const writer = fs.createWriteStream(tempFilePath, { flags: 'a' })
  // close the stream on abnormal termination so received data is kept
  form.on('aborted', () => {
    writer.end()
  })
  form.on('end', () => {
    writer.end()
  })
  part.on('data', buffer => {
    writer.write(buffer)
  })
}
```
Querying the current file size
As mentioned in the previous section, before resuming you need to know where to start, by comparing the size of the partially uploaded file with the size of the whole file. The fs.statSync method in NodeJS retrieves the current file's status, and its size property gives the current file size.
These are then exposed to the front end via the API, enabling it to retrieve relevant information before continuing.
```js
router.get('/get-tmp-file-size', async ctx => {
  const { name } = ctx.query
  const filePath = `./upload/tmp/${name}`
  try {
    const instance = fs.statSync(filePath)
    ctx.body = { size: instance.size }
  } catch (err) {
    // file not found: nothing uploaded yet, resume from byte 0
    ctx.body = { size: 0 }
  }
})
```
Finally
Overall, while writing the demo, the knowledge involved gradually exceeded my initial expectations, and digging deeper into any point leads to even more content. Take cancelling an upload as an example: I tried axios's CancelToken in the demo, but found that it only takes effect before the file transfer begins. Once a file is being transferred, it cannot be cancelled, so cancelling an upload in the current demo is done with the simplest page refresh.
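For reference, this is roughly how the CancelToken was wired up (the request shape here is an assumption; as noted, in this demo cancellation only took effect before the transfer began):

```js
const source = axios.CancelToken.source()

axios
  .post('/upload', formData, { cancelToken: source.token })
  .catch(err => {
    if (axios.isCancel(err)) console.log('upload cancelled:', err.message)
  })

// in the cancel button's click handler:
source.cancel('cancelled by user')
```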
From an interview perspective, this is undoubtedly a very good topic, but if you have never touched or paid attention to upload-related features at work, its difficulty under limited time is also obvious. I wish every candidate who runs into this question good luck.