Preface
It's been a busy season for interviews and interview write-ups. I don't particularly want to chase the hype, but I couldn't think of a better headline.
I was actually asked this question in an interview, as an online coding exercise. Although my approach was on the right track, unfortunately I didn't manage to finish it at the time.
Afterwards I took some time to sort out my thoughts: how do you upload a large file, and how do you implement resumable upload on top of it?
This article builds a front end and a server from scratch to implement a demo of large file upload with resumable support.
Front end: Vue + Element UI
Server: Node.js
If you spot any mistakes in this article, please point them out and I will correct them promptly. If you know a better way to do any of this, feel free to leave a comment.
Uploading large files
The overall idea
The front end
Most articles on large file upload already give the front-end solution. At its core is Blob.prototype.slice, which, similar to the array slice method, returns a slice of the original file.
With it we can split the file into chunks according to a preset maximum chunk size, then use HTTP's concurrency to upload multiple chunks at the same time. Instead of transferring one huge file we transfer many small chunks at once, which can greatly reduce the upload time.
Also, because of the concurrency, chunks may arrive at the server out of order, so we need to record an index for each chunk.
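For instance, the core call looks like this (a minimal sketch; `file` here stands for a File object obtained from an `<input type="file">`, and the real chunking helper, createFileChunk, appears later):

```js
// File inherits slice() from Blob; the result is a Blob covering that byte range
const CHUNK_SIZE = 10 * 1024 * 1024; // 10 MB
const firstChunk = file.slice(0, CHUNK_SIZE);
console.log(firstChunk.size); // at most 10 MB
```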
The server side
The server is responsible for receiving the chunks and merging them once all of them have arrived.
This raises two more questions:
- When should the chunks be merged, i.e. how does the server know that all chunks have been transferred?
- How should the chunks be merged?
For the first question, the front end can cooperate by sending the total number of chunks along with every chunk; the server then merges automatically once it has received that many. Alternatively, the front end can send an extra request that explicitly tells the server to merge.
For the second question, we can use Node.js readable/writable streams (readStream/writeStream) to pipe the stream of every chunk into the stream of the final file.
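To make the stream idea concrete, here is a bare-bones sketch (the file names are hypothetical; the real merge logic appears in the server section below):

```js
const fs = require("fs");

// pipe one chunk's readable stream into the final file's writable stream
const writeStream = fs.createWriteStream("./target-file");
fs.createReadStream("./chunk-0").pipe(writeStream);
```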
Talk is cheap, show me the code. Let's implement the ideas above.
The front-end part
The front end uses Vue as the development framework. The interface has no special requirements (plain HTML would work too); Element UI is used as the UI framework for aesthetics.
File upload
Start by creating an input control for selecting a file, listening to its change event, plus an upload button:
<template>
  <div>
    <input type="file" @change="handleFileChange" />
    <el-button @click="handleUpload">upload</el-button>
  </div>
</template>

<script>
export default {
  data: () => ({
    container: {
      file: null
    }
  }),
  methods: {
    handleFileChange(e) {
      const [file] = e.target.files;
      if (!file) return;
      // reset the component state before storing the newly selected file
      Object.assign(this.$data, this.$options.data());
      this.container.file = file;
    },
    async handleUpload() {}
  }
};
</script>
Request logic
For generality, we don't use a third-party request library; instead we write a simple wrapper around the native XMLHttpRequest to make requests:
request({
  url,
  method = "post",
  data,
  headers = {},
  requestList
}) {
  return new Promise(resolve => {
    const xhr = new XMLHttpRequest();
    xhr.open(method, url);
    Object.keys(headers).forEach(key =>
      xhr.setRequestHeader(key, headers[key])
    );
    xhr.send(data);
    xhr.onload = e => {
      resolve({
        data: e.target.response
      });
    };
  });
}
Uploading the chunks
Next comes the important upload function, which needs to do two things:
- Split the file into chunks
- Upload the chunks to the server
<template>
  <div>
    <input type="file" @change="handleFileChange" />
    <el-button @click="handleUpload">upload</el-button>
  </div>
</template>

<script>
+ const SIZE = 10 * 1024 * 1024; // chunk size

export default {
  data: () => ({
    container: {
      file: null
    },
+   data: []
  }),
  methods: {
    request() {},
    handleFileChange() {},
+   // generate the file chunks
+   createFileChunk(file, size = SIZE) {
+     const fileChunkList = [];
+     let cur = 0;
+     while (cur < file.size) {
+       fileChunkList.push({ file: file.slice(cur, cur + size) });
+       cur += size;
+     }
+     return fileChunkList;
+   },
+   // upload the chunks
+   async uploadChunks() {
+     const requestList = this.data
+       .map(({ chunk, hash }) => {
+         const formData = new FormData();
+         formData.append("chunk", chunk);
+         formData.append("hash", hash);
+         formData.append("filename", this.container.file.name);
+         return { formData };
+       })
+       .map(async ({ formData }) =>
+         this.request({
+           url: "http://localhost:3000",
+           data: formData
+         })
+       );
+     await Promise.all(requestList); // upload all chunks concurrently
+   },
+   async handleUpload() {
+     if (!this.container.file) return;
+     const fileChunkList = this.createFileChunk(this.container.file);
+     this.data = fileChunkList.map(({ file }, index) => ({
+       chunk: file,
+       hash: this.container.file.name + "-" + index // file name + array index
+     }));
+     await this.uploadChunks();
+   }
  }
};
</script>
When the upload button is clicked, createFileChunk is called to split the file into chunks. The number of chunks is controlled by the chunk size, which is set to 10 MB here, meaning a 100 MB file is split into 10 chunks.
createFileChunk uses a while loop and the slice method to collect the chunks in an array, fileChunkList, and returns it.
When generating the chunks, each one needs an identifier to serve as its hash. For now we use file name + index, so that the back end knows which chunk it is looking at when it later merges them.
Then uploadChunks is called to upload all the chunks: each chunk, its hash, and the file name are put into a FormData object, request is called to get back a promise, and finally Promise.all uploads all the chunks concurrently.
Send a merge request
Here we use the second merge strategy from the overall idea: the front end actively notifies the server to merge. The front end sends an extra request, and the server merges the chunks when it receives it:
<template>
  <div>
    <input type="file" @change="handleFileChange" />
    <el-button @click="handleUpload">upload</el-button>
  </div>
</template>

<script>
export default {
  data: () => ({
    container: {
      file: null
    },
    data: []
  }),
  methods: {
    request() {},
    handleFileChange() {},
    createFileChunk() {},
    // upload the chunks
    async uploadChunks() {
      const requestList = this.data
        .map(({ chunk, hash }) => {
          const formData = new FormData();
          formData.append("chunk", chunk);
          formData.append("hash", hash);
          formData.append("filename", this.container.file.name);
          return { formData };
        })
        .map(async ({ formData }) =>
          this.request({
            url: "http://localhost:3000",
            data: formData
          })
        );
      await Promise.all(requestList);
+     // merge the chunks
+     await this.mergeRequest();
    },
+   async mergeRequest() {
+     await this.request({
+       url: "http://localhost:3000/merge",
+       headers: {
+         "content-type": "application/json"
+       },
+       data: JSON.stringify({
+         filename: this.container.file.name
+       })
+     });
+   },
    async handleUpload() {}
  }
};
</script>
The server part
Use Node's built-in http module to set up a simple server:
const http = require("http");
const server = http.createServer();

server.on("request", async (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Headers", "*");
  if (req.method === "OPTIONS") {
    res.statusCode = 200;
    res.end();
    return;
  }
});

server.listen(3000, () => console.log("listening on port 3000"));
Accepting the chunks
Use the multiparty package to process the FormData sent by the front end.
In the multiparty.parse callback, the files parameter holds the files from the FormData, and the fields parameter holds the non-file fields:
const http = require("http");
const path = require("path");
const fse = require("fs-extra");
const multiparty = require("multiparty");

const server = http.createServer();
+ const UPLOAD_DIR = path.resolve(__dirname, "..", "target"); // directory where large files are stored

server.on("request", async (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Headers", "*");
  if (req.method === "OPTIONS") {
    res.statusCode = 200;
    res.end();
    return;
  }
+ const multipart = new multiparty.Form();
+ multipart.parse(req, async (err, fields, files) => {
+   if (err) {
+     return;
+   }
+   const [chunk] = files.chunk;
+   const [hash] = fields.hash;
+   const [filename] = fields.filename;
+   const chunkDir = path.resolve(UPLOAD_DIR, filename);
+   // create the chunk directory if it does not exist yet
+   if (!fse.existsSync(chunkDir)) {
+     await fse.mkdirs(chunkDir);
+   }
+   // fse.move is an fs-extra method similar to fs.rename, but cross-platform
+   // (fs.rename has permission issues on Windows)
+   // https://github.com/meteor/meteor/issues/7852#issuecomment-255767835
+   await fse.move(chunk.path, `${chunkDir}/${hash}`);
+   res.end("received file chunk");
+ });
});

server.listen(3000, () => console.log("listening on port 3000"));
The multiparty documentation uses fs.rename to move the temporary file, i.e. to move the file chunk. Since fs.rename has permission issues on Windows, and I'm already using fs-extra, it is replaced here with fse.move.
Before accepting chunks, a folder to store them has to be created first. Because the front end sends a unique hash with every chunk, the hash is used as the file name when the chunk is moved from its temporary path into the chunk folder. The final result looks like this (original screenshot omitted):
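As an illustration, with a hypothetical file named test.mp4, the target directory would contain:

```
target/
└── test.mp4/
    β”œβ”€β”€ test.mp4-0
    β”œβ”€β”€ test.mp4-1
    β”œβ”€β”€ test.mp4-2
    └── ...
```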
Merging the chunks
After receiving the merge request from the front end, the server merges all the chunks in that file's folder:
const http = require("http");
const path = require("path");
const fse = require("fs-extra");

const server = http.createServer();
const UPLOAD_DIR = path.resolve(__dirname, "..", "target"); // directory where large files are stored

+ const resolvePost = req =>
+   new Promise(resolve => {
+     let chunk = "";
+     req.on("data", data => {
+       chunk += data;
+     });
+     req.on("end", () => {
+       resolve(JSON.parse(chunk));
+     });
+   });

+ const pipeStream = (path, writeStream) =>
+   new Promise(resolve => {
+     const readStream = fse.createReadStream(path);
+     readStream.on("end", () => {
+       fse.unlinkSync(path); // delete the chunk file once it has been fully read
+       resolve();
+     });
+     readStream.pipe(writeStream);
+   });

+ // merge the chunks
+ const mergeFileChunk = async (filePath, filename, size) => {
+   const chunkDir = path.resolve(UPLOAD_DIR, filename);
+   const chunkPaths = await fse.readdir(chunkDir);
+   // sort by chunk index, otherwise reading the directory
+   // may yield the chunks out of order
+   chunkPaths.sort((a, b) => a.split("-")[1] - b.split("-")[1]);
+   await Promise.all(
+     chunkPaths.map((chunkPath, index) =>
+       pipeStream(
+         path.resolve(chunkDir, chunkPath),
+         // create a writable stream positioned at this chunk's offset
+         fse.createWriteStream(filePath, {
+           start: index * size,
+           end: (index + 1) * size
+         })
+       )
+     )
+   );
+   fse.rmdirSync(chunkDir); // remove the chunk directory after merging
+ };

server.on("request", async (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Headers", "*");
  if (req.method === "OPTIONS") {
    res.statusCode = 200;
    res.end();
    return;
  }
+ if (req.url === "/merge") {
+   const data = await resolvePost(req);
+   const { filename, size } = data;
+   const filePath = path.resolve(UPLOAD_DIR, `${filename}`);
+   await mergeFileChunk(filePath, filename, size); // pass size through so each chunk can be positioned
+   res.end(
+     JSON.stringify({
+       code: 0,
+       message: "file merged success"
+     })
+   );
+ }
});

server.listen(3000, () => console.log("listening on port 3000"));
Because the front end sends the file name with the merge request, the server can use it to find the chunk folder created in the previous step.
Then fs.createWriteStream is used to create a writable stream; the target file name is the chunk folder name plus the extension.
All chunks in the folder are then traversed: each one gets a readable stream via fs.createReadStream, which is piped and merged into the target file's stream.
Note the start/end options in createWriteStream's second argument. The goal is to merge multiple readable streams into the writable stream concurrently; with an explicit offset, each stream lands in the correct position even if they finish in a different order. That is why the front end also needs to provide a size parameter with the merge request:
async mergeRequest() {
  await this.request({
    url: "http://localhost:3000/merge",
    headers: {
      "content-type": "application/json"
    },
    data: JSON.stringify({
+     size: SIZE,
      filename: this.container.file.name
    })
  });
},
Alternatively, you could merge the chunks sequentially, waiting for one chunk to finish before piping the next; then no position needs to be specified, but the merge is slower, which is why the concurrent approach is used here. You then just need to make sure each chunk file is deleted after it is merged, and delete the chunk folder once all chunks are merged.
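For reference, a minimal sketch of that sequential alternative, reusing the pipeStream helper and UPLOAD_DIR from above (the function name is my own):

```js
// merge chunks one by one; the "a" (append) flag writes each chunk
// right after the previous one, so no start offset is needed
const mergeFileChunkSequentially = async (filePath, filename) => {
  const chunkDir = path.resolve(UPLOAD_DIR, filename);
  const chunkPaths = await fse.readdir(chunkDir);
  chunkPaths.sort((a, b) => a.split("-")[1] - b.split("-")[1]);
  for (const chunkPath of chunkPaths) {
    await pipeStream(
      path.resolve(chunkDir, chunkPath),
      fse.createWriteStream(filePath, { flags: "a" })
    );
  }
  fse.rmdirSync(chunkDir);
};
```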
At this point a simple large file upload is complete. Next, let's extend it with some additional features.
Displaying the upload progress bar
There are two kinds of upload progress: the progress of each chunk, and the progress of the whole file. Since the file's progress is computed from the chunks' progress, we implement the chunk progress bars first.
Chunk progress bar
XMLHttpRequest natively supports monitoring upload progress; we just need to listen to upload.onprogress. We add an onProgress parameter to the original request method so callers can register a listener on the XMLHttpRequest:
// xhr
request({
  url,
  method = "post",
  data,
  headers = {},
+ onProgress = e => e,
  requestList
}) {
  return new Promise(resolve => {
    const xhr = new XMLHttpRequest();
+   xhr.upload.onprogress = onProgress;
    xhr.open(method, url);
    Object.keys(headers).forEach(key =>
      xhr.setRequestHeader(key, headers[key])
    );
    xhr.send(data);
    xhr.onload = e => {
      resolve({
        data: e.target.response
      });
    };
  });
}
Since each chunk needs to fire its own listener, we also need a factory function that returns a different handler for each chunk passed in.
Here is the listener-related addition to the original front-end upload logic:
async uploadChunks() {
  const requestList = this.data
+   .map(({ chunk, hash, index }) => {
      const formData = new FormData();
      formData.append("chunk", chunk);
      formData.append("hash", hash);
      formData.append("filename", this.container.file.name);
+     return { formData, index };
    })
+   .map(async ({ formData, index }) =>
      this.request({
        url: "http://localhost:3000",
        data: formData,
+       onProgress: this.createProgressHandler(this.data[index])
      })
    );
  await Promise.all(requestList);
  // merge the chunks
  await this.mergeRequest();
},
async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
  this.data = fileChunkList.map(({ file }, index) => ({
    chunk: file,
+   index,
    hash: this.container.file.name + "-" + index,
+   size: file.size, // needed later to compute the overall progress
+   percentage: 0
  }));
  await this.uploadChunks();
},
+ createProgressHandler(item) {
+   return e => {
+     item.percentage = parseInt(String((e.loaded / e.total) * 100));
+   };
+ }
As each chunk uploads, its listener updates the percentage property of the corresponding element in the data array, and the view then renders the data array.
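For example, a minimal way to render it with Element UI's table (this markup is my own sketch, not the demo's exact view):

```html
<!-- one row per chunk: hash, size and a progress bar -->
<el-table :data="data">
  <el-table-column prop="hash" label="chunk hash" align="center"></el-table-column>
  <el-table-column label="size (KB)" align="center" width="120">
    <template slot-scope="{ row }">{{ Math.floor(row.size / 1024) }}</template>
  </el-table-column>
  <el-table-column label="progress" align="center">
    <template slot-scope="{ row }">
      <el-progress :percentage="row.percentage"></el-progress>
    </template>
  </el-table-column>
</el-table>
```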
File progress bar
The upload progress of the whole file is the sum of the uploaded portion of every chunk divided by the size of the whole file, so a Vue computed property is used here:
computed: {
  uploadPercentage() {
    if (!this.container.file || !this.data.length) return 0;
    const loaded = this.data
      .map(item => item.size * item.percentage)
      .reduce((acc, cur) => acc + cur);
    return parseInt((loaded / this.container.file.size).toFixed(2));
  }
}
The final view looks like this (original screenshot omitted): a per-chunk progress table plus an overall progress bar.
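The overall bar can be bound directly to the computed property, e.g. (again a sketch of my own):

```html
<el-progress :percentage="uploadPercentage"></el-progress>
```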
Resumable upload
The principle of resumable upload is that the front end/server remembers which chunks have already been uploaded, so the next upload can skip them. There are two ways to implement this memory:
- The front end records the hashes of uploaded chunks in localStorage
- The server saves the hashes of uploaded chunks, and the front end asks the server for them before each upload
The first is a front-end solution, the second a server-side one. The front-end approach has a drawback: if the user switches browsers, the memory is lost, so I chose the latter.
Generating the hash
Both the front end and the server must generate the hash of the file and of its chunks. Previously we used file name + chunk index as the chunk hash, but that stops working as soon as the file is renamed. In fact, as long as the file content is unchanged, the hash should stay the same, so the correct approach is to generate the hash from the file content. Let's change the hash generation rule.
Here we use another library, spark-md5, which computes a file's hash from its content. For a large file, reading the whole content and computing the hash is time-consuming and blocks the UI, freezing the page, so we use a web worker to compute the hash in a worker thread; the user can keep interacting with the main page normally.
When instantiating a web worker, the constructor's argument is a JS file path and it cannot be cross-origin, so we create a separate hash.js file and put it in the public directory. Also, DOM access is not allowed inside the worker, but it does provide the importScripts function for importing external scripts, which we use to import spark-md5:
// /public/hash.js
self.importScripts("/spark-md5.min.js"); // import the script

// generate the hash of the file
self.onmessage = e => {
  const { fileChunkList } = e.data;
  const spark = new self.SparkMD5.ArrayBuffer();
  let percentage = 0;
  let count = 0;
  const loadNext = index => {
    const reader = new FileReader();
    reader.readAsArrayBuffer(fileChunkList[index].file);
    reader.onload = e => {
      count++;
      spark.append(e.target.result);
      if (count === fileChunkList.length) {
        self.postMessage({
          percentage: 100,
          hash: spark.end()
        });
        self.close();
      } else {
        percentage += 100 / fileChunkList.length;
        self.postMessage({
          percentage
        });
        // recursively process the next chunk
        loadNext(count);
      }
    };
  };
  loadNext(0);
};
In the worker thread, we receive the file chunks fileChunkList, read each chunk's ArrayBuffer with FileReader, and keep appending it to spark-md5. After each chunk is processed, a progress event is posted to the main thread via postMessage, and when all chunks are done the final hash is sent to the main thread.
Note that spark-md5 must be fed every chunk to compute a single hash; don't put the whole file into the computation directly, otherwise even different files can end up with the same hash (see the spark-md5 documentation).
Then write the logic for communication between the main thread and the worker thread:
+ // generate the file hash (in a web worker)
+ calculateHash(fileChunkList) {
+   return new Promise(resolve => {
+     // add a worker property to the container
+     this.container.worker = new Worker("/hash.js");
+     this.container.worker.postMessage({ fileChunkList });
+     this.container.worker.onmessage = e => {
+       const { percentage, hash } = e.data;
+       this.hashPercentage = percentage;
+       if (hash) {
+         resolve(hash);
+       }
+     };
+   });
+ },
async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
+ this.container.hash = await this.calculateHash(fileChunkList);
  this.data = fileChunkList.map(({ file }, index) => ({
+   fileHash: this.container.hash,
    index,
    chunk: file,
    hash: this.container.file.name + "-" + index, // file name + array index
    size: file.size,
    percentage: 0
  }));
  await this.uploadChunks();
}
The main thread uses postMessage to pass all the chunks (fileChunkList) to the worker thread, listens for the worker's postMessage events, and receives the file hash.
Add a bar that shows the hash-computing progress, and it looks like this (screenshot omitted).
At this point, the front end needs to replace the hash it previously derived from the file name with the hash returned by the worker.
The server uses the file hash as the chunk folder name, hash + index as the chunk name, and hash + extension as the final file name; no other new logic is needed.
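A sketch of the chunk handler adjusted to that naming scheme (this assumes the front end now also appends a fileHash field to the FormData, as the uploadChunks code later in this article does):

```js
// inside multipart.parse(req, async (err, fields, files) => { ... })
const [chunk] = files.chunk;
const [hash] = fields.hash;         // "<fileHash>-<index>"
const [fileHash] = fields.fileHash; // hash of the whole file content
// chunks are now grouped by the file hash instead of the file name
const chunkDir = path.resolve(UPLOAD_DIR, fileHash);
if (!fse.existsSync(chunkDir)) {
  await fse.mkdirs(chunkDir);
}
await fse.move(chunk.path, path.resolve(chunkDir, hash));
res.end("received file chunk");
```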
Instant upload
Before implementing resume, let's briefly introduce instant upload.
"Instant upload" means the uploaded resource already exists on the server, so when the user uploads it again we can directly report that the upload succeeded.
Instant upload relies on the hash generated in the previous step: before uploading, the front end computes the hash and sends it to the server for verification. Because the hash is practically unique, as soon as the server finds a file with the same hash, it directly responds that the upload has already succeeded:
+ async verifyUpload(filename, fileHash) {
+   const { data } = await this.request({
+     url: "http://localhost:3000/verify",
+     headers: {
+       "content-type": "application/json"
+     },
+     data: JSON.stringify({
+       filename,
+       fileHash
+     })
+   });
+   return JSON.parse(data);
+ },
async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
  this.container.hash = await this.calculateHash(fileChunkList);
+ const { shouldUpload } = await this.verifyUpload(
+   this.container.file.name,
+   this.container.hash
+ );
+ if (!shouldUpload) {
+   this.$message.success("instant upload: success");
+   return;
+ }
  this.data = fileChunkList.map(({ file }, index) => ({
    fileHash: this.container.hash,
    index,
    hash: this.container.hash + "-" + index,
    chunk: file,
    size: file.size,
    percentage: 0
  }));
  await this.uploadChunks();
}
Instant upload is really just a bit of sleight of hand from the user's point of view: in essence, nothing is uploaded at all.
The server-side logic is very simple: add a verification endpoint that checks whether the file already exists:
+ const extractExt = filename =>
+   filename.slice(filename.lastIndexOf("."), filename.length); // extract the file extension

const UPLOAD_DIR = path.resolve(__dirname, "..", "target"); // directory where large files are stored

const resolvePost = req =>
  new Promise(resolve => {
    let chunk = "";
    req.on("data", data => {
      chunk += data;
    });
    req.on("end", () => {
      resolve(JSON.parse(chunk));
    });
  });

server.on("request", async (req, res) => {
  if (req.url === "/verify") {
+   const data = await resolvePost(req);
+   const { fileHash, filename } = data;
+   const ext = extractExt(filename);
+   const filePath = path.resolve(UPLOAD_DIR, `${fileHash}${ext}`);
+   if (fse.existsSync(filePath)) {
+     res.end(
+       JSON.stringify({
+         shouldUpload: false
+       })
+     );
+   } else {
+     res.end(
+       JSON.stringify({
+         shouldUpload: true
+       })
+     );
+   }
  }
});

server.listen(3000, () => console.log("listening on port 3000"));
Pausing the upload
With hash generation and instant upload done, let's come back to resumable upload.
Resumable upload is breakpoint + resume, so the first step is the "breakpoint": pausing the upload.
The XMLHttpRequest abort method can cancel an xhr request. To use it, we need to save the xhr object of every chunk that is being uploaded. Let's modify the request method again:
request({
  url,
  method = "post",
  data,
  headers = {},
  onProgress = e => e,
+ requestList
}) {
  return new Promise(resolve => {
    const xhr = new XMLHttpRequest();
    xhr.upload.onprogress = onProgress;
    xhr.open(method, url);
    Object.keys(headers).forEach(key =>
      xhr.setRequestHeader(key, headers[key])
    );
    xhr.send(data);
    xhr.onload = e => {
+     // remove this xhr from the list once the chunk has been uploaded
+     if (requestList) {
+       const xhrIndex = requestList.findIndex(item => item === xhr);
+       requestList.splice(xhrIndex, 1);
+     }
      resolve({
        data: e.target.response
      });
    };
+   // expose the in-flight xhr to the caller
+   requestList?.push(xhr);
  });
},
When a requestList array is passed in as an argument, the request method stores every xhr it creates in that array.
Whenever a chunk finishes uploading, its xhr is removed from requestList, so requestList only ever holds the xhrs of chunks that are still uploading.
Then add a pause button; when it is clicked, the abort method of every xhr saved in requestList is called, cancelling all in-flight chunk uploads, and the list is cleared:
handlePause() {
  this.requestList.forEach(xhr => xhr?.abort());
  this.requestList = [];
}
Click the pause button and you can see in the network panel that the xhr requests have been cancelled (screenshot omitted).
Resuming the upload
When introducing resumable upload, we mentioned using the second, server-side storage approach to implement resuming.
Once some chunks of a file have been uploaded, the server has a folder storing all the uploaded chunks. So before each upload, the front end can call an interface, the server returns the names of the already-uploaded chunks, and the front end skips uploading them, which gives the "resume" effect.
This interface can be merged with the instant-upload verification interface from before: the front end sends a verification request before every upload, and gets back one of two results:
- The file already exists on the server, so there is no need to upload it again
- The file does not exist on the server, or some chunks have already been uploaded; the server tells the front end to upload and returns the names of the uploaded chunks
So let's modify the instant-upload verification endpoint on the server:
const extractExt = filename =>
  filename.slice(filename.lastIndexOf("."), filename.length);
const UPLOAD_DIR = path.resolve(__dirname, "..", "target");
const resolvePost = req =>
  new Promise(resolve => {
    let chunk = "";
    req.on("data", data => {
      chunk += data;
    });
    req.on("end", () => {
      resolve(JSON.parse(chunk));
    });
  });

+ // return the list of uploaded chunk names
+ const createUploadedList = async fileHash =>
+   fse.existsSync(path.resolve(UPLOAD_DIR, fileHash))
+     ? await fse.readdir(path.resolve(UPLOAD_DIR, fileHash))
+     : [];

server.on("request", async (req, res) => {
  if (req.url === "/verify") {
    const data = await resolvePost(req);
    const { fileHash, filename } = data;
    const ext = extractExt(filename);
    const filePath = path.resolve(UPLOAD_DIR, `${fileHash}${ext}`);
    if (fse.existsSync(filePath)) {
      res.end(
        JSON.stringify({
          shouldUpload: false
        })
      );
    } else {
      res.end(
        JSON.stringify({
          shouldUpload: true,
+         uploadedList: await createUploadedList(fileHash)
        })
      );
    }
  }
});

server.listen(3000, () => console.log("listening on port 3000"));
Back on the front end, the verification interface needs to be called in two places:
- When the upload button is clicked, to check whether the upload is needed at all and which chunks are already uploaded
- When resume is clicked after a pause, to get back the list of uploaded chunks
Add a resume button and modify the original chunk-upload logic:
<template>
  <div id="app">
    <input type="file" @change="handleFileChange" />
    <el-button @click="handleUpload">upload</el-button>
    <el-button @click="handlePause">pause</el-button>
+   <el-button @click="handleResume">resume</el-button>
    <!-- ... -->
  </div>
</template>

+ async handleResume() {
+   const { uploadedList } = await this.verifyUpload(
+     this.container.file.name,
+     this.container.hash
+   );
+   await this.uploadChunks(uploadedList);
+ },
async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
  this.container.hash = await this.calculateHash(fileChunkList);
+ const { shouldUpload, uploadedList } = await this.verifyUpload(
    this.container.file.name,
    this.container.hash
  );
  if (!shouldUpload) {
    this.$message.success("instant upload: success");
    return;
  }
  this.data = fileChunkList.map(({ file }, index) => ({
    fileHash: this.container.hash,
    index,
    hash: this.container.hash + "-" + index,
    chunk: file,
    size: file.size,
    percentage: 0
  }));
+ await this.uploadChunks(uploadedList);
},
// upload the chunks, skipping the ones that are already uploaded
+ async uploadChunks(uploadedList = []) {
  const requestList = this.data
+   .filter(({ hash }) => !uploadedList.includes(hash))
    .map(({ chunk, hash, index }) => {
      const formData = new FormData();
      formData.append("chunk", chunk);
      formData.append("hash", hash);
      formData.append("filename", this.container.file.name);
      formData.append("fileHash", this.container.hash);
      return { formData, index };
    })
    .map(async ({ formData, index }) =>
      this.request({
        url: "http://localhost:3000",
        data: formData,
        onProgress: this.createProgressHandler(this.data[index]),
        requestList: this.requestList
      })
    );
  await Promise.all(requestList);
  // previously uploaded chunks + chunks uploaded this time = all chunks
+ if (uploadedList.length + requestList.length === this.data.length) {
    // merge the chunks
    await this.mergeRequest();
+ }
}
An uploadedList parameter has been added to the chunk-uploading function; it is the list of uploaded chunk names returned by the server, and the already-uploaded chunks are filtered out with filter.
At this point, the resumable upload feature is basically complete.
Progress bar improvements
Although resumable upload now works, the display rules of the progress bars need adjusting, otherwise the bars misbehave when the upload is paused or when already-uploaded chunks come back from the server.
Chunk progress bar
Since the verification interface returns the uploaded chunks when upload/resume is clicked, the progress of those chunks must be set to 100%:
async handleUpload() {
  if (!this.container.file) return;
  const fileChunkList = this.createFileChunk(this.container.file);
  this.container.hash = await this.calculateHash(fileChunkList);
  const { shouldUpload, uploadedList } = await this.verifyUpload(
    this.container.file.name,
    this.container.hash
  );
  if (!shouldUpload) {
    this.$message.success("instant upload: success");
    return;
  }
  this.data = fileChunkList.map(({ file }, index) => ({
    fileHash: this.container.hash,
    index,
    hash: this.container.hash + "-" + index,
    chunk: file,
    size: file.size,
+   // uploadedList holds chunk names (hashes), so check membership by hash
+   percentage: uploadedList.includes(this.container.hash + "-" + index) ? 100 : 0
  }));
  await this.uploadChunks(uploadedList);
},
uploadedList holds the names of the uploaded chunks, so while mapping over all the chunks we can check whether each one is already in uploadedList.
File progress bar
Earlier I said the file progress bar is a computed property based on the upload progress of all chunks, and this causes a problem.
Clicking pause cancels and clears the xhr requests of the in-flight chunks; if part of a chunk had already been uploaded, the file progress bar goes backwards.
When resume is clicked, the chunks' progress is reset to zero because their xhr requests are recreated, so the overall progress bar regresses again.
The solution is to create a "fake" progress bar that is based on the file progress bar but only ever stops or grows, and show that one to the user.
Here we use a Vue watcher:
data: () => ({
+ fakeUploadPercentage: 0
}),
computed: {
  uploadPercentage() {
    if (!this.container.file || !this.data.length) return 0;
    const loaded = this.data
      .map(item => item.size * item.percentage)
      .reduce((acc, cur) => acc + cur);
    return parseInt((loaded / this.container.file.size).toFixed(2));
  }
},
watch: {
+ uploadPercentage(now) {
+   if (now > this.fakeUploadPercentage) {
+     this.fakeUploadPercentage = now;
+   }
+ }
},
Whenever the real uploadPercentage increases, fakeUploadPercentage increases with it; once the real progress goes backwards, the fake progress bar simply stops.
At this point, the solution for large file upload + resumable upload is complete.
Conclusion
Uploading large files
- The front end uses Blob.prototype.slice to split the large file into chunks, uploads multiple chunks concurrently, and finally sends a merge request to tell the server to merge the chunks
- The server receives the chunks and stores them; after receiving the merge request it uses streams to merge the chunks into the final file
- The native XMLHttpRequest's upload.onprogress monitors the upload progress of each chunk
- A Vue computed property derives the whole file's upload progress from the progress of each chunk
Resumable upload
- Use spark-md5 to compute the file's hash from its content
- With the hash, ask the server whether the file has already been uploaded; if so, directly report success (instant upload)
- Pause chunk uploads with the XMLHttpRequest abort method
- Before uploading, the server returns the names of the already-uploaded chunks, and the front end skips uploading them
Problems from feedback
Some features weren't easy to test, so here are some questions collected from the comment section. Interested readers can share their ideas or write a demo to discuss further; a rough retry sketch for the first item follows the list.
- Failed chunk uploads are not retried or otherwise handled
- Use a WebSocket so the server can push progress information
- Already-uploaded chunks are not fetched automatically when the page opens; they only show up after the user clicks upload again
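As a starting point for the first item, here is a rough chunk-level retry sketch (my own addition; it assumes the request wrapper is extended to reject on xhr.onerror, which the demo's version does not do yet):

```js
// retry an async upload a few times before letting Promise.all reject
const requestWithRetry = async (uploadFn, retries = 3) => {
  for (let i = 0; i < retries; i++) {
    try {
      return await uploadFn();
    } catch (err) {
      if (i === retries - 1) throw err; // out of retries: surface the error
    }
  }
};

// usage sketch: wrap each chunk request before handing it to Promise.all
// requestWithRetry(() => this.request({ url: "http://localhost:3000", data: formData }))
```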
The source code
The source code adds some button states, so the interaction is friendlier; wherever the article reads obscurely, you can jump to the source code:
file-upload
Thanks for reading!
ByteDance's EA (Enterprise Application) front-end team is hiring
Based in Shanghai/Beijing, both campus and experienced hires, with no cap on head count. If you're interested, send your resume to [email protected]; campus referral code Q7QUGMV
References
- Various file upload strategies for front-end beginners, from small images to large-file resumable upload
- Blob.slice