Links to the previous parts of this series:
- 01: Simple drag-and-drop uploads and progress bars
- 02: Binary-level format verification
- 03: Two ways to compute the hash
V1.4: Large file slice upload – slice upload and merge
In the previous part we finished slicing the file and computing its hash. The next step is to upload those slices to the back end. This part isn't difficult, so let's take it step by step.
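For context, `fileChunks` and `fileHash` come from the earlier parts of the series. A minimal sketch of how the chunks might have been produced is shown below; the `CHUNK_SIZE` value and the `createFileChunks` helper name are assumptions, not the original code:

```js
// Assumed slice size; the original uses a CHUNK_SIZE constant later on
const CHUNK_SIZE = 1 * 1024 * 1024; // 1 MB per slice (assumption)

const createFileChunks = (file, size = CHUNK_SIZE) => {
  const fileChunks = [];
  for (let cur = 0; cur < file.size; cur += size) {
    // Blob.prototype.slice returns a new Blob for the given byte range
    fileChunks.push({ fileChunk: file.slice(cur, cur + size) });
  }
  return fileChunks;
};
```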
Start by attaching some metadata to each chunk:
```js
const chunks = fileChunks.map((c, i) => {
  // Slice name, index, hash, and content
  const name = `${fileHash}-${i}`;
  return {
    name,
    index: i,
    hash: fileHash,
    chunk: c.fileChunk,
  };
});
```
The upload logic can be divided into three parts:
- Wrap each slice into a request
- Send the requests with `Promise.all`
- Issue a merge request so the back end can merge the slices
The overall structure of the method then looks like this:
```js
const uploadChunks = async (chunks, hash) => {
  const requests = chunks.map(/* do something */);
  await Promise.all(requests);
  await mergeRequest(hash);
};
```
Obviously, the key part is how to wrap each slice.
This can be done in two steps:
- Put the slice data into a `FormData` object
- Initiate the upload request

Putting that together, the requests are built like this:
```js
const requests = chunks
  .map(({ name, index, hash, chunk }) => {
    const formdata = new FormData();
    formdata.append("name", name);
    formdata.append("index", index);
    formdata.append("hash", hash);
    formdata.append("chunk", chunk);
    return formdata;
  })
  .map((form) => {
    return axios.post("/dev-api/upload", form);
  });
```
With that in place, we can move to the back end and handle receiving the slices:
router.post("/api/v1/upload".async ctx => {
const { chunk } = ctx.request.files;
const { name, hash } = ctx.request.body;
const chunkPath = path.resolve(uploadPath, hash);
if(! fse.existsSync(chunkPath)) {await fse.mkdir(chunkPath);
}
const { path: cachePath } = chunk;
await fse.move(cachePath, `${chunkPath}/${name}`);
ctx.body = { msg: "revice chunks success!" };
});
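Note that `ctx.request.files` is only populated when a multipart body parser is installed. The original server setup isn't shown; here is a minimal sketch assuming a koa-body v4-style import and `@koa/router` (the `maxFileSize` value is an assumption):

```js
const Koa = require("koa");
const Router = require("@koa/router");
const koaBody = require("koa-body");
const path = require("path");

const app = new Koa();
const router = new Router();
// Directory that the upload/merge handlers above resolve paths against
const uploadPath = path.resolve(__dirname, "upload");

app.use(
  koaBody({
    multipart: true, // needed so ctx.request.files gets populated
    formidable: { maxFileSize: 200 * 1024 * 1024 }, // allow large slices (assumption)
  })
);
app.use(router.routes());
app.listen(3000);
```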
The tests are as follows:
With the slices stored on the server, the front end sends a merge request so the back end can stitch them back together.
What parameters does the merge request need?
During slice upload the back end never learns the slice size or the file's extension, both of which are needed for merging. And since the merge is a separate request, the hash has to be sent again from the front end. So the required parameters are: size, hash, and ext.
```js
const mergeRequest = async (hash) => {
  await axios.post("/dev-api/mergefile", {
    ext: getFileExtension(fileRef.value.name),
    size: CHUNK_SIZE,
    hash: hash
  });
};
```
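The `getFileExtension` helper isn't shown in this part; a minimal sketch of what it might look like (an assumption, not the original code):

```js
// Hypothetical helper: return everything after the last dot, e.g. "video.mp4" -> "mp4"
const getFileExtension = (filename) => filename.slice(filename.lastIndexOf(".") + 1);
```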
On the back end we need the directory holding the slices and the path of the target file, then merge the slices into that file. The rough code looks like this:
router.post("/api/v1/mergefile".async ctx => {
let chunks;
const { ext, size, hash } = ctx.request.body;
const filepath = path.resolve(uploadPath, `${hash}.${ext}`);
const chunkpath = path.join(uploadPath, hash);
chunks = await fse.readdir(chunkpath);
chunks = chunks.map(c= > path.resolve(chunkpath, c));
await mergeChunks(chunks, filepath, size, chunkpath);
ctx.body = {
url: `upload/${hash}.${ext}`
};
});
Note that the slices must be sorted by index before merging, otherwise the resulting file will be completely wrong.
So sort the slices first:
router.post("/api/v1/mergefile".async ctx => {
let chunks;
chunks = await fse.readdir(chunkpath);
+ chunks.sort((a, b) = > a.split("-") [1] - b.split("-") [1]);
chunks = chunks.map(c= > path.resolve(chunkpath, c));
});
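To see why a numeric sort on the index is needed rather than the default string sort, here is a quick illustration (the hash is shortened for readability):

```js
const names = ["abc123-10", "abc123-2", "abc123-1"];

// Default (lexicographic) sort puts "-10" before "-2": wrong order for merging
console.log([...names].sort());
// => ["abc123-1", "abc123-10", "abc123-2"]

// Sorting by the numeric index after the dash restores the real order
console.log([...names].sort((a, b) => a.split("-")[1] - b.split("-")[1]));
// => ["abc123-1", "abc123-2", "abc123-10"]
```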
Finally, the actual merge is implemented with streams. Each slice is read and written into the target file at its own offset, and once everything is done the directory holding the slices is removed:
```js
const mergeChunks = async (chunks, dest, size, chunkpath) => {
  // Pipe one slice into the write stream and resolve once it has been fully read
  const pipStream = (filepath, writeStream) =>
    new Promise(resolve => {
      const readStream = fse.createReadStream(filepath);
      readStream.on("end", () => {
        resolve();
      });
      readStream.pipe(writeStream);
    });
  // Write every slice at its own offset in the target file, in parallel
  await Promise.all(
    chunks.map((chunk, index) =>
      pipStream(
        chunk,
        fse.createWriteStream(dest, {
          start: index * size,
          end: (index + 1) * size
        })
      )
    )
  );
  // All slices written: the slice directory is no longer needed
  await fse.remove(chunkpath);
};
```
The results are as follows:
V1.4: Large file slice upload – Grid progress bar
Now that the whole slice-upload flow is working, we can go back to the front end for some UX polish, such as a grid progress bar.
The idea is not complicated; the key points are:
- The number of grid cells equals the number of slices
- Each cell's own progress can be obtained via `onUploadProgress` (see the sketch below)
- The grid should be close to a square, so its overall width has to be calculated

Building on that:
- Success is determined from the progress value, and the cell changes color to indicate it
- The cell's fill height is also driven by the progress value, so each cell gradually fills up
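The article doesn't show how each chunk's `progress` field gets updated. Here is a minimal sketch of how the upload requests from earlier could be wired to axios's `onUploadProgress`, building the FormData and the request in a single map; it assumes `chunks` is kept in a reactive ref (as the grid code below suggests) and that each item carries a `progress` field:

```js
const requests = chunks.value.map((chunk) => {
  const formdata = new FormData();
  formdata.append("name", chunk.name);
  formdata.append("index", chunk.index);
  formdata.append("hash", chunk.hash);
  formdata.append("chunk", chunk.chunk);
  return axios.post("/dev-api/upload", formdata, {
    // axios reports upload progress per request; store it on the chunk
    onUploadProgress: (e) => {
      chunk.progress = Math.round((e.loaded / e.total) * 100);
    },
  });
});
```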
The HTML structure is as follows:
<div class="cube-container" :style="{ width: cubeContainerWidth }">
<div class="cube" v-for="chunk in chunks" :key="chunk.name">
<div
class="progress"
:class="{ success: chunk.progress === 100, uploading: chunk.progress > 0 && chunk.progress < 100, error: chunk.progress < 0 }"
:style="{ height: chunk.progress + '%' }"
></div>
</div>
</div>
The width of the container is easy to get: since the grid should be close to a square, we take the square root of the cell count, and because the root isn't necessarily an integer we round it up:
```js
const cubeContainerWidth = computed(() => {
  const count = chunks.value.length;
  // Each cell is 16px wide, with ceil(sqrt(count)) cells per row
  return `${Math.ceil(Math.sqrt(count)) * 16}px`;
});
```
After that comes the simple styling:
```scss
.cube-container {
  .cube {
    width: 16px;
    height: 16px;
    border: 1px solid #000;
    float: left;
    > .success {
      background: #a7ff83;
    }
    > .uploading {
      background: #22d1ee;
    }
    > .error {
      background: #fc5185;
    }
  }
}
```
The results are as follows:
Careful readers may have noticed that after slicing we only get the upload progress of each individual slice, so where does the overall progress come from?
In fact we know each slice's size and its upload progress, and we have the whole chunks array, so computing the overall progress isn't particularly hard:
```ts
const uploadProgress = computed(() => {
  if (!fileRef.value || !chunks.value.length) {
    return 0;
  }
  const loaded = chunks.value
    .map((chunk: any) => chunk.chunk.size * chunk.progress)
    .reduce((acc: number, cur: number) => acc + cur, 0);
  return parseInt((loaded / fileRef.value.size).toFixed(2));
});
```
Conclusion
A simple large-file upload demo is basically complete at this point, but there are still some rough edges, and a few features could be added, such as:
- How to resume from a breakpoint if the upload fails halfway through
- Firing all slice upload requests at once will certainly hurt performance, so how to control concurrency
- How to retry individual upload requests on error
These questions will be addressed in the next article. That's all for today, see you next time ~