Earlier articles in this series:

  • 01 Simple drag-and-drop upload and progress bar
  • 02 Binary-level format verification
  • 03 Two ways to compute the hash
  • 04 Chunked upload and grid progress bar

V1.5: Instant upload and resumable upload

Over the previous four installments of step-by-step iteration, our upload demo has taken shape: it supports simple drag-and-drop upload, binary-level format verification, and chunked upload of large files. The next step is to further optimize chunked upload by implementing two features: instant upload and resumable upload.

The principles behind both features are actually quite simple; the detailed implementations are described below.

Instant upload

As mentioned before, we use the file hash to determine whether a file already exists on the server. Once the front end has computed the hash, it simply sends the hash and the extension (ext) to the back end before uploading. If the file is already there, the front end reports an instant-upload success directly; otherwise it uploads as usual.

So we first need to define a back-end interface. What should this interface do?

  • When the file exists, simply return uploaded: true

Based on this idea, the interface code follows easily:

router.post("/api/v1/checkchunks".async ctx => {
  const { hash, ext } = ctx.request.body;
  const filepath = path.resolve(uploadPath, `${hash}.${ext}`);

  let uploaded = false;

  if (fse.existsSync(filepath)) {
    uploaded = true;
  }

  ctx.body = {
    uploaded
  };
});

On the front end, once the hash computation is complete, we call this interface before uploading to check whether the file has already been uploaded:

const handleFileUpload = async () => {
  /* ... */
  const res = await axios.post("/dev-api/checkchunks", {
    hash: fileHash,
    ext: getFileExtension((fileRef.value as File).name)
  });

  const uploaded = res.data.uploaded;

  // The file is already on the server: report instant-upload success and stop
  if (uploaded) return alert("Instant upload succeeded");
  /* ... */
};
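
The getFileExtension helper comes from an earlier chapter. In case you are following along without it, a minimal version might look like this (the exact implementation here is my assumption, not the series' code):

// Assumed helper: returns everything after the last dot,
// or the whole name if there is no dot
const getFileExtension = (filename: string): string =>
  filename.slice(filename.lastIndexOf(".") + 1);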

Resumable upload

Compared with instant upload, resumable upload is a little more involved, but with careful analysis the logic sorts itself out, and we can implement it step by step.

The key to resumable upload is knowing which chunks already exist on the back end, so the front end can filter them out when uploading.

On the back end, we read the chunk directory and return the names of the chunks that have already been uploaded:

const getUploadList = async chunkspath => {
  return fse.existsSync(chunkspath)
    // Filter out hidden files such as .DS_Store
    ? (await fse.readdir(chunkspath)).filter(filename => !filename.startsWith("."))
    : [];
};

router.post("/api/v1/checkchunks".async ctx => {
  / *... * /
  let uploadedList = [];

  if (fse.existsSync(filepath)) {
    uploaded = true;
  } else {
    uploadedList = await getUploadList(chunkpath);
  }

  ctx.body = {
    uploaded,
    uploadedList
  };
});

In this way, the front end learns which chunks are already on the server. There are a couple of things to handle before uploading:

  • Filter out the chunks that already exist, so they are not uploaded again
  • Set the grid progress bar of each existing chunk to 100%

Both can be implemented with an includes check; the only thing to note is that one sets the progress to 100% while the other filters the chunk out, so the two conditions are opposites:

// Set the progress bars
chunks.value = fileChunks.map((c, i) => {
  const name = `${fileHash}-${i}`;
  return {
    name,
    index: +i,
    hash: fileHash,
    chunk: c.fileChunk,
    progress: uploadedList.includes(name) ? 100 : 0
  };
});

// Filter out the chunks that are already on the server
const requests = chunks.value
  .filter(({ name }) => !uploadedList.includes(name))
  .map(/* ... */);

To simulate an unstable network, we first upload the chunks to the server, randomly delete some of them, and then upload the file again. (Alternatively, we could make some chunk requests fail at random during the upload itself, which avoids the manual chunk-deletion step; see the sketch below.)
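
As a rough sketch of that alternative (my own assumption, not code from this series), the chunk upload route on the Koa side could randomly reject a fraction of requests:

// Sketch only: fail ~30% of chunk uploads to simulate a flaky network.
// The route path and elided handler body are assumptions based on earlier chapters.
router.post("/api/v1/upload", async ctx => {
  if (Math.random() < 0.3) {
    ctx.status = 500;
    ctx.body = { message: "simulated chunk failure" };
    return;
  }
  /* ... normal chunk-saving logic ... */
});

Either way, after re-uploading the file, the result is as follows: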

As you can see, the progress bars of the remaining chunks display correctly at first, but some of them stall partway through. Checking on the server confirms:

The file was uploaded and merged successfully, so the problem lies with the progress bar. Let's locate the grid progress bar code:

const requests = chunks.value
  .filter(({ name }) => !uploadedList.includes(name))
  .map(({ name, index, hash, chunk }: ChunkRequestType) => {
    console.log("[chunks]:", { name, index, hash, chunk });
    const formdata = new FormData();
    formdata.append("name", name);
    formdata.append("index", String(index));
    formdata.append("hash", hash);
    formdata.append("chunk", chunk);
    return formdata;
  })
  .map((formdata, idx) => {
    return axios.post("/dev-api/upload", formdata, {
      onUploadProgress: progress => {
        const { loaded, total } = progress;
        chunks.value[idx].progress = Number(((loaded / total) * 100).toFixed(2));
      }
    });
  });

As you can see, the progress bar is indexed with idx, the position within the filtered array, which always starts from 0. If, say, chunks 0 and 1 are already on the server, the first remaining chunk is chunk 2, yet its progress is written to chunks.value[0]. We should use the chunk's own index instead. A small fix:

const requests = chunks.value
  .filter(({ name }) => !uploadedList.includes(name))
  .map(({ name, index, hash, chunk }: ChunkRequestType) => {
    console.log("[chunks]:", { name, index, hash, chunk });
    const formdata = new FormData();
    formdata.append("name", name);
    formdata.append("index", String(index));
    formdata.append("hash", hash);
    formdata.append("chunk", chunk);
    return { formdata, index }; // changed: carry the chunk index along
  })
  .map(({ formdata, index }) => { // changed: receive the chunk index
    return axios.post("/dev-api/upload", formdata, {
      onUploadProgress: progress => {
        const { loaded, total } = progress;
        // changed: index by the chunk's own position, not the filtered array's
        chunks.value[index].progress = Number(((loaded / total) * 100).toFixed(2));
      }
    });
  });

The final effect is as follows:

Conclusion

That's it for today's article. What else is worth noting?

  • Concurrency control: the current chunked upload creates all requests at once. Browsers do cap concurrent requests per host, but firing too many at the same time still puts pressure on the browser and can make the page lag. (See the sketch after this list.)
  • Error handling: if a chunk fails during upload, it should be retried automatically instead of failing the whole upload.
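
To preview the idea with a minimal sketch (my own assumptions, not the implementation of a future article): a small pool that runs at most limit uploads at a time and retries each failed chunk up to retries times:

// Sketch: run at most `limit` tasks concurrently, retrying each up to `retries` times.
// The names uploadWithPool, limit and retries are placeholders of my own choosing.
async function uploadWithPool(
  tasks: Array<() => Promise<unknown>>,
  limit = 4,
  retries = 3
) {
  let cursor = 0;
  const worker = async () => {
    // Each worker keeps pulling the next pending task until none remain
    while (cursor < tasks.length) {
      const task = tasks[cursor++];
      for (let attempt = 1; ; attempt++) {
        try {
          await task();
          break;
        } catch (err) {
          if (attempt >= retries) throw err; // give up on this chunk
        }
      }
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, () => worker())
  );
}

The chunk requests would then be created lazily, e.g. as () => axios.post("/dev-api/upload", formdata, { /* ... */ }), instead of being fired the moment the array is built.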

Both will be iterated on in future articles. Today is the first day of the New Year: new year, new atmosphere. Good luck!