Recently, I learned about the concept of streams while using Node's fs module, and it occurred to me that I could put it into practice by having a Node server receive an uploaded file.

The approach

  1. The front end cuts the file into chunks of a custom size and uploads them concurrently
  2. The back end receives the chunk data, stores the chunks, and stitches them together in order

Preparation

Talk is cheap

The front-end part needs an input for picking the file to upload, so I used @vue/cli to scaffold a project quickly and wrote a simple input

	<div>
		<input type="file" :accept="accept" @input="getFile" />
		<div @click="upload">upload</div>
	</div>

To send requests we of course need the most common request library, axios. After installing and importing it, the front-end tooling is ready

For the back-end part, I chose the Express framework to quickly stand up a Node service, the multiparty middleware to receive and process the uploaded files, and Node's built-in fs module. Everything is ready

The specific implementation

Going back to the front end, all we need to do is send requests carrying the binary data to the back end, and there are a few things to take care of

Get the binary

The file itself can be retrieved from the input element's input event object

this.file = e.target.files[0];

files[0] is used here because files is an array-like list; since only one file is uploaded, we just take the first item
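
For context, here is a minimal sketch of the component state this handler might live in. The data names mirror the ones used later in the article (file, chunkSize, fileChunkList), but the accept value and the 2 MB chunk size are example values I made up.

	// Hypothetical component options around getFile; accept and chunkSize hold example values
	export default {
		data() {
			return {
				accept: 'image/*',          // bound to :accept on the input (example value)
				file: null,                 // the picked file
				chunkSize: 2 * 1024 * 1024, // example chunk size: 2 MB
				fileChunkList: []           // filled later by createFileChunkList
			};
		},
		methods: {
			getFile(e) {
				this.file = e.target.files[0];
			}
		}
	};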

Split into file slices

Once we have the file itself, we need to cut it into an array of slices using the slice method of the Blob prototype

	createFileChunkList(file, size) {
		const fileChunkList = [];
		let curSize = 0;
		while (curSize < file.size) {
			fileChunkList.push({
				file: file.slice(curSize, curSize + size)
			})
			curSize += size;
		}
		return fileChunkList
	}
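
As a quick illustration (the 1 MB chunk size and the 2.5 MB file below are made-up numbers, not values from the article), the helper returns an array of { file: Blob } objects:

	// Hypothetical call: 1 MB chunks over a 2.5 MB file
	const chunks = this.createFileChunkList(this.file, 1024 * 1024);
	// chunks.length === 3
	// chunks[0].file.size === 1048576
	// chunks[2].file.size === 524288 (the last slice holds whatever is left)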

Send the request correctly

Now we have the array of slices. While traversing it, give each slice a name so that the server can stitch the slices back together in the correct order after receiving them, and use FormData to hold each slice and its related information

	this.fileChunkList = this.createFileChunkList(this.file, this.chunkSize)
	.map((file, index) => {
		const formData = new FormData();
		formData.append('chunk', file.file);
		formData.append('hashname', this.file.name + '_' + index);
		formData.append('filename', this.file.name);
		return formData
	})

Note that when sending the request, you need to set the Content-Type request header to multipart/form-data, and then send the request through axios

	const headerConfig = {
		headers: {
			'Content-Type': 'multipart/form-data'
		}
	};
	return axios.post('http://localhost:4000/upload/chunk', formData, headerConfig);
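
To connect this with the Promise.all call below: the per-slice request presumably lives in a method, and the resulting promises are collected into a list. A minimal sketch, where the method name uploadChunk and the variable requestList are my assumptions rather than names from the original code:

	// Assumption: uploadChunk is the method wrapping the axios.post call shown above
	const requestList = this.fileChunkList.map(formData => this.uploadChunk(formData));
	// requestList is the array of promises that Promise.all waits on below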

The axios request returns a promise, so Promise.all lets us fire the merge request only after all the slices have been uploaded successfully

	Promise.all(requestList)
		.then(() => {
			this.mergeChunk();
		})

The merge request is simple: just send the file name and the chunk size to the server, so the code is not shown here
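
For reference, a minimal sketch of what such a mergeChunk method could look like; the body fields mirror what the server reads later (filename, chunkSize, size), but this is my reconstruction, not the author's code:

	// Hypothetical merge request; the field names match what the /upload/merge handler reads below
	mergeChunk() {
		return axios.post('http://localhost:4000/upload/merge', {
			filename: this.file.name,
			chunkSize: this.chunkSize,
			size: this.file.size
		});
	}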

At this point the front end has sent all the slices to the server, so let's look at the server next

The Node part

The Node server uses Express to set up a service on port 4000. Two endpoints are needed: /upload/chunk to receive the file slices, and /upload/merge to merge them
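
The article doesn't show the server bootstrap, so here is a minimal sketch of what it might look like. The file layout and the cors middleware are my assumptions; express.json() (or an equivalent body parser) is needed because /upload/merge reads req.body.

	// server.js: hypothetical entry file
	const express = require('express');
	const cors = require('cors');        // assumption: the Vue dev server runs on a different port
	const router = require('./router');  // assumption: the routes below live in ./router.js

	const app = express();
	app.use(cors());
	app.use(express.json());             // /upload/merge reads filename, chunkSize and size from req.body
	app.use(router);
	app.listen(4000, () => console.log('upload server listening on port 4000'));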

The /upload/chunk endpoint needs to do three main things

  1. First, create a temp folder inside the server's upload folder; if it has already been created, skip this step

	const tempDir = path.resolve(__dirname, '../upload/temp');
	if (!fs.existsSync(tempDir)) {
		fs.mkdirSync(tempDir)
	}
  2. Second, process the file sent in the request through the multiparty middleware

	var multiparty = require('multiparty');

	router.post('/upload/chunk', function(req, res, next) {
		const tempDir = path.resolve(__dirname, '../upload/temp');
		if (!fs.existsSync(tempDir)) {
			fs.mkdirSync(tempDir)
		}
		const form = new multiparty.Form();
		form.parse(req, function(err, fields, files) {
			const [hashname] = fields.hashname;  // e.g. 'photo.png_0'
			const [filename] = fields.filename;  // the original file name
			const [chunk] = files.chunk;         // multiparty saves the chunk to a temp file and exposes its path
		})
	})

The fields stored in the formData can be read from the fields argument of the middleware's parse callback, and the uploaded chunk itself from the files argument

  3. Finally, write each slice to a file and store it in the temp folder

	const chunkReadStream = fs.createReadStream(chunk.path);   // readable stream over the uploaded chunk
	const chunkPath = tempDir + '/' + hashname;                // define the chunk storage location
	fs.writeFileSync(chunkPath, '');                           // write an empty file at that location
	const chunkWriteStream = fs.createWriteStream(chunkPath);  // create a writable stream over the empty file
	chunkReadStream.pipe(chunkWriteStream);                    // write the slice into the empty file through the pipe

It is worth mentioning that, after the readable and writable streams are created, the content is written to the file through pipe; anyone interested in streams can dig into them further on their own

After these three steps, the slice files are on disk, and you can see that their total size is basically the same as the size of the uploaded file
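
One detail the snippets above skip over: the /upload/chunk handler also has to send a response once a chunk has been written, otherwise the requests hang and the front end's Promise.all never resolves. A minimal sketch, reusing the names from the code above (the response message is my own):

	// Respond once the writable stream has flushed the chunk to disk
	chunkWriteStream.on('finish', function() {
		res.send(JSON.stringify({ msg: 'chunk received' }));
	});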

The /upload/chunk endpoint is now in place, and all that's left is merging the slices: read every slice in the temp folder, sort them by the index in their name, and write them into the same file

	router.post('/upload/merge', function(req, res, next) {
		const tempDir = path.resolve(__dirname, '../upload/temp');
		const filename = req.body.filename;
		const chunkSize = req.body.chunkSize;
		const size = req.body.size;
		const filepath = path.resolve(tempDir, '../', filename);
		const fileChunkList = fs.readdirSync(tempDir)
			.filter(name => name.match(new RegExp(filename)))
			.sort((a, b) => a.split('_')[1] - b.split('_')[1]);
		fs.writeFileSync(filepath, '');
		fileChunkList.forEach((name, index) => {
			const chunkReadStream = fs.createReadStream(tempDir + '/' + name);
			const fileWriteStream = fs.createWriteStream(filepath, {
				flags: 'r+',  // update in place so each stream writes at its offset without truncating the file
				start: index * chunkSize,
			})
			chunkReadStream.pipe(fileWriteStream);
		})
		res.send(JSON.stringify({ msg: 'merge successful' }));
	})

Finally, to my delight, the folder showed a complete thumbnail of the file

Conclusion

A Blob is binary data, and the Content-Length request header indicates the size of the data being sent; knowing these details can help you locate problems more quickly.

Well, I'll stop writing here, it has been quite tiring. This is the first time I've written an article, and I hope to keep posting about what I practice and learn.