This is the second day of my participation in the First Challenge 2022

How to get the hash value of a file to ensure that the file is unique

Let’s look at regular file uploads first

<div ref="dragDom" id="drag">
  <input type="file" name="file" @change="handleFileChange" />
</div>

import { defineComponent, ref, onMounted } from 'vue'
import { bindEvents, upload } from './upload'

export default defineComponent({
  setup(props) {
    const dragDom = ref(null)
    onMounted(() => {
      bindEvents(dragDom, upload().file)
    })

    return {
      dragDom,
    }
  }
})

Note that a template ref used to grab a DOM element does not need the `:` binding prefix; the ref attribute just has to match the name of the reactive ref defined in setup. Next we add drag-and-drop support to the file area, which is actually quite simple: just attach a few event listeners

export const bindEvents = (dragDom, file) => {
  const drag = dragDom.value
  drag.addEventListener('dragover', (e) => {
    drag.style.borderColor = 'red'
    e.preventDefault()
  })
  drag.addEventListener('dragleave', (e) => {
    drag.style.borderColor = '#eee'
    e.preventDefault()
  })
  drag.addEventListener('drop', (e) => {
    const fileList = e.dataTransfer.files
    drag.style.borderColor = '#eee'
    file.value = fileList[0]
    e.preventDefault()
  })
}

Since we use the ref function provided by Vue 3, we have to access the DOM node through dragDom.value. We also add a little interaction, changing the border color while a file is being dragged over the area

The ref API documentation describes it like this:

Takes an inner value and returns a reactive, mutable ref object. The ref object has a single property, .value, that points to the inner value.
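As a rough sketch of that idea (not Vue's actual implementation, which also tracks dependencies), a ref can be pictured as an object wrapping a single .value property:

```typescript
// Minimal illustration of the ref idea: an object with a single .value
// property wrapping the inner value. Vue's real ref additionally triggers
// reactivity on reads/writes; this sketch only shows the .value access pattern.
function miniRef<T>(inner: T): { value: T } {
  return { value: inner }
}

const count = miniRef(0)
count.value++ // reads and writes always go through .value
```

This is why the drag-and-drop code above has to unwrap the DOM node with dragDom.value before calling addEventListener on it.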

Before uploading we need to get hold of the selected file, so let's handle the change event first

let file = ref<any>('')
const handleFileChange = (e) => {
  const firstFile = e.target.files[0]
  if (!firstFile) return
  file.value = firstFile
}

Now that we have the file, the next step is to upload it by calling the back-end interface. Yes, that's all there is to it

  const form = new FormData()
  form.append('name', 'file')
  form.append('files', file.value)
  http.post('api/uploadFile', form)
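For reference, FormData is just a key/value store for multipart form bodies. The same API is available globally in Node 18+, so the append calls can be sanity-checked outside the browser:

```typescript
// FormData is global in modern browsers and in Node 18+.
// append() stores a field under a key; get() reads it back.
const form = new FormData()
form.append('name', 'file')                       // plain text field
form.append('files', new Blob(['demo']), 'a.txt') // file field with a filename
console.log(form.get('name')) // 'file'
```

When the value is a Blob or File, the browser serializes it as a file part of the multipart request body.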

That seems to be it, but something feels off: so far the article has nothing to do with its title. Let's keep reading

If you were paying attention, you may have noticed something wrong with the parameter names in our FormData: how can the file name be hard-coded on the front end?

It can work, but it pushes extra effort onto our back-end colleagues. Why? In a typical project we may only send the file name, or even just the raw file. How, then, do we guarantee that it is unique in the database?

Let's look at the first question: how do we guarantee uniqueness? The usual answer is to hash the file's contents. But we all know JS runs on a single thread, so hashing a large file on the main thread could freeze the browser, or at least make the user lose patience. What should we do when the file is large?
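The key property we rely on is that a content hash is deterministic: identical bytes always produce an identical digest, so the digest can serve as a unique, content-derived identifier. spark-md5 (used later in this article) runs in the browser, but the idea can be illustrated with Node's built-in crypto module as a stand-in:

```typescript
import { createHash } from 'node:crypto'

// Hash the file CONTENTS, not the file name: the same bytes always yield
// the same digest, while different bytes yield a different one, so the
// digest works as a unique ID for the file in the database.
function contentHash(bytes: string | Buffer): string {
  return createHash('md5').update(bytes).digest('hex')
}

const a = contentHash('same bytes')
const b = contentHash('same bytes')
const c = contentHash('different bytes')
// a === b (same content), a !== c (different content)
```

The function name contentHash is just for this illustration; the article's actual hashing happens in a worker with spark-md5.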

Some of you may have thought of the Web Worker. A Web Worker is essentially a separate thread in the browser; you can think of it as a stand-in that does its work on its own, without blocking the main thread.

This project is built with Vite, so a new Worker() whose script path uses the @ alias will 404. To work around this, the worker script is placed under the public directory

export const calculateHashWorker = async (chunks: Array<chunksType>, hashProgress: any) => {
  return new Promise((resolve) => {
    const worker = new Worker('/hash/index.js')
    worker.postMessage({ chunks })
    worker.onmessage = (e) => {
      const { progress, hash } = e.data
      hashProgress.value = Number(progress.toFixed(2))
      if (hash) {
        resolve(hash)
      }
    }
  })
}

Now let's create a new calculatehash.js for the worker to execute. It uses new FileReader(), which was covered in the previous article for anyone who doesn't remember it

Here we use Spark-MD5

// Import spark-md5
// importScripts synchronously loads one or more scripts into the worker
self.importScripts('/lib/spark-md5.min.js')

self.onmessage = e => {
  // Receive the data sent from the main thread
  const { chunks } = e.data
  const spark = new self.SparkMD5.ArrayBuffer()
  let progress = 0 // progress information
  let count = 0
  const loadNext = index => {
    const reader = new FileReader()
    reader.readAsArrayBuffer(chunks[index].file)
    reader.onload = e => {
      count++
      spark.append(e.target.result)
      if (count == chunks.length) {
        self.postMessage({ progress: 100, hash: spark.end() })
      } else {
        progress += 100 / chunks.length
        self.postMessage({ progress })
        loadNext(count)
      }
    }
  }
  loadNext(0)
}
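The worker appends one chunk at a time, and incremental hashing gives the same digest as hashing the whole file in one go. A sketch of that equivalence, with node:crypto standing in for SparkMD5.ArrayBuffer (hypothetical helper name, for illustration only):

```typescript
import { createHash } from 'node:crypto'

// Feeding chunks one by one into the hash (like spark.append per chunk in
// the worker) produces the same digest as hashing the entire buffer at once.
function hashChunks(chunks: Buffer[]): string {
  const hash = createHash('md5')
  for (const chunk of chunks) hash.update(chunk) // one append per chunk
  return hash.digest('hex')
}

const whole = Buffer.from('the whole file contents')
const parts = [whole.subarray(0, 8), whole.subarray(8)]
// hashChunks(parts) equals hashing `whole` directly
```

This is exactly why the worker can report progress per chunk and still end up with the correct hash of the full file.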

I almost forgot to show where the chunks passed to calculateHashWorker come from, so here is that code too

let file: any = ref('')
const CHUNK_SIZE = 0.01 * 1024 * 1024 // 0.01 MB
const createFileChunk = (file: any, size: number = CHUNK_SIZE) => {
  const chunks = []
  let cur = 0
  while (cur < file.value.size) {
    chunks.push({ index: cur, file: file.value.slice(cur, cur + size) })
    cur += size
  }
  return chunks
}
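The slicing logic can be exercised outside Vue as well. Here is a standalone version without the ref wrapper, operating on a Blob directly (File inherits slice from Blob, and Blob is also global in Node 18+), assuming the same CHUNK_SIZE math:

```typescript
// Standalone version of createFileChunk: takes a Blob directly instead of
// unwrapping a Vue ref. Each entry records the byte offset and the slice.
const CHUNK_SIZE = 0.01 * 1024 * 1024 // ~10 KB per chunk

function createFileChunk(file: Blob, size: number = CHUNK_SIZE) {
  const chunks: { index: number; file: Blob }[] = []
  let cur = 0
  while (cur < file.size) {
    chunks.push({ index: cur, file: file.slice(cur, cur + size) })
    cur += size
  }
  return chunks
}

const blob = new Blob(['x'.repeat(25_000)]) // ~25 KB demo payload
const chunks = createFileChunk(blob)
// 25 000 bytes at ~10 KB per chunk -> 3 chunks, the last one smaller
```

Note that `index` here is the byte offset of the chunk, not a sequential counter, matching the article's code above.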

We will fill in the remaining piece, how to actually upload the large file in chunks, in a later article.

Summary: this article mainly introduced the Web Worker API:

  1. self.importScripts synchronously imports one or more scripts into the worker
  2. self.onmessage is called when a message event from the main thread reaches the worker
  3. self.postMessage sends a message back to the main thread; the payload can be any JavaScript object supported by structured clone.