As anyone who has watched the Avengers movies knows, Thanos, a staunch supporter and enforcer of the one-child policy, wiped out half of all life in the universe with a snap of his fingers. The scenes where people crumble into dust are so cool that after the release of Avengers: Endgame, Google introduced an Easter egg: if you search for Thanos, a gauntlet button appears in the results, and clicking it makes half of the search results disappear.



Thanos Snap Effect JavaScript Tutorial

Source code address

Ready to start

We only need a single image element here; the HTML structure is as follows:

<body>
    <div class="content">
        <div id="image">
            <!-- Use a network image URL so the page still works when the HTML file is opened locally for debugging -->
            <img
                src="https://i.loli.net/2019/05/06/5ccffa469ec52.jpg"
                width="400"
            />
        </div>
    </div>
</body>

Implementation of animation

  1. Convert the HTML to a canvas for later processing. We use the html2canvas library, which draws a DOM node from the page onto a canvas, effectively taking a screenshot of that node. It is very simple to use (this library can also be used to generate poster images for web pages):
const imageBox = document.querySelector('#image');
html2canvas(imageBox, {
    backgroundColor: 'transparent' // Set the background to transparent (only available in newer versions of html2canvas)
}).then(canvas => {
    // Canvas processing code goes here
});
  2. Once we have the canvas containing the image, we read the information of every pixel on it with the canvas getImageData method.
// Handle the canvas
const ctx = canvas.getContext('2d');
const imageData = ctx.getImageData(
    0,
    0,
    canvas.width,
    canvas.height
);
const pixelArr = imageData.data; // Pixel information

The image information is stored in a Uint8ClampedArray (an array of 8-bit unsigned integers). Its values are integers from 0 to 255, ordered by pixel from left to right and top to bottom, and every 4 numbers describe one pixel as RGBA (R – red, G – green, B – blue, A – alpha/transparency). For example:

[0, 0, 0, 255, 255, 255, 255, 255, 255, 0, 0, 255, 0, 255, 0, 255]
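To make that layout concrete, here is a small sketch (the helper name `pixelIndex` is mine, not from the original article) of how to locate one pixel's bytes in that flat array:

```javascript
// Index of the first (red) byte of the pixel at column x, row y
// in an ImageData buffer of the given width.
function pixelIndex(x, y, width) {
    return (y * width + x) * 4;
}

// In a 2-pixel-wide image, pixel (1, 1) starts at index 12,
// so its RGBA bytes are data[12], data[13], data[14], data[15].
```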
  3. The next crucial step is to create multiple arrays of the same length, fill them with transparent pixel information (all zeros), and then distribute the original image's pixels "randomly" across these arrays, so that each array holds part of the original image. The original tutorial uses chance.js, a random number generation library, to weight the probabilities; using plain random numbers directly works fine too.
// Create an array the same length as the image data and fill it with 0
// (equivalent to a fully transparent image the same size as the original)
const data = pixelArr.slice(0).fill(0);
// The number of transparent image arrays to create; it should be neither too small nor too large
const canvasCount = 30;
// Make canvasCount copies of the transparent image array
const imageDataArray = Array.from({ length: canvasCount }, () =>
    data.slice(0));
// Randomly distribute the original image's pixels across the transparent images, keeping each pixel's position
for (let i = 0; i < pixelArr.length; i += 4) {
    // a is the randomly chosen array this pixel goes into
    const a = imageDataArray[Math.floor(Math.random() * canvasCount)];
    // Copy the pixel's RGBA values into the chosen transparent image array
    a[i] = pixelArr[i];
    a[i + 1] = pixelArr[i + 1];
    a[i + 2] = pixelArr[i + 2];
    a[i + 3] = pixelArr[i + 3];
}

When canvasCount is 3, the effect is as follows:

Original canvas



The 3 generated canvases, each containing some of the original image's pixels







The larger canvasCount is, the more canvases are generated, the fewer pixels each canvas gets, and the more finely the dust scatters as it floats away.

4. The next step is very simple: hide the original image and apply the drifting animation to the generated canvases. The drifting animation mainly combines Gaussian blur, translation, rotation and opacity changes.
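The article doesn't show this step's code, so below is a minimal sketch of one way to do it with the Web Animations API. All names and animation parameters here are illustrative, not the original demo's:

```javascript
// Illustrative per-layer keyframes: each layer drifts, rotates and fades
// a little differently so the dust looks scattered.
function layerKeyframes(i) {
    return [
        { transform: 'rotate(0deg) translate(0px, 0px)', opacity: 1 },
        { transform: `rotate(${5 + i}deg) translate(${60 + 4 * i}px, ${-40 - 3 * i}px)`, opacity: 0 }
    ];
}

// Browser-only sketch: turn each pixel array into a canvas and float it away.
// `imageDataArray` and `canvas` come from the previous steps; `container`
// is the positioned element wrapping the image.
function snap(imageDataArray, canvas, container) {
    container.querySelector('img').style.opacity = 0; // hide the original image
    imageDataArray.forEach((data, i) => {
        const c = document.createElement('canvas');
        c.width = canvas.width;
        c.height = canvas.height;
        c.getContext('2d').putImageData(
            new ImageData(data, canvas.width, canvas.height), 0, 0
        );
        c.style.position = 'absolute';
        c.style.top = '0';
        c.style.left = '0';
        c.style.filter = 'blur(1px)'; // the Gaussian-blur part of the effect
        container.appendChild(c);
        c.animate(layerKeyframes(i), {
            duration: 1500,
            delay: i * 70,
            easing: 'ease-out',
            fill: 'forwards'
        });
    });
}
```

Staggering the `delay` per layer is what gives the dust its wave-like departure; tweak the keyframe values to taste.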

Triggering the animation

Now that the animation is complete, here's how to trigger it. As mentioned at the beginning of this article, the original effect on Google Search is triggered by clicking a button, but here we use the microphone to monitor the input volume in real time and trigger the animation when you snap your fingers (i.e. when the volume crosses a threshold).

  1. First, make sure your device has a microphone or another audio input device, then request access to it. If a microphone is present, the browser will show a permission prompt.
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    // Get the user's media stream
    navigator.mediaDevices
        .getUserMedia({ audio: true })
        .then(stream => {
            // Audio processing code
        })
        .catch(error => {
            mystatus.innerHTML = 'There seems to be a problem getting the audio. ' + error;
        });
} else {
    mystatus.innerHTML = 'The media devices API is not supported';
}
  2. The audio processing code:
// When the input volume exceeds this value, we treat it as a loud sound (a finger snap)
const TRIGGER_VALUE = 0.9;
// Feed the microphone stream into this node
mediaStreamSource = audioContext.createMediaStreamSource(
    stream
);
// Create an audio processing node with a sample buffer size of 4096; input and output are mono
scriptProcessor = audioContext.createScriptProcessor(
    4096, 1, 1
);
// Connect the microphone audio to the processing node
mediaStreamSource.connect(scriptProcessor);
// This has no effect other than working around a bug in Chrome itself
scriptProcessor.connect(audioContext.destination);

// Start processing audio
scriptProcessor.onaudioprocess = function(e) {
    // Get the buffered input audio as a 32-bit float array of PCM channel data
    let buffer = e.inputBuffer.getChannelData(0);
    // Get the maximum sample value in the buffer
    let maxVal = Math.max.apply(Math, buffer);
    // Trigger the animation when the volume exceeds the threshold
    if (maxVal > TRIGGER_VALUE) {
        animationStart();
    }
};

Finally

That's it — it really is that simple, and you can use it to show off. But a colleague asked me: since it only monitors volume, couldn't a cough, a yell, or a clap trigger it just as well? Can it really detect only a finger snap? Well, that's a good question. Let's think about it.

Full list of articles

  • From modularization to NPM private warehouse construction
  • 10 minutes for Thanos to snap his fingers and disappear
  • It’s time to develop your own vscode extension
  • A quick guide to using Babel