“This is the sixth day of my participation in the First Challenge 2022. For details: First Challenge 2022”

Preface

In a previous ECG desktop application, I implemented an in-app screenshot feature, which involves canvas, animation, file download, and other knowledge points; this article summarizes it. The method described here has only been tested on Windows 10. The project uses Vue 2.6.10, Vue CLI 3.12.1, Node v14.17.5, and Electron 11.0.0.

※ Note: in the code areas of this article, a leading "+" marks an added line, "-" marks a deleted line, and "M" marks a modified line; "…" in code represents omitted content.

1. Prerequisites

1.1 Canvas API: CanvasRenderingContext2D.drawImage()

The drawImage() method reads an image and draws it onto the canvas.

CanvasRenderingContext2D.drawImage() has three call signatures:

ctx.drawImage(image, dx, dy);
ctx.drawImage(image, dx, dy, dWidth, dHeight);
ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);

The meanings of the parameters are as follows:

  • image: an image element. For details, see developer.mozilla.org/zh-CN/docs/…
  • sx: the x-coordinate of the top-left corner of the region of the image to copy.
  • sy: the y-coordinate of the top-left corner of the region of the image to copy.
  • sWidth: the width of the region of the image to copy. If not specified, the whole image is used at its actual size.
  • sHeight: the height of the region of the image to copy. If not specified, the whole image is used at its actual size.
  • dx: the x-coordinate inside the canvas at which to place the top-left corner of the image.
  • dy: the y-coordinate inside the canvas at which to place the top-left corner of the image.
  • dWidth: the width at which to draw the image inside the canvas, which produces a scaling effect.
  • dHeight: the height at which to draw the image inside the canvas, which produces a scaling effect.

In the simplest case, the image is drawn with its top-left corner at the top-left corner of the canvas.

var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');

var img = new Image();
img.src = 'image.png';
img.onload = function () {
  ctx.drawImage(img, 0, 0);
};

The above code places a PNG image on the canvas. At this point the image is drawn at its original size; if the canvas is smaller than the image, only the upper-left portion of the image, exactly the size of the canvas, is displayed.
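Conversely, the nine-argument form lets you copy only a rectangular region of the source image and scale it into a region of the canvas. A minimal sketch, assuming an arbitrary image file and coordinates chosen only for illustration:

var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');

var img = new Image();
img.src = 'image.png';
img.onload = function () {
  // Copy a 200 x 100 region starting at (50, 30) in the source image
  // and draw it scaled into a 400 x 200 region at (0, 0) on the canvas.
  ctx.drawImage(img, 50, 30, 200, 100, 0, 0, 400, 200);
};

The screenshot code later in this article uses this same nine-argument form, first drawing the capture at full size and then scaling it down to the 1080 x 558 preview area.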

To display the full image, set the width and height of the canvas using the width and height of the image.

var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');

var image = new Image(60, 45);
image.onload = drawImageActualSize;
image.src = 'https://example.com/image.jpg';

function drawImageActualSize() {
  canvas.width = this.naturalWidth;
  canvas.height = this.naturalHeight;
  ctx.drawImage(this, 0, 0, this.naturalWidth, this.naturalHeight);
}

In the code above, setting the size of the <canvas> element to the natural size of the image ensures that the image is displayed in full. Since the natural size of the image is only available after the image has loaded successfully, the canvas must be resized inside the image.onload listener.

This section is adapted from Ruan Yifeng's documentation: wangdoc.com/webapi/canv…

1.2 Other

Other Vue and Electron concepts are left for the reader to study on their own.

2. How to Implement It

2.1 Screenshot Effect

After a screenshot is captured, you can save the screenshot to a local PC.

2.2 Implementation Ideas:

  • Use Electron's desktopCapturer module together with the navigator.mediaDevices.getUserMedia API to access information about the media sources that can be used to capture audio and video from the desktop, and capture the desktop to obtain a media stream:
const { remote, desktopCapturer } = require('electron')
// id: number - a unique identifier associated with the display
// size: the screen size, including width and height
const { id, size } = remote.screen.getPrimaryDisplay()
const dialog = remote.dialog
const fs = require('fs')

const captureScreen = cb => {
    // darwin: macOS; linux: Linux; win32: Windows
    if (process.platform === 'win32') {
        // Known pitfall: desktopCapturer does not work properly under Linux
        desktopCapturer
            .getSources({
                // types: an array of strings listing the desktop source types to capture; the available types are 'screen' and 'window'
                types: ['screen'],
                // thumbnailSize: the size to which the thumbnail of each media source should be scaled; the default is 150 x 150.
                // When the thumbnail is not needed, set the width and height to 0 to save the processing time of fetching the content of each window and screen
                thumbnailSize: { width: 0, height: 0 },
            })
            // getSources() resolves with an array of DesktopCapturerSource objects;
            // each DesktopCapturerSource represents a screen or an individual window that can be captured.
            // source.display_id: a unique identifier returned by the Screen API that matches the id of a Display.
            // On some platforms it corresponds to the id obtained above, on others it differs, and it is an empty string when unavailable.
            .then(async sources => {
                for (let source of sources) {
                    if (parseInt(source.display_id) === id) {
                        try {
                            const stream = await navigator.mediaDevices.getUserMedia({
                                audio: false,
                                video: {
                                    mandatory: {
                                        chromeMediaSource: 'desktop',
                                        chromeMediaSourceId: source.id,
                                        minWidth: size.width,
                                        maxWidth: size.width,
                                        minHeight: size.height,
                                        maxHeight: size.height,
                                    },
                                },
                            })
                            // console.log(stream);
                            handleStream(stream, cb)
                        } catch (error) {
                            console.log(error)
                        }
                    }
                }
            })
    } else {
        // linux
        navigator.mediaDevices
            .getUserMedia({
                audio: false,
                video: {
                    mandatory: {
                        chromeMediaSource: 'desktop',
                        // chromeMediaSourceId is omitted here: the id from getPrimaryDisplay() is not correct on Linux and passing it causes a NotReadableError
                        minWidth: size.width,
                        maxWidth: size.width,
                        minHeight: size.height,
                        maxHeight: size.height,
                    },
                },
            })
            .then(stream => handleStream(stream, cb))
    }
}
...
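The snippet above runs in the renderer process and relies on require('electron') and the remote module being available there. The article does not show the window setup, so the following is only a hedged sketch, assuming an Electron 11 BrowserWindow configured in the main process roughly like this:

// Main process (assumed setup; not taken from the project)
const { BrowserWindow } = require('electron')

const win = new BrowserWindow({
    webPreferences: {
        nodeIntegration: true, // lets the renderer use require('electron'), require('fs'), etc.
        enableRemoteModule: true, // lets the renderer use remote.screen and remote.dialog (required since Electron 10)
    },
})

Without options like these, the calls to remote.screen.getPrimaryDisplay() and remote.dialog above would fail in the renderer.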
  • Assign the media stream to the srcObject property of the video element to play it.

  • Display the screenshot: use the CanvasRenderingContext2D.drawImage() method to draw the video stream's current frame onto the canvas, which shows the captured screen.

  • Download the screenshot: convert the canvas content to Base64 data and then to a Buffer with Node.js's Buffer module, then save it with Electron's dialog.showSaveDialog and Node.js's fs.writeFile.
...
let screenShootBlob
const handleStream = (stream, cb) => {
    let video = document.getElementById('video')
    // The video.srcObject property corresponds to a media resource, which may be a MediaStream, MediaSource, Blob, or File object.
    // Assigning it directly is enough to play the media
    video.srcObject = stream
    // Fired when the media metadata has been loaded successfully
    video.onloadedmetadata = () => {
        video.play()
        // createSaveImageCanvas(video)
        let showScreenShootCanvas = document.getElementById('desktop_canvas')

        showScreenShootCanvas.width = size.width
        showScreenShootCanvas.height = size.height
        showScreenShootCanvas.style.width = size.width + 'px'
        showScreenShootCanvas.style.height = size.height + 'px'

        const ctx = showScreenShootCanvas.getContext('2d')
        // Erase the pixels of the specified rectangle, i.e. remove any previous drawing
        ctx.clearRect(0, 0, size.width, size.height)

        // Convert to a bitmap to improve performance and reduce canvas rendering latency
        createImageBitmap(video).then(bmp => {
            ctx.drawImage(
                bmp,
                0, 0,
                size.width,
                size.height,
                0, 0,
                size.width,
                size.height
            )

            // Convert the canvas content to an image in data URI format
            let base64Data = showScreenShootCanvas.toDataURL('image/png')
            let data = base64Data.split('base64,')[1]

            // Create a new Buffer from the string; the second argument is the character encoding used when converting the string to bytes.
            // Note that new Buffer(data, 'base64') is deprecated
            screenShootBlob = Buffer.from(data, 'base64')
            // 1080 and 558 are the width and height of the image area displayed in the dialog box after the screenshot
            ctx.drawImage(bmp, 0, 0, size.width, size.height, 0, 0, 1080, 558)

            stream.getTracks()[0].stop() // Stop the video track; there is only one track, so index 0

            cb && cb()
        })
    }
}

const saveScreenShoot = () => {
    dialog
        .showSaveDialog({
            title: 'Save picture',
            defaultPath: `${+new Date()}.png`,
            filters: [{ name: 'Images', extensions: ['jpg', 'png'] }],
        })
        .then(res => {
            // console.log(res, screenShootBlob)
            if (res.filePath) {
                fs.writeFile(res.filePath, screenShootBlob, 'binary', err => {
                    if (err) {
                        console.log(err)
                    } else {
                        console.log('Saved successfully')
                    }
                })
            }
        })
}

export { captureScreen, saveScreenShoot }
  • Use in xxx.vue components
...
<template>
    <div>
        <!-- Screenshot dialog -->
        <div class="dialog_mask" v-show="isShowMask"></div>
        <div class="dialog_mask_start" v-show="isShowMaskStart"></div>
        <div class="dialog_screen_shoot" v-show="isShowScreenShoot">
            <div class="title">Snapshot preview</div>
            <div class="screen_shoot_wrapper">
                <!-- <canvas class="canvas" id="save_image_canvas"></canvas> -->
                <canvas class="canvas" id="desktop_canvas">Your browser does not support Canvas</canvas>
            </div>
            <div class="footer">
                <el-button
                    class="custom_button"
                    type="primary"
                    @click="handleSaveClick"
                    >Save</el-button>
                <el-button
                    class="custom_button"
                    type="primary"
                    @click="cancelScreenshoot"
                    >Cancel</el-button>
            </div>
        </div>

        <video id="video"></video>
    </div>
</template>
  • JS code in the component
...
import { captureScreen, saveScreenShoot } from '@/utils/captureScreen.js'
...
// Code that captures the screen:
this.isShowMaskStart = true

// Capture the screen first, then run the animation
captureScreen(() => {
    this.isShowMaskStart = true

    setTimeout(() => {
        // console.log(mask);
        let mask = document.querySelector('.dialog_mask_start')
        mask.classList.add('screen_shoot_last')
        // After the animation ends, remove the animation mask and show the screenshot dialog
        mask.addEventListener('transitionend', () => {
            // console.log(this.isShowMaskStart);
            this.isShowMaskStart = false
            mask.classList.remove('screen_shoot_last')
            this.isShowScreenShoot = true
        })
    }, 0)
})

// Code that saves the image: just call saveScreenShoot()
  • CSS (SCSS) code
// The screenshot mask layer
.dialog_mask {
    position: fixed;
    width: 100%;
    height: 100%;
    left: 0;
    top: 0;
    background-color: rgba(0, 0, 0, 0.5);
    z-index: 777;
}

// The style at the start of the screenshot animation
.dialog_mask_start {
    position: fixed;
    width: 100%;
    height: 100%;
    left: 0;
    top: 0;
    z-index: 888;
    background-color: rgba(255, 255, 255, 0.1);
    transition: width 0.5s, height 0.5s;
    transform: translate3d(0, 0, 0);
    // The style at the end of the screenshot animation
    &.screen_shoot_last {
        border-radius: 10px;
        width: 1080px;
        height: 668px;
        left: 50%;
        top: 50%;
        transform: translate3d(-50%, -50%, 0);
    }
}

.dialog_screen_shoot {
    z-index: 999;
    position: absolute;
    width: 1080px;
    height: 668px;
    left: 50%;
    top: 50%;
    transform: translate(-50%, -50%);
    border-radius: 10px;
    overflow: hidden;

    .title {
        width: 100%;
        height: 40px;
        background-color: $white;
        display: flex;
        align-items: center;
        justify-content: center;
    }
    .screen_shoot_wrapper {
        width: 100%;
        height: 558px;
        background-color: $black;
        overflow: hidden;
        position: relative;

        #save_image_canvas {
            position: absolute;
            z-index: -999;
        }
    }
    .footer {
        height: 70px;
        background: #eee;
        padding: 0 30px;
        display: flex;
        align-items: center;
        justify-content: flex-end;
    }
}

#video {
    position: absolute;
    top: 100%;
}