🚀 Author: author of the big-data column “Big Data Zen”, Huawei Cloud Sharing Expert, Alibaba Cloud expert blogger

🚀 Article introduction: the hands-on part of this article uses two libraries, MediaPipe and OpenCV, to implement touch-free "air" control, covering **air mouse control, air painting, air volume control and air gesture recognition** 💪

Project Demo link

1. Project demos

The project is divided into four parts:

  • Air volume control
  • Air painting
  • Air gesture recognition
  • Air mouse control

Demos of the four parts are shown below.

1.1: Air volume control

1.2: Air painting

1.3: Air gesture recognition

1.4: Air mouse control

2. Libraries involved

The implementation of these applications mainly involves two libraries

  • OpenCV

  • MediaPipe

2.1: Introduction to OpenCV

OpenCV is a cross-platform computer vision and machine learning software library distributed under the Apache 2.0 license.

It runs on a variety of operating systems such as Linux, Windows and macOS. It is lightweight and efficient, consisting of a set of C functions and a small number of C++ classes, provides interfaces for Python, Ruby, MATLAB and other languages, and implements many common algorithms in image processing and computer vision.
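As a quick illustration (not taken from the project code), here is a minimal sketch of OpenCV's Python interface that reads frames from the default webcam, converts them to grayscale and displays them:

```python
import cv2

# Minimal OpenCV sketch: read the default webcam and show frames until 'q' is pressed.
cap = cv2.VideoCapture(0)          # 0 = default camera
while True:
    success, frame = cap.read()    # grab one frame
    if not success:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # a typical image-processing call
    cv2.imshow('camera', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```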

2.2: Introduction to MediaPipe

MediaPipe is an open-source framework developed by Google for building machine learning applications on top of streaming data.

It is a graph-based data-processing pipeline that can consume data in many forms, such as video, audio, sensor readings and other time-series data.

MediaPipe is cross-platform: it runs on multiple operating systems, workstations and servers, and supports GPU acceleration on mobile devices.

With MediaPipe, a machine learning task can be built as a graph of modular components, including inference models and streaming-media processing functions.
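To make this concrete, below is a minimal sketch (assuming the opencv-python and mediapipe packages) that feeds webcam frames into MediaPipe's Hands solution and draws the 21 hand landmarks; hand landmark detection of this kind is the building block behind the gesture applications in this article.

```python
import cv2
import mediapipe as mp

# Minimal MediaPipe Hands sketch: detect and draw hand landmarks on webcam frames.
mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while True:
        success, img = cap.read()
        if not success:
            break
        rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB input
        results = hands.process(rgb)
        if results.multi_hand_landmarks:
            for hand_lms in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(img, hand_lms, mp_hands.HAND_CONNECTIONS)
        cv2.imshow('hands', img)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```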

3. Project environment setup

The environment for these applications is simple to set up: the application is written in Python, and you can install the required libraries directly in PyCharm (or with pip). If a download fails or times out, switch pip to a domestic mirror source.
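For example, the two libraries can be installed with pip under their usual PyPI names, opencv-python and mediapipe, and a quick import check confirms the environment is ready:

```python
# Quick environment check (assumes the opencv-python and mediapipe packages are installed)
import cv2
import mediapipe as mp

print('OpenCV version:', cv2.__version__)
print('MediaPipe imported successfully')
```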

4. Source code

The application involves a fair amount of source code, so not all of it is pasted here.

If you want to try it yourself, you can send me a private message, or follow the official account linked at the bottom of the article and contact me there with a note saying you want the source code. Below are excerpts of the key parts of the code.

import cv2

wCam, hCam = 640, 480                  # camera frame width/height (values assumed)
cap = cv2.VideoCapture(0)              # if using an external camera, change 0 to 1 or another index
cap.set(3, wCam)                       # property 3 = frame width
cap.set(4, hCam)                       # property 4 = frame height
pTime = 0                              # previous-frame timestamp, used elsewhere for the FPS display
detector = handDetector()              # hand-tracking helper class (a sketch is given below)

while True:
    success, img = cap.read()
    img = detector.findHands(img)
    lmList = detector.findPosition(img, draw=False)
    pointList = [4, 8, 12, 16, 20]     # landmark ids of the five fingertips
    if len(lmList) != 0:
        countList = []
        # thumb: compare the x-coordinates of the tip (4) and the joint below it (3)
        if lmList[4][1] > lmList[3][1]:
            countList.append(1)
        else:
            countList.append(0)
        # other fingers: a finger counts as raised if its tip is above its middle joint (smaller y)
        for i in range(1, 5):
            if lmList[pointList[i]][2] < lmList[pointList[i] - 2][2]:
                countList.append(1)
            else:
                countList.append(0)
        count = countList.count(1)
        # overlay the matching finger image and draw the count on the frame
        HandImage = cv2.imread(f'FingerImg/{count}.jpg')
        HandImage = cv2.resize(HandImage, (150, 200))
        h, w, c = HandImage.shape
        img[0:h, 0:w] = HandImage
        cv2.putText(img, f'{int(count)}', (15, 400), cv2.FONT_HERSHEY_PLAIN, 15, (255, 0, 255), 10)

    cv2.imshow('Image', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

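The excerpt relies on a handDetector class that is not shown above. As a reference, a minimal sketch of such a class built on MediaPipe Hands could look like the following; the method names findHands and findPosition are kept to match the excerpt, but the implementation details are assumptions rather than the author's original module.

```python
import cv2
import mediapipe as mp

class handDetector:
    """Minimal hand-tracking helper built on MediaPipe Hands (a sketch, not the original module)."""

    def __init__(self, max_hands=1, detection_conf=0.7, track_conf=0.5):
        self.hands = mp.solutions.hands.Hands(
            max_num_hands=max_hands,
            min_detection_confidence=detection_conf,
            min_tracking_confidence=track_conf,
        )
        self.draw = mp.solutions.drawing_utils
        self.results = None

    def findHands(self, img, draw=True):
        # Run hand detection and optionally draw the landmark skeleton on the frame.
        rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        self.results = self.hands.process(rgb)
        if self.results.multi_hand_landmarks and draw:
            for hand_lms in self.results.multi_hand_landmarks:
                self.draw.draw_landmarks(img, hand_lms, mp.solutions.hands.HAND_CONNECTIONS)
        return img

    def findPosition(self, img, hand_no=0, draw=True):
        # Return a list of [id, x, y] pixel coordinates for the 21 landmarks of one hand.
        lm_list = []
        if self.results and self.results.multi_hand_landmarks:
            hand = self.results.multi_hand_landmarks[hand_no]
            h, w, _ = img.shape
            for idx, lm in enumerate(hand.landmark):
                cx, cy = int(lm.x * w), int(lm.y * h)
                lm_list.append([idx, cx, cy])
                if draw:
                    cv2.circle(img, (cx, cy), 5, (255, 0, 255), cv2.FILLED)
        return lm_list
```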

5. Summary

The four projects above are built mainly by calling these computer vision and machine learning libraries. If you are interested, you can import the project on your own computer and practice with it.

🚀 To get the article's example code or to discuss it, contact me on WeChat (Vx): R310623949a

Feel free to like 👍, bookmark ⭐ and leave a comment 💬