
By Duanyang Liu, from the GitChat session: How to Build a Robot with a Raspberry Pi

The Raspberry Pi was designed for computer programming education. It is a small computer the size of a credit card. Its original operating systems were Linux-based, and with the release of Windows 10 IoT, the Raspberry Pi can now run Windows as well.

The Raspberry Pi is the size of a credit card, but it is powerful, with video, audio, and other capabilities. The Raspberry Pi 3 now has 1 GB of memory and a 1.2 GHz processor, and it reserves a 40-pin GPIO header through which the operating system can drive all kinds of sensors and actuators. We therefore use the Raspberry Pi as the robot's controller: the control software is written to the Raspberry Pi's TF card, and the software controls the drivers and sensors through the GPIO interface.

A Raspberry Pi controller is fundamentally different from other robot controllers, because the Raspberry Pi has a complete operating system (the others are bare control systems) and very good Python support. It is therefore possible to quickly develop software on the Raspberry Pi in Python to control the robot's sensors. The Raspberry Pi has another advantage: it can run artificial-intelligence algorithms such as SVM, which can easily classify data.

Using the Raspberry Pi as a robot's brain is the trend of the future. This Chat focuses on how to use the Raspberry Pi to develop an intelligent robot control system, including the following content.

1. Introduction to Raspbian

The Raspberry Pi operating system, Raspbian, was developed by Mike Thompson and Peter Green and is the officially recommended operating system. Its name cleverly combines "Raspberry" and "Debian" into one word. Raspbian is a Debian-based operating system developed specifically to run on the Raspberry Pi's ARM processors.

Debian is also the basis of Ubuntu, the most popular Linux desktop distribution, and has good community support. Raspbian comes with more than 35,000 software packages and a lightweight graphical interface called LXDE. Raspbian offers great functionality, is well organized, and supports the latest hardware and software.

With its widespread use in the geek community (12.5 million units sold so far), the Raspberry Pi has become the third-largest computing platform in the world, after Windows and the Mac. (Note: the Raspberry Pi is a tiny computer developed by the Raspberry Pi Foundation, which designs the circuit boards, develops the operating system, and maintains the community. Production of the Raspberry Pi is now carried out by two manufacturers, RS and element14.) The Foundation has accordingly released Raspbian with the PIXEL desktop, an operating system aimed at desktop computing.

2. Using Python and GPIO libraries to develop sensor drivers

As mentioned before, a robot built with the Raspberry Pi mainly uses the GPIO interface to control the robot's drivers and various sensors. Because the Raspberry Pi has an operating system, the control software can be developed in Python. Many libraries now provide support, for example https://github.com/RPi-Distro/python-gpiozero, which supports the Raspberry Pi's GPIO very well.

The Raspberry Pi's multiple programmable GPIO (General Purpose Input/Output) pins can be used to drive various peripherals (such as sensors and stepper motors). There are currently two popular GPIO development environments on the Raspberry Pi: Python GPIO and the C-based wiringPi. We recommend Python GPIO: not only is Python easy to get started with, but as an interpreted language it lets programs run without compilation after any code change, which makes debugging much easier.

Here are some GPIO libraries for your reference:

  • https://github.com/projectweekend/Pi-GPIO-Server

  • https://github.com/adafruit/Adafruit_Python_GPIO

  • https://github.com/alaudet/hcsr04sensor

  • https://github.com/vitiral/gpio
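As a warm-up, here is a minimal sketch of the control pattern these libraries wrap: configure a pin as an output and toggle it. It is an illustration only; on a real Pi you would `import RPi.GPIO as GPIO` (or use gpiozero), while here a small recording stub stands in for the hardware so the logic can run anywhere.

```python
# Minimal GPIO sketch. StubGPIO records calls instead of touching real
# pins; swap it for RPi.GPIO when running on an actual Raspberry Pi.
class StubGPIO:
    """Records GPIO calls instead of driving real hardware."""
    BCM, OUT, HIGH, LOW = "BCM", "OUT", 1, 0

    def __init__(self):
        self.log = []

    def setmode(self, mode):
        self.log.append(("setmode", mode))

    def setup(self, pin, mode):
        self.log.append(("setup", pin, mode))

    def output(self, pin, level):
        self.log.append(("output", pin, level))

    def cleanup(self):
        self.log.append(("cleanup",))


def blink_once(gpio, pin):
    """Drive the pin high then low once, then release it."""
    gpio.setmode(gpio.BCM)
    gpio.setup(pin, gpio.OUT)
    gpio.output(pin, gpio.HIGH)
    gpio.output(pin, gpio.LOW)
    gpio.cleanup()


gpio = StubGPIO()
blink_once(gpio, 18)  # pin 18 is an arbitrary free GPIO (BCM numbering)
```

The same five calls (set mode, set up, write high, write low, clean up) are what the real RPi.GPIO API expects, which is why the stub can be swapped out one-for-one on the Pi.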

3. Stepper motors and ultrasonic sensors

The robot is divided into four parts:

  • Control system: this part is the robot's brain.

  • Mechanical part: a robot exists to perform certain tasks, and the mechanical part carries them out.

  • Perception part: the robot constantly senses its surrounding environment and transmits the information or data it perceives.

  • Drive part: the robot links its mechanical parts and sensors through the drive components, which move the mechanical parts to perform the tasks.

As the name implies, a stepper motor moves step by step. There are many stepper motors on the market suitable for small service-level robots; you can find them through search engines or e-commerce websites.

Stepper motors and ultrasonic sensors are important peripherals that help the Raspberry Pi do its job. By writing programs in Python and loading them onto the Raspberry Pi's TF card with the appropriate tools, we can drive the stepper motors forward and in reverse for different lengths of time, moving whatever parts are connected to them.

For example: the wheels of a smart car moving forward or backward; the joints of a manipulator swinging and gripping items; or the synchronized rotation of a drone's propellers enabling takeoff.
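To make "step by step" concrete, here is a sketch of the classic 8-phase half-step sequence used by common 4-wire stepper motors (the 28BYJ-48 class, for example). The sequence and the reversal trick are standard; pin numbers and driver wiring are left out, so this shows only the pattern that would be written to the four driver pins.

```python
# 8-phase half-step coil pattern for a 4-wire stepper motor.
# Each tuple is the on/off state of the four driver pins for one
# half-step; playing the sequence forward or reversed sets direction.
HALF_STEP = [
    (1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0), (0, 1, 1, 0),
    (0, 0, 1, 0), (0, 0, 1, 1), (0, 0, 0, 1), (1, 0, 0, 1),
]


def step_pattern(n_steps, reverse=False):
    """Yield the coil states for n_steps half-steps.

    On real hardware each tuple would be written to the four GPIO
    pins driving the motor coils, with a short sleep between steps.
    """
    seq = list(reversed(HALF_STEP)) if reverse else HALF_STEP
    for i in range(n_steps):
        yield seq[i % len(seq)]


forward = list(step_pattern(8))            # one full cycle, forward
backward = list(step_pattern(8, reverse=True))  # one full cycle, reversed
```

Varying how long the sequence runs, and in which direction, is exactly how the control software turns the wheels forward or backward for different durations.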

An ultrasonic sensor exploits the properties of ultrasonic waves. Under the Raspberry Pi's control, an intelligent robot equipped with ultrasonic sensors can detect obstacles placed around it: the ultrasonic rangefinder sweeps back and forth and sends the collected signals back to the data processing program, which reports the measured distance. This fully realizes the car's obstacle-avoidance function.

Because we want to build a smart car that avoids obstacles automatically, we use ultrasonic sensors to detect objects. There are many ultrasonic sensors on the market; the main criteria for choosing one are accuracy and cost-effectiveness.

Let's work through an example of integrating a stepper motor and an ultrasonic sensor: we will use the Raspberry Pi to build a self-navigating tracking car. In the next chapter, we will explain how to rewrite part of the code with a CNN to realize autonomous driving.

Here’s what you’ll learn from reading this chapter:

  • How to use the GPIO interface to control the speed of a DC motor

  • How to program the Raspberry Pi to control a mobile platform

  • How to plan a route for the tracking car

To complete this project, you must prepare the following hardware:

  • A Raspberry Pi

  • A TF card of at least 8 GB, Class 10

  • A small car chassis as shown below

The method of connecting the motor drive board is as follows:

  • Mount the motor drive board on the Raspberry Pi.

  • Connect the power cord from the power supply to the power input terminal on the drive board, at a voltage of 6-7 V. Four AA batteries or a 2S LiPo lithium battery can also be used; connect the ground and power cables to the motor drive board.

  • Next, connect one of the drive signals to the drive port of motor 1 on the drive board. Connect motor 1 to the motor on the right and motor 2 to the motor on the left.

  • Finally, connect the second drive signal to the drive port of motor 2 on the drive board.

Add Python code to the Raspberry Pi to drive the motors and the ultrasonic sensor.

The second part of the code drives the two motors, controlling the tracking car's forward, backward, and turning motion, as shown in the figure below.

From the above code, we can see in essence how the code drives the corresponding motor to do its work.

As described earlier, rr.set_motors() can specify the speed and direction of each motor individually. Now that you have basic code to drive the tracking car, you need to modify it further so these functions can be called from other Python programs. Some standard movement functions are also required so the tracking car can turn by a specified angle or move a set distance. The code is shown below:

In the functions turn_right(angle) and turn_left(angle), the instruction time.sleep(angle / 20) keeps the tracking car turning for a time proportional to the turning angle, so that it reaches the specified angle. Depending on your hardware, you may need to change the denominator 20 to achieve accurate steering.
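Since the original listing is not reproduced here, the following is a hedged sketch of what a steering helper along these lines could look like. The angle/20 timing rule comes from the text; the set_motors signature and the left-forward/right-backward direction convention are assumptions for illustration, with the motor-board call injected so the timing logic stands on its own.

```python
import time

TURN_DIVISOR = 20.0  # calibration constant from the text; tune per car


def turn_duration(angle_degrees, divisor=TURN_DIVISOR):
    """Seconds to keep turning for a given angle (the angle / 20 rule)."""
    return angle_degrees / divisor


def turn_right(angle, set_motors):
    """Spin clockwise for a time proportional to the angle, then stop.

    set_motors(left_speed, left_dir, right_speed, right_dir) stands in
    for the motor-board call; the direction convention is an assumption.
    """
    set_motors(0.5, 0, 0.5, 1)       # left forward, right backward
    time.sleep(turn_duration(angle))
    set_motors(0.0, 0, 0.0, 0)       # stop


# turn_right(90) would counter-rotate the wheels for 90 / 20 = 4.5 s.
```

A turn_left would mirror the direction arguments, and accurate steering in practice comes from adjusting TURN_DIVISOR against measured turns.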

Likewise, the command time.sleep(value) makes the tracking car move for a certain time to cover the specified distance. Next, to let the tracking car navigate, a sensor needs to be connected to it so the car can sense its surroundings.

Finally, connect the sensor to the Raspberry Pi and test the Raspberry Pi's software on a breadboard. The circuit diagram is as follows:

Once the sensor is connected, a piece of code is needed to read the value it returns: first hold the sensor in place (for a static test), then have the program convert the reading into a distance. Here is the Python code for the program.
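The listing itself is not reproduced here, so as a hedged sketch: HC-SR04-style sensors report the round-trip time of a sound pulse, and distance follows as time times the speed of sound divided by two. On the Pi the timing would come from pulsing the TRIG pin and timing the ECHO pin; here the timing source is injected so the conversion can run off-hardware.

```python
def measure_distance_cm(read_echo_seconds):
    """Convert an echo pulse width into a distance in centimetres.

    read_echo_seconds() must return the echo pulse width in seconds.
    On the Pi it would pulse TRIG high for ~10 microseconds and then
    time how long ECHO stays high; here it is injected for testing.
    Distance = round-trip time * speed of sound / 2 (~343 m/s in air).
    """
    pulse = read_echo_seconds()
    return pulse * 34300 / 2


# A simulated 2 ms round trip corresponds to roughly 34.3 cm.
distance = measure_distance_cm(lambda: 0.002)
```

Dividing by two matters because the pulse travels to the obstacle and back; forgetting it doubles every reading.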

Running this code, we can see the ultrasonic sensor's ranging results.

From the code above, we now know essentially how the Raspberry Pi controls the motors, and how the ultrasonic sensor feeds its data back to the Raspberry Pi's control program. The Raspberry Pi then makes a decision and feeds it back to the motor driver, making the motors turn forward or in reverse so that the car moves forward or backward.

The introduction above covers controlling the car's forward and backward motion manually. So how does the car choose its route? The following focuses on dynamic route planning.

Automatic route planning means that obstacles cannot be fully anticipated until they are encountered: the device must decide for itself how to proceed while it is running. This is a complex problem, but there are a few basic concepts that must be understood and applied if you want the device to operate reliably in an environment. Let's first solve the routing problem when you know where you want the device to go, and then add some obstacles along the way.

Basic route planning

To learn dynamic route planning, that is, route planning when obstacles are not known in advance, we need the following frame of reference describing the position of the device and the target position it should move to.

This grid has three key locations, explained as follows:

  • The point in the lower-left corner is a fixed reference position, and the directions of the X and Y axes are also fixed. The positions of all other points can be determined relative to this reference point and the two axes.

  • The second key position is the robot's starting point. As it moves toward the target, the tracking car keeps track of its position in X and Y coordinates relative to the fixed reference point, and it uses a compass to track its heading.

  • The third key position is the target position, which is also expressed in X and Y coordinates relative to the fixed reference position. If you know the starting point and the angle between the starting point and the target position, you can plan the optimal (shortest-distance) route to the target. The distance and angle between the tracking car and the target can be calculated from the two positions with some simple mathematical formulas.

There are standard formulas for this route planning; you can consult a geometry textbook for the relevant mathematics. We use a schematic representation here.
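The "simple mathematical formulas" are ordinary trigonometry: given the car's coordinates and the target's, the straight-line distance comes from the Pythagorean theorem and the heading angle (relative to the fixed X axis) from the arctangent. A small sketch:

```python
import math


def bearing_and_distance(car_x, car_y, target_x, target_y):
    """Return (angle_degrees, distance) from the car to the target.

    The angle is measured from the fixed X axis, counter-clockwise,
    matching the fixed reference frame described above.
    """
    dx, dy = target_x - car_x, target_y - car_y
    distance = math.hypot(dx, dy)              # straight-line distance
    angle = math.degrees(math.atan2(dy, dx))   # heading toward target
    return angle, distance


# Classic 3-4-5 triangle: target at (3, 4) from the origin.
angle, dist = bearing_and_distance(0, 0, 3, 4)
```

math.atan2 is preferred over a plain arctangent because it handles all four quadrants (and a zero dx) without special cases.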

We can do this with a very simple piece of Python code that makes the tracking car move forward and turn. We call this file Robotlib.py; it contains all the servo initialization routines that make the tracking car move forward and turn. The path-planning program then calls into it, and we also use the line from Compass import * to import the compass module. The complete code is as follows:
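Since the listing itself is not included here, the following is a hedged sketch of what a Robotlib-style "go to target" routine could look like: compute the heading and distance for one straight leg, turn to that heading, then drive the distance. The turn_to and forward callables are hypothetical stand-ins for the servo routines and the compass-assisted turning the text mentions.

```python
import math


def plan_leg(pos, target):
    """Heading (degrees from the X axis) and distance for one straight leg."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)


def goto(pos, target, turn_to, forward):
    """Turn toward the target, then drive straight to it.

    turn_to(heading) and forward(distance) are injected stand-ins for
    the real compass-turn and timed-drive routines on the car.
    """
    heading, dist = plan_leg(pos, target)
    turn_to(heading)
    forward(dist)


# Dry run with recording stubs instead of real motion routines.
log = []
goto((0, 0), (3, 4),
     turn_to=lambda h: log.append(("turn", round(h, 1))),
     forward=lambda d: log.append(("fwd", round(d, 1))))
```

Because the motion routines are injected, the same planning logic can be dry-run on a desktop and then wired to the real servo and compass code on the car.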

Route planning with obstacles

Above we covered obstacle-free route planning, which is relatively simple. But when the tracking car needs to bypass obstacles, planning the route becomes more challenging, for instance when an obstacle lies on the previously planned route, as shown in the figure.

You can still use the same calculation to obtain the initial travel angle; now, however, the sonar sensor is needed to detect obstacles. When the sonar sensor detects an obstacle, the tracking car needs to stop and recalculate both a path around the obstacle and a path to the target position.

A very simple method is this: when the tracking car spots an obstacle, it turns 90 degrees to the right, moves forward some distance, and then recalculates the optimal path. After turning back toward the target position, if there are no further obstacles, the tracking car proceeds along the optimal route.
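That simple strategy can be sketched as a small decision loop: poll the sensor; if the way is clear, advance; if not, turn 90 degrees right, take a fixed detour leg, and re-plan. The sensor and motion calls are injected stubs here, and the step distances are arbitrary illustration values.

```python
def run(steps, obstacle_ahead, actions, detour=20):
    """Simple avoid-by-detour loop.

    obstacle_ahead() -> bool is the (injected) sonar check;
    actions collects the commands that would go to the motors.
    """
    for _ in range(steps):
        if obstacle_ahead():
            actions.append("turn_right_90")       # sidestep the obstacle
            actions.append(("forward", detour))   # fixed detour leg
            actions.append("replan")              # recompute the route
        else:
            actions.append(("forward", 10))       # clear: keep advancing


# Simulate three sensor polls with an obstacle on the second one.
readings = iter([False, True, False])
acts = []
run(3, lambda: next(readings), acts)
```

The fixed 90-degree/fixed-distance detour is crude (it can loop on concave obstacles), but it is the exact behaviour the paragraph above describes and is easy to reason about.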

To detect obstacles, call the sensor's library functions. You can use the compass to determine the heading more accurately; import the compass library with the from Compass import * command.

You can also use the time library and the time.sleep command to control how long different instructions take to execute. The track.py library functions need to be modified so that the entire command group does not have a fixed end time, as shown in the following figure:

The explanation above shows how to carry out path planning, but what if we want a smart car that can drive itself? Chapter 4 focuses on how to train an autonomous driving model using convolutional neural networks.

4. Using a CNN as the smart car's autonomous driving system

A Convolutional Neural Network (CNN) is a deep neural network suited to continuously valued input signals such as sound, images, and video. CNNs are widely used in autonomous driving for 3D sensing and object detection.

In March 2016, Mohammad Rastegari et al. first proposed XNOR-Net. Their paper seeks an optimal simplified network through binarization and introduces two efficient networks: Binary-Weight-Networks and XNOR-Networks. Binary-Weight-Networks approximately binarize all the weights in a CNN, saving 32x in storage. Moreover, with binarized weights the convolution involves only additions and subtractions, no multiplications, which speeds up computation by roughly a factor of two. This lets CNNs run on small, storage-limited devices, including portable devices, without sacrificing accuracy.
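To make the weight binarization concrete, here is a toy sketch of the Binary-Weight-Networks approximation: a real-valued weight vector W is replaced by alpha * sign(W), where alpha = mean(|W|) is the per-filter scaling factor. This illustrates only the approximation step, not a full network.

```python
def binarize(weights):
    """Approximate W by alpha * sign(W), the Binary-Weight-Networks step.

    alpha is the mean absolute value of the weights, so each weight is
    stored as a single sign bit plus one shared scaling factor, which
    is where the ~32x storage saving comes from.
    """
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return alpha, signs


alpha, signs = binarize([0.5, -1.0, 0.25, -0.25])
```

With weights reduced to +1/-1, the dot products inside a convolution become sums and differences of the inputs scaled once by alpha, which is the multiplication-free property described above.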

The XNOR-Networks variant approximately binarizes both the weights and the inputs of the CNN at the same time. If all the operands in a convolution are binary, the dot product of two binary vectors can be computed with XNOR and bit-counting operations.

These operations are natively supported by general-purpose computing devices such as CPUs, so binary neural networks can run on conventional CPUs, cheaper ARM chips, and even devices such as the Raspberry Pi. This provides the theoretical basis for porting CNNs to the Raspberry Pi.

A Raspberry Pi smart car that senses the terrain and changing surroundings between its start and end points, and then chooses the optimal route while avoiding danger, truly realizes autonomous driving for the smart car. Its prospects for development are broad.