Self-driving car

A driverless vehicle, also known as an autonomous vehicle or wheeled mobile robot, is an unmanned ground vehicle used for transport. The ideal driverless car would travel from point A to point B without human intervention, regardless of road and weather conditions along the way. The core of driverless cars is driverless technology: if the automobile industry is the crown of manufacturing, then driverless technology is the pearl in that crown.

Driverless driving is not a single technology but a combination of many: radar, lidar, cameras, GPS, computer vision, decision systems, operating systems, high-precision maps, real-time positioning, mechanical control, power and thermal management, and more. While driverless cars may sound like science fiction, the dream is steadily becoming reality.

Autonomous driving classification

The degree of automation of driverless vehicles is commonly divided into six levels, from Level 0 to Level 5 in order of increasing automation.

  • Level 0: No automation. The human driver is fully responsible for the entire driving process: starting the car, observing the environment while driving, and making every operating decision. Simply put, any car that requires full human control belongs to this class.
  • Level 1: Single-function automation. Part of the control is handed to the machine while driving, but the driver still controls the vehicle as a whole. Examples include adaptive cruise control, emergency brake assist, and lane keeping. The driver cannot relinquish overall control.
  • Level 2: Partial automation. The driver and the car share control. In certain preset environments the driver can leave the control loop entirely, but must remain on call and be able to take over the car at short notice.
  • Level 3: Conditional automation, meaning autonomous driving in limited situations. On the highway, for example, the machine is fully responsible for controlling the car and the driver can disengage from the control loop. The driver must still be available at all times, but is given ample warning before a takeover.
  • Level 4: High automation, with no driver intervention needed on certain restricted roads. The driver only sets the start and end points, and the car does the rest.
  • Level 5: Full automation: the car drives in any environment without driver intervention. Again, the driver only sets the start and end points, and the car does the rest.

Radar

Radar (Radio Detection and Ranging), a common component in cars, works by emitting radio waves and measuring how they bounce back off distant objects. Radar can obtain the number, size, speed, and direction of objects, and in driverless driving it is often used for adaptive cruise control and automatic emergency braking.

The radar sends radio waves toward the target area, and when an object bounces them back, the distance between the two can be calculated as D = c·t/2, where t is the time interval from the emission of the radio waves to their return and c is the speed of light (3×10^8 meters per second).
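This time-of-flight formula can be sketched in a few lines of Python (the function name and the sample echo time are illustrative):

```python
# Time-of-flight ranging: distance = (wave speed * round-trip time) / 2
SPEED_OF_LIGHT = 3e8  # metres per second (approximate)

def radar_distance(round_trip_seconds: float) -> float:
    """Distance to the target given the echo's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# An echo that returns after 1 microsecond means the target is 150 m away.
print(radar_distance(1e-6))  # 150.0
```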

Radar can identify objects hundreds of yards away and detect their size and speed of movement, but it cannot capture the fine details of an object.

Lidar

LiDAR (Light Detection and Ranging) is a radar-like system that fires laser beams to detect objects. It works by sending a large number of laser pulses toward the target; a receiver then processes the reflected signals to recover information about the target, such as its distance, azimuth, height, speed, attitude, and even shape. The lidar of a driverless car is usually mounted on the roof and rotates continuously at high speed, scanning the surrounding environment to obtain three-dimensional information about nearby objects.

The principle of lidar measurement is relatively simple. The lidar on a car fires a laser at a target object and calculates the distance from the speed of light and the round-trip time; adding the angle of each beam yields further measurements.

For real three-dimensional objects, a 3D point cloud can be formed by scanning the entire object with lidar: the lidar fires many beams at the target, and the receiver processes the reflected beams to build the point cloud.
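Turning a single lidar return into a point of the cloud is a polar-to-Cartesian conversion. A minimal sketch (the function name and the sample returns are illustrative):

```python
import math

def beam_to_point(distance, azimuth_deg, elevation_deg):
    """Convert one lidar return (range + beam angles) into an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A full scan is just this conversion applied to every beam return.
returns = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (5.0, 0.0, 90.0)]
cloud = [beam_to_point(*r) for r in returns]
```

A real sensor reports thousands of such returns per rotation; stacking the converted points gives the 3D point cloud described above.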

Lidar offers higher resolution and captures more information than radio radar, but it is expensive, requires constant rotation, and performs poorly in foggy or dusty weather.

Camera

To capture more detail, such as road signs, driverless cars also need cameras. The camera gives the most accurate view of the car's surroundings and provides the highest-resolution images. However, cameras are strongly affected by lighting and weather conditions, for example at night.

For the captured images, machine learning is needed to recognize the objects in them. The dominant approach to image recognition today is deep learning, whose core is the convolutional neural network. The relevant principles were explained in the earlier chapters on how neural networks work and the principles of deep learning; the deep convolutional neural network is thus the core of processing the images collected by the camera.

Deep learning is used to identify objects captured by the camera, such as pedestrians, other vehicles, and traffic signs. For the computer-vision tasks of object detection and classification, classic algorithms include R-CNN, Faster R-CNN, SSD, YOLO, and so on.
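The building block shared by all of these networks is the convolution. A minimal NumPy sketch (the kernel and toy image are illustrative) shows how a small filter slides over an image and responds to a pattern, here a vertical edge:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly where intensity changes left-to-right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
response = conv2d(image, edge_kernel)  # peaks in the middle column
```

A deep network learns thousands of such kernels from data instead of hand-designing them, which is what makes it effective at detecting pedestrians and signs.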

Ultrasonic radar

In addition to the sensors above, self-driving cars are generally equipped with ultrasonic radar, also known as reversing radar, since its main role is to assist with reversing. It works by sending ultrasonic waves from a transmitter, receiving the reflected waves with a receiver, and calculating the distance from the time difference: D = 343·t/2, where 343 m/s is the speed of sound. The detection range of ultrasonic radar is generally within a few meters, with high accuracy, making it well suited to parking.
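The same time-of-flight calculation, with the speed of sound instead of the speed of light, can be paired with a simple proximity check of the kind a parking assistant uses (the threshold value is illustrative):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def ultrasonic_distance(echo_seconds: float) -> float:
    """Distance to an obstacle from the ultrasonic echo's round-trip time."""
    return SPEED_OF_SOUND * echo_seconds / 2

def parking_warning(echo_seconds: float, threshold_m: float = 0.5) -> bool:
    """True when the obstacle is closer than the warning threshold."""
    return ultrasonic_distance(echo_seconds) < threshold_m

# An echo after 10 ms puts the obstacle about 1.7 m behind the car: no warning yet.
d = ultrasonic_distance(0.010)
```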

GPS

GPS is the most common positioning technology used in self-driving cars. GPS updates at only about 10 Hz, so it is not truly real-time. In addition, civilian GPS can be off by several meters, so relying solely on GPS for positioning and navigation could easily lead to traffic accidents.

GPS positioning uses trilateration. The distance between a satellite and the receiver is measured from the signal's transmission time, and the receiver's position can then be computed from the positions of multiple satellites. GPS generally uses at least 4 satellites to determine the receiver's 3D position.

For example, if your position is 100 km from satellite A, your possible position lies on a circle centered on satellite A.

Measuring the distance to a second satellite, B, as 75 km narrows this to two possible positions: the intersections of the two circles.

Finally, measuring the distance to satellite C as 200 km leaves a single intersection of the three circles, which is your position. In other words, three satellites can determine a point on a plane, and once a coordinate system is set up, you can obtain exact x and y coordinates.
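The 2-D case above can be solved directly: subtracting the circle equations pairwise eliminates the squared terms and leaves a small linear system. A sketch with made-up satellite positions and distances:

```python
import numpy as np

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D position from distances r1..r3 to three known points p1..p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle 1's equation from circles 2 and 3 gives two linear equations.
    A = np.array([[2 * (x1 - x2), 2 * (y1 - y2)],
                  [2 * (x1 - x3), 2 * (y1 - y3)]])
    b = np.array([r2**2 - r1**2 + x1**2 - x2**2 + y1**2 - y2**2,
                  r3**2 - r1**2 + x1**2 - x3**2 + y1**2 - y3**2])
    return np.linalg.solve(A, b)

# A receiver at (3, 4): its distances to (0,0), (10,0), (0,10) are 5, sqrt(65), sqrt(45).
pos = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
```

Real GPS works in three dimensions and must also solve for the receiver's clock error, which is why at least four satellites are needed.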

High precision map

A high-precision map is an important support for driverless cars. It contains a great deal of driving-assistance information: besides accurate positioning, it enables intelligent avoidance, intelligent speed regulation, and more. High-precision maps provide static perception and a global view, such as road, traffic, and infrastructure information, for unmanned vehicles.

The electronic maps we use daily are traditional electronic maps, intended mainly for human drivers, for location queries and navigation. High-precision electronic maps provide far more information and are intended mainly for driverless cars. The sensors on the vehicle body have a very limited range, but a high-precision electronic map greatly extends the vehicle's perception and supplies more accurate information.

High-precision maps contain a great deal of auxiliary information: road data such as lane position, width, slope, type, and curvature, and environmental data such as traffic signs, signal lights, obstacles, road height limits, guardrails, trees, fences, and landmarks. Abstracting the high-precision map model further yields even richer relations between unmanned vehicles and lanes, traffic, and infrastructure.

Compared with GPS, high-precision electronic maps can achieve more than 10 times the accuracy: GPS is usually accurate to within a few meters, while a high-precision map, working together with the sensors, can achieve centimeter-level accuracy.

Inertial measurement unit

An Inertial Measurement Unit (IMU) is a sensor that measures acceleration and angular velocity. Unmanned vehicles generally use mid- to low-grade inertial sensors with an update frequency of about 1 kHz, costing a few thousand yuan. An IMU can help a car localize itself, but because its errors accumulate over time, it can only be relied on for short periods.

To understand how an inertial measurement unit measures acceleration, picture the accelerometer as a ball floating inside a box in weightless space. When an acceleration of 1 g is applied to the left, the ball presses against the box wall in the X direction with a force of 1 g, and we can therefore measure an acceleration of −1 g along the X axis.

The inertial measurement unit also measures angular velocity, using a gyroscope that can rotate with three degrees of freedom around a fulcrum. A classic mechanical gyroscope has a vertical axis running through a metal disk called the rotor; the vertical axis is the axis of rotation. To increase inertia, the rotor is made of heavy metal. Nested outside the vertical axis are three rings of different sizes, giving three degrees of freedom in direction. Angular velocity is measured mainly through the conservation of angular momentum.
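Why an IMU only helps over short periods becomes clear from a dead-reckoning sketch: position is obtained by integrating acceleration twice, so even a tiny per-sample error compounds. A minimal 1-D illustration (the sample data and rates are illustrative):

```python
def dead_reckon(accels, dt):
    """Integrate 1-D accelerometer samples into velocity and position (Euler steps)."""
    v, x = 0.0, 0.0
    for a in accels:
        v += a * dt  # first integration: acceleration -> velocity
        x += v * dt  # second integration: velocity -> position
    return v, x

# Constant 2 m/s^2 for 1 s (100 samples at 10 ms): true answer is v = 2 m/s, x = 1 m.
v, x = dead_reckon([2.0] * 100, 0.01)  # x comes out slightly high already
```

Even with perfect measurements the discrete integration overshoots the true position, and a constant sensor bias would grow quadratically in position, which is why IMU estimates must be regularly corrected by GPS or map matching.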

V2X interacts with the environment

A V2X communication sensor implements the communication protocols between unmanned vehicles and their surroundings, including Vehicle to Vehicle (V2V), Vehicle to Infrastructure (V2I), and Vehicle to Pedestrian (V2P) communication.

V2V communication refers to the exchange of information between driverless vehicles, such as traffic conditions. V2I communication refers to the exchange of information between driverless cars and infrastructure, such as a smart parking lot. V2P communication refers to the exchange of information between a driverless car and a pedestrian, for example via a smartphone app.

Path planning

Path planning mainly solves the problem of finding the fastest and safest path from a starting point to an end point. Many mature algorithms exist, such as Dijkstra's algorithm, the A* algorithm, and the RRT algorithm. Path planning for driverless vehicles must also consider factors such as road accidents and traffic congestion.
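Dijkstra's algorithm, the simplest of the three, can be sketched in a few lines over a toy road network (the graph and edge weights are illustrative):

```python
import heapq

def dijkstra(graph, start, goal):
    """Cheapest path cost on a weighted graph {node: [(neighbor, cost), ...]}."""
    dist = {start: 0.0}
    pq = [(0.0, start)]  # priority queue of (cost so far, node)
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")  # goal unreachable

# Toy road network: going A -> B -> D is cheaper than the direct A -> D road.
roads = {"A": [("B", 1), ("D", 10)], "B": [("D", 2)], "D": []}
print(dijkstra(roads, "A", "D"))  # 3.0
```

A* follows the same structure but adds a heuristic (e.g. straight-line distance) to the queue priority, and edge weights can encode congestion or accident risk rather than pure distance.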

Conclusion

Perception is at the heart of driverless cars. They have four kinds of eyes with different fields of vision: radar, lidar, ultrasonic radar, and cameras. For positioning, driverless cars can achieve very precise localization by combining GPS and inertial measurement units with high-precision electronic maps. In addition, V2X (including V2V, V2I, and V2P) has been proposed to let unmanned vehicles communicate and interact with their environment.
