Small knowledge, big challenge! This article is participating in the “Essential Tips for Programmers” creation activity.
The hope is to automate traffic. Many accidents are caused by drivers' negligence or poor driving skills, and every year such man-made accidents bring huge economic losses.
Having computers take over driving decisions from human brains is now a clear trend. Computers have many advantages, such as low latency and 360-degree perception. The human brain has limited computing capacity, so we constantly shift our attention to make sense of our surroundings; a computer stays focused the whole time and will never open a chat app to check messages. Driverless cars can therefore make traffic automation more convenient, safer, and more economical.
Tesla's approach to autonomous driving is unique in that it is incremental: customers are already using its self-driving packages, and millions of cars are already enjoying the added safety and convenience they provide, while the team keeps working toward full autonomy.
Next up is a driverless-driving video from the FSD Beta, which has been released to 2,000 customers. It shows a customer's car driving around San Francisco, with the display rendering the road edges, the drivable area, the lane lines, and objects on the road. It was a zero-intervention drive, and a long one: control was handed over to FSD for the entire trip.
Then there is a video of a Waymo vehicle driving, actually a fairly old recording made many years ago. Although the two scenes look similar, the perception systems behind them are very different.
Depending on whether perception is based on lidar or cameras, self-driving technology can be divided into two approaches. The first relies on lidar plus high-definition (HD) maps: an expensive lidar unit on top of the car continuously rotates and scans the environment, producing a 360-degree point cloud. The environment is scanned in advance to build an HD map, and camera detections of traffic lights and lane lines then have to be fused into that map, which can cause problems.
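To make that fusion step concrete, here is a minimal sketch (my own illustration, not any vendor's code) of the coordinate transform it requires: a detection expressed in the car's local frame has to be placed into the HD map's global frame using the car's localized pose (x, y, heading).

```python
import math

def to_map_frame(detection_xy, car_pose):
    """Transform a detection from the car frame into the map frame.

    detection_xy: (forward, left) offset of a detected object, e.g. a
        traffic light, in the car's local frame, in meters.
    car_pose: (x, y, heading_rad), the car's position and heading in
        the HD map, obtained by localizing against the map.
    """
    dx, dy = detection_xy
    x, y, th = car_pose
    # Standard 2D rotation by the heading, then translation to the
    # car's map position.
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th))

# Car localized at map position (100, 50) facing +y (heading 90 deg);
# a light detected 20 m straight ahead lands at roughly (100, 70).
print(to_map_frame((20.0, 0.0), (100.0, 50.0, math.pi / 2)))
```

The fragility the article alludes to lives in `car_pose`: if localization against the HD map drifts even slightly, every fused camera detection lands in the wrong place.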
Tesla's approach is instead almost entirely vision-based: from the video streams of the eight cameras around the car, the system works out everything in the surroundings, such as which lane the car is currently driving in, where the traffic lights are, and how the traffic signs relate to the car.
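As a toy illustration of this multi-camera idea (a minimal sketch of my own, not Tesla's actual network; the camera names and the trivial feature extractor are invented for the example), each camera's image is reduced to a feature vector and the per-camera features are fused into a single representation of the full 360-degree surroundings, which a downstream head could then map to lanes, lights, and signs:

```python
import numpy as np

# Hypothetical camera layout for illustration only.
CAMERA_NAMES = [
    "front_main", "front_wide", "front_narrow",
    "left_repeater", "right_repeater",
    "left_pillar", "right_pillar", "rear",
]

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a per-camera backbone network: here, just the
    average brightness over a coarse 4x4 grid of the image."""
    h, w = frame.shape[:2]
    grid = frame[: h // 4 * 4, : w // 4 * 4]
    return grid.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3)).ravel()

def fuse(frames: dict) -> np.ndarray:
    """Concatenate per-camera features into one 360-degree vector."""
    return np.concatenate([extract_features(frames[n]) for n in CAMERA_NAMES])

# Eight fake grayscale frames standing in for the camera streams.
frames = {name: np.random.rand(64, 96) for name in CAMERA_NAMES}
fused = fuse(frames)
print(fused.shape)  # (128,): one vector covering all 8 views
```

The point of the sketch is only the data flow: no single camera sees everything, but the fused representation does, which is what lets a vision-only system reason about the whole scene at once.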
Achieving a full grasp of the surroundings from vision alone is genuinely difficult. Compared with lidar plus HD maps, however, a neural-network vision system generalizes better: drawing HD maps requires a great deal of groundwork, and lidar-based HD-map data is harder and more expensive to collect, create, and maintain.
As mentioned, Tesla uses no high-resolution maps or lidar, just cameras. In fact, the vision-based driverless system Tesla has built over the past few years has worked so well that the team is confident it has left the other sensors behind, and that the cameras are doing most of the heavy lifting in understanding what the car sees.