This article is an original piece by AI Front.

Baidu Apollo 2.0 Data Open Platform: a new “cloud + vehicle” R&D iteration model


Author | Yang Fan from Baidu Autonomous Driving Division


Compiled by | Vincent


Editor | Emily

Dear developers, my name is Yang Fan from the Baidu Autonomous Driving Division. I’m glad to have the opportunity to share some of Baidu’s and Apollo’s progress in autonomous vehicles with you.

My talk is divided into the following parts:

  • First, Apollo and its open capabilities;
  • Next, open resources and the new R&D iteration model;
  • Then, the data open platform and the training platform;
  • Finally, the phased achievements built on open capabilities and open resources.


Overview of Apollo’s open capabilities

Let’s start with the first part: a brief introduction to Apollo’s capabilities.

Baidu’s open AI ecosystem strategy

Let me first introduce the background of Baidu’s work on autonomous driving. As Chief Operating Officer Lu Qi reiterated at CES, Baidu is already an AI company, and I want to share how Baidu is accelerating AI innovation at China speed, and how, through open platforms, we can change the world for the better together.

We can see the evolution of the technology tide: from the command line, through client-server, the Internet, and the mobile Internet, all the way to the age of AI.

In Baidu’s open AI ecosystem strategy, the system is divided into the cloud and the device side. The cloud is supported by Baidu Intelligent Cloud and Baidu Brain, while the device-side outputs are the Apollo autonomous driving ecosystem and the DuerOS conversational ecosystem.

More importantly, and more excitingly, AI is bringing the mobile Internet into a whole new era, which we call the new mobile era.

In this new mobile era, phones have strong perception and stronger AI capabilities: every phone can hear, see, speak, learn, and understand its user. Baidu’s core products, such as Mobile Baidu and iQiyi, will make full use of these new capabilities to promote new technologies and products that lead the user experience of the new mobile era. In particular, Mobile Baidu will organically integrate search and personalization to create a new generation of more user-aware experiences. The Apollo ecosystem is one of the most important, and the first, of Baidu’s AI ecosystems to land.

The Apollo declaration

Since we announced the open platform on April 19, we have received feedback from many partners, and its core can be summed up as the Apollo declaration. The autonomous driving industry is moving rapidly toward the future, and its biggest pain point is the high technical barrier: every company needs years of accumulated technology and manpower before it can begin substantive R&D. Baidu started early, with nearly four years of technical accumulation and early investment. By opening these capabilities to every partner, we help them go from 0 to 1 and enter autonomous driving R&D quickly, raising the speed of innovation across the industry and avoiding reinvented wheels, so that everyone can focus on more effective innovation. Sharing resources: partners can use Apollo’s technical resources ready-made, and every partner can contribute resources in turn; the more they contribute, the more they get back. Apollo benefits, and our partners benefit even more. Accelerating innovation: data multiplies innovation, and the pooled data resources, in kilometers driven and in the number of scenarios covered, will far exceed those of any closed system. Continued win-win: Baidu’s business model is built on Baidu’s core capabilities and is complementary to our partners’ capabilities. Apollo will be a milestone in the automotive industry and will have a far-reaching impact.

Apollo Opening Roadmap

Apollo announced its opening plan in April 2017, released 1.0 in July, 1.5 in September, and 2.0 at CES in January 2018. The pace of iteration and release has been very fast.

As the roadmap shows, Apollo’s openness consists of two parts: open capabilities and open resources.

Apollo 1.0, in July 2017, opened autonomous driving capabilities and resources for tracing in closed venues; 1.5, in September, opened them for fixed-lane driving; and 2.0, in January 2018, opened them for simple urban road conditions. These will be followed, across 2018, 2019, and 2020, by the gradual opening of autonomous driving capabilities and resources for specific regional highways and urban roads, and finally by Alpha versions covering highways, urban roads, and the entire road network.

Apollo Technology Framework

An open framework is a common demand of academia, the technology sector, and industry. The Apollo technology framework consists of four layers:

  • Reference Vehicle Platform
  • Reference Hardware Platform
  • Open Software Platform
  • Cloud Service Platform

A brief introduction to each layer and its modules:

  • To be specific, first we need a vehicle that can be controlled by our signals, that is, a drive-by-wire car. We promote this first layer together with many automakers and the Tier 1 suppliers that provide such solutions.
  • Then comes the reference hardware. To make the car move, we need the support of the computing unit, GPS/IMU, camera, LiDAR, millimeter-wave radar, human-machine interaction devices, the Black Box, and other hardware.
  • The open software layer is divided into three sub-layers: a real-time operating system, a framework layer that hosts all modules, and the modules themselves. The high-precision map and localization modules tell the vehicle where it is, the perception module describes the surrounding environment, and the decision-and-planning module produces the overall and detailed plans. The control module turns the trajectories output by planning into specific control commands.
  • Baidu’s autonomous driving draws very strong capability from the cloud. High-precision maps are stored in the cloud, as are the simulation services used to simulate driving, which accelerate R&D. We have accumulated a great deal of data, and of understanding about data, and have opened the data platform in the cloud to reflect the data-processing main line of our cloud. We also have a cloud security service that keeps software updates safe.

The new Apollo 2.0 modules are Security, Camera, Radar, and Black Box, meaning that all four layers of the Apollo platform (the cloud service platform, the open software platform, the reference hardware platform, and the reference vehicle platform) have now been activated. Apollo 2.0 is the first release to open security and OTA upgrades, allowing only correct and protected data to enter the vehicle, and it further strengthens self-localization, perception, planning and decision-making, and the cloud platform. The Black Box module includes a software and hardware system that provides secure storage and high-capacity data transmission; it helps us detect abnormal situations in time and improves the security and reliability of the entire platform.

In terms of hardware, two forward-facing cameras (one long-focus, one short-focus) have been added to recognize traffic lights, and a new millimeter-wave radar has been installed on the front bumper. Clearly, with the Camera and Radar modules opened in Apollo 2.0, the whole platform places more emphasis on sensor fusion, which further increases its adaptability to simple urban road conditions, day and night.

As Li Zhenyu, general manager of Baidu’s Intelligent Driving Business Group, has said, low cost and low power consumption must be goals of an autonomous driving platform, but in the move from laboratory to mass production, safety matters most, and Baidu must solve the safety problem first.

Apollo’s open resources and the new R&D iteration model

The “cloud + vehicle” R&D iteration model for intelligent vehicles

Smart self-driving cars need a smart onboard brain. We have explored the way here through our own R&D: the new “cloud + vehicle” R&D iteration model is our solution for accelerating the development of autonomous vehicles. We accumulate huge amounts of data on the vehicles.

The accumulated data are used in cloud server clusters to efficiently generate artificial intelligence models, that is, the vehicle brain. The brain is then updated onto the vehicle, giving it the ability to drive itself.

During our own R&D, we found that developing an L4 autonomous production vehicle requires very complex algorithms and strategies to handle very complex scenarios.

According to a RAND Corporation report, reaching mass production requires accumulating 10 billion kilometers of autonomous driving experience, the equivalent of 100 cars running around the clock for 100 years. The traditional approach of developing and debugging directly on the vehicle is simply not efficient enough.
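A quick back-of-the-envelope check of those numbers (our arithmetic, not the report’s): 100 cars running around the clock for 100 years amounts to 100 × 100 × 8,760 ≈ 8.76 × 10^7 vehicle-hours, so covering 10 billion kilometers means each car averaging roughly 114 km/h nonstop for a full century.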

Therefore, the solution we propose is to improve R&D efficiency through the new “cloud + vehicle” iteration model. We are very pleased to share this model with the Apollo ecosystem.

Next, the big data side of autonomous driving.

Classification of autonomous driving data

Autonomous driving data can be divided into four broad categories:

The data generated by autonomous vehicles are first and foremost raw data: mainly sensor data, vehicle data, driving behavior data, and so on. These data are characterized by large volume and diverse types, and they are mostly unstructured or semi-structured, which poses considerable challenges for storage, transmission, and processing.

To use the data for deep learning, we also need large amounts of annotated data: mainly traffic light data sets, obstacle data sets (2D and 3D), semantic segmentation data sets, free-space data sets, behavior prediction data sets, and so on.

To characterize autonomous driving behavior, we also need to abstract the data into logical data: mainly idealized (“perfect”) perception data, abstracted environment data, vehicle dynamics models, and so on.

Finally, we construct simulation data for the simulator: mainly parameter-fuzzed data, 3D reconstruction data, interactive behavior data, and so on.

Autonomous driving data platform architecture

The data platform is the core platform supporting our “cloud + vehicle” R&D iteration model for intelligent vehicles.

It consists of data collection and transmission, the autonomous driving data warehouse, and the autonomous driving computing platform.

The first part is data collection and transmission. The Data-Recorder produces complete, accurately recorded data packets according to the Apollo data specifications; these support both problem reproduction and data accumulation. Through the transport interface, data can be transferred efficiently to operation points and clusters.

Next comes the autonomous driving data warehouse, which organizes all the massive data systematically for fast search and flexible use, providing data support for the data pipeline and various business applications.

The autonomous driving computing platform provides supercomputing power on heterogeneous cloud hardware and, through fine-grained container scheduling, offers various computing models to support business applications such as the training platform, the simulation platform, and the vehicle calibration platform.

The data open platform and training platform in practice

Data Platform Overview

Apollo open data sets

The Apollo open data sets are divided into three major parts:

  • Simulation data sets, including virtual autonomous driving scenes and real road scenes;
  • Demonstration data sets, including vehicle system demonstration data, calibration demonstration data, end-to-end demonstration data, and self-localization module demonstration data;
  • Annotated data sets, comprising six sets: laser point cloud obstacle detection and classification, traffic light detection, Road Hackers, image-based obstacle detection and classification, obstacle trajectory prediction, and scene analysis.

In addition to open data, the platform also provides open cloud services, including the data annotation platform, the training and learning platform, the simulation platform, and the calibration platform, giving Apollo developers a complete data and computing solution and accelerating iterative innovation.

Apollo training platform


Through the Apollo training platform, we also provide matching computing power for each data set.

The training platform features:

  • A Docker + GPU cluster that provides the same hardware computing capability as the vehicle side.
  • Multiple integrated frameworks offering complete deep learning solutions.
  • Interactive, visualized result analysis that facilitates algorithm debugging and optimization.
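As a quick illustration (a generic Docker command, not Apollo-specific; CUDA_IMAGE is a placeholder for any CUDA-enabled base image, and the NVIDIA container runtime is assumed to be installed on the host), you can confirm that a container sees the cluster’s GPUs like this:

    # Run nvidia-smi inside a CUDA base image to list the visible GPUs.
    docker run --rm --gpus all CUDA_IMAGE nvidia-smi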

Architecture of the cloud open platform

One of the biggest pain points in developing autonomous driving algorithms is the need for trial and error over huge data sets. By moving the deep learning R&D loop (development, training, verification, debugging) into the cloud, we can make full use of the cloud’s abundant computing resources and keep the data flow entirely within the cloud servers, greatly improving the efficiency of algorithm development.

To be specific: first, developers develop their algorithm in Docker on a local development machine and set up the environment it depends on.

They then push the prepared environment image to a private Docker repository in the cloud.

Next, they select data sets on the platform and launch training tasks. The Apollo training platform’s cloud scheduler dispatches the tasks to computing clusters for execution. Inside the cluster, the developer’s program uses the data access interface to read data sets from the autonomous driving data warehouse.

Finally, the business management framework returns the execution process, the evaluation results, and the model to the visualization platform, completing visual debugging.

Next, I’ll show you the data platform in action. Go to apollo.auto, Apollo’s official website, and you will see the home page.

Apollo home page

The source code for Apollo’s open vehicle-side capabilities is available on GitHub.

To access the Apollo data open platform, select “Data Platform” from the “Developer” menu in the top menu bar. A PC browser is recommended for the best experience.

Logging in

In the upper right corner of the Apollo data open platform page there is a login menu. Click it to log in with your Baidu account, which will simplify later steps.


Data open platform

The home page of the data open platform consists of several sections: simulation scenario data, annotated data, demonstration data, related products and services, and “Upload my data”.

Developers can use Apollo’s already open data directly, or upload data recorded with Apollo’s Data-Recorder to the cloud.

Selecting a specific data set takes you to that data set’s application page.

Developers can calibrate vehicle parameters on the calibration platform, upload data, apply for data processing, use the data annotation service, train models on the training platform, merge the results of the preceding steps into the Apollo code on GitHub, and submit the compiled result or source code to the simulation platform for evaluation. In this way, the R&D iteration of their own vehicle system is completed through the “cloud + vehicle” model.

Simulation scenario data in practice

First, you can see the simulation scenario data set.

Simulation scenario data set

The simulation scenario data include both hand-edited and real-collected scenes, covering a variety of road types, obstacle types, and road environments. Meanwhile, the cloud simulation platform is open, supporting concurrent online verification of algorithm modules across many scenes and accelerating algorithm iteration.

Click the “Use now” button under either of the two simulation data sets to enter the simulation scenario data set details page.

Simulation scene data

On the simulation scenario data details page, you can filter by condition to view the details of each scene. Click the simulation platform button in the upper right corner to enter the simulation platform.

The simulation scene

On the open simulation platform, you can run simulation scenarios with the default Apollo modules, or submit your own autonomous driving system to run them. The specifics of using Apollo simulation will be shared separately; I won’t elaborate here due to time constraints.

Annotated data in practice

Next comes the annotated data.

Annotated data

Annotated data are generated by manual annotation to meet the training requirements of deep learning. At present, we have opened a variety of annotated data and provide corresponding computing capacity in the cloud, so developers can train algorithms in the cloud and improve iteration efficiency.

Apollo has opened six annotated data sets, together with popular community algorithms, for developers to debug in the cloud:

  1. Laser point cloud obstacle detection and classification: we provide a rule-based demo (traditional machine learning);
  2. Traffic light detection: we provide a demo based on the SSD algorithm (Paddle, Caffe);
  3. Road Hackers: we offer a demo based on CNN+LSTM (Keras, TensorFlow);
  4. Image-based obstacle detection and classification: we provide a demo based on the SSD algorithm (Caffe);
  5. Obstacle trajectory prediction: we provide a demo based on an MLP algorithm (TensorFlow);
  6. Scene analysis.

Next, let’s look at “laser point cloud obstacle detection and classification” and enter its data set details page. You can browse the other annotated data sets yourself.

Laser point cloud obstacle detection and classification

On the data set details page, you can see an introduction to the data set.

There is a row of action buttons in the upper right corner.

Click “View User Manual” to see a more detailed description of the data set and its usage instructions. Here is the PDF link that opens:

data.apollo.auto/static/pdf/…

Click “Sample data” to download a small amount of sample data and get familiar with the data format.

Click “Apply to use” to apply for using the full data set in the cloud.

Application for online data use

This is the dialog box that pops up after clicking “Apply to use”.

At present, we open the cloud computing capability to research institutions and enterprises; that is, you can use large amounts of annotated data online for model training and access the open data through an API. Apollo business staff will contact you after you apply.

The training platform in practice

After the application is approved, the “Apply to use” button changes to “Use online”, which takes you to the new-task page of the Apollo training platform (the platform has a strict security verification mechanism). On first use, you need to set your cloud attributes and contact information and sign the usage agreement; if you have not logged in for a long time, you need to log in again.

A new task

Before we create a new task, let’s take a quick look at the rest of the training platform.

The menu bar on the left contains:

  • Platform overview
  • Task management, where tasks are created
  • Getting started guide
  • User help

The functions are simple, so I won’t expand on every menu; next, I’ll walk through these four in turn.

Training Platform overview

The platform overview page introduces the training platform. On it, we open large amounts of data and provide corresponding computing resources, so that developers can train algorithms online on the deep learning platform. We are committed to enabling every partner with strong software and algorithm capabilities and to promoting the adoption of autonomous driving technology.

The task list

The task list page shows the tasks owned by the individual developer.

Apollo Training Platform instructions -1

The getting started guide has instructions for Docker creation and algorithm development.

User help has answers to some frequently asked questions.

If developers have further questions, they can contact us via the link in the lower left corner, or use the ticket system in the upper right corner.

Next, let’s take traffic light detection as an example and walk through deep learning algorithm development following the getting started guide.

At the top is the Docker username and password assigned to each developer with access.

Next is an overview of the process:

  • Step 1: Set up a local development environment
  • Step 2: Obtain the Apollo Demo image
  • Step 3: Data usage methods and interface specifications
  • Step 4: Submit the image, submit the task, view the task, and view the results

Refer to the open source code in the demo image for the relevant task to learn how to use the data, write your own code, and write it into the image.
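One hedged way to do that last step (all image and path names below are illustrative placeholders, not Apollo conventions) is to copy your code into a container started from the demo image and commit the result as a new image:

    # Start a throwaway container from the demo image.
    docker run -d --name dev demo-image sleep infinity
    # Copy your modified code into the container.
    docker cp ./my_code dev:/work/my_code
    # Save the container's state as a new image for submission.
    docker commit dev my-apollo-dev:v1
    docker rm -f dev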

Apollo Training Platform instructions -2

Step 1: to set up the local environment, download the OVA image package for VirtualBox and import the OVA to create a preconfigured VM.
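For instance (a sketch; the OVA file name and VM name are placeholders), the import can also be done with VirtualBox’s command-line tool:

    # Import the downloaded OVA as a preconfigured development VM.
    VBoxManage import apollo_dev.ova --vsys 0 --vmname apollo-dev
    # Start the imported VM.
    VBoxManage startvm apollo-dev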

Apollo Training Platform instructions -3

Step 2: the list of official Apollo demo images and instructions for obtaining them.

We chose this image as the basis for development:

Apollo Training Platform instructions -4

Step 3: platform data usage methods and interface specifications.

Refer to the open source code in the demo image for the relevant task to learn how to use the data and write your own code:

  1. The entry point of the training program is /admin/run_agent.sh. Users can modify this file to control the behavior of the training program; the platform uses run_agent.sh to judge the status of the training task (a minimal sketch follows this list).
  2. The task runtime environment provides test data sets in the /dataset_test/ directory; they are downloaded and deployed automatically according to the task type selected when the task is submitted.
  3. Users should refer to the entry point (/admin/run_agent.sh) in the demo images provided by the platform to familiarize themselves with the different data usage modes. Click “Data tools” to learn how developers download and upload data through the Apollo training tools in the cloud.
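A minimal sketch of such an entry script, assuming a training program at /work/train.py (the program path and its flag are hypothetical placeholders; the real examples ship inside the official demo images):

    #!/bin/bash
    # /admin/run_agent.sh: the platform invokes this script and uses its
    # exit status to judge the state of the training task.
    set -e

    # Test data is deployed automatically under /dataset_test/ according
    # to the task type chosen at submission time.
    DATA_DIR=/dataset_test

    # Launch the developer's training program (hypothetical placeholder).
    python /work/train.py --data "$DATA_DIR"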

Let’s expand on the data tools.

Apollo Training Platform instructions -5

Data tool description.

Developers can use the following three interface programs in their algorithm code to dynamically fetch data, and to output data, logs, evaluations, charts, and prediction results:

apollo_data_get dataSetId outputPath [tag] [offset] [limit]
apollo_data_put …
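For example (the data set ID and output path below are made-up placeholders; only the apollo_data_get signature above is documented), a training program might fetch its input like this:

    # Download data set 42 into ./input inside the task container; the
    # optional tag/offset/limit arguments can restrict the slice fetched.
    apollo_data_get 42 ./input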

For more details, see:

console.bce.baidu.com/apollo/help…

Traffic light detection Demo algorithm

Our demo provides the author’s original Caffe version as well as a PaddlePaddle version. Interested readers can find the MIT paper and the author’s source code on GitHub.

Submitting the image

Push the finished Docker image to the repository with the docker push command.
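A sketch of the typical sequence, with placeholder registry host and image names (authenticate with the Docker username and password assigned to you, mentioned above):

    # Log in to the private cloud repository.
    docker login hub.example.com
    # Tag the development image for the repository, then push it.
    docker tag my-apollo-dev:v1 hub.example.com/USERNAME/my-apollo-dev:v1
    docker push hub.example.com/USERNAME/my-apollo-dev:v1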

Submitting the training task

Go to the training platform’s new-task page, fill in the task information, and submit the task.

View the task list and details

You can view your own tasks in the task list. Click a task’s details link to view its execution status and results.

Training task details

We provide developers with basic information such as task information, tables, charts, and logs, as well as visualizations for each data type, such as 3D point clouds.

On this page, developers can follow task execution and the convergence of the loss. As the amount of data grows, the page may take a little longer to load.

Demonstration data in practice

The demonstration data are meant to be used with the vehicle-side code, so that developers can experience each module’s capability through them.

Demonstration data sets

At present, we have opened a variety of demonstration data, covering vehicle system demonstration data, self-localization, end-to-end, and other module data. The goal is to help developers debug each module’s code, ensure that Apollo’s newly opened code modules run successfully in the developer’s local environment, and let developers experience each module’s capability through the demonstration data.

Using the demonstration data with Apollo

For example, by downloading the vehicle system demonstration data, you can experience the full Apollo vehicle-side capability by following the compilation and execution steps of the Apollo source code on GitHub.

Download the demo data set, which contains sensor data. Compile Apollo as instructed in the Quick Start guide, then play back the data set with the rosbag play -l command.
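A sketch, assuming the bag file from the demo set is named demo_sensor_data.bag (the file name is a placeholder; the -l flag loops playback so the modules receive a continuous message stream):

    # Replay the recorded sensor messages in a loop for the Apollo modules.
    rosbag play -l demo_sensor_data.bag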

Related products and services

In addition to open data, Apollo also provides open cloud services, including the data annotation platform, the training and learning platform, the simulation platform, and the calibration platform, giving Apollo developers a complete set of data solutions.

Related products and services

Data upload

The entrance to the data upload workflow:

Data upload in practice - 1: Completing the information

The developer needs to fill in the name, device, collection area, and scene property information.


Data upload in practice - 2: Online upload

We provide three data upload modes. Choose the appropriate one according to the size of the data and your network bandwidth:

  • Online upload for data under 5 GB
  • Client tool for data under 1 TB
  • Offline hard disk for larger volumes

Data upload in practice - 2.1: Uploading from the client

To use client upload, you need to download the upload tool and a transfer configuration file.

Data upload in practice - 3: Offline hard disk

After you fill in the data information, Apollo business staff will contact you directly to arrange offline delivery.

Data upload in practice - 3.1: The data upload tool

For details about the upload tool, see the documentation.

Conclusion: the “cloud + vehicle” R&D iteration model for intelligent vehicles

To sum up, developers can calibrate vehicle parameters on the calibration platform, upload data, apply for data processing, use the data annotation service, train models on the training platform, merge the results of the preceding steps into the Apollo code on GitHub, and submit the compiled result or source code to the simulation platform for evaluation. In this way, the R&D iteration of their own vehicle system is completed through the “cloud + vehicle” model.

Apollo’s phased achievements

Apollo brought a total of 10 smart vehicles to the Xiongan New Area from partners such as Daimler, Ford, Chery, BAIC, Great Wall, Golden Dragon Bus, and Idriverplus. They showed Apollo’s multi-model, multi-scenario, multi-dimensional applications across passenger cars, commercial buses, logistics vehicles, and road sweepers, and can be called a microcosm of autonomous driving in China.

The Apollo autonomous driving “national team”

From left to right:

  1. The Apollo driverless minibus “Apolong”, which reached mass production in 2018, the first commercialized autonomous minibus in China
  2. Daimler V260L
  3. Lincoln MKZ
  4. BAIC EU260
  5. Chery EQ
  6. Chery Tiggo 5X
  7. Chery Arrizo 5
  8. Great Wall WEY
  9. Idriverplus road sweeper “Woxiaobai”
  10. Idriverplus logistics vehicle “Wobida”

Apollo already has 90 partners, including more than 15 automakers (13 domestic brands and 2 overseas brands); 10 Tier 1 suppliers, including Bosch, Continental, and Delphi; leading chip makers, including Intel and NVIDIA; mapping companies such as TomTom; LiDAR companies such as Velodyne, RoboSense, and Hesai; startups such as Horizon Robotics and Idriverplus; ride-hailing and travel service companies such as Shouqi Yueche, Grab, and UCAR; and university research institutions such as Tsinghua University and Tongji University.

Apollo Partners

Check out apollo.auto and GitHub for more information. You can also follow the Apollo Developer Community account for the latest news. Thank you; that’s all for today’s sharing.


Q&A

Q1: What is the role of high-precision maps in Apollo 2.0? How advanced are Baidu’s internal high-precision maps?

A: High-precision maps and high-precision localization have always been key modules of Apollo’s autonomous driving. As we know, even for ordinary drivers, using navigation software such as Baidu Maps feels hugely different from driving without it; today’s autonomous in-car brain likewise needs high-precision maps. With them, not only can the most accurate information be delivered to the in-car brain in advance, but information about roads beyond the sensors’ range can also be provided, compensating for the hardware performance limits of the sensors.

In China, Baidu was the first to achieve three-dimensional high-precision mapping and localization with centimeter-level accuracy, which is internationally leading. It also operates the largest autonomous driving fleet in China, with the highest degree of refinement, the highest production efficiency, and the widest coverage.

One example I can give is the traffic light puzzle. At a junction, a specific stop line is associated with a traffic light. Without map information about the intersection, dynamically recognizing all the semantics and making real-time judgments from camera perception alone is a very challenging problem. In a high-precision map, traffic lights carry precise position and height; even beyond the sensors’ detection range, we can know how far ahead a traffic light will appear and prepare for it while driving toward the junction. Combining the map’s semantics with detection makes traffic lights much easier to detect, significantly reducing the difficulty of perception and avoiding misidentification.

Q2: If a smart car drives somewhere with no signal and the brain is cut off from the cloud, can it still drive automatically?

A: First, let me explain the relationship between the vehicle side and the cloud: the cloud produces and trains models, and the vehicle side is upgraded with them, ensuring reliable autonomous driving capability on the vehicle itself.

The vehicle does need a small amount of wireless communication. For example, when communication is available, the multi-sensor fusion localization module can use it to improve localization accuracy. At the same time, the module provides redundant high-precision localization capability when there is no signal, handling scenarios such as signal-free tunnels and streets with unstable signals.

From the very beginning of our R&D on the autonomous driving system, safety was the most critical factor in the overall design, and future iterations will add redundant safety safeguards. The vehicle takes safe driving actions, such as finding a safe place to pull over, before leaving its designated operating area or when danger signs are detected.

Q3: A follow-up to questions 1 and 2: when the high-precision map or the model has been upgraded, how is it deployed to the vehicle side?

A: Models can be upgraded through the security upgrade suite (sec-OTA). It provides SDKs and operating tools for both the vehicle side and the cloud, making it easy to use, operate, and integrate. Deployment supports two-way authentication of installation packages, encryption, anti-downgrade and anti-tampering measures, failure-handling mechanisms, and device authentication. Tokens support electronic envelopes for business data transmission, so secure transfer does not depend on TLS. Installation packages support verification and split installation to reduce memory footprint and save hardware cost, and both over-the-air network upgrades and offline USB upgrades are supported. See the Apollo automotive information security page: apollo.auto/platform/se…

Q4: How can we apply to become a partner?

A: First, fill in the cooperation consultation form via “Contact us” on the right side of the apollo.auto home page, briefly describing your company and your cooperation needs. Our business staff will contact you as soon as possible.

After the initial alignment of needs, the two sides will explore an effective cooperation model to jointly prosper the Apollo ecosystem and enhance the platform’s capabilities. By signing the Apollo partnership agreement, you join the alliance as a member, enjoying the benefits and resources of the Apollo platform and participating in Apollo’s rapid growth.

More questions can be discussed in the Apollo Developer Community.

For more content, follow AI Front (WeChat ID: ai-front); reply “AI”, “TF”, or “big data” to get the AI Front series of PDF mini-books and skill maps.