

Beijing, Dec. 12 (Xinhua) — Roundup: The imperfections of AI in 2016

Xinhua News Agency reporter Li Mi

Self-driving cars, Go matches between man and machine, and the US TV series “Westworld,” which depicts robots developing self-awareness, were all huge hits. In 2016, artificial intelligence and its related fields were hotly debated by industry, academia and society at large. Meanwhile, concerns about AI have never gone away: when, and how, will it go wrong?

Near the end of the year, several AI experts analyzed typical AI “failures” in 2016, and concluded that these errors were concentrated in the machine learning and task execution stages of AI systems.

Is discrimination all humans’ fault?

Several AI experts believe that racial bias and discrimination are among the main problems in today’s AI systems, which may be related to the race of the system designers themselves.

In 2016, courts in many parts of the United States began using artificial intelligence systems to predict a criminal’s likelihood of reoffending, as one basis for judges’ decisions on matters such as probation. However, a media investigation of a predictive system used in Florida courts, dubbed a real-life “Minority Report,” found that black defendants were 45 percent more likely to be labeled as likely to reoffend, and 77 percent more likely to be labeled as likely to reoffend violently. The same data showed that only about 20 percent of those the system flagged actually went on to reoffend violently. Media critics quipped that the only thing the system reliably predicted was the race of the people it assessed.

In 2016, a number of companies jointly launched the world’s first beauty contest judged by artificial intelligence, in which contestants uploaded photos to a website and AI algorithms “accurately” assessed their beauty. The winners, however, were almost all white. “It’s clear that there weren’t enough diverse training samples for the AI to learn from,” said Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville. “Its idea of beauty got stuck in a fixed pattern.”

In an effort to reach young people, Microsoft in the spring launched Tay, an artificial intelligence chatbot, on the social network Twitter. But after less than a day online, Tay had been described as a “Hitler-loving, misogynistic monster,” and Microsoft took it offline.

Pokémon Go, the Nintendo-backed mobile game, took the world by storm after its release in July. Players soon discovered, however, that there were few Pokémon to catch in black neighborhoods. In the Los Angeles area, predominantly white neighborhoods had an average of 55 PokéStops while black neighborhoods had fewer than 19, according to USA Today. In Detroit, Miami and Chicago, black players likewise had a hard time playing the game on their own doorsteps. The game’s developers acknowledged that the augmented-reality game was built on a map system crowd-sourced from a predominantly white user base, whose members had not spent much time in black neighborhoods.

Losing at Go, injuring people: AI is not perfect

In March, the artificial intelligence system AlphaGo beat South Korean Go player Lee Se-dol 4-1. “AlphaGo’s loss in one game shows that Lee Se-dol could still find holes in the Monte Carlo tree search the system relies on. It can be counted among AI’s defeats of 2016, and it shows the technology is not perfect,” said Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia. Yampolskiy, by contrast, believes that Lee Se-dol’s single win falls within the system’s normal range of operation rather than amounting to a failure.

In May, a Tesla driving on Autopilot collided with a tractor-trailer in Florida, killing the Tesla’s driver. After the crash, Tesla released a major update to its Autopilot software, which CEO Elon Musk said could prevent similar accidents.

In July, a mother said that a 135-kilogram K5 security robot made by Knightscope knocked down her 16-month-old son and ran over his foot at the Stanford Shopping Center in Silicon Valley.


At the China Hi-Tech Fair in Shenzhen in November, a netizen posted a picture on WeChat with a caption saying that “Xiaopang” (“Little Fatty”), a robot made by Evolver, had run off from the booth next door, knocked down a glass wall and injured a passer-by. The picture showed glass on the floor and an injured spectator being carried away on a stretcher. The robot’s manufacturer said the accident was caused by human error: the operator had pressed the forward button instead of the reverse button. Some experts pointed out, however, that a service robot mishap should not be brushed off like a driver’s “mistaking the gas pedal for the brake”; even if such incidents are rare, the safety risks should not be taken lightly.

These cases, the experts argue, show that keeping machine-learning training data diverse is key to preventing “bias” in AI systems, and that ethical research should precede technical research.