In recent years, AI projects such as AlphaGo and Libratus have achieved milestone victories over humans in Go and poker, respectively. These successes show how much progress AI has made, but a number of incidents in the field, particularly over the past year, have led many to question the overall maturity of these emerging technologies.

Here’s a look at the top 10 AI flops of 2017.

1. Face ID cracked with a mask



Face ID, the iPhone X’s facial recognition unlock technology, was hailed as the most secure AI-based authentication method to date, with Apple boasting a false-match rate of one in a million. But then BKAV, a Vietnamese security company, cracked it using a roughly $150 mask made of 3D-printed plastic, silicone, and other materials. In the test, BKAV simply scanned the subject’s face, used a 3D printer to create a model of it, and attached paper-cut eyes and mouth and a silicone nose. The result suggests the technology carries real security risks and cannot reliably protect privacy on consumer devices.

2. Neighbors call the police because of the Amazon Echo

The popular Amazon Echo is considered one of the leading smart speakers. But a German man’s Echo started playing music on its own while he was away, waking his neighbours in the middle of the night. They called the police, who had to force the door open to shut the speaker off.

3. Facebook chatbots shut down



In July, two Facebook chatbots were shut down after widespread reports that they had begun communicating with each other in an unrecognisable language. Rumors flooded the discussion boards until Facebook officials explained that the mysterious exchanges were the result of a coding oversight affecting the bots’ syntax.

4. Las Vegas self-driving bus crashes on first day of operation

In November, a driverless shuttle bus made its debut in Las Vegas, but within its first two hours of operation it collided with a truck. Technically, police concluded the accident was caused by the truck driver and that the driverless bus was not at fault. Passengers on the bus complained, however, that it was still not smart enough: as the truck slowly approached, the bus failed to react quickly enough to avoid the risk.

5. Google Allo suggests a headscarf emoji in response to the gun emoji



Via Google Allo, CNN staffers received a suggested emoji reply to the pistol emoji: an emoji of a man wearing a hijab. An embarrassed Google assured the public that the problem had been fixed and issued an apology.

6. Twins fool HSBC voice ID

HSBC’s voice recognition ID is an AI-driven security system that lets users access their accounts by voice command. Although the bank claims it is as secure as fingerprint ID, the twin brother of a BBC journalist was able to access the journalist’s account by mimicking his voice. The break-in took seven attempts; HSBC responded by locking accounts after three failed attempts.

7. Google AI mistakes a rifle for a helicopter



A research team at the Massachusetts Institute of Technology tricked the Google Cloud Vision API into identifying a rifle as a helicopter by slightly tweaking photos of the rifle. The technique, known as crafting adversarial examples, causes computers to misclassify images by altering them in ways the human eye cannot detect. In the past, adversarial examples were thought to work only if the attacker knew the basic mechanics of the target system. The MIT team went a step further by triggering the misclassification without access to that system information.
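The core idea behind such attacks can be sketched in a few lines. The toy example below is not MIT’s actual method (which attacked a deep network in a black-box setting); it only illustrates the underlying principle on a simple linear classifier: nudging every input feature by a small amount in the direction that raises the score of the wrong class is enough to flip the model’s decision. All weights and inputs here are invented for illustration.

```python
import numpy as np

# Hypothetical linear classifier: predicts class 1 if w.x > 0, else class 0.
w = np.array([0.5, -1.0, 2.0, -0.5])   # made-up model weights
x = np.array([1.0, 1.5, 0.2, 1.0])     # made-up input, correctly classified as 0

def predict(v):
    return int(np.dot(w, v) > 0)

# FGSM-style perturbation: for this linear model, the gradient of the
# score w.x with respect to the input x is simply w, so stepping each
# feature by eps in the direction sign(w) maximally raises the score.
eps = 0.4
x_adv = x + eps * np.sign(w)

print(predict(x))      # original input: class 0
print(predict(x_adv))  # perturbed input: decision flips to class 1
```

Each feature moved by at most 0.4, yet the classification changed; on images, the same trick spreads a similarly tiny change across thousands of pixels, which is why humans cannot see the difference.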

8. Doctored street signs fool self-driving cars



Researchers found that by carefully applying paint or tape to stop signs, they could trick self-driving cars into misclassifying them. A stop sign adorned with the words “love” and “hate” fooled a self-driving car’s machine learning system into misclassifying it as a “Speed Limit 45” sign in every test case.

9. AI names a sunset shade “Bank Butt”



Machine learning researcher Janelle Shane trained a neural network to generate new paint colors and “match” each one with a name. The results were not what anyone would hope for: even after training on a dataset of color names, the model labeled a sky blue “Gray Pubis” and a dark green “Stoomy Brown.”
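For intuition, here is a minimal sketch of character-by-character name generation. Shane’s model was a character-level recurrent neural network; as a far simpler stand-in, the toy below builds a character bigram (Markov) table from a handful of invented color names, but it samples text the same way the RNN does: one character at a time, which is exactly how plausible-looking yet nonsensical names arise.

```python
import random

# Hypothetical tiny training set; Shane's actual model was trained on
# thousands of real paint-color names.
names = ["sky blue", "dark green", "stormy brown", "dusty rose", "sea foam"]

# Build a character-level bigram table: for each character, record which
# characters followed it in the training names.
table = {}
for name in names:
    padded = "^" + name + "$"          # ^ marks the start, $ marks the end
    for a, b in zip(padded, padded[1:]):
        table.setdefault(a, []).append(b)

def generate(rng):
    """Sample a new name one character at a time from the bigram table."""
    out, ch = [], "^"
    while True:
        ch = rng.choice(table[ch])
        if ch == "$" or len(out) > 20:  # stop at end marker or length cap
            break
        out.append(ch)
    return "".join(out)

sample = generate(random.Random(0))
print(sample)
```

Because each character is chosen only from what locally follows in the training data, the output looks vaguely word-like without meaning anything, the same failure mode, writ small, as “Stoomy Brown.”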

10. Ask Alexa with caution

Amazon’s Alexa virtual assistant can make online shopping easier, sometimes too easy. San Diego news channel CW6 reported in January that a six-year-old girl bought a $170 dollhouse simply by telling Alexa she wanted one. And when the on-air TV anchor repeated the girl’s words, Alexa devices in some viewers’ homes were triggered in turn and tried to order dollhouses as well.

This article was recommended by Beijing Post user @Love coco – Love life teacher and translated by the Ali Yunqi Community.

10 AI Failures in 2017

Author: SYNCED

Translator: Wu La Wu La, edited by Yuan Hu.

This article is an abridged translation. For more details, please refer to the original text.