We search for content and socialize on screens, rely on GPS systems to suggest the best route, make purchase decisions guided by savvy algorithms that track our browsing habits, and talk to the personal assistants on our phones or around the house. So the question is: what exactly is artificial intelligence?
Speech recognition, face recognition, search-query AI: whether we find the technology helpful or intrusive, empowering or manipulative, it is at our disposal, and it is our choice how we use it. Researchers and entrepreneurs with decades of experience in AI and robotics are helping us better understand the sometimes elusive nature of AI, and why it won’t take over the world, or us humans, anytime soon. Its rise, though, is worth watching.
Still in the early stages
The field of artificial intelligence is rife with hype, fear and misunderstanding. Experts say we need less hubris and more humility. Rodney Brooks, CEO of Rethink Robotics, said, “I think the biggest misperception is how far along AI is. It’s been called artificial intelligence since 1956 [when John McCarthy coined the term]. It’s about 62 years old, but it’s much more complicated than physics, and physics took a long time. I think we’re still in the early stages of AI.”
Rethink Robotics aims to bring intelligent, affordable and easy-to-use collaborative robots to manufacturing. Brooks is also a co-founder of iRobot and a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. He is one of the founders of the Association for the Advancement of Artificial Intelligence (AAAI) and has made many contributions to computer vision, robotics and artificial intelligence.
He is so concerned about misinformation on AI and robots that he started a blog to offer his perspective. A recent post made predictions about current technology trends, including self-driving cars, space travel, robotics and deep learning. Brooks believes much of the hype in recent media coverage comes from striking anthropomorphic and animal-inspired robots, or from AI systems competing against humans in games. Artificial intelligence today is still at an infant level.
Some misconceptions stem from equating a machine’s performance with competence. When we see a person complete a task, we infer the general abilities that person must have in order to complete it. That inference does not carry over to artificial intelligence. “An AI system can play chess very well, but it doesn’t even know it’s playing a game,” Brooks said. “We mistake the machine’s performance for competence.” When you see a program learn something a human can learn, you make the mistake of assuming it has the richness of understanding that you do.
Take Atlas, from SoftBank-owned Boston Dynamics, which went viral last year when a video of the robot doing a backflip sent the Internet into a frenzy of warnings about an impending robo-ninja invasion. Artificial intelligence experts say that is not the case. Brooks cautions that these demos are carefully scripted: the robot must do a lot of computation very quickly, but the setup is arranged with great care. Atlas didn’t know it was doing a backflip or where it was, much less all the things someone doing a backflip would know, like, “Wow, I just went upside down!” Robots don’t know what upside down is. Atlas has mathematical equations, forces and vectors, but no means of reasoning about them, which is very different from humans.
No context, no understanding
One important difference between human intelligence and machine intelligence is context. As humans, we have a rich understanding of the world around us; AI does not.
Brooks says we’ve been working on artificial intelligence for 60 years and are nowhere near real AI, which is why he’s not worried about super-intelligent AI. We have succeeded in some very narrow ways, and those narrow successes are what today’s revolution is made of.
He cited Amazon’s Alexa as an example, along with Google’s Assistant and Apple’s Siri. “You say something to Alexa and it understands you even when music is playing or other people in the room are speaking. That ability comes from deep learning. As these narrow capabilities get better, we use them to create better products. When I first got into robotics, we surveyed all the commercial speech-understanding systems. At the time, the idea of speech recognition for robots on a factory floor seemed laughable. I think that has changed. It makes sense now; it just didn’t ten years ago.”
Speech recognition produces the right strings of words, and precise word strings enable a lot, but the system is not as smart as a person. That is the difference: extracting word strings is a very narrow capability, and we have a long way to go. Yet these narrow capabilities have become the basis for many predictions that are Pollyanna-ish about AI and far too pessimistic about our human role in the future.
Artificial intelligence predictions? Fear and exaggeration
Some of the most respected figures in science, technology and business have warned that artificial intelligence is about to destroy humanity.
‘We can’t take those people at their word when they say robots and artificial intelligence are suddenly going to take over the world,’ said Ken Goldberg, professor and distinguished chair of industrial engineering and operations research at the University of California, Berkeley. ‘These are smart people, so everyone assumes they know what they’re talking about.’ People who actually work in robotics realize that while the technology is making great strides, it is nowhere near the humanoid robots depicted in the movies.
He is the director of the CITRIS People and Robots Initiative and of the Automation Science and Engineering Laboratory. He holds eight patents and has published widely on robotics, automation and social information filtering algorithms. Among other honors and appointments, Goldberg received the RIA’s prestigious Engelberger Robotics Award in 2000 for excellence in education.
Both Goldberg and Brooks strongly disagree with the purveyors of AI hyperbole. They warn us to be especially wary of anxieties like an AI apocalypse, rampant unemployment, or the idea that a swarm of super-intelligent killer robots is destined to take over the world.
Goldberg says the fear of technology has a long history, going back to ancient Greece. From Prometheus, to Frankenstein, to the Terminator, he cites a recurring theme deeply rooted in the human psyche: we fear what is unfamiliar, and we fear what we don’t understand. AI is just the latest retelling of the same story that has been drummed into us.
Experts say much of the fear comes from people who don’t work in artificial intelligence. Goldberg and Brooks echo what many automation and robotics insiders already know: robotics is far more complex than it looks. “While robots are getting better and we’ve made a lot of progress, I think it’s important to temper those lofty expectations so we don’t repeat the AI winter of the ’70s and ’80s, when expectations were high and robots didn’t deliver,” Goldberg said. At the same time, neither wants to say there won’t be a robot revolution; they do think there will be many more applications for robotics. But robots are not about to take half of our jobs, as some claim.
From research to the real world
Goldberg says a lot of the fear centers on the singularity, a hypothetical point in time when artificial intelligence and robots surpass human intelligence. He argues that instead of worrying about a hypothesis that is either too far away or unlikely, we should focus on multiplicity: different combinations of people and machines working together to solve problems and innovate.
Multiplicity is already at work behind search engines, social media platforms and a plethora of apps for movie buffs, shoppers and vacationers. When we interact with these AI-supported services, each click or view sends a signal about our interests, preferences and intentions, and the results get better at matching our preferences and predicting what we want to do next. It’s an interdependent relationship: each side improves the other, and the more diverse the interactions, the more comprehensive each becomes.
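The feedback loop described above can be sketched in a few lines. This is a purely hypothetical illustration, not any real service’s algorithm: each click strengthens a preference signal, and the recommendation updates as soon as new signals arrive.

```python
from collections import Counter

# Hypothetical interaction log: each click is a signal about preferences.
clicks = ["sci-fi", "comedy", "sci-fi", "sci-fi", "drama"]

prefs = Counter(clicks)

def recommend():
    """Recommend the category with the strongest signal so far."""
    return prefs.most_common(1)[0][0]

print(recommend())  # strongest signal so far is "sci-fi"

# The relationship is interdependent: new clicks update the model,
# and the next recommendation reflects them immediately.
prefs["comedy"] += 3
print(recommend())  # now "comedy" has the strongest signal
```

The point of the sketch is the loop, not the counting: real services replace the `Counter` with learned models, but the interaction-to-signal-to-prediction cycle is the same.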
Multiplicity matters as artificial intelligence moves from the lab into real-world applications. Another expert bringing AI to industry, who also stresses the importance of humans and machines working together, is Pieter Abbeel, a professor in the Department of Electrical Engineering and Computer Science at the University of California, Berkeley, and director of the Robot Learning Lab.
There was a lot of excitement in 2010 when Abbeel’s research team released a video of a robot folding towels. For Abbeel, part of the challenge is this: how can humans take this technology and use it to make themselves smarter, rather than keeping these machines separate from us? When machines become part of our daily lives and improve our productivity, that is when it gets really exciting.
Abbeel is a pioneer of robotic reinforcement learning and was named one of MIT Technology Review’s 35 Innovators Under 35 in 2011. He is also president and chief scientist of a recently founded California company developing artificial intelligence software that allows robots to learn new skills on their own. He is excited about the prospects of artificial intelligence, but he thinks some caution is in order. Abbeel notes that much of the excitement about AI, and some of the fear, traces back to its most significant advances, like speech recognition, machine translation, and recognizing what is in an image, all of which fall under supervised learning.
It is important to understand the different types of AI being built. Machine learning has three main types of learning: supervised learning, unsupervised learning and reinforcement learning. Supervised learning is pattern recognition, from speech to text, or from one language to another. The patterns can be very difficult, but the AI has no goals or purposes. Give it an English sentence and it returns the Chinese translation. Give it a spoken sentence and it transcribes it into a series of letters. This is pattern matching: you give it data, such as images paired with labels, and it learns how to go from image to label.
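The image-to-label mapping can be sketched with a toy classifier. This is a minimal, hypothetical example (a nearest-neighbor rule in pure Python, with made-up feature vectors standing in for images), not a description of any production system:

```python
import math

# Supervised learning needs labeled examples: input paired with answer.
# Here, 2-D "feature vectors" stand in for images.
train = [
    ((0.10, 0.20), "cat"),
    ((0.15, 0.25), "cat"),
    ((0.90, 0.80), "dog"),
    ((0.85, 0.90), "dog"),
]

def predict(x):
    """Return the label of the nearest labeled example: a learned
    mapping from input to label, with no goal beyond matching patterns."""
    return min(train, key=lambda pair: math.dist(x, pair[0]))[1]

print(predict((0.20, 0.20)))  # near the "cat" examples -> cat
print(predict((0.95, 0.85)))  # near the "dog" examples -> dog
```

Note that the program never “knows” what a cat is; it only reproduces the input-to-label pattern it was shown, which is exactly the performance-versus-competence gap Brooks describes.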
Unsupervised learning is when you feed the system images with no labels, hoping that by seeing many images it builds an understanding of what the world looks like, an understanding that may let it learn other things faster in the future. Unsupervised learning has no task; you just give the system a lot of data.
Reinforcement learning is very different, more fun, but more difficult. Here you give the system a goal: the goal might be a high score in a video game, or winning a chess match. That is why some fear is justified: what happens if the AI is given the wrong goal?
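The goal-driven character of reinforcement learning can be seen in a toy Q-learning sketch. Everything here is a made-up illustration: an agent in a five-state corridor is rewarded only for reaching the rightmost state, and the goal alone shapes the behavior it learns.

```python
import random

random.seed(0)
N = 5                                # corridor states 0..4; state 4 is the goal
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                 # episodes
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: nudge toward reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# With this seed, the learned policy is to move right in every state:
# the reward signal, not any explicit instruction, produced the behavior.
policy = [q.index(max(q)) for q in Q[:-1]]
print(policy)
```

The cautionary point follows directly: the agent optimizes whatever reward it is given. Change the reward and the learned behavior changes with it, which is why a wrongly specified goal is the legitimate worry here.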
This is why humans and artificial intelligence will not evolve in a vacuum. As we build smarter and smarter machines, our abilities as humans will be enhanced.
Abbeel said, “What I’m really excited about is the latest work in AI on enabling machines to understand what they see in images, even if not yet at a human level. If a computer can really understand what’s in an image, it might pick up two objects and assemble them, sort items into bags, or pick things off a shelf. I think the big change in the near future hinges on understanding what the camera is giving you.”