If artificial intelligence (AI) is rapidly eating away at software, Google may have the biggest appetite, according to MIT Technology Review. At this year’s I/O developer conference, Google unveiled a more powerful chip and a machine learning supercomputer built from it, positioning itself as an AI-focused hardware maker.
At the conference, Sundar Pichai, Google’s chief executive, introduced a new processor the company developed to support machine learning, a technology that has taken the IT industry by storm in recent years. The move also reflects how rapidly advancing AI is changing Google itself, and it is a clear sign that the company wants to lead the way in AI software and hardware alike.
Perhaps most important, at least for those who work in machine learning, is that the new processor not only runs trained models quickly but can also carry out the training itself with remarkable efficiency. The chip is called the Cloud Tensor Processing Unit (Cloud TPU), named after TensorFlow, Google’s open-source machine learning framework.
Training is the most fundamental part of machine learning. To develop an algorithm that can recognize hot dogs in photos, for example, you might need to feed it tens of thousands of labeled example photos until it learns to tell hot dogs apart from everything else. Training a large model like this is computationally demanding and can take days or even weeks.
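To make the hot-dog example concrete, here is a minimal sketch of what such a training run might look like using TensorFlow’s Keras API; the directory name, image size and layer choices are illustrative assumptions for this article, not anything Google described.

```python
# Minimal, illustrative sketch (not Google's code): training a small image
# classifier with TensorFlow/Keras. The "photos/" directory, image size and
# layer sizes are hypothetical choices made for the example.
import tensorflow as tf

# Hypothetical folder of labeled photos, e.g. photos/hot_dog/ and photos/other/.
train_data = tf.keras.utils.image_dataset_from_directory(
    "photos/", label_mode="binary", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),               # scale pixel values to [0, 1]
    tf.keras.layers.Conv2D(16, 3, activation="relu"),   # learn simple visual features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # hot dog vs. not hot dog
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Each epoch is one full pass over the photos; large models repeat this over
# far bigger datasets, which is why training can take days or weeks.
model.fit(train_data, epochs=10)
```

Hardware like the Cloud TPU is aimed squarely at that final training step, where almost all of the computation happens.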
Pichai also unveiled a machine learning supercomputer, the Cloud TPU Pod, which clusters many Cloud TPUs together over high-speed data connections. Google is also assembling the TensorFlow Research Cloud, made up of thousands of TPUs, Pichai said. “We are building what we call AI-first data centres, and Cloud TPU is helping to optimise training and inference, which sets the stage for significant advances in AI,” Pichai said. Google will provide 1,000 Cloud TPUs to support AI researchers who are willing to publicly share details of their work.
Pichai also announced a number of AI research initiatives during his keynote, including efforts to develop algorithms that learn to perform time-consuming tasks such as fine-tuning other machine learning algorithms. He added that Google is developing AI tools for medical image analysis, genomic analysis and molecular discovery. Speaking ahead of the conference, Google senior researcher Jeff Dean said such projects could help advance AI. “Many of the top researchers have not had the computing power they would like,” he said.
Google’s push into AI-focused hardware and cloud services is partly driven by an acceleration in its own business. Google already uses TensorFlow to power search, speech recognition, translation and image processing. The technology also underpins AlphaGo, the Go-playing program developed by Alphabet subsidiary DeepMind.
Strategically, the move may also help Google keep other companies from gaining dominance in machine learning. Nvidia, for example, specializes in developing and manufacturing graphics chips, which have been adopted for deep learning and are becoming increasingly prominent across a variety of products. To give some measure of the acceleration the Cloud TPU provides, Google said its translation algorithms could be trained far faster on the new hardware than on existing hardware: a training job that takes a full day on 32 of the best commercially available GPUs can be completed in an afternoon using just one-eighth of a TPU Pod.
Fei-Fei Li, chief scientist of Google’s cloud computing division and director of Stanford University’s AI Lab, said: “These TPUs deliver an amazing 128 teraflops. They are chips designed to drive machine learning technology.” By comparison, an iPhone 6 delivers roughly 100 gigaflops. Google says it may also design algorithms for researchers who use other hardware, part of what it calls “democratizing machine learning.” Since Google released TensorFlow in 2015, more and more researchers have adopted it, and Google says it has become the most widely used deep learning framework in the world.
Machine learning specialists are in short supply as companies across many industries look to tap into the advancing power of AI. One solution to this talent shortage, Pichai said, is to develop machine learning software that can take over some of the work of AI experts.
At the conference, Pichai unveiled AutoML, a project from Google Brain, the company’s AI research team, in which researchers have shown that learning algorithms can automate one of the trickiest parts of designing machine learning software for a specific task. In some cases, these automated systems come up with designs that match or even exceed those of human machine learning experts. “It’s very exciting that it can accelerate the field and help us solve some of the most challenging problems we face today,” Pichai said.
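As a rough illustration of the idea behind AutoML, the sketch below uses a simple random search over candidate network configurations. Google’s actual system is far more sophisticated, using learning algorithms rather than random guessing to propose designs, and every name and setting here is hypothetical.

```python
# Conceptual sketch only, not Google's AutoML: automating model design can be
# approximated by proposing candidate configurations, training each briefly,
# and keeping the best. All helper names and settings here are hypothetical.
import random
import tensorflow as tf

def build_and_score(num_layers, units, train_data, val_data):
    """Build a small network from a candidate configuration and return its
    validation accuracy after a short training run."""
    hidden = [tf.keras.layers.Dense(units, activation="relu") for _ in range(num_layers)]
    model = tf.keras.Sequential(hidden + [tf.keras.layers.Dense(10, activation="softmax")])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_data, epochs=1, verbose=0)
    _, accuracy = model.evaluate(val_data, verbose=0)
    return accuracy

def random_search(train_data, val_data, trials=20):
    """Try random configurations and return the best one found; this is the
    kind of tuning a human machine learning expert would otherwise do by hand."""
    best_config, best_score = None, 0.0
    for _ in range(trials):
        num_layers = random.randint(1, 4)
        units = random.choice([32, 64, 128])
        score = build_and_score(num_layers, units, train_data, val_data)
        if score > best_score:
            best_config, best_score = (num_layers, units), score
    return best_config, best_score
```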
Pichai wants AutoML to reduce the expertise required, expanding the number of developers who can make good use of machine learning. That fits well with Google’s positioning of its cloud computing services as the best platform for developing and hosting machine learning, as the company tries to win new customers in the enterprise cloud computing market, where it lags behind Amazon and Microsoft.
AutoML aims to make deep learning easier to use. Google and other companies already rely on deep learning for speech recognition, image recognition, translation and robotics research, among other things. The technique makes software smarter by passing data through layers of mathematical operations loosely inspired by biology, known as artificial neural networks. Choosing the right architecture for such a network is important, but far from easy, says Quoc Le, a machine learning researcher on Google’s AutoML project.
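For readers unfamiliar with the term, “architecture” here simply means decisions such as how many layers to stack and how wide each one is. The minimal sketch below, written with TensorFlow/Keras and using arbitrary example sizes, shows where those decisions appear in code.

```python
# Minimal sketch of what "choosing an architecture" means in practice with
# TensorFlow/Keras. The layer counts and widths below are arbitrary examples,
# not choices recommended by Google or produced by AutoML.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                      # e.g. a flattened 28x28 image
    tf.keras.layers.Dense(256, activation="relu"),     # first hidden layer: 256 units
    tf.keras.layers.Dense(128, activation="relu"),     # second hidden layer: 128 units
    tf.keras.layers.Dense(10, activation="softmax"),   # output over 10 classes
])

model.summary()  # prints the chosen architecture and its parameter count
```

AutoML’s contribution, in effect, is to search over choices like these automatically rather than leaving them to trial and error by experts.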