Optimism about artificial intelligence and its potentially explosive power keeps growing, and whether a company can develop chips with ultra-high computing power that also fit the market has become a key battleground for AI platforms. As a result, 2016 became the year in which chip companies and Internet giants moved aggressively into the chip field. Nvidia remains the absolute leader, but with giants including Google, Facebook, Microsoft, Amazon and Baidu entering the fray, the future of the AI hardware field is still up in the air.
In 2016, everyone saw the prospects of artificial intelligence and its potential power. But whether it is AlphaGo or self-driving cars, implementing any sophisticated algorithm ultimately rests on the computing power of the underlying hardware. In other words, the ability to develop a chip with high computing power that also meets market demand has become the key to the artificial intelligence platform game.
There is therefore no doubt that 2016 was also the year in which chip companies and Internet giants moved fully into the chip field. First, CPU giant Intel made three major acquisitions in artificial intelligence and GPUs. Then Google announced its own processing system, followed by Apple, Microsoft, Facebook and Amazon.
The front-runner, Nvidia (NVDA), has become an absolute darling of the capital markets because of its strength in artificial intelligence: over the past year, shares of the company, which used to specialize in gaming chips, have soared from around $30, where they had hovered for a decade, to $120.
While the capital markets were still wondering whether artificial intelligence had inflated Nvidia’s stock price, on February 10 the company reported a 55% increase in revenue and a 216% increase in net profit, to $655 million, for the fourth quarter of 2016.
“While giants like Intel and Microsoft are investing in AI-based chip technology, Nvidia, which has been investing in AI for nearly 12 years, is starting to reap significant profits, as its Q4 results show,” veteran technology commentator Therese Poletti pointed out after the earnings release.
Research firm Tractica LLC estimates that hardware spending due to deep learning projects will rise from $43.6 million in 2015 to $4.1 billion in 2024, while enterprise software spending will rise from $109 million to $10 billion over the same period.
It is this huge market that has drawn giants such as Google, Facebook, Microsoft, Amazon and Baidu to announce a technological shift towards artificial intelligence. “For now, Nvidia remains the absolute leader in AI-related technologies, but the future AI hardware landscape is still up in the air as TPU technologies, including Google’s, continue to come to market,” an unnamed European industry veteran told the 21st Century Business Herald.
Nvidia has a significant lead in GPUs
In its most recent annual report, Nvidia reported double-digit growth in all of its main business areas. Beyond the gaming business, where it has long led, more of the gains actually came from two newer segments: data center and autonomous driving.
Data center business grew 138%, while autonomous driving grew 52%, according to the annual report.
“In fact, this is the most telling part of Nvidia’s earnings, because the growth in data centers and autonomous driving is fundamentally driven by advances in artificial intelligence and deep learning,” a US computer hardware analyst told the 21st Century Business Herald.
In deep learning today, putting a neural network into practical use involves two stages: first training, then execution (inference). In the current environment, the training stage requires GPUs that can process large amounts of data, an area Nvidia has led since it began making graphics chips for games and other graphics-heavy applications. The execution stage requires CPUs that can handle complex applications, an area Intel has led for more than a decade.
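As a rough illustration of those two stages, the sketch below uses the open-source PyTorch library with a toy model and random data (the library choice, the model and the data are illustrative assumptions, not anything Nvidia or the article specifies): the training loop is the GPU-hungry part, while execution is a single forward pass.

```python
# Minimal sketch of the two-stage workflow described above:
# (1) training, which benefits most from a GPU, and (2) execution/inference.
# The model and data are toy placeholders, not a real workload.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # train on a GPU if present

# A tiny stand-in network: 64 input features -> 2 classes.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Stage 1: training -- repeated forward/backward passes over batches of data.
for step in range(100):
    x = torch.randn(128, 64, device=device)          # a batch of toy inputs
    y = torch.randint(0, 2, (128,), device=device)   # toy labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()        # gradient computation is the GPU-heavy part
    optimizer.step()

# Stage 2: execution (inference) -- a single forward pass, no gradients needed.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 64, device=device)).argmax(dim=1)
print(prediction.item())
```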
“Nvidia’s current success is really the success of the GPU; it was one of the original GPU leaders,” the industry analyst said.
Deep learning neural networks, especially those with hundreds or thousands of layers, place enormous demands on high-performance computing, and GPUs have a natural advantage in handling such workloads: their excellent parallel matrix computation capability provides significant acceleration for both neural network training and classification.
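To make the point about parallel matrix computation concrete, the short NumPy sketch below (a generic illustration, not taken from the article) shows the batched matrix multiply at the heart of a neural network layer; a GPU accelerates exactly this kind of operation because its thousands of multiply-adds are independent of one another.

```python
# A neural-network layer is essentially a large matrix multiply plus a nonlinearity.
# NumPy runs this on the CPU; the point of the GPU is that the many independent
# multiply-adds inside the matmul can all be executed in parallel.
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((256, 1024))      # 256 inputs, 1024 features each
weights = rng.standard_normal((1024, 512))    # one layer: 1024 -> 512 units
bias = np.zeros(512)

# Forward pass for the whole batch at once: a (256 x 1024) @ (1024 x 512) matmul,
# i.e. roughly 256 * 1024 * 512 multiply-add operations that do not depend on
# one another -- exactly the workload GPUs are built to parallelize.
activations = np.maximum(batch @ weights + bias, 0.0)   # ReLU
print(activations.shape)   # (256, 512)
```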
Instead of defining a human face up front, for example, researchers can show the computer millions of face images and let it work out what a face should look like. When learning from such examples, GPUs can be far faster than traditional processors, greatly speeding up training.
As a result, GPU-powered supercomputers have become the natural choice for training deep neural networks of all kinds; Google Brain, for example, used Nvidia GPUs for deep learning in its early days. “We were building a camera with tracking, so we needed to find the best chip, and a GPU was our first choice,” Gunleik Groven, CEO of European AR startup Quine, told the reporter at CES in January.
Internet giants such as Google, Facebook, Microsoft, Twitter and Baidu are using these chips, known as GPUs, to let servers learn from mountains of photos, videos and audio files, as well as information from social media, to improve functions as diverse as search and automated photo tagging. Some automakers are also using the technology to develop driverless cars that can sense their surroundings and avoid danger.
In addition to being a longtime leader in GPUs and graphics computing, Nvidia was one of the first technology companies to invest in artificial intelligence. In 2008, Andrew Ng, then a researcher at Stanford, published a paper on training neural networks with CUDA on GPUs. In 2012, Alex Krizhevsky, a student of Geoffrey Hinton, one of the “big three” of deep learning, used Nvidia GeForce graphics cards to dramatically improve image recognition accuracy on ImageNet, a moment Nvidia CEO Jen-Hsun Huang often cites as the beginning of the company’s focus on deep learning.
According to a report, there are more than 3,000 AI startups in the world, most of which use the hardware platform provided by Nvidia.
“Deep learning has proven to be very effective,” Jen-Hsun Huang said at the February 10 earnings press conference. Citing the rapid adoption of the GPU computing platform in artificial intelligence, cloud computing, gaming and autonomous driving, Huang said deep learning will become a fundamental and core tool of computing in the coming years.
Chip giants Intel and AMD evolve toward AI
Investors and chipmakers watch every move the Internet giants make. Take Nvidia’s data center business, for example, which has long been providing data services to Google.
Nvidia isn’t the only force in GPUs, as giants Intel and AMD each bring different strengths to this space.
In November 2016, Intel unveiled an AI processor called Nervana, which it said it would test by the middle of 2017. If all goes well, the final form of the Nervana chip could be available by the end of 2017. The chip takes its name from Nervana, a company Intel acquired earlier in the year. According to Intel, it is the first company in the world to build a chip specifically for AI.
Intel has revealed some details of the chip, which it codenames “Lake Crest” and which combines the Nervana Engine with Neon DNN software. The chip is designed to accelerate neural network frameworks such as Google’s TensorFlow.
The chip is built from arrays of so-called “processing clusters” that work with a simplified arithmetic format (known as Flexpoint). This approach requires less data than standard floating-point arithmetic and, according to Intel, results in a roughly tenfold improvement in performance.
Lake Crest leverages proprietary data connections to build larger, faster clusters in a ring or other topologies, which helps users create larger and more diverse neural network models. The interconnect consists of 12 bidirectional 100 Gbps links, with a physical layer based on 28G serial-parallel conversion (SerDes).
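The article does not spell out how such a ring of chips would be used, but one common pattern for multi-chip training over a ring interconnect is a ring all-reduce of gradients. The toy simulation below is a sketch of that general technique in plain Python, under my own assumptions, and is not a description of Intel’s actual protocol.

```python
# Toy simulation of a ring all-reduce over N "chips" connected in a ring.
# Illustrates how a ring interconnect can combine gradients during training.
import numpy as np

N = 4                                    # number of chips in the ring
# Each chip holds its own gradient vector, split into N equal chunks.
grads = [np.arange(8, dtype=float) + 10 * i for i in range(N)]
chunks = [np.split(g.copy(), N) for g in grads]

# Phase 1: reduce-scatter. In step s, chip i sends chunk (i - s) to chip i+1,
# which adds it to its own copy. After N-1 steps, chip i holds the full sum
# of chunk (i + 1) mod N.
for s in range(N - 1):
    sends = [chunks[i][(i - s) % N].copy() for i in range(N)]
    for i in range(N):
        chunks[(i + 1) % N][(i - s) % N] += sends[i]

# Phase 2: all-gather. Each chip circulates its fully reduced chunk so that
# every chip ends up with every summed chunk.
for s in range(N - 1):
    sends = [chunks[i][(i + 1 - s) % N].copy() for i in range(N)]
    for i in range(N):
        chunks[(i + 1) % N][(i + 1 - s) % N] = sends[i]

expected = sum(grads)
assert all(np.allclose(np.concatenate(c), expected) for c in chunks)
print(np.concatenate(chunks[0]))   # every chip now holds the sum of all gradients
```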
A possible counterattack from TPUs and FPGAs
Beyond the GPU enhancements of these chip giants, more companies are trying to trigger a full-scale disruption. In 2016, Google announced that it would independently develop a new processing system called the TPU.
The TPU is a dedicated chip designed for machine learning applications. By reducing the chip’s computational precision and the number of transistors needed for each operation, it can run more operations per second, so finely tuned machine learning models run faster on the chip and give users smarter results sooner. Google mounted the TPU accelerator chip on a circuit board and plugged it into data center servers through the existing hard-disk PCI-E interface.
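The effect of reducing computational precision can be illustrated with a generic 8-bit quantization sketch in NumPy (the function below is an illustrative assumption, not Google’s actual scheme): storing values as 8-bit integers plus a scale factor shrinks the data fourfold and lets hardware use cheaper, faster integer arithmetic.

```python
# Sketch of the reduced-precision idea behind chips like the TPU: store values
# as 8-bit integers plus a scale factor instead of 32-bit floats.
# This is a generic illustration, not Google's actual quantization scheme.
import numpy as np

def quantize(x, num_bits=8):
    """Map a float array onto symmetric signed integers of the given width."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for 8 bits
    scale = np.abs(x).max() / qmax            # one scale factor for the whole tensor
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.default_rng(1).standard_normal(1000).astype(np.float32)
q, scale = quantize(weights)

print(weights.nbytes, "bytes as float32 ->", q.nbytes, "bytes as int8")
print("max quantization error:", np.abs(dequantize(q, scale) - weights).max())
```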
According to Urs Holzle, senior vice president at Google, the company currently uses TPUs and GPUs together, and that will continue for a while. He noted, however, that GPUs can perform graphics operations and serve multiple purposes, whereas the TPU is an ASIC, a logic IC designed to a special specification for a single purpose; because it does only one job it is faster, but the drawback is higher cost.
In addition to Google, Microsoft is also using a new type of processor, the field-programmable gate array (FPGA).
The FPGAs, which currently support Microsoft’s Bing, will drive new search algorithms based on deep neural networks, which model artificial intelligence on the structure of the human brain and execute these commands orders of magnitude faster than conventional chips, according to the company. With them, a search that would otherwise leave the screen blank for four seconds returns results in about 23 milliseconds.
In the third-generation prototype, the chips sit at the edge of each server and plug directly into the network, while still forming a pool of FPGAs that any machine can access. That began to look like something Office 365 could use, and Project Catapult is finally ready to go live. Moreover, Catapult hardware costs about 30 percent of what all the other components in a server cost, requires less than 10 percent of the power to run, and is twice as fast.
In addition, some companies, such as Nervana and Movidius, emulate the GPU’s parallel model but focus on moving data faster, bypassing the functionality needed for graphics. Others, including IBM with its TrueNorth chip, have developed chip designs inspired by neurons, synapses and other features of the brain.
With deep learning and artificial intelligence holding so much promise, the big players are all trying to seize the advantage. If one of them, such as Google, were to replace existing chips with a new kind of chip, it would essentially upend the entire chip industry.
“Whether it’s Nvidia or Intel, Google or Baidu, they’re all looking for the foundation on which AI can be widely used in the future,” said Therese Poletti.
However, many people share the view of Google vice president Urs Holzle: just as GPUs have not replaced CPUs, TPUs will not replace GPUs, and the chip market will see ever greater demand and prosperity.