While any sufficiently advanced technology is indistinguishable from magic, current deep learning technologies are nowhere near that, according to a recent Wired article.

Google’s artificial intelligence (AI) victory over Lee Sedol with AlphaGo stunned the world, prompting some industry pundits to boast of deep learning as a human brain simulation. But the truth is that machine learning is far from being a genie let out of a bottle. It is just a mathematical algorithm, and a step forward in understanding intelligence and building human-level AI.

Deep learning is math

Deep learning is rapidly taking over AI, but don’t overhype this nascent technology. Arthur C. Clarke, the famous British writer, once said that any sufficiently advanced technology is indistinguishable from magic. Deep learning is certainly an advanced technology: it can recognize objects and faces in pictures, recognize speech, translate one language into another, and even beat the best humans at Go. But deep learning is not magic.



As tech giants like Google, Facebook and Microsoft continue to integrate the technology into everyday online services, and as the world continues to marvel at AlphaGo’s victory over Lee Sedol, some industry pundits are touting deep learning as a simulation of the human brain. In fact, machine learning is simply math, executed at very large scale.

In fact, deep learning is an algorithm that tweaks a neural network based on data. So what does that mean? Let’s explain: a neural network is a computer program, inspired by the structure of the brain, that consists of a large number of interconnected nodes (or neurons), each of which performs a simple calculation (such as a weighted sum) on the numeric input it receives. The nodes are much simpler than the neurons in the brain, and there are far fewer of them. Deep learning simply adjusts the strengths of the connections between these nodes.
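To make the "simple calculation at each node" concrete, here is a minimal sketch in Python (the function and variable names are illustrative, not from any real framework): a single node computes a weighted sum of its inputs, and "learning" amounts to adjusting those weights.

```python
def node_output(inputs, weights):
    # Each node performs a simple calculation on its numeric inputs,
    # here a weighted sum: x1*w1 + x2*w2 + ...
    return sum(x * w for x, w in zip(inputs, weights))

# The weights are what a learning algorithm would adjust over time.
weights = [0.5, -0.2, 0.1]
print(node_output([1.0, 2.0, 3.0], weights))
```

A real deep network stacks millions of such nodes in layers, but the per-node arithmetic is no more mysterious than this.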

Deep learning is a subfield of machine learning, a very active research branch of AI. In theory, machine learning is a method of approximating a function from a collection of data points. For example, given the sequence 2, 4, 6, a machine can predict that the fourth number should be 8 and the fifth should be 10: the underlying function is 2X, where X is the position in the sequence. Such algorithms have a wide range of applications, including self-driving cars, speech recognition and predicting airline ticket price fluctuations.
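The 2, 4, 6 example can be written out in a few lines. This is a sketch under the assumption that we restrict ourselves to models of the form y = a·x and fit the single parameter a by least squares:

```python
# Data points: positions 1, 2, 3 mapped to values 2, 4, 6.
xs, ys = [1, 2, 3], [2, 4, 6]

# Least-squares fit of y = a*x recovers a = 2.
a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(x):
    return a * x

print(predict(4), predict(5))  # 8.0 10.0
```

Approximating a function from data points is all that is happening here; deep learning does the same thing with far more parameters and far more data.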




In a sense, deep learning is not unique at all. Any machine learning system, deep learning or not, consists of the following basic elements:

1. Execution element: the part of the system that takes action. For example, in a Go program, the part responsible for making moves.

2. Objective function: the function to be learned. For example, in Go, the mapping from board positions to move choices.

3. Training data: a set of labeled data points used to approximate the objective function. For example, a collection of Go board positions, each labeled with a human expert’s choice of move at that position.

4. Data representation: each data point is usually represented as a vector of predetermined variables. For example, the position of each piece on the Go board.

5. Learning algorithm: the algorithm that computes an approximation of the objective function from the training data.

6. Hypothesis space: the space of functions that a learning algorithm may consider.
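The six elements above can be lined up against a toy program. This sketch uses a nearest-neighbour classifier purely for illustration; all names are made up, and a real system would be vastly larger:

```python
# 3. Training data: points labeled by a human (feature vector, label).
training_data = [((0.0, 0.0), "A"), ((1.0, 1.0), "B")]

# 4. Data representation: each point is a fixed-length vector of numbers,
#    compared here by squared Euclidean distance.
def distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

# 5. Learning algorithm / 6. hypothesis space: the space of functions
#    considered is "label of the nearest training point".
def learn(data):
    def hypothesis(x):
        # Approximates the objective function (element 2): the true
        # mapping from inputs to labels.
        return min(data, key=lambda p: distance(p[0], x))[1]
    return hypothesis

# 1. Execution element: the part that acts using the learned function.
classify = learn(training_data)
print(classify((0.9, 0.8)))  # "B"
```

Swap in a Go board for the vectors, expert moves for the labels, and a deep network for the hypothesis space, and the skeleton is the same.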




This structure covers all machine learning methods, from simple linear regression to complex deep learning algorithms. Technically, what we have described is supervised learning, in which every data point is labeled by a human. If the data is not labeled, it is unsupervised learning, which is much harder. If only some of the data is labeled, it is semi-supervised learning.

It is important to note that the first five elements of this machine learning architecture are all manual inputs. Human programmers build each element by hand, but they do not directly control what the program learns. In fact, programmers often analyze the behavior of a learning program, find it less than perfect, and manually modify one or more of its elements. This is hard work, and it can take years of repetitive effort before the desired level is reached.

Human help

Note that a learning program’s abilities are severely limited by this architecture. Specifically:

1. The learning program cannot modify any part of the architecture.

2. The learning program cannot modify itself.

3. Learning programs cannot learn functions outside the hypothesis space.

Because of this, a learning program like AlphaGo cannot learn to play chess or checkers without human help. Moreover, most programmers cannot successfully modify machine learning systems without extensive specialized training; even highly trained data scientists need a great deal of time and resources to build a successful machine learning system.

The design and implementation of the AlphaGo system required more than 30 million training samples culled from the Internet, as well as years of effort by a large team of researchers and engineers. In fact, it took months of hard work just to raise AlphaGo’s performance from the level at which it beat Fan Hui, the European Go champion, to the level at which it beat Lee Sedol.

In addition, AlphaGo uses a family of machine learning methods called reinforcement learning, in which the program repeatedly selects actions and observes the results in order to maximize its reward. In reinforcement learning, the training data are not pre-labeled inputs. Instead, the learning program is given a reward function that assigns different rewards to different states, and it obtains training data by performing actions and observing the rewards it receives. The analysis above applies to reinforcement learning as well: it is still limited by its objective function, data representation and hypothesis space.
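The act-observe-reward loop can be sketched with a toy two-action problem (a so-called bandit; the names and numbers here are illustrative only, and AlphaGo’s actual methods are far more sophisticated):

```python
import random

random.seed(0)

def reward(action):
    # Reward function: assigns different rewards to different outcomes.
    return 1.0 if action == "good" else 0.0

# Running estimate of each action's average reward.
values = {"good": 0.0, "bad": 0.0}
counts = {"good": 0, "bad": 0}

for step in range(200):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(["good", "bad"])
    else:
        action = max(values, key=values.get)
    r = reward(action)  # training data is obtained by acting, not pre-labeled
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]

print(max(values, key=values.get))  # the program settles on "good"
```

Note that no one labeled the actions in advance: the program discovered which action pays off by trying them, which is exactly the distinction from supervised learning drawn above.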



Possibility space

Evolution by natural selection is a striking example of a very ordinary learning process producing remarkable results. But it is important to recognize the difference between natural evolution and its computer simulation. Programs that simulate the process are called genetic algorithms, and they have not been particularly successful.

Genetic algorithms operate on representations of life forms, and those representations are enormous. For example, the human genome is estimated to contain more than a billion bits of information, meaning that the number of possible human DNA sequences is two to the power of a billion. Exploring a space of this size is extremely costly, and its topology gives genetic algorithms no easy path to good solutions. Go, by contrast, has a much smaller possibility space, making it far easier to explore with machine learning methods.

It can take a decade of work by computer scientists, researchers and statisticians to successfully define an objective function that turns a real-world task into a simple optimization problem. And many problems require still more analysis before they can be expressed in machine-actionable form at all. For example, how do you write down the meaning of a sentence in a form a machine can understand? As Gerald Sussman, a professor at MIT, put it: if you can’t say it, you can’t learn it. In such cases, even choosing an appropriate representation is an open problem, let alone solving the task itself.

Thus, deep learning (and machine learning more broadly) has proven to be a powerful approach to AI, but current machine learning methods still require substantial human involvement to put a problem into machine-actionable form. After that, it takes considerable skill and time to repeatedly refine the problem formulation until the machine can finally solve it. Most importantly, the process works only within a very narrow scope, and the machine has very little autonomy. Unlike humans, AI has no autonomy.

So machine learning is far from being a genie to be let out of a bottle. Rather, it is a solid step forward in understanding intelligence and building human-level AI.