As one of the world’s most respected artificial intelligence research institutes, Google DeepMind continues to surprise us. In its official summary last year, DeepMind said: “Another important area of research for us in 2016 was memory, specifically the difficult problem of how to combine the decision-making intelligence of neural networks with the ability to store and reason about complex structured data.”



DeepMind has published a new study on continual learning: an approach that lets computers keep learning new tasks without forgetting what they have already learned. The paper draws on biology, in particular on theories of synaptic consolidation, including the idea that a synapse stores not only a weight but also an uncertainty about that weight. The study has drawn widespread attention; Bloomberg, for example, reported that the work could open a new road toward artificial intelligence systems that are more easily applied to a variety of tasks, and that it should also improve such systems' ability to transfer knowledge between tasks and to master a series of interlinked steps. Below, Machine Heart gives an overview of DeepMind's official blog post and the paper's abstract.



Paper:

http://www.pnas.org/content/early/2017/03/13/1611835114.full.pdf


Computer programs that learn to perform tasks also tend to forget them quickly. Our research shows that the learning rule can be modified so that a program remembers old tasks while learning new ones. This is an important step towards smarter machines that can learn incrementally and adaptively.


Deep neural networks are currently the most successful machine learning technique and can be used to solve a variety of tasks such as language translation, image classification and image generation. However, they can usually learn multiple tasks only if the data for all of them is presented at once. As a network is trained on a particular task, its parameters gradually adapt to that task. When a new task is introduced, the new adaptation overwrites the knowledge the network has already acquired. This phenomenon is known in cognitive science as "catastrophic forgetting", and it is considered one of the fundamental limitations of neural networks.


Our brains, on the other hand, work differently. We can learn incrementally, acquiring one skill at a time, and we can apply what we have already learned to new tasks. This was the starting point for our recent paper in PNAS, "Overcoming catastrophic forgetting in neural networks", in which we propose a method for overcoming catastrophic forgetting. Our inspiration comes from neuroscience, specifically from theories about how mammalian and human brains consolidate previously acquired skills and memories.


Neuroscientists have identified two types of consolidation in the brain: systems consolidation and synaptic consolidation. Systems consolidation is the process by which memories acquired in the fast-learning parts of the brain are imprinted onto the slow-learning parts. This imprinting is thought to occur through conscious and unconscious recall, for example during dreams. The second mechanism, synaptic consolidation, means that synaptic connections are less likely to be overwritten if they were important in a previously learned task. Our algorithm takes its inspiration specifically from this second mechanism to address catastrophic forgetting.


Neural networks are made up of many connections that work in much the same way as the connections in the brain. After learning a task, we calculate how important each connection is to that task. When the network learns a new task, each connection is protected in proportion to its importance to the old tasks. It is therefore possible to learn a new task without overwriting what was learned in previous tasks, and without incurring significant computational cost. We can think of the protection applied to each connection as a spring that anchors it to its previous value, with a stiffness proportional to the connection's importance. For this reason, we call our algorithm Elastic Weight Consolidation (EWC).
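In the paper, this spring-like protection is implemented as a quadratic penalty added to the loss of the new task. Schematically, when task B is learned after task A, the objective has the form below, where \(\mathcal{L}_B\) is the loss for task B, \(\theta^{*}_{A}\) are the network parameters at the end of training on task A, \(F_i\) is the corresponding diagonal element of the Fisher information matrix (a measure of how important parameter \(i\) was for task A), and \(\lambda\) sets how strongly the old task is protected:

\[
\mathcal{L}(\theta) = \mathcal{L}_B(\theta) + \sum_i \frac{\lambda}{2}\, F_i \left(\theta_i - \theta^{*}_{A,i}\right)^2
\]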


DeepMind's new AI program handling the learning of two tasks at once


To test our algorithm, we exposed an agent to Atari games in sequence. Mastering even a single game from the score alone is challenging, but mastering multiple games in sequence is harder still, because each game requires its own strategy. As the chart below shows, without EWC the agent quickly forgets each game once it stops playing it (blue); as a result, on average it barely masters any of the games. With EWC (brown and red), however, the agent does not forget the games so easily and is able to master several of them one after another.
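For readers who want a concrete picture of how the penalty is applied across a sequence of tasks, here is a minimal sketch in PyTorch. It is not DeepMind's code and not their reinforcement-learning setup for Atari; the toy model, the synthetic classification tasks, the empirical-Fisher estimate and the hyper-parameters (ewc_lambda, learning rate, number of steps) are all illustrative assumptions.

```python
# Minimal EWC-style sequential training sketch (illustrative, not DeepMind's code).
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)          # toy stand-in for the agent's network
opt = torch.optim.SGD(model.parameters(), lr=0.1)
anchors = []                             # one (theta*, fisher) pair per finished task
ewc_lambda = 100.0                       # assumed value; controls how strongly old tasks are protected

def diag_fisher(xs, ys):
    """Diagonal Fisher estimate: mean squared gradient of the log-likelihood.
    (Uses the observed labels, i.e. the 'empirical Fisher' simplification.)"""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):
        model.zero_grad()
        logp = F.log_softmax(model(x.unsqueeze(0)), dim=1)[0, y]
        logp.backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad.detach() ** 2
    return [f / len(xs) for f in fisher]

def ewc_penalty():
    """Quadratic 'spring' pulling each weight toward its old, important value."""
    loss = torch.zeros(())
    for theta_star, fisher in anchors:
        for p, p_old, f in zip(model.parameters(), theta_star, fisher):
            loss = loss + (f * (p - p_old) ** 2).sum()
    return loss

def train_task(xs, ys, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(xs), ys) + 0.5 * ewc_lambda * ewc_penalty()
        loss.backward()
        opt.step()
    # After finishing the task, freeze its parameters and importances as an anchor.
    theta_star = [p.detach().clone() for p in model.parameters()]
    anchors.append((theta_star, diag_fisher(xs, ys)))

# Two synthetic "tasks" learned one after the other.
for seed in (0, 1):
    g = torch.Generator().manual_seed(seed)
    xs = torch.randn(256, 10, generator=g)
    ys = (xs[:, seed] > 0).long()        # each task keys on a different feature
    train_task(xs, ys)
```

The key point of the sketch is that, after each task, the current parameters and their estimated importances are frozen as an anchor, and every later task is trained against the sum of the resulting quadratic penalties.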



Today, computer programs cannot learn from data adaptively and in real time. However, we have shown that catastrophic forgetting is not an insurmountable obstacle for neural networks. We also hope that this study represents a step towards programs that can learn in a more flexible and automated way.


Our study also advances our understanding of how synaptic consolidation works in the brain. In fact, the neuroscience theories our research builds on have so far been proven only in very simple cases. By showing that the same theories can be applied in more realistic and complex machine learning settings, we hope to lend further weight to the idea that synaptic consolidation is key to retaining memories and know-how.


Paper: Overcoming catastrophic forgetting in neural networks





Abstract


The ability to learn tasks in a sequential manner is crucial to the development of artificial intelligence. Until now, neural networks have not been capable of this, and catastrophic forgetting is generally considered an inevitable feature of connectionist models. Our study shows that it is possible to overcome this limitation and train networks that can maintain expertise on tasks they have not experienced for a long time. Our method selectively slows down learning on the weights that are important for those tasks. By solving a set of classification tasks based on a handwritten digit dataset and by learning several Atari 2600 games sequentially, we show that our approach is scalable and effective.

