[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[4] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception architecture for computer vision. In CVPR, 2016.
[5] Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, and Yoshua Bengio. SampleRNN: An unconditional end-to-end neural audio generation model. arXiv preprint, 2016.
[6] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. CoRR, 2016.
[7] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 2015.
[8] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[9] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of Go without human knowledge. Nature, 2017.
[10] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint, 2016.
[11] James McClelland, Bruce McNaughton, and Randall O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 1995.
[12] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. The Psychology of Learning and Motivation, 1989.
[13] Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011.
[14] Fei-Fei Li, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.
[15] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[16] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, 2017.
[17] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In NIPS, 2016.
[18] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017.
[19] Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. In International Conference on Learning Representations, 2018.
[20] Ronald Kemker and Christopher Kanan. FearNet: Brain-inspired model for incremental learning. In International Conference on Learning Representations, 2018.
[21] Alexander Gepperth and Cem Karaoguz. A bio-inspired incremental learning architecture for applied perceptual problems. Cognitive Computation, 2016.
[22] Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In AAAI, 2018.
[23] Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. Learning to remember rare events. In International Conference on Learning Representations, 2017.
[24] German Parisi, Ronald Kemker, Jose Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. arXiv preprint, 2018.
[25] Daniel Kahneman. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
[26] Dharshan Kumaran, Demis Hassabis, and James L. McClelland. What learning systems do intelligent agents need? Complementary learning systems theory updated. Trends in Cognitive Sciences, 2016.
[27] Pablo Sprechmann, Siddhant Jayakumar, Jack Rae, Alexander Pritzel, Adria Puigdomenech Badia, Benigno Uria, Oriol Vinyals, Demis Hassabis, Razvan Pascanu, and Charles Blundell. Memory-based parameter adaptation. In International Conference on Learning Representations, 2018.
[28] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
[29] Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. In NIPS, 2017.
[30] David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In NIPS, 2017.
[31] Zhizhong Li and Derek Hoiem. Learning without forgetting. In European Conference on Computer Vision (ECCV), 2016.
[32] Xu He and Herbert Jaeger. Overcoming catastrophic interference using conceptor-aided backpropagation. In International Conference on Learning Representations, 2018.
[33] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In ICML, 2017.
[34] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Lifelong few-shot learning. ICML Lifelong Learning Workshop, 2017. URL http://people.eecs.berkeley.edu/~cbfinn/_files/icml2017_llworkshop.pdf.