CONTINUAL FEW-SHOT LEARNING WITH HIPPOCAMPAL-INSPIRED REPLAY

Gideon Kowadlo, Abdelrahman Ahmed, Amir Mayan, David Rawlinson
Cerenaut

ABSTRACT

Continual learning and few-shot learning are important frontiers in the quest to improve Machine Learning. There is a growing body of work in each frontier, but very little combining the two. Recently, however, Antoniou et al. [1] introduced a Continual Few-shot Learning framework, CFSL, that combines both. In this study, we extended CFSL to make it more comparable to standard continual learning experiments, where a much larger number of classes are usually presented. We also introduced an ‘instance test’ to classify very similar specific instances, a capability of animal cognition that is usually neglected in ML. We selected representative baseline models from the original CFSL work and compared them to a model with Hippocampal-inspired replay, as the Hippocampus is considered to be vital to this type of learning in animals. As expected, learning more classes is more difficult than in the original CFSL experiments, and, interestingly, the way in which the classes are presented makes a difference to performance. Accuracy in the instance test is comparable to that in the classification tasks. The use of replay for consolidation improves performance substantially on both types of task, particularly the instance test.

Keywords: Continual learning · Few-shot learning · Continual few-shot learning · CFSL · Hippocampus · CLS

1 Introduction

Over the past decade, Machine Learning (ML) has made impressive progress in many areas. The areas in which progress has been most dramatic share some common characteristics. Typically, a model learns from a large i.i.d. dataset with many samples per class, and after a training phase the weights are fixed, i.e. the model does not continue to learn. This is limiting for many applications, and as a result distinct subfields have emerged that embrace different characteristics, such as continual learning and few-shot learning.
In continual learning (also known as lifelong learning), the challenge is to continually learn new tasks while maintaining performance on previous ones. A well-known difficulty is catastrophic forgetting [2], in which new learning disrupts existing knowledge. The many approaches to tackling catastrophic forgetting fall broadly into three categories [3]: regularization-based methods, parameter-isolation methods, and replay methods, the last of which are inspired by Hippocampal replay [4]. To our knowledge, none of the reported works explore continual learning with few samples per class.

In few-shot learning, only a few samples of each class are available. In the standard framework [5, 6], background knowledge is first acquired in a pre-training phase with many classes. Then one or a few examples of a novel class are presented for learning, and the task is to identify this class in a test set (typically 5 or 20 samples of different classes). Knowledge of novel classes is not permanently integrated into the network, which precludes continual learning.

A special case of few-shot learning is reasoning about specific instances. This is easy for animals, but typically neglected by ML research. For example, you usually know which coffee cup is yours, even if it appears similar to the cup of tea that belongs to your colleague. It is easy to see how this capability has applications across domains, from autonomous robotics to dialogue with humans to fraud detection.

Another enviable characteristic of human and animal learning is the ability to perform both continual and few-shot learning simultaneously. We need to accumulate knowledge quickly and may only ever receive a few examples to learn from. For example, given knowledge of vehicles (e.g. trucks, cars, bikes etc.), we can learn about any number

arXiv:2209.07863v2 [cs.NE] 19 Sep 2022
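The replay methods mentioned above can be illustrated with a minimal sketch: a small buffer retains a sample of past experience, and each training step interleaves new data with replayed old data so that earlier tasks are rehearsed alongside new ones. This is an illustrative toy implementation, not the paper's model; the names (`ReplayBuffer`, `replay_train_step`, `model_update`) are hypothetical, and the buffer here uses simple reservoir sampling as one common design choice.

```python
import random

class ReplayBuffer:
    """Fixed-capacity store of past (input, label) pairs.

    Reservoir sampling keeps an approximately uniform sample
    of everything seen so far, regardless of stream length.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0  # total items ever offered to the buffer

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a random slot with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        # Draw up to k stored items without replacement.
        return random.sample(self.items, min(k, len(self.items)))

def replay_train_step(model_update, new_batch, buffer, replay_size):
    """One training step: mix new data with replayed old data.

    `model_update` stands in for a gradient step on the mixed batch.
    """
    mixed = list(new_batch) + buffer.sample(replay_size)
    random.shuffle(mixed)
    model_update(mixed)
    # Store the new data so future steps can rehearse it.
    for item in new_batch:
        buffer.add(item)
```

In a Complementary Learning Systems (CLS) view, the buffer plays the role of the fast-learning hippocampal store whose contents are replayed to consolidate knowledge into a slow-learning cortical network; a real implementation would replace `model_update` with gradient updates of a neural network.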