IMPLICIT RETRIEVAL OF SALIENT IMAGES USING BRAIN COMPUTER INTERFACE

Ashkan Yazdani 1, Jean-Marc Vesin 2, Dario Izzo 3, Christos Ampatzis 3, Touradj Ebrahimi 1

1 Ecole Polytechnique Fédérale de Lausanne (EPFL), Institute of Electrical Engineering, Multimedia Signal Processing Group
2 Ecole Polytechnique Fédérale de Lausanne (EPFL), Institute of Electrical Engineering, Applied Signal Processing Group
3 European Space and Technology Research Center (ESTEC), Advanced Concepts Team

ABSTRACT

Space missions are often equipped with several high-definition sensors that can autonomously collect a potentially enormous amount of data. The bottleneck in retrieving these often precious datasets is the onboard data storage capacity and the communication bandwidth, which limit the amount of data that can be sent back to Earth. In this paper, we propose a method based on the analysis of brain electrical activity to identify the scientific interest of experts in a given image within a large set of images. Such a method can be used to efficiently create an abundant training set (images together with labels indicating whether they are scientifically interesting) at an image presentation rate considerably faster than conscious expert judgment allows, with less interrogation time for the experts and relatively high performance.

Index Terms— EEG, BCI, Signal Processing, Machine Learning, Implicit Retrieval

1. INTRODUCTION

Autonomous decision-making systems are increasingly in demand for various purposes. Several research groups across the world are conducting extensive research to increase the performance, and simultaneously reduce the cost and risk, of the decisions made by such systems in applications as diverse as space exploration, medicine, and military operations. Space exploration is by its very nature an expensive and risky endeavor.
Various factors, such as stringent communication constraints (limited communication windows, long communication latencies, and limited bandwidth), limited access to and availability of operators, limited crew availability, and system complexity, restrain direct human oversight of many functions [1]. Therefore, autonomy in space research is far more than a convenience and, in some situations, is critical to the success of the mission. Examples of previous work in this context are [2] and [3]. In [4], an onboard autonomous system was developed and deployed on the Spirit and Opportunity rovers to detect dust devils and clouds in the Martian landscape. Another prevailing interest in space research is the possibility of building an intelligent module into space exploration robots on extra-orbital missions, enabling them to search the immense image datasets that they autonomously collect with several high-definition sensors and to select only the scientifically interesting images. These robots could then discard the remaining images and transmit only the selected ones back to Earth. Such a module could also be used for the automatic detection of anomalies in medical images, of interesting military images, and so on, in large image sets.

The authors wish to acknowledge the Swiss National Science Foundation grant no. 116253 and the European Space Agency Ariadna scheme (www.esa.int/gsp/ACT/ariadna/index.htm) for having initiated and supported this research.

A key point, at the center of current technological developments, is the design of algorithms able to classify sensor readings (e.g., images) according to their degree of scientific interest. The main difficulty lies in the definition of what is scientifically interesting. Machine learning algorithms could be trained directly to classify what is scientifically interesting and what is not, without further information about these two very broad classes.
This could potentially allow for broader and fuzzier classification borders, and could result in algorithms able to return not only the strictly defined and expected images, but also a set of images with potentially unexpected, yet relevant, properties. The challenge when following this approach becomes how to create a training set for such a classifier. One option is to resort to what is typically referred to as the interviewing or interrogation technique. Expert scientists are interviewed on a particular set of pictures and asked simply to classify or rank them; subsequently, a computer is trained to produce responses similar to those of the interviewed scientist. In this way, the computer has to automatically extract the relevant features that guided the expert's decision-making and learn to use them in such a way as to mirror the expert's classification.

Despite the simplicity of this methodology, it has various drawbacks. For example, it requires the scientists to undergo long and time-consuming sessions of image classification, which may prove particularly tiring and cumbersome and, in turn, can result in the acquisition of a noisy training set. Moreover, this approach is subject to the fuzziness of the scientist's reasoning when passing a highly cognitive judgment on each picture. In other words, the scientist repeatedly and consciously filters the image, eventually merging even contradictory verdicts into one binary classification or ranking. In this work, we propose an alternative approach to creating such a training set for a classifier; in particular, the information about the expert's classification is extracted directly by classifying his/her brainwaves.
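The interrogation technique described above can be sketched in a few lines: images are reduced to feature vectors, an expert supplies binary "interesting / not interesting" labels, and a classifier is fitted to mirror those labels. The feature vectors, the synthetic (noisy) expert labels, and the choice of logistic regression below are illustrative assumptions, not the method evaluated in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical image descriptors: each image reduced to a feature vector
# (e.g. texture or colour statistics); here simply drawn at random.
n_images, n_features = 200, 16
X = rng.normal(size=(n_images, n_features))

# Synthetic expert labels (1 = "scientifically interesting"): the verdict
# depends on one feature plus noise, mimicking a fuzzy, partly
# inconsistent expert as discussed in the text.
true_w = np.zeros(n_features)
true_w[0] = 2.0
y = ((X @ true_w + rng.normal(scale=0.5, size=n_images)) > 0).astype(float)

# Logistic regression fitted by gradient descent to mirror the expert.
w, b, lr = np.zeros(n_features), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(interesting)
    w -= lr * (X.T @ (p - y)) / n_images
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

Because the synthetic expert is noisy, the classifier cannot reproduce the labels perfectly; this is exactly the "noisy training set" problem that motivates the EEG-based alternative.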
It is well known from neurophysiological studies that when subjects look at images that arouse a mental response such as surprise or anticipation, their parietal cortex is excited in a very characteristic way: a synchronized peak in the global electrical activity of large groups of neurons in the parietal area arises approximately 300 ms after the stimulus (image) presentation. This electrical activity can be recorded with an electroencephalography (EEG) instrument as a positive electric potential wave and is commonly referred to as the P300 [5]. We propose to extract the picture-rating information from the EEG signal recorded while the expert is presented with the pictures in a Rapid Serial Visual Presentation (RSVP) experiment. Our set-