Proceedings of the Fifth International Brain-Computer Interface Meeting 2013. DOI:10.3217/978-3-85125-260-6-62

Iterative EEG-Based Natural Image Search Under RSVP

M. Ušćumlić, R. Chavarriaga, J. del R. Millán

Defitech Chair in Non-Invasive Brain-Machine Interface, Center for Neuroprosthetics, EPFL, Lausanne, Switzerland

Correspondence: STI-CNBI, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland. E-mail: marija.uscumlic@epfl.ch

Abstract. This work extends previous studies on using EEG decoding for automatic image retrieval. We propose an iterative way to integrate the information obtained from EEG decoding and from image processing methods. With real-world BCI applications in mind, we demonstrate that a limited number of EEG channels provides sufficient information about the subject's preference to be exploited for image retrieval in the proposed synergistic scenario. Furthermore, to approximate a more realistic scenario, we used natural images (i.e., images of objects in their natural environment).

Keywords: EEG, Single-Trial Classification, RSVP, Image Retrieval, BCI

1. Introduction

Humans' ability to process visual information outperforms state-of-the-art computer methods. For this reason, analysis of EEG responses to visual stimuli, in particular under the rapid serial visual presentation (RSVP) protocol, has been proposed as a complement to image recognition systems. In this scenario, the presented images are labeled as target/non-target based on the EEG activity, and the decoded labels are then propagated to unseen images by similarity and data mining methods [Pohlmeyer et al., 2011].

We propose an alternative, iterative scenario for coupling EEG decoding with automatic image labeling. An iteration consists of assigning EEG-based labels to the presented images (i.e., an RSVP sequence) and propagating them to the unseen images. This yields a set of probabilistic labels based on both brain signals and image features.
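As a toy illustration of this decode-propagate-fuse loop, the sketch below simulates it on synthetic data. Everything in it (the 2-D visual features, the noisy decoder, the RBF similarity, and the running-sum fusion) is an illustrative assumption, not the implementation used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database: 200 images as 2-D visual features, the first 40 of which
# are "targets". All sizes and models here are illustrative only.
n, n_targets = 200, 40
feats = np.vstack([rng.normal(0.0, 1.0, (n_targets, 2)),
                   rng.normal(4.0, 1.0, (n - n_targets, 2))])
is_target = np.arange(n) < n_targets

def decode_eeg(idx):
    """Stand-in for single-trial EEG decoding: a noisy target probability."""
    p = np.where(is_target[idx], 0.8, 0.2)
    return np.clip(p + rng.normal(0.0, 0.1, idx.size), 0.0, 1.0)

def propagate(shown, p_shown):
    """Spread the EEG-based labels to every image via visual similarity."""
    d2 = ((feats[:, None, :] - feats[None, shown, :]) ** 2).sum(-1)
    w = np.exp(-d2 / 2.0)                   # RBF similarity to shown images
    return (w * p_shown).sum(1) / w.sum(1)  # similarity-weighted label

scores = np.zeros(n)                  # evidence fused across iterations
shown = np.zeros(n, dtype=bool)
seq = rng.permutation(n)[:50]         # initial RSVP sequence
for _ in range(4):                    # four iterations, as in the protocol
    p = decode_eeg(seq)               # EEG-based labels for shown images
    scores += propagate(seq, p)       # fuse via a simple running sum
    shown[seq] = True
    ranked = np.argsort(-scores)      # rank the whole database
    seq = ranked[~shown[ranked]][:50] # next sequence: best unseen images

ranking = np.argsort(-scores)         # final retrieval ranking
```

In this toy run, the top of the final ranking becomes strongly enriched in target images even though the simulated decoder is far from perfect; the point is only to make the control flow of the iterative scenario concrete.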
Then, we fuse the labels obtained at each iteration before ranking the whole database and retrieving the target images.

2. Material and Methods

2.1. Experimental Setting

Subjects (N = 15) were presented with sequences of natural images at a rate of 4 Hz. They were instructed to count the images of a specified object. The experiment consisted of two phases: training and testing. Different sets of images from the Corel database were used in the training phase (1600 images) and the testing phase (1382 images). Four search tasks (Elephant, Car, Lion, and Butterfly) were given in the training phase, and three search tasks (Eagle, Tiger, and Train) in the testing phase. In the training sequences, 10% of the images were targets.

The testing phase consisted of four iterations (200 images per iteration). In the initial iteration, a sequence of images was presented (10% of them targets). The EEG response elicited by each image was decoded to obtain labels for the presented images (target/non-target). This information was used to label the remaining images in the database and to generate the image sequence shown in the next iteration.

EEG data were recorded with a 64-channel BioSemi ActiveTwo system at a sampling frequency of 2048 Hz. The EEG signals were band-pass filtered (1-10 Hz) and downsampled to 32 Hz, then re-referenced by common average reference (CAR) computed over 41 electrodes (the peripheral electrodes were excluded).

2.2. EEG-based Image Labeling

The EEG signals from the training phase are used to train a Gaussian classifier (target vs. non-target trials) [Millán et al., 2004], using four prototypes per class. The feature vector is obtained by concatenating, for a subset of 8 channels (Pz, PO3, POz, CPz, Cz, PO4, C3, and C4), the samples in the interval from 200 ms to 700 ms after stimulus onset. The feature dimensionality is then reduced by keeping only the features with high discriminant power (DP) [Galán et al., 2007].
2.3. Automatic Image Labeling

This step propagates the labels obtained from the EEG decoding to the remaining (unseen) images in the database. We used a semi-supervised approach for automatic image labeling, exploiting a visual-similarity graph of the images in the database [Yang et al., 2006]. Each node in the graph represents an image, and its state is the probability that the corresponding image is a target.

Published by Graz University of Technology Publishing House, sponsored by medical engineering GmbH. Article ID: 062
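A minimal sketch of such graph-based propagation is given below. The specific update rule (a classic normalized-diffusion scheme) is an assumption standing in for the method of [Yang et al., 2006]; the variable names are illustrative:

```python
import numpy as np

def propagate_labels(W, y, alpha=0.9, n_iter=100):
    """Diffuse initial scores y over a similarity graph W.
    W : (n, n) symmetric visual-similarity matrix (zero diagonal)
    y : initial target scores (EEG-derived for shown images, 0.5 for unseen)
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))               # D^{-1/2} W D^{-1/2}
    f = y.copy()
    for _ in range(n_iter):
        f = alpha * (S @ f) + (1.0 - alpha) * y   # diffuse, keep EEG evidence
    return f

# Four images: 0 and 1 visually similar, 2 and 3 visually similar.
W = np.array([[0.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 1.0],
              [0.1, 0.1, 1.0, 0.0]])
# EEG decoding says image 0 is a target (0.9) and image 2 is not (0.1);
# images 1 and 3 were never shown, so they start at 0.5.
y = np.array([0.9, 0.5, 0.1, 0.5])
f = propagate_labels(W, y)
```

After propagation, the unseen image 1 (similar to the decoded target) ends up with a higher score than the unseen image 3 (similar to the decoded non-target), which is exactly the behavior the iterative retrieval loop relies on.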