Multi-task Learning via Non-sparse Multiple Kernel Learning

Wojciech Samek 1,2 ⋆, Alexander Binder 1, and Motoaki Kawanabe 2,1 ⋆⋆

1 Technical University of Berlin, Franklinstr. 28/29, 10587 Berlin, Germany
wojciech.samek@tu-berlin.de, alexander.binder@tu-berlin.de
2 Fraunhofer Institute FIRST, Kekuléstr. 7, 12489 Berlin, Germany
motoaki.kawanabe@first.fraunhofer.de

Abstract. In object classification tasks on digital photographs, multiple categories are considered for annotation. Some of these visual concepts have semantic relations and can appear simultaneously in images. Although taxonomical relations and co-occurrence structures between object categories have been studied, it is not easy to exploit such information to enhance the performance of object classification. In this paper, we propose a novel multi-task learning procedure which extracts useful information from the classifiers trained for the other categories. Our approach is based on non-sparse multiple kernel learning (MKL), which has been successfully applied to adaptive feature selection for image classification. Experimental results on the PASCAL VOC 2009 data show the potential of our method.

Keywords: Image Annotation, Multi-Task Learning, Multiple Kernel Learning

1 Introduction

Recognizing objects in images is one of the most challenging problems in computer vision. Although much progress has been made during the last decades, the performance of state-of-the-art systems is still far from human ability. One possible reason is that humans incorporate co-occurrences and semantic relations between object categories into their recognition process. In contrast, standard procedures for image categorization learn one-vs-rest classifiers for each object class independently [2].
In this paper, we propose a two-step multi-task learning (MTL) procedure which extracts useful information from the classifiers for the other categories based on multiple kernel learning (MKL) [6] and its non-sparse extension [4]. In the first step, we train and apply the classifiers independently for each class and construct extra kernels (similarities between images) from their outputs. In the second step, we incorporate information from the other categories by applying MKL to the extended set of kernels. Our approach has several advantages over standard MTL methods such as that of Evgeniou et al. [3],

⋆ né Wojcikiewicz
⋆⋆ We thank Klaus-Robert Müller for valuable suggestions. This work was supported by the Federal Ministry of Economics and Technology of Germany under the project THESEUS (FKZ 01MQ07018) and by the FP7-ICT program of the European Community, under the PASCAL2 Network of Excellence (ICT-216886).
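The two-step procedure above can be sketched as follows. This is a minimal illustration, not the authors' implementation: ridge regression stands in for the per-class SVMs, an RBF kernel on the vectors of classifier outputs is one hypothetical choice of "output kernel", and the non-sparse MKL kernel weights (which the method would learn under an ℓ2-norm constraint) are simply fixed to uniform values here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-label data: n images, d features, C categories (labels in {-1, +1}).
n, d, C = 60, 5, 3
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((d, C))
Y = np.sign(X @ W_true + 0.1 * rng.standard_normal((n, C)))

# Step 1: train independent one-vs-rest classifiers per category.
# Ridge regression is used here as a stand-in for the SVMs in the paper.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)  # (d, C) weights
scores = X @ W  # (n, C) classifier outputs for all categories

# Construct an extra "output kernel": an RBF similarity between the
# C-dimensional score vectors of each pair of images (hypothetical choice).
sq_dist = ((scores[:, None, :] - scores[None, :, :]) ** 2).sum(axis=-1)
K_out = np.exp(-sq_dist / sq_dist.mean())

# Base kernel computed on the original image features.
K_base = X @ X.T

# Step 2: combine base and output kernels with non-negative weights beta.
# Non-sparse MKL would learn beta under an l2-norm constraint, keeping all
# weights strictly positive; here we fix uniform weights as a stand-in.
beta = np.array([0.5, 0.5])
beta = beta / np.linalg.norm(beta)          # enforce ||beta||_2 = 1
K_combined = beta[0] * K_base + beta[1] * K_out
```

Since both base and output kernels are positive semi-definite and the weights are non-negative, the combined kernel remains a valid kernel and can be passed to any kernel classifier in the second training round.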