Transductive Transfer Machine

Nazli Farajidavar, Teofilo deCampos, Josef Kittler
CVSSP, University of Surrey, Guildford, Surrey, UK GU2 7XH

Abstract. We propose a pipeline for transductive transfer learning and demonstrate it on computer vision tasks. In pattern classification, methods for transductive transfer learning (also known as unsupervised domain adaptation) are designed to cope with cases in which one cannot assume that training and test sets are sampled from the same distribution, i.e., they come from different domains. However, some unlabelled samples belonging to the same domain as the test set (i.e. the target domain) are available, enabling the learner to adapt its parameters. We approach this problem by combining three methods that transform the feature space. The first finds a lower-dimensional space that is shared between source and target domains. The second applies local transformations to each source sample to further increase the similarity between the marginal distributions of the datasets. The third applies one transformation per class label, aiming to increase the similarity between the posterior probabilities of samples in the source and target sets. We show that this combination leads to an improvement over the state-of-the-art on cross-domain image classification datasets, using raw images or basic features and a simple one-nearest-neighbour classifier.

1 Introduction

In many machine learning tasks, such as object classification, it is often not possible to guarantee that the data used to train a learner offers a good representation of the distribution of samples in the test set. Furthermore, it is often expensive to acquire vast amounts of labelled training samples in order to provide classifiers with a good coverage of the feature space. Transfer learning methods can offer low-cost solutions to these problems, as they do not assume that training and test samples are drawn from the same distribution [1].
Such techniques are becoming more popular in Computer Vision, particularly after Torralba and Efros [2] discovered significant biases in object classification datasets. However, much of the work focuses on inductive transfer learning problems, which assume that labelled samples are available in both source and target domains. In this paper we focus on the case in which only unlabelled samples are available in the target domain. This is a transductive transfer learning (TTL) problem, i.e., the joint probability distribution of samples and classes in the source domain, P(X_src, Y_src), is assumed to be different from, but related to, the target domain joint distribution P(X_trg, Y_trg), and the labels Y_trg are not available in the target set. We follow a notation similar to that of [1] (see Table 1). Some papers in the literature refer to this problem as Unsupervised Domain Adaptation.
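The TTL setting above can be illustrated with a minimal synthetic sketch: a labelled source domain, an unlabelled target domain whose marginal distribution is shifted, and the simple one-nearest-neighbour classifier mentioned in the abstract. The per-domain mean-centring used here is only a crude stand-in for marginal alignment, not the feature-space transformations proposed in this paper; all data and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labelled source domain: two classes drawn from isotropic Gaussians.
X_src = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                   rng.normal(2.0, 0.5, (50, 2))])
y_src = np.array([0] * 50 + [1] * 50)

# Target domain: same classes, but the whole domain is shifted, so the
# marginal distributions P(X_src) and P(X_trg) differ (covariate shift).
shift = np.array([2.0, 2.0])
X_trg = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                   rng.normal(2.0, 0.5, (50, 2))]) + shift
y_trg = np.array([0] * 50 + [1] * 50)  # unseen by the learner; evaluation only

def nn_predict(X_train, y_train, X_test):
    """1-nearest-neighbour prediction under Euclidean distance."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

# Without adaptation, the shifted target class 0 lands on top of source
# class 1, so the source-trained 1-NN classifier degrades.
acc_raw = (nn_predict(X_src, y_src, X_trg) == y_trg).mean()

# Crude unsupervised adaptation: centre each domain on its own mean,
# using only unlabelled target samples (as the TTL setting allows).
X_src_c = X_src - X_src.mean(axis=0)
X_trg_c = X_trg - X_trg.mean(axis=0)
acc_adapted = (nn_predict(X_src_c, y_src, X_trg_c) == y_trg).mean()

print(f"1-NN accuracy without adaptation: {acc_raw:.2f}")
print(f"1-NN accuracy after mean alignment: {acc_adapted:.2f}")
```

The point of the sketch is only that adaptation uses no target labels: the alignment step touches X_trg but never y_trg, which is the defining constraint of the transductive setting.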