Transfer Learning Algorithms for Autonomous Reconfiguration of Wearable Systems

Ramyar Saeedi, Hassan Ghasemzadeh and Assefaw H. Gebremedhin
School of Electrical Engineering and Computer Science, Washington State University
Emails: {rsaeedi, hassan, assefaw}@eecs.wsu.edu

Abstract—Wearables have emerged as a revolutionary technology in many application domains, including healthcare and fitness. Machine learning algorithms, which form the core intelligence of wearables, traditionally deduce a computational model from a set of training examples to detect events of interest (e.g., activity type). However, in the dynamic environments in which wearables typically operate, the accuracy of a computational model drops whenever the configuration of the system (such as device type or sensor orientation) changes. There is therefore a need for systems that can adapt to a new configuration autonomously. In this paper, using transfer learning as an organizing principle, we develop several algorithms for data mapping. The data mapping algorithms employ effective signal similarity methods and are used to adapt the system to the new configuration. We demonstrate the efficacy of the data mapping algorithms using a publicly available dataset on human activity recognition.

I. INTRODUCTION

Wearables have received tremendous attention due to their potential to fundamentally transform applications in healthcare, wellness, fitness, and beyond. The growing ubiquity of sensor-equipped wearables such as mobile devices, pedometers, EEG (electroencephalogram) sensors, and smartphones has made it possible to capture information about human behavior in real time. This growth is leading to increased development and deployment of mobile sensing applications and products [1]–[6].
Wearables must cope with data streams of such heterogeneity and volume that their transmission, storage, organization, and, most importantly, analysis and interpretation call for tasks that can rightly be categorized as "big data" problems [7], [8]. As an example, consider a wearable inertial sensor that collects tri-axial accelerometer, gyroscope, and magnetometer data at 50 Hz. Assuming a collection rate of two bytes per sample per signal, this generates approximately 75 MB of data per day for just one user.

Despite their enormous potential, however, currently existing wearables are designed for controlled environments, lab settings, and small trials with configuration-specific protocols. Scaling these systems up and extending their applications to real-world environments brings about major challenges.

Machine learning and signal processing algorithms form the core intelligence of wearables. Traditional machine learning-based approaches used in contexts involving wearables suffer from several limitations, however. First, the accuracy of computational algorithms drops whenever the system specification or installation changes (e.g., sensor calibration, device orientation, sampling frequency). Second, model retraining can be expensive and time-consuming. In particular, computational algorithms for tasks such as classification and template matching typically need training data to construct processing models for that specific configuration. Retraining the computational algorithms for every configuration requires collecting a sufficient amount of labeled training data, a process that is known to be time-consuming, labor-intensive, and expensive. Third, data collected for one setting (e.g., a specific type or brand of sensor) may no longer be suitable for a new setting.
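The per-user data-volume figure quoted earlier can be reproduced with a short back-of-the-envelope calculation (a sketch assuming continuous 24-hour collection; "75 MB" here is read as mebibytes):

```python
# Back-of-the-envelope check of the ~75 MB/day estimate for one user.
signals = 3 * 3            # accelerometer, gyroscope, magnetometer; 3 axes each
rate_hz = 50               # sampling frequency in Hz
bytes_per_sample = 2       # two bytes per sample per signal
bytes_per_day = signals * rate_hz * bytes_per_sample * 24 * 3600
mb_per_day = bytes_per_day / (1024 ** 2)   # mebibytes
print(f"{mb_per_day:.1f} MB/day")          # roughly 75 MB/day
```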
For example, the training data may be collected using Shimmer sensors while the final data processing is conducted on a different platform, such as smartphones or smartwatches.

In this paper, we aim to address these limitations and make progress towards developing robust and efficient methods for autonomous reconfiguration of wearable systems. Next-generation wearables need to be computationally autonomous in the sense that their underlying computational algorithms reconfigure automatically, without the need for collecting new labeled training data. Within this broad aim, the focus of our current work is on the design of algorithms for mapping sensor data from one setting to another related but different setting. We approach the data mapping problem from a transfer learning perspective. We show that data mapping can be used (a) to avoid dependence on labeled training data from the source domain and (b) to integrate different datasets of the same sensor type. As accelerometers are among the most common sensors [9] and are widely used in human physical monitoring systems (e.g., activity recognition and gait analysis), we use accelerometer data to assess the efficacy of our algorithms.

We outline our contributions as follows.

a) Algorithms. We have developed three different algorithms for data mapping. All three algorithms serve the common purpose of extracting knowledge from the training data. The first two algorithms are based exclusively on signal similarity, while the third is based on signal motifs. We study the three algorithms and present the trade-off each offers.

b) Evaluation. Using a dataset on human activity recognition, we show that signal variation can decrease activity recognition performance by as much as 25%. Signal variations include different subjects, data sampling frequencies, and sensor calibrations. We also assess the performance of the three data mapping algorithms and their effects on activity recognition performance.
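To make the data mapping idea concrete, the sketch below fits a simple per-axis linear map between two hypothetical accelerometer configurations and scores it with R-squared, the same goodness-of-fit measure we report in our evaluation. The simulated signals, scale, and offset are illustrative assumptions only; they are not the signal-similarity or motif-based algorithms developed in this paper.

```python
import random

def fit_linear_map(src, tgt):
    """Least-squares fit of tgt ~ a*src + b (a minimal linear data mapping)."""
    n = len(src)
    mean_s = sum(src) / n
    mean_t = sum(tgt) / n
    cov = sum((s - mean_s) * (t - mean_t) for s, t in zip(src, tgt))
    var = sum((s - mean_s) ** 2 for s in src)
    a = cov / var
    b = mean_t - a * mean_s
    return a, b

def r_squared(actual, predicted):
    """Coefficient of determination of the mapping."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    ss_tot = sum((y - mean_a) ** 2 for y in actual)
    return 1 - ss_res / ss_tot

random.seed(0)
# Hypothetical source-configuration axis, and a target configuration that
# differs in scale (calibration) and offset, plus measurement noise.
src = [random.gauss(0.0, 1.0) for _ in range(500)]
tgt = [1.8 * s + 0.3 + random.gauss(0.0, 0.4) for s in src]

a, b = fit_linear_map(src, tgt)
mapped = [a * s + b for s in src]
print(f"R-squared of mapped signal: {r_squared(tgt, mapped):.2f}")
```

A high R-squared indicates that data mapped from the source configuration closely matches data observed in the target configuration, which is the property the mapping algorithms aim for.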
We show that the accuracy of the data mapping algorithms is high (R-squared = 0.59 to 0.79). We also show that data mapping improves the accuracy of the activity recognition system by up to 15%.

The remainder of the paper is organized as follows. In Section II, we provide necessary background on transfer