Autonomous Sensor-Context Learning in Dynamic Human-Centered Internet-of-Things Environments

Seyed Ali Rokni and Hassan Ghasemzadeh
Embedded and Pervasive Systems Laboratory (EPSL)
School of Electrical Engineering and Computer Science
Washington State University, Pullman, WA 99164-2752
{alirokni, hassan}@eecs.wsu.edu

ABSTRACT

Human-centered Internet-of-Things (IoT) applications utilize computational algorithms such as machine learning and signal processing techniques to infer knowledge about important events such as physical activities and medical complications. The inference is typically based on data collected with wearable sensors or sensors embedded in the environment. A major obstacle to large-scale utilization of these systems is that the computational algorithms cannot be shared between users or reused in contexts different from the setting in which the training data were collected. For example, an activity recognition algorithm trained for a wrist-band sensor cannot be used on a smartphone worn on the waist. We propose an approach for automatic detection of physical sensor-contexts (e.g., on-body sensor location) without the need to collect new labeled training data. Our techniques enable system designers and end-users to share and reuse computational algorithms that are trained under different contexts and data collection settings. We develop a framework to autonomously identify the sensor-context, and we propose a gating function that automatically activates the most accurate computational algorithm among a set of shared expert models. Our analysis, based on real data collected from human subjects performing 12 physical activities, demonstrates that the accuracy of our multi-view learning is only 7.9% less than the experimental upper bound for activity recognition using a dynamic sensor that constantly migrates from one on-body location to another.
We also compare our approach with several mixture-of-experts models and transfer learning techniques and demonstrate that our approach outperforms algorithms in both categories.

ICCAD'16, Nov 07-10, 2016, Austin, TX.
ACM ISBN 978-1-4503-2138-9. DOI: 10.1145/1235

1. INTRODUCTION

Many emerging Internet of Things (IoT) applications, from medical monitoring and home automation to automotive engineering and automatic security surveillance, involve human subjects, where humans and things operate synergistically towards satisfying the objectives of the application [1-4]. At the heart of these human-centered IoT systems is human monitoring, where the physiological and behavioral context of the user is assessed using wearable sensors or sensors deployed in the environment. Typically, sensors acquire physical measurements, use computational algorithms such as machine learning and signal processing techniques for local data processing and information extraction, and communicate the results to the outside world, for example, the cloud.

Computational algorithms offer the core intelligence of these systems by allowing for continuous and real-time extraction of clinically important information from sensor data. The generalizability of these algorithms, however, is a challenge due to the dynamically changing configuration of the system.
In fact, the algorithms need to be reconfigured (i.e., retrained) upon any change in the configuration of the system, such as displacement, misplacement, or mis-orientation of the sensors. In practice, development of the computational algorithms requires algorithm training using a sufficiently large amount of labeled training data, a process that is time-consuming, labor-intensive, expensive, and a major barrier to personalized and precision medicine [5]. It is therefore imperative to develop new methodologies for sharing already-trained computational algorithms in order to avoid the costly process of collecting labeled training data for every sensor-context. The development of multi-view learning solutions that transfer machine learning knowledge from previously trained models to new physical contexts in human-centered IoT applications is an entirely new research area that has remained virtually unexplored by the community. The sensor-context learning approach presented in this paper contributes to the development of generalizable and robust machine learning algorithms that operate with high accuracy even in previously unseen context settings, such as utilization of the system by a new user or wearing the sensors on body locations different from those of the data collection setting.

Our pilot application in this study is activity recognition. Recent findings [6-8] suggest that one can develop computational algorithms that compensate for context dynamics (e.g., sensor displacement, misplacement, mis-orientation). These algorithms, however, are accurate only if sufficient labeled training data are collected for all possible sensor-contexts. In fact, an implicit assumption in the development of current computational algorithms for human-centered IoT applications is that the training and future data are in the same feature space and have the same distribution [9].
Therefore, most algorithms require significant amounts of training data for each network configuration or sensor-context.

In this paper, we take first steps toward automatic, real-time training of sensor-context detection without labeled training data. Specifically, we focus on cases where multiple context-specific algorithms (i.e., 'expert models') are shared for use in a dynamic view in which the sensor is worn or used on various body locations, each representing one sensor-context. We propose an approach for learning a gating function that chooses the most accurate expert model based on the observed sensor data. Our ap-
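As a rough illustration of the gating idea (a minimal sketch under assumptions, not the paper's actual method), the following self-contained Python example pairs per-context "expert" classifiers with a gate that activates the expert whose context best explains an incoming sample. The nearest-centroid experts, the diagonal-Gaussian gate, and all names and synthetic numbers are hypothetical:

```python
import numpy as np

class Expert:
    """Hypothetical nearest-centroid activity classifier for one sensor-context."""
    def __init__(self, centroids):
        self.centroids = np.asarray(centroids)  # (n_activities, n_features)

    def predict(self, x):
        # Label = index of the nearest activity centroid.
        return int(np.argmin(np.linalg.norm(self.centroids - x, axis=1)))

class Gate:
    """Hypothetical gating function: scores how well each context's feature
    distribution (diagonal Gaussian) explains the observation."""
    def __init__(self, means, variances):
        self.means = np.asarray(means)          # (n_contexts, n_features)
        self.variances = np.asarray(variances)  # (n_contexts, n_features)

    def select(self, x):
        # Diagonal-Gaussian log-likelihood of x under each context model.
        ll = -0.5 * np.sum((x - self.means) ** 2 / self.variances
                           + np.log(self.variances), axis=1)
        return int(np.argmax(ll))

def classify(x, gate, experts):
    """Route the sample to the expert of the detected sensor-context."""
    return experts[gate.select(x)].predict(x)

# Synthetic example: two contexts (say, wrist vs. waist) whose features
# live in different regions of feature space, two activities each.
wrist = Expert(centroids=[[0.0, 0.0], [1.0, 1.0]])
waist = Expert(centroids=[[5.0, 5.0], [6.0, 6.0]])
gate = Gate(means=[[0.5, 0.5], [5.5, 5.5]],
            variances=[[1.0, 1.0], [1.0, 1.0]])

sample = np.array([5.9, 6.1])                       # resembles waist-context data
context = gate.select(sample)                       # -> 1 (waist)
activity = classify(sample, gate, [wrist, waist])   # -> 1
```

In this toy setting the gate plays the role described above: it detects the sensor-context from the observed data alone and hands classification to the matching shared expert, so no expert is ever applied outside the context it was trained for.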