A Feature Extraction Method for Realtime Human Activity Recognition on Cell Phones

Mridul Khan 1, Sheikh Iqbal Ahamed 2, Miftahur Rahman 1, Roger O. Smith 3

1 Department of Electrical Engineering and Computer Science, North South University, Dhaka, Bangladesh
2 Department of Mathematics, Statistics and Computer Science, Marquette University, Milwaukee, USA
3 Department of Occupational Science & Technology, University of Wisconsin-Milwaukee, Milwaukee, USA

Email: mridul.khan@gmail.com, iq@mscs.mu.edu, mrahman@northsouth.edu, smithro@uwm.edu

Abstract—In this paper we contribute a novel linear-time method for extracting features from acceleration sensor signals in order to identify human activities. We benchmark this method on SCUT-NAA, a standard acceleration-based activity recognition dataset. The results show that the method performs best when the training and testing data come from the same person. In this setting, a linear-kernel support vector machine (SVM) classifier and a radial basis function (RBF) kernel SVM produced similar levels of accuracy. Finally, we demonstrate an application of the proposed method for realtime activity recognition on a cell phone with a single triaxial accelerometer. This feature extraction method can be used for realtime activity recognition on resource-constrained devices.

Keywords—accelerometer; activity recognition; context-aware systems; machine learning; sensor signal processing

I. INTRODUCTION

The recent trend of embedding a large variety of sensors in consumer electronics has made many pervasive computing applications more practical than ever before. Smart phones and gaming console controllers now often have sensors to observe acceleration, location, orientation, ambient lighting, sound, imagery, etc. [1, 2]. Among these sensors, accelerometers are now ubiquitous due to their inclusion in most mid-range and high-end cell phones.
These accelerometers measure the acceleration felt by the device along all three axes. Since similar types of motion produce similar acceleration sequences, machine learning can be used to infer human activities from accelerometer signals.

Activity recognition has applications in healthcare and context-aware pervasive computing systems, among others. It has been used to assess physical activity [3] and to aid cardiac rehabilitation [4]. Cell phone based activity recognition systems [5] are an active area of research since they can lead to new types of context-aware mobile applications.

Recognition is normally carried out in three steps. First, small time segments, or windows, of the sensor signal are taken. Then features that describe the general characteristics of each window are extracted. Finally, a classification algorithm is used to infer the activity. The classification algorithm must, of course, be trained beforehand on a set of samples representing the activities to be recognized.

The feature extraction step is possibly the most important part of the activity recognition problem, since classification can be handled by any existing machine learning algorithm if the features are robust. In general, frequency-domain features have been found to perform best [6]. However, extracting them often requires too much computation to be feasible in realtime systems [7]. The feature extraction scheme that we devised is computationally efficient but less tolerant of person-to-person variations. We combined modified versions of techniques previously used in this domain with quantitative description methods used in electroencephalography (EEG) signal analysis. Our intended use case is activity recognition on cell phones.
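The three-step pipeline described above can be sketched as follows. This is an illustrative sketch only: the window size, step, and the particular time-domain features (per-axis mean and standard deviation) are assumptions for demonstration, not the feature set proposed in this paper, and the classification step is left as a stub.

```python
# Illustrative sketch of the generic recognition pipeline: windowing,
# linear-time feature extraction, then classification (stubbed out here).
# Window size, step, and feature choices are assumptions, not this paper's method.
import math

def sliding_windows(samples, size, step):
    """Split a list of (x, y, z) samples into fixed-size overlapping windows."""
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]

def extract_features(window):
    """Per-axis mean and standard deviation, computed in linear time."""
    n = len(window)
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in window]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        feats.extend([mean, math.sqrt(var)])
    return feats

# Example: 64 synthetic samples of a stationary phone (gravity on the z axis).
signal = [(0.0, 0.0, 9.8)] * 64
windows = sliding_windows(signal, size=32, step=16)
features = [extract_features(w) for w in windows]
# Each feature vector would then be passed to a trained classifier (e.g. an SVM).
```

Each window yields one fixed-length feature vector, so a classifier trained on labeled windows can infer the activity for each new window independently.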
Important characteristics of that scenario are: minimal processing capability; only one 3D accelerometer; a device carried in a mostly static orientation in the user's pocket or purse; and a system trained and used by the same person, namely the owner of the phone. Performance on the standard dataset and on the prototype cell phone application demonstrates that our method is applicable to the targeted use case.

As a whole, this work makes the following contributions:
- A novel linear-time feature extraction scheme that combines disparate methods to identify human activities is presented.
- The accuracy of the proposed method is shown using various classification methods on a standard accelerometer-based dataset and on realtime data on a cell phone.
- A prototype application demonstrates that activities can be detected on modern cell phones in realtime without help from any external sensing or computing device.

The next section provides an overview of the related work. After that, the method is presented. Then we describe how the benchmarking was performed. Finally, the results are analyzed and conclusions are drawn.

II. RELATED WORK

A review of recent research on activity recognition using ambient and body-mounted sensors is available in [9]. The paper provides a holistic view of the field by summarizing a large number of publications. The classification methods covered include Hidden Markov Models, Neural Networks, Fuzzy Logic, and Support Vector Machines. Performance in previous research projects is reported for each algorithm, followed by a critical analysis of the recognition metrics and results. The findings are also summarized in tabular format, which makes it easy to compare the relative performance of the approaches.