Plan Recognition Based on Sensor-Produced Micro-Context for Eldercare

C. Phua 1, J. Biswas 2, A. Tolstikov 2, V. Foo 2, W. Huang 3, M. Jayachandran 2 and A. P. W. Aung 2
1 Data Mining Dept., 2 Networking Protocols Dept. and 3 Computer Vision and Image Understanding Dept.
Institute for Infocomm Research, Agency for Science, Technology and Research
1 Fusionopolis Way, Connexis, Singapore 138632
{cwphua, biswas, atolstikov, sffoo, wmhuang, mjay, apwaung}@i2r.a-star.edu.sg

P. C. Roy 4, H. Aloulou 5, M. A. Feki 6, S. Giroux 4, A. Bouzouane 7 and B. Bouchard 7
4 DOMUS Lab, Université de Sherbrooke, Canada
{patrice.c.roy, sylvain.giroux}@usherbrooke.ca
5 École Nationale d'Ingénieurs de Sfax, Tunisia
6 Alcatel-Lucent Bell N.V., Copernicuslaan 50, 2018 Antwerp, Belgium
Mohamed_Ali.Feki@alcatel-lucent.com
7 LIARA Lab, Université du Québec à Chicoutimi, Canada
{abdenour.bouzouane, Bruno.bouchard}@uqac.ca

Abstract— This paper outlines an approach that we are taking for eldercare applications in the smart home, involving cognitive errors and their compensation. Our approach involves high-level modeling of the daily activities of the elderly by breaking these activities down into smaller units, which can then be automatically recognized at a low level by collections of sensors placed in the homes of the elderly. This separation allows us to employ plan recognition algorithms and systems at a high level, while developing stand-alone activity recognition algorithms and systems at a low level. It also allows the mixing and matching of multi-modality sensors of various kinds that support the same high-level requirement.

I. INTRODUCTION

Monitoring people, whether in isolation or in groups, is an important aspect of modern healthcare in general, and of eldercare in particular. Until today, the painstaking and difficult task of monitoring had to be done by attendants and caregivers.
However, with the confluence of technologies such as ubiquitous computing, low-cost multi-modal sensing and mobile wireless networking, it is becoming possible to monitor elderly people in so-called smart spaces equipped with Ambient Intelligence (AmI), making use of artificial intelligence techniques to recognize the activities and behaviors of the elderly.

In this paper, we discuss an application involving, in particular, patients in the initial stages of dementia, developed in the Ambient Intelligence for Home-based Elderly Care (AIHEC) project at the Institute for Infocomm Research (I2R). This application requires high-level intelligence involving normal and erroneous plan recognition, as well as low-level intelligence involving the recognition of activities taking place within the home.

With this application in mind, Section II presents a meal-taking behavior scenario involving some cognitive errors in order to illustrate the problem. Section III outlines the goals and challenges of this research, providing definitions of the terms plan, activity and behavior. Section IV describes our system for extracting information from data gathered by sensors and the manner in which this information is used for intelligent decision-making. Section V presents related work and discusses how to bridge the gap between sensors and applications. Finally, we conclude the paper with future perspectives of our research.

II. MEAL-TAKING SCENARIO

Patients in the initial stages of dementia, such as Alzheimer's disease, who live by themselves often exhibit behavioral problems while eating. They may have difficulty beginning to eat, they may forget to resume eating after an interruption, they may order activities wrongly, and so on. For this reason, a geriatrician is often interested in knowing whether an elderly patient who lives alone has eaten his meals well.
To illustrate how an ambient intelligence smart home can monitor and recognize activities of daily living (ADLs) and provide cognitive assistance to the patient for those ADLs, we present a realistic scenario of the meal-taking ADL involving some cognitive errors. From the perspective of automated recognition of behaviors and activities, the ADL of eating a meal presents concrete examples of both high-level behavior (taking meals properly) and low-level actions (bringing food to the mouth).

Meal-taking comprises several sequential tasks and sub-tasks that must respect certain ordering relationships. In addition, each task or sub-task should be completed within certain time bounds and in an appropriate manner. If the observed behavior is correct according to some defined metric of correctness, we may say that the behavior is well executed, or coherent with the patient's intention. Depending on the culture, one may start by eating the salad, followed by the main course and then dessert. The user has to prepare the lunch table, put in place all the needed utensils and food items, use the microwave oven to heat the meal, and so on. The number of possible meal-taking plans is very large; as a concrete example to drive our work, we have selected one particular target plan for eating. Table 1 outlines our target activities and sub-activities. Fig. 1 depicts a possible ambient sensor network that might be used to recognize the tasks described in Table 1.
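A plan of this kind, with ordering relationships between tasks and a time bound on each task, can be encoded as a simple data structure. The following Python sketch is purely illustrative, not the paper's actual plan model: the task names, durations and precedence pairs are hypothetical placeholders chosen to mirror the meal-taking example above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    max_duration_s: int  # upper time bound for completing the task

# Hypothetical target plan: three sequential meal-taking tasks.
PLAN = [
    Task("prepare_table", 600),
    Task("heat_meal_in_microwave", 300),
    Task("eat_meal", 1800),
]

# Ordering constraints: task at index a must precede task at index b.
PRECEDES = {(0, 1), (1, 2)}

def check_observation(observed):
    """Return a list of deviations for an observed sequence
    of (task_name, duration_s) pairs."""
    errors = []
    index = {t.name: i for i, t in enumerate(PLAN)}
    seen = []
    for name, duration in observed:
        if name not in index:
            errors.append(f"unknown task: {name}")
            continue
        i = index[name]
        # Ordering error: a required predecessor has not occurred yet.
        for a, b in PRECEDES:
            if b == i and a not in seen:
                errors.append(f"{name} started before {PLAN[a].name}")
        # Time-bound error: the task exceeded its allowed duration.
        if duration > PLAN[i].max_duration_s:
            errors.append(f"{name} exceeded its time bound")
        seen.append(i)
    return errors
```

A correctly ordered observation within the time bounds yields no deviations, whereas starting to heat the meal before preparing the table is flagged as an ordering error; in the full system, such deviations would be candidates for cognitive-error compensation rather than hard failures.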