International Journal of Grid and Distributed Computing Vol. 11, No. 6 (2018), pp. 69-78
http://dx.doi.org/10.14257/ijgdc.2018.11.6.07
ISSN: 2005-4262 IJGDC
Copyright 2018 SERSC Australia

Real-time Calorie Extraction and Cuisine Classification through Food-Image Recognition

Hye-Jun Suh 1 and Kang-Hee Lee 2*
1 Global School of Media, Soongsil University, Seoul, South Korea
2 Global School of Media, Soongsil University, Seoul, South Korea
1 hj@soongsil.ac.kr, 2* kanghee.lee@ssu.ac.kr

Abstract

Self-monitoring of user activity is the most common tracking method for both individuals and health-care systems. Demand is also growing for services that can easily monitor food and calorie information from food photographs. In this paper, an existing food-recognition and calorie-extraction system is combined with a context-recognition system that identifies the meal type. To recognize food images, a TensorFlow-based machine-learning model was trained in advance, and an Expert.js-based semantic network was constructed for meal-type recognition. The food-recognition accuracy is 55.3 %, and Korean, Chinese, and Italian foods were recognized. The objective is to combine context awareness with existing self-monitoring systems so that users can adjust their diets accordingly.

Keywords: Vision system, Cuisine classification, Calorie extraction, Image recognition, Machine learning

1. Introduction

Health care is one of the most important social issues, and people exercise or diet to improve their personal health. Tracking user activities through self-monitoring is the most common practice and is included in most health-care systems [1]. Systems that support easy self-monitoring through a vision system also exist [2, 3]. In this paper, a food-classification procedure is combined with an existing vision-based self-monitoring system.
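The calorie-extraction step introduced above can be sketched in a few lines: the vision system returns recognized food names, and each name is looked up in a nutrition table. This is a minimal illustrative sketch, not the paper's implementation; the table entries and calorie values are assumptions.

```python
# Hypothetical sketch of the calorie-extraction step. The vision system
# yields recognized food names; each is looked up in a small nutrition
# table. The foods and kcal values below are illustrative assumptions.
CALORIE_TABLE = {
    "bibimbap": 560,            # kcal per serving (assumed)
    "jajangmyeon": 700,         # kcal per serving (assumed)
    "margherita pizza": 850,    # kcal per serving (assumed)
}

def extract_calories(recognized_foods):
    """Sum the calories of all recognized foods, skipping unknown names."""
    return sum(CALORIE_TABLE.get(name, 0) for name in recognized_foods)

print(extract_calories(["bibimbap", "jajangmyeon"]))  # 1260
```

In a real deployment the table would be replaced by a nutrition database keyed on the labels the classifier was trained with.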
By analyzing photographs of the food that the user eats in real time, the system extracts the calories and classifies the type of cuisine consumed (Korean, Chinese, etc.) from a maximum of three extracted food names. The service can then recommend food depending on the context, helping the user control his or her diet.

2. Design of the System

In this paper, a food-type-extraction function is added to the existing calorie-extraction system through the food-recognition process. Claus (a classifier that uses real-time images) recognizes photographs in the vision system in real time, extracts the food names and calorie data, and classifies the cuisine types in the classification system using the extracted food-name data; the user interface (UI) then presents the results directly to the user. As shown in Figure 1, the system consists of a vision system, a classification system, and the UI. The overall system is built as a Web platform that is easy to access. To construct the whole system, HTML and

Received (January 5, 2018), Review Result (March 9, 2018), Accepted (March 12, 2018)
* Corresponding Author
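The paper implements cuisine classification with an Expert.js-based semantic network; as a rough approximation of that step, the meal type can be decided by a majority vote over the (at most three) extracted food names. The food-to-cuisine mapping below is an illustrative assumption, not the authors' knowledge base.

```python
from collections import Counter

# Illustrative food-name -> cuisine mapping (assumed, not from the paper).
FOOD_CUISINE = {
    "bibimbap": "Korean",
    "kimchi stew": "Korean",
    "jajangmyeon": "Chinese",
    "sweet and sour pork": "Chinese",
    "margherita pizza": "Italian",
    "carbonara": "Italian",
}

def classify_cuisine(food_names):
    """Classify the meal type by majority vote over up to three food names."""
    votes = Counter(FOOD_CUISINE[n] for n in food_names[:3] if n in FOOD_CUISINE)
    if not votes:
        return "Unknown"
    return votes.most_common(1)[0][0]

print(classify_cuisine(["bibimbap", "kimchi stew", "carbonara"]))  # Korean
```

A semantic network as used in the paper can encode richer relations (ingredients, meal context) than a flat lookup, but the voting sketch captures the same input/output contract: up to three food names in, one cuisine label out.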