Demo: Chewpin: a wearable acoustic device for chewing
detection
Yang Chen
sonnechen95@gmail.com
Division of Industrial Design,
National University of Singapore
Singapore
Zhitong Cui
zhitongcui@zju.edu.cn
Zhejiang University
Hangzhou, China
Ching Chiuan Yen
didyc@nus.edu.sg
Division of Industrial Design and
Keio-NUS CUTE Center, National
University of Singapore
Singapore
ABSTRACT
Diet intervention has emerged as a promising strategy in obesity
prevention and treatment. Existing research has predominantly focused
on macronutrient intake and food quantity restriction in diet
manipulation. Eating habits are difficult to change due to their highly
habitual nature; it is therefore essential to automatically detect eating
behavior and provide real-time intervention for unhealthy eating
patterns. In this study, we explored the possibility of designing
Chewpin, an easy-to-implement and socially acceptable device
for capturing eating behavior (i.e., chewing and swallowing) in
a controlled environment. We implemented a convolutional neural
network (CNN) for data classification. Overall, our system achieved
a promising eating-recognition accuracy of 98.23% on the test
set. In the future, we will evaluate its usability and feasibility in
real-life eating practices and use this system as a technical tool for
intervention in problematic eating.
CCS CONCEPTS
· Human-centered computing → Interaction devices.
KEYWORDS
eating detection, acoustic sensing, wearable device, CNN
ACM Reference Format:
Yang Chen, Zhitong Cui, and Ching Chiuan Yen. 2021. Demo: Chewpin: a
wearable acoustic device for chewing detection. In Adjunct Proceedings of
the 2021 ACM International Joint Conference on Pervasive and Ubiquitous
Computing and Proceedings of the 2021 ACM International Symposium on
Wearable Computers (UbiComp-ISWC ’21 Adjunct), September 21–26, 2021,
Virtual, USA. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/
3460418.3479290
1 INTRODUCTION
Overweight and obesity have become among the most significant
public health issues worldwide [4]. Accumulating evidence suggests
that excessive macronutrient intake is one of the leading causes of
unhealthy weight gain [3]. Thus, researchers have strived to monitor
eating activity as a means of gathering information on problematic eating
and implementing dietary interventions that help people regularize eating
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
UbiComp-ISWC ’21 Adjunct, September 21–26, 2021, Virtual, USA
© 2021 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8461-2/21/09.
https://doi.org/10.1145/3460418.3479290
habits and take back control of their health. One major challenge for
eating intervention is to understand when people eat. Researchers
in the area of eating event detection have proposed several techniques
for determining eating-related behaviors. For instance, audio
sensors have been used to collect acoustic signatures of chewing
and swallowing in the ear canal or on the throat [1, 2, 6], cameras for
first-person image analysis of eating behavior [5, 7, 9], and wrist-worn
sensors for eating gesture recognition [8, 10]. Although these systems
have been widely explored for eating detection, they have several
practical limitations in real-life eating practice, such as obtrusiveness,
privacy invasion, and low social acceptability. In this preliminary study,
after weighing the pros and cons, we explored the usability of acoustic
sensing, an easy-to-implement and socially acceptable technique
for eating detection. Two major design considerations should
be met: (1) the device should be suitable for real-world wear, i.e.,
comfortable and socially acceptable; (2) it should capture information-rich
features from audio and accurately distinguish eating episodes in a
given scenario. With these concerns in mind, we designed a wearable
acoustic device, Chewpin, as a technical tool for audio data
collection. We then used this device to collect eating sounds
in a lab-controlled environment and extracted features
to train a convolutional neural network (CNN) to classify eating
and non-eating behavior. To the best of our knowledge, no previous
study has applied a CNN to acoustic eating sound detection. Our
system achieved a promising accuracy of 98.23% on the
test set. The contributions of this ongoing exploratory study are:
• Design and implement a wearable acoustic sensor-based
device that can capture eating behavior (i.e., chewing, swallowing).
• Develop and evaluate an eating detection classifier model
using a CNN based on a set of acoustic features.
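The paper does not specify the feature set or network input; as an illustration only, the sketch below computes a log-magnitude spectrogram front end of the kind a small CNN could consume, using the board's 48 kHz sampling rate. The frame and hop lengths and the synthetic clip are assumptions, not the authors' parameters.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice a 1-D signal into overlapping frames (rows)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]

def log_spectrogram(x, sr=48_000, frame_ms=25, hop_ms=10):
    """Hann-windowed FFT magnitude spectrogram on a log scale."""
    frame_len = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag)  # shape: (n_frames, frame_len // 2 + 1)

# One second of synthetic noise stands in for a recorded chewing clip.
rng = np.random.default_rng(0)
clip = rng.standard_normal(48_000)
feat = log_spectrogram(clip)
print(feat.shape)  # a 2-D time-frequency map, suitable as CNN input
```

A 2-D map like this is what makes a CNN a natural classifier choice: convolutional filters can pick up local time-frequency patterns such as the broadband bursts produced by chewing.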
2 SYSTEM DESIGN
2.1 Hardware device
Acoustic microphone. A dual-microphone expansion board (ReSpeaker
2-Mics Pi HAT) was used to capture eating sounds. The
board is built around the WM8960, a low-power stereo codec,
and two microphones capable of collecting nuanced
acoustic sounds at a sampling rate of up to 48 kHz. This device has
been used in various AI assistant and voice interaction applications.
Mini computer. A Raspberry Pi Zero W was chosen as a tiny, low-cost
mini-computer with on-board wireless LAN and Bluetooth 4.1 for
hardware prototyping. This mini-computer is compatible with the
dual-microphone board and is practical to deploy in real-living
conditions as a wearable device for eating data collection (Figure 1).
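The paper does not describe the capture pipeline on the Pi. As a hedged sketch of the data format such a setup would produce, the snippet below writes a stereo, 16-bit, 48 kHz WAV file matching the board's two-microphone output; the file name, clip length, and synthetic samples are placeholders, and on the device the buffer would come from the ALSA capture stream rather than a random generator.

```python
import wave
import numpy as np

SR = 48_000          # max sampling rate supported by the ReSpeaker board
CHANNELS = 2         # dual microphones
SAMPLE_WIDTH = 2     # 16-bit PCM

# Stand-in for one second of captured audio.
rng = np.random.default_rng(1)
pcm = (rng.standard_normal((SR, CHANNELS)) * 1000).astype(np.int16)

with wave.open("chew_clip.wav", "wb") as wf:
    wf.setnchannels(CHANNELS)
    wf.setsampwidth(SAMPLE_WIDTH)
    wf.setframerate(SR)
    wf.writeframes(pcm.tobytes())  # interleaved L/R frames

with wave.open("chew_clip.wav", "rb") as wf:
    print(wf.getnchannels(), wf.getframerate(), wf.getnframes())
```

Storing clips as standard WAV keeps the pipeline simple: the same files can be streamed off the Pi over its wireless LAN and fed directly into the feature-extraction stage for training.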