Exploring Gesture-Based Tangible Interactions with a Lighting AI Agent
Milica Pavlovic¹,²(✉), Sara Colombo¹, Yihyun Lim¹, and Federico Casalegno¹

¹ Design Lab, Comparative Media Studies and Writing, Massachusetts Institute of Technology, 20 Ames Street, Cambridge, MA 02142, USA
{milicap,scolombo,yihyun,casalegno}@mit.edu
² Interaction and Experience Design Research Lab, Politecnico di Milano, Via Durando 38/a, 20158 Milan, Italy
Abstract. This paper explores a gestural and visual language for interacting with an Artificial Intelligence (AI) agent that controls connected lighting systems. Six interaction modalities (four gestural and two visual) were designed and tested with users in order to collect feedback on their intuitiveness, comfort, and engagement level. A comparison between traditional voice-based interaction modalities with AI and the proposed gesture-based language was performed. Preliminary results are discussed, including the importance of cognitive metaphors in gesture-based interaction; the relation between intuitiveness, innovation, and engagement; and the advantages offered by gesture-based interactions in terms of privacy, subtleness, and pleasantness, versus their limited options and the need to learn a codified language. These insights will help designers develop seamless interactions with AI agents for ambient intelligent systems.
Keywords: Ambient UX · Tangible interactions · AI agent · Hybrid materials
1 Introduction
Ambient Intelligence (AmI) [1] systems are sensitive, responsive, adaptive, transparent, ubiquitous, and context-aware environments. These systems can be embedded with artificial intelligence (AI) agents, which perform front-end communication with the user. We are observing the rise of different kinds of AI agents designed to communicate with humans through different languages [2].
Finding new languages for communicating with AI agents beyond voice interaction, moving towards multimodal interactions that engage diverse senses, is a broad and prominent research area. We addressed this topic in a project for a lighting AI agent. The study presented in this paper is part of a completed project aimed at designing a 10-year future vision of a lighting AI agent, Phil. We crafted specific interaction modalities as the communication language between Phil and the user, based on touch inputs rather than the audio channel (used in current well-known personal AI assistants).
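As a rough illustration of what such a codified touch language implies at the system level, the minimal sketch below maps discrete gesture events to lighting commands. It is not from the paper: all names (LightingAgent, LightState) and the specific gesture-to-command mappings are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch: a codified gesture language for a lighting agent.
# Gesture names and their effects are illustrative assumptions, not the
# modalities designed in the study.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class LightState:
    brightness: float = 0.5  # 0.0 (off) to 1.0 (full)
    color_temp: int = 4000   # correlated color temperature, in Kelvin


class LightingAgent:
    """Translates recognized gesture events into lighting state changes."""

    def __init__(self) -> None:
        self.state = LightState()
        # The "vocabulary": each gesture maps to a handler mutating the state.
        self.gestures: Dict[str, Callable[[LightState], None]] = {
            "swipe_up": lambda s: setattr(s, "brightness", min(1.0, s.brightness + 0.1)),
            "swipe_down": lambda s: setattr(s, "brightness", max(0.0, s.brightness - 0.1)),
            "long_press": lambda s: setattr(s, "brightness", 0.0),  # lights off
        }

    def on_gesture(self, name: str) -> LightState:
        handler = self.gestures.get(name)
        if handler is not None:
            handler(self.state)
        return self.state


agent = LightingAgent()
print(agent.on_gesture("swipe_up"))  # LightState(brightness=0.6, color_temp=4000)
```

The fixed dictionary of handlers reflects a trade-off discussed later in the paper: a gesture vocabulary offers only a limited set of options and must be learned by the user, in exchange for subtle, private, non-verbal control.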
Touch-based interactions are perceived by users as natural and seamless [3]. Multi-touch interactions have been analyzed for diverse applications, including user performance and ergonomics, and design strategies have been proposed in this field [4].