J.A. Jacko (Ed.): Human-Computer Interaction, Part II, HCII 2009, LNCS 5611, pp. 66–74, 2009.
© Springer-Verlag Berlin Heidelberg 2009
Multi-modal Interface in Multi-Display Environment for Multi-users
Yoshifumi Kitamura, Satoshi Sakurai, Tokuo Yamaguchi, Ryo Fukazawa,
Yuichi Itoh, and Fumio Kishino
Graduate School of Information Science and Technology
Osaka University
kitamura@ist.osaka-u.ac.jp
Abstract. Multi-display environments (MDEs) are becoming more and more
common. By introducing multi-modal interaction techniques such as gaze and
body/hand gestures, we established a sophisticated and intuitive interface
for MDEs in which the displays are stitched together seamlessly and dynamically
according to the users' viewpoints. Each user can interact with the multiple
displays as if she were in front of an ordinary desktop GUI environment.
Keywords: 3D user interfaces, CSCW, graphical user interfaces, perspective
correction.
1 Introduction
A variety of new display combinations are currently being incorporated into
offices and meeting rooms. Examples of such displays are projection screens,
wall-sized PDPs or LCDs, digital tables, and desktop and notebook PCs. We often
use these multiple displays simultaneously during work; thus, MDEs are becoming
more and more common. We expect to work effectively by using multiple displays
in such environments; however, important issues prevent users from effectively
taking advantage of all the available displays. MDEs include displays that can
be at different locations from, and at different angles to, the user; as a
result, it can become very difficult to manage windows, read text, and
manipulate objects. Therefore, it is necessary to establish a sophisticated
interface for MDEs in which the displays are stitched together seamlessly and
dynamically according to the users' viewpoints, so that a user can interact
with the multiple displays as if she were in front of an ordinary desktop
GUI environment.
We therefore propose a system that includes multi-modal interaction techniques
utilizing multiple displays. Multi-modal interactions, such as gaze input and
finger gestures, make tasks more comfortable and intuitive. Moreover, they can
be used to detect the context of the environment, so that the system provides
a perspective-correct GUI environment for viewing, reading, and manipulating
information for each MDE user.
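The paper does not spell out the projection math at this point, but the core of a perspective-correct GUI is rendering each window so that it looks undistorted from the tracked viewpoint. A minimal sketch of that idea, assuming a display modeled as a plane and a tracked eye position (the function name and coordinate conventions here are illustrative, not from the paper), is to intersect the ray from the viewpoint through a virtual 3D point with the display plane:

```python
import numpy as np

def project_to_display(viewpoint, point, plane_origin, plane_normal):
    """Intersect the ray from the user's viewpoint through a virtual
    3D point with a display plane; return the on-screen 3D location.

    Returns None when the ray is parallel to the plane or the plane
    lies behind the viewer.
    """
    direction = point - viewpoint
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the display plane
    t = np.dot(plane_normal, plane_origin - viewpoint) / denom
    if t <= 0:
        return None  # display plane is behind the viewer
    return viewpoint + t * direction

# Example: a display in the z = 0 plane, user 1 m away on the z axis.
eye = np.array([0.0, 0.0, 1.0])
virtual_point = np.array([0.2, 0.1, 0.5])   # halfway to the display
hit = project_to_display(eye, virtual_point,
                         plane_origin=np.array([0.0, 0.0, 0.0]),
                         plane_normal=np.array([0.0, 0.0, 1.0]))
# hit is where the virtual point should be drawn on the screen
```

Applying this per display, with the viewpoint updated from head or gaze tracking, is one way the separate screens can be made to behave like a single, seamlessly stitched workspace.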