Freehand Gesture-Based 3D Manipulation Methods
for Interaction with Large Displays
Paulo Dias¹,²(✉), João Cardoso¹, Beatriz Quintino Ferreira²,
Carlos Ferreira²,³, and Beatriz Sousa Santos¹,²
¹ DETI/UA - Department of Electronics, Telecommunications and Informatics,
University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
{paulo.dias,joaocardoso,bss}@ua.pt
² IEETA - Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro,
Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
{mbeatriz,carlosf}@ua.pt
³ DEGEI/UA - Department of Economics, Management and Industrial Engineering,
University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
Abstract. Gesture-based 3D interaction is a research topic with applications in
numerous scenarios that has gained relevance with the recent advances in low-cost
tracking systems. Yet it poses many challenges due to its novelty and the consequent
lack of systematic development methodologies. Developing easy-to-use and easy-to-learn
gesture-based 3D interfaces is particularly difficult, since the most adequate and
intuitive gestures are not always obvious and a variety of different gestures is
often used to perform similar actions. This paper presents the development
and evaluation of interaction methods for manipulating 3D virtual objects in a large
display set-up using freehand gestures detected by a Kinect depth sensor. We
describe the implementation of these methods and the user studies conducted to
improve them and assess their usability as manipulation methods. Based on the
results of these studies, we also propose a method that overcomes the Kinect's
inability to detect roll movement and simplifies scaling and rotation in
all degrees of freedom using hand gestures.
Keywords: 3D user interfaces · Freehand gesture-based interfaces · 3D object
manipulation · Large displays · User studies
1 Introduction
Recent developments in low-cost tracking systems, such as the Wii Remote, Leap
Motion, and Microsoft Kinect [1], together with advances in gesture recognition
algorithms, have led to the increasing popularity of gesture-based interfaces, given
their relevance for a growing number of applications, namely in gaming, Virtual and
Augmented Reality [2, 3], and other scenarios [4]. However, the development of gesture
interfaces poses several usability and technical challenges related to the lack of
universal consensus regarding gesture-function associations and the need to handle a
variety of environment types and technical limitations, as noted by Wachs et al. [5]
and Norman and Nielsen [6]. 3D user interfaces are seen as the natural choice for
large-display contexts, as pointed
© Springer International Publishing AG 2017
N. Streitz and P. Markopoulos (Eds.): DAPI 2017, LNCS 10291, pp. 145–158, 2017.
DOI: 10.1007/978-3-319-58697-7_10