Comparing Gesture and Touch for Notification System Interactions

Maria Karam, Ryerson University, Centre For Learning Technologies, Toronto, Ontario, Canada, maria.karam@ryerson.ca
Jason Chong Lee, Virginia Tech, Department of Computer Science, Blacksburg, VA, USA, chonglee@vt.edu
Travis Rose, Virginia Tech, Department of Computer Science, Blacksburg, VA, USA, rtrose@vt.edu
Francis Quek, Virginia Tech, Department of Computer Science, Blacksburg, VA, USA, quek@cs.vt.edu
Scott McCrickard, Virginia Tech, Department of Computer Science, Blacksburg, VA, USA, dmccrick@cs.vt.edu

Abstract

We explore the characteristics of multimodal input for notification systems in a multi-tasking environment, such as a command and control center, through a laboratory experiment comparing two promising interaction methods: hand gestures and touch-based input. Results of our study suggest that gestures are better suited to multi-tasking situations because they are less disruptive to users' primary tasks than touch interaction and are subjectively preferred by users in certain situations.

1 Introduction

Notification systems are designed to provide users with often critical information to assist with their everyday tasks while causing minimal interruption to their current activities. Interactions with notification systems often require users only to acknowledge the notification, in the form of accepting or rejecting the notification alert. Such notifications can be handled as a secondary task to the user's primary attention focus [12]. Yet even with the simplicity of the commands associated with notification system interactions, acknowledging or otherwise responding to an alert can disrupt a user's primary task. For example, switching one's interaction context from the keyboard to the mouse to respond to a notification alert can lead to unnecessary distractions from the primary task [12].

Gestures represent a natural way for humans to dismiss or acknowledge notifications, in both the real and digital worlds. They are an intuitive form of interaction that can readily be implemented using computer vision technology [10]. Although gestures offer a less precise form of interaction than direct input methods such as the mouse or keyboard, most systems achieve over 90% accuracy in detecting gestures [11]. While gestures are not a replacement for the mouse or keyboard, they hold the potential to serve effectively as an additional input modality that can easily be integrated into most existing systems [10]. Although vision-based gestures currently appear in only a few gaming applications [1, 2], web cameras have effectively become ubiquitous components of personal computers and laptops, providing the infrastructure necessary to implement gestures as another form of input geared specifically at controlling notification system interactions.

In this paper, we report on an experiment that compares semaphoric hand gestures to touch screen input for notification system interactions, suggesting that gestures offer a less distracting method of responding to simple notification alerts than touch input.
We measure the effectiveness of gestures relative to touch screen interactions within a multi-tasking situation, similar to a command and control center, for minimising the attention required to respond to a secondary task while helping the user maintain focus on their primary task. Details and results of the experiment are presented, along with a discussion of the relevance of these findings to promoting the adoption of gestures as a valuable contributor to multimodal interaction for notification systems.