[POSTER] Social Augmentations in Multi-User Virtual Reality: A Virtual Museum Experience

Daniel Roth* University of Würzburg
Constantin Kleinbeck University of Würzburg
Tobias Feigl Fraunhofer IIS
Christopher Mutschler§ Fraunhofer IIS, Friedrich-Alexander University Erlangen-Nürnberg (FAU)
Marc Erich Latoschik University of Würzburg

ABSTRACT

This work-in-progress report demonstrates a novel approach for behavioral augmentations in Virtual Reality (VR). Using a large-scale tracking system, groups of five users explored a virtual museum. We investigated how augmenting social interactions impacts this experience by designing behavioral transformations for behavioral phenomena in social interactions. Preliminary data indicate a reduction of perceived isolation and a more thought-provoking experience with active behavioral augmentations.

Index Terms: H5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities—; H.4.3 [Communications Applications]: —

1 INTRODUCTION

Social interactions strongly depend on human abilities to express and detect social signals through nonverbal channels such as proxemics, body motion, facial expressions, and eye gaze. Humans display, decode, and process this information to establish and maintain interpersonal relationships. VR applications seldom support the tracking and reproduction of fine-grained human behaviors; often only rotational and translational data are available. To this point, we argue that VR simulations can be used to actively mediate communication and to establish communicative possibilities beyond natural interactions [5]. For example, it has been conceptualized that social interactions can be transformed [2] by decoupling representations from behaviors.
By developing social artificial intelligences, these active mediations could, for example, be used to foster interpersonal understanding, to mediate intercultural communication, or to help integrate persons suffering from social disorders [5]. To achieve such active mediation, behaviors could be modified or augmented on the level of user appearance, the representation/display of behaviors, and the respective channels of transmission. In this ongoing work, we examine these possibilities by using data on the users' position and rotation to modify the visual representations of prototypical behavioral patterns. Our research questions in this work are whether augmenting virtual social interactions is beneficial for group experiences (RQ1), and whether it fosters the quality of relationship, presence, and interactivity (RQ2).

2 APPROACH

In an initial step, we created a design space for potential augmentations that can be implemented with translational and rotational data (depicted in Figure 1). It relates the input, the intermediate behavioral phenomena, and visual abstractions for the transformation, amplification, and substitution of the behavioral patterns. We decided upon three common behavioral phenomena of social interaction: (i) Mutual Gaze (directed gaze, eye contact), which usually signals that interactants pay attention to each other; (ii) Joint Attention, a phenomenon of shared attention toward an object; and (iii) Grouping, which is derived from proxemics and encodes group affiliation, intimacy, or power [1].

* e-mail: daniel.roth@uni-wuerzburg.de
e-mail: constantin.kleinbeck@stud-mail.uni-wuerzburg.de
e-mail: feiglts@iis.fraunhofer.de
§ e-mail: christopher.mutschler@iis.fraunhofer.de
e-mail: marc.latoschik@uni-wuerzburg.de

Figure 1: Design explorations for augmenting social behaviors.
2.1 Implemented Transformations

Our final augmentations included visual feedback of substitutionary, amplifying, and transformational character. As shown in Figure 2, we included (i) an approximation of eye contact, abstractly visualized by floating bubbles and evoked if users looked (based on head rotation) in the direction of each other; (ii) a highlighting particle system on an object, shown if users were close to each other and looked at the same object, to signal joint attention; and (iii) a grouping color system, activated if users were within a 4 m distance of each other (which can be considered the "personal space" [1]). To avoid any third-variable bias from artificial social or behavioral cues such as postures or facial displays of static humanoid avatar models, users were represented as simple rectangular pillars in the simulation.

Figure 2: Left: condition with transformations for eye contact (floating bubbles), joint attention (particle highlights on object), and grouping (avatar colors). Right: condition without transformations.
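The three triggers above can be computed from position and head-rotation data alone. The following is a minimal Python sketch of such trigger logic, not the authors' implementation: the 15° gaze cone and all function names are our own illustrative assumptions; only the 4 m grouping distance comes from the text.

```python
import math

GAZE_ANGLE_DEG = 15.0  # assumed gaze-cone tolerance (not specified in the paper)
GROUP_DIST_M = 4.0     # grouping distance from the paper ("personal space")

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def _norm(a):
    return math.sqrt(_dot(a, a))

def looks_at(pos, forward, target, max_angle_deg=GAZE_ANGLE_DEG):
    """True if the head's forward vector points at `target` within a cone."""
    to_target = _sub(target, pos)
    dist = _norm(to_target)
    if dist == 0.0:
        return False
    cos_angle = _dot(forward, to_target) / (dist * _norm(forward))
    return cos_angle >= math.cos(math.radians(max_angle_deg))

def mutual_gaze(p1, f1, p2, f2):
    """Approximate eye contact: both users face each other (floating bubbles)."""
    return looks_at(p1, f1, p2) and looks_at(p2, f2, p1)

def joint_attention(p1, f1, p2, f2, obj, max_dist=GROUP_DIST_M):
    """Both users close together and looking at the same object (particle highlight)."""
    return (_norm(_sub(p1, p2)) <= max_dist
            and looks_at(p1, f1, obj) and looks_at(p2, f2, obj))

def grouped(p1, p2, max_dist=GROUP_DIST_M):
    """Users within 4 m of each other activate the grouping color system."""
    return _norm(_sub(p1, p2)) <= max_dist
```

Evaluating these predicates once per frame over all user pairs is sufficient for small groups such as the five users in this study.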