Eye on the Ball: The effect of visual cues on virtual throwing
Goksu Yamac
Trinity College Dublin
Ireland
yamacg@tcd.ie
Carol O’Sullivan
Trinity College Dublin
Ireland
Carol.OSullivan@tcd.ie
Figure 1: VR experiment: a participant grabbing (left), throwing (middle) and receiving visual feedback (right)
ACM Reference Format:
Goksu Yamac and Carol O’Sullivan. 2022. Eye on the Ball: The effect of visual cues on virtual throwing. In SIGGRAPH Asia 2022 Posters (SA ’22 Posters), December 06-09, 2022. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3550082.3564181
1 INTRODUCTION
Despite rapid developments in AR devices, their field of view (FOV) is still much narrower than that of VR headsets (e.g., the diagonal FOV of 52° for the Microsoft HoloLens™ vs. 113° for the HTC Vive™ Pro 2). This reduction in visual feedback can be problematic for certain tasks, such as ball throwing. We present an experiment in VR, where participants threw a virtual ball at virtual targets with different levels of visual feedback. The objective of the experiment was to investigate how visual cues affect the way that participants perform virtual throws, which may be useful for the design of AR/VR systems. Eighteen participants used their own body motion to throw a virtual ball at virtual targets, and we simultaneously captured their full body motion (mocap) using an optical motion capture system for offline analysis.
Previously, Zindulka et al. [2020] found that people are less accurate when throwing in VR than when throwing for real, while Butkus and Ceponis [2019] found that throwing accuracy in VR increased with distance and that throwing velocity was higher in VR than in reality. These studies used a device to control the ball, whereas in our study we use VR gloves to emulate more closely the experience of a real throw. Nusseck et al. [2007] demonstrated the difficulty of predicting the properties of a bouncing ball when the visibility of its trajectory is manipulated. We also vary the visibility of a thrown ball's trajectory in VR.
2 VR EXPERIMENT
We developed the experiment in Unity™ and used the HTC Vive™
headset for display. The physics of the VE were governed by Unity’s
built-in physics system, NVIDIA PhysX™. Our interaction and
capture methods are similar to those of Yamac and O’Sullivan [2021].
Interactions were visualized using the virtual hand rendered from
ManusVR™ glove data, which is generated from 10 sensors placed
on the glove. A Vive controller attached to the participant’s arm
was used for real-time arm tracking (to avoid interference from
Vive trackers with the mocap system), which left the hands free to
perform the throwing motions. A grey block, a 5cm diameter ball and 50cm diameter targets were displayed (see Figure 1). Participants
stood next to the block (based on handedness), facing the target
spawn area.
For grabbing, the glove provides finger flexion values for when the hand closes around the ball. These values were calibrated for each participant as they held a real tennis ball. After a grab, the ball's position was interpolated towards the predefined center of the hand and followed the hand thereafter.
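As a rough illustration, the grab logic could be implemented in Unity C# along the following lines (a minimal sketch only; the class, field and helper names, the proximity check and the interpolation speed are our own assumptions, not the authors' code):

using UnityEngine;

// Illustrative sketch: grab the ball once the summed finger flexion
// exceeds the per-participant calibrated value, then interpolate the
// ball towards a predefined hand centre and follow it thereafter.
public class BallGrab : MonoBehaviour
{
    public Transform handCentre;      // predefined centre of the hand
    public Rigidbody ball;
    public float grabFlexion;         // calibrated with a real tennis ball
    const float LerpSpeed = 10f;      // assumed interpolation speed
    bool holding;

    void Update()
    {
        if (!holding && CurrentFlexion() >= grabFlexion && BallInReach())
        {
            holding = true;
            ball.isKinematic = true;  // hand drives the ball while held
        }
        if (holding)
        {
            // Smoothly pull the ball into the hand centre, then track it.
            ball.transform.position = Vector3.Lerp(
                ball.transform.position, handCentre.position,
                LerpSpeed * Time.deltaTime);
        }
    }

    // Placeholders: flexion would come from the glove SDK, and the
    // proximity test from a simple distance or trigger check.
    float CurrentFlexion() { return 0f; }
    bool BallInReach() { return true; }
}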
The velocity assigned to the ball at the point of release was estimated over a window of the previous nine frames. The release mechanism was based on the rate of change of the finger phalange angles, which proved to be a good indicator of an opening hand. Once the rate of change passed a predefined threshold of 3°/sec, the ball was released from the hand and assigned the estimated velocity.
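A minimal Unity C# sketch of such a release mechanism, structured like the grab sketch above (the buffer handling and all names are again our own illustrative choices):

using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: release the ball when finger flexion decreases
// faster than a threshold rate, and assign it a velocity estimated
// from the hand's positions over the previous nine frames.
public class ThrowRelease : MonoBehaviour
{
    const int WindowSize = 9;         // frames in the velocity window
    const float ReleaseRate = 3f;     // degrees/sec opening threshold

    readonly Queue<Vector3> positions = new Queue<Vector3>();
    float previousFlexion;            // summed flexion angle last frame
    public Rigidbody ball;
    public bool holding;

    void FixedUpdate()
    {
        // Maintain a sliding window of recent hand positions.
        positions.Enqueue(transform.position);
        if (positions.Count > WindowSize) positions.Dequeue();

        float flexion = CurrentFlexion();   // from the glove SDK
        float rate = (previousFlexion - flexion) / Time.fixedDeltaTime;
        previousFlexion = flexion;

        // A fast enough decrease in flexion indicates an opening hand.
        if (holding && rate > ReleaseRate) Release();
    }

    void Release()
    {
        holding = false;
        ball.isKinematic = false;
        // Mean displacement over the window gives the release velocity.
        float span = (positions.Count - 1) * Time.fixedDeltaTime;
        if (span > 0f)
            ball.velocity = (transform.position - positions.Peek()) / span;
    }

    float CurrentFlexion() { return 0f; }   // placeholder: glove SDK value
}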
When a target was hit or missed, it turned green or red, respectively. Two levels of visual feedback Mode were presented: (i) in Full mode, the ball was visible throughout the full trajectory of the throw; and (ii) in Minimal mode, the ball was only visible until it was grabbed, after which it gradually faded in the participant's hand and remained invisible for the duration of the throw. Participants therefore only knew whether they had hit the target when it changed color.
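In Minimal mode, the fade could be implemented with a short coroutine such as the following (a sketch only; the one-second duration and the assumption of a transparent material are ours):

using System.Collections;
using UnityEngine;

// Illustrative sketch: gradually fade the grabbed ball to invisible.
// Assumes the ball's material uses a transparent rendering mode.
public class BallFade : MonoBehaviour
{
    public Renderer ballRenderer;

    public IEnumerator FadeOut(float duration = 1f)  // assumed duration
    {
        Color c = ballRenderer.material.color;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            c.a = 1f - t / duration;        // linearly reduce alpha
            ballRenderer.material.color = c;
            yield return null;              // wait one frame
        }
        ballRenderer.enabled = false;       // invisible for the throw
    }
}

A caller would start this with StartCoroutine(fade.FadeOut()) at the moment of the grab and re-enable the renderer when the next ball is spawned.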
To vary the Distance of the targets, we divided the floor surface into three regions, each 2m wide, referred to as Near (1.25-2m), Mid (3.5-4.25m) and Far (5-5.25m). The depth of the Far region was reduced during testing, as it proved to be very difficult to hit any targets beyond that distance. Targets were spawned randomly within these regions at run-time.
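For illustration, the run-time spawning might look like the following sketch (region depths from the text above; the 2m lateral width interpretation, the names and the prefab handling are our assumptions):

using UnityEngine;

// Illustrative sketch: spawn a 50cm target at a random point within
// one of the Near, Mid or Far floor regions, and color it on result.
public class TargetSpawner : MonoBehaviour
{
    public GameObject targetPrefab;   // 50cm diameter target
    const float HalfWidth = 1f;       // assumed 2m lateral width

    // Depth ranges in metres, as described above.
    static readonly Vector2[] regions =
    {
        new Vector2(1.25f, 2f),       // Near
        new Vector2(3.5f, 4.25f),     // Mid
        new Vector2(5f, 5.25f)        // Far (reduced during testing)
    };

    public GameObject Spawn(int region)
    {
        Vector2 r = regions[region];
        Vector3 pos = new Vector3(
            Random.Range(-HalfWidth, HalfWidth),  // lateral offset
            0f,                                   // on the floor
            Random.Range(r.x, r.y));              // depth within region
        return Instantiate(targetPrefab, pos, Quaternion.identity);
    }

    public void ShowResult(GameObject target, bool hit)
    {
        // Green for a hit, red for a miss.
        target.GetComponent<Renderer>().material.color =
            hit ? Color.green : Color.red;
    }
}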