Computers & Education 146 (2020) 103756
Available online 15 November 2019
0360-1315/© 2019 Elsevier Ltd. All rights reserved.
How we trust, perceive, and learn from virtual humans: The influence of voice quality

Erin K. Chiou a, Noah L. Schroeder b, Scotty D. Craig a,*

a Arizona State University, Human Systems Engineering, 7271 E. Sonoran Arroyo Mall, Mesa, AZ 85212, USA
b Wright State University, College of Education and Human Services, Leadership Studies in Education and Organizations, 442 Allyn Hall, 3640 Colonel Glenn Hwy, Dayton, OH 45435, USA
ARTICLE INFO
Keywords:
Virtual human
Pedagogical agent
Voice effect
Trust
Human computer interaction
ABSTRACT
Research has shown that creating environments in which social cues are present (social agency) benefits learning. One way to create these environments is to incorporate a virtual human as a pedagogical agent in computer-based learning environments. However, essential questions remain about virtual human design, such as what voice the virtual human should use to communicate. Furthermore, to date, research in the education literature around virtual humans has largely ignored one potentially salient construct: trust. This study examines how the quality of a virtual human's voice influences learning, perceptions, and trust in the virtual human. Results of an online study show that voice quality did not significantly influence learning, but it did influence trust and learners' other perceptions of the virtual human. Consistent with recent work, this study questions the efficacy of the voice effect and highlights areas of research around trust that can further extend social agency theory in virtual human based learning environments.
1. Introduction
As artificial intelligence becomes more prevalent in everyday technology use, understanding what impacts people's trust in technology becomes increasingly important for technology design. Trust guides people's reliance on and compliance with technology. Therefore, trust has important implications for technology acceptance, adoption, and appropriate use (Davis, 1985; Muir, 1987). This may be particularly true in relation to learning technologies, where multiple stakeholders must maintain a level of trust in those technologies to realize their effective and sustained use.

Past research has investigated myriad factors that influence trust in technology, including trust disposition, past experiences, task characteristics, work environment factors, and technology characteristics (Hoff & Bashir, 2014; Lee & See, 2004). Technology characteristics are of particular interest because they are often what designers have the most control over. One increasingly important design component of learning technologies is virtual humans. Virtual humans, which can take the role of pedagogical agents (Schroeder, Adesope, & Gilbert, 2013) or conversational agents (Graesser, Cai, Morgan, & Wang, 2017) depending on their specific implementations, were posited as a way to socially engage the learner with the learning system because they can add social presence, dynamic interactions, and feedback to a wide range of computer-mediated tasks (Craig & Schroeder, 2018; Park & Catrambone, 2007). As digital embodiments of artificially intelligent agents, virtual humans may be used to mediate interactions between people and
* Corresponding author. 7271 E. Sonoran Arroyo Mall, Mesa, AZ, 85212, USA.
E-mail addresses: erin.chiou@asu.edu (E.K. Chiou), Noah.schroeder@wright.edu (N.L. Schroeder), scotty.craig@asu.edu (S.D. Craig).
https://doi.org/10.1016/j.compedu.2019.103756
Received 20 August 2019; Received in revised form 25 October 2019; Accepted 11 November 2019