Trusting in Machines: How Mode of Interaction Affects Willingness to Share Personal Information with Machines

Juliana Schroeder, Matthew Schroeder
University of California, Berkeley
jschroeder@berkeley.edu, m.james.schroeder@gmail.com

Abstract

Every day, people make decisions about whether to trust machines with their personal information, such as letting a phone track one's location. How do people decide whether to trust a machine? In a field experiment, we tested how two modes of interaction, expression modality (whether the person is talking or typing to a machine) and response modality (whether the machine is talking or typing back), influence the willingness to trust a machine. Based on research showing that expressing oneself verbally reduces self-control compared to nonverbal expression, we predicted that talking to a machine might make people more willing to share their personal information. Based on research on the link between anthropomorphism and trust, we further predicted that machines that talked (versus texted) would seem more humanlike and be trusted more. Using a popular chatterbot phone application, we randomly assigned over 300 community members to either talk or type to the phone, which either talked or typed in return. We then measured how much participants anthropomorphized the machine and their willingness to share their personal information (e.g., their location, credit card information) with it. Results revealed that talking made people more willing to share their personal information than texting, and this effect was robust to participants' self-reported comfort with technology, age, gender, and conversation characteristics. However, listening to the application's voice did not affect anthropomorphism or trust compared to reading its text. We conclude by considering the theoretical and practical implications of this experiment for understanding how people trust machines.

1. Introduction

Every day, people make decisions about whether to trust machines with their personal information. From entering one's credit card number into a company's website to allowing a phone to track one's location, these decisions require trusting machines with personal, and potentially sensitive, information. How do people decide whether to trust a machine? We explore how the modality by which people interact with machines can affect how much they are willing to trust them with personal information. Specifically, we consider two factors: whether the user is typing or talking to the machine (i.e., expression modality) and whether the machine is typing or talking back (i.e., response modality).

We draw on two primary findings across the diverse fields of cognition, neuroscience, and social psychology to form predictions about the effects of expression and response modality on machine trust. First, expression modality should primarily affect the user's cognitive state. Indeed, research on expression modality suggests that verbal (versus nonverbal or physical) modes of expression can reduce self-control [1-3]. For instance, verbally expressing one's choice (i.e., speaking) increases heuristic decision-making and indulgence, thereby reducing self-control, compared to physically expressing one's choice (e.g., button pressing, pointing, typing) in identical self-control dilemmas [1]. As such, we expect that having a spoken conversation with a machine, as opposed to a typed conversation, may make users more likely to give up personal information, failing to exert control over that information.

Second, response modality should primarily affect the user's perception of the machine. A machine that can create speech should be judged as more humanlike than a machine that creates text.
One set of experiments illustrated this directly: participants who read a piece of text that had been created by either a human or a machine were less likely to believe the text had been written by a human than participants who heard the same text spoken aloud [4]. Furthermore, anthropomorphizing a machine by assuming it is more humanlike (e.g., that it seems more rational, competent, thoughtful, and even emotional) may increase trust. For example, self-driving cars with human voices seem more humanlike and are trusted more by users [5]. These data lead us to predict that users will trust talking machines more than texting machines.

However, there are at least two important caveats to the relationship between response modality and trust. First, anthropomorphism is unlikely to always lead to trust. For instance, users feel threatened by machines that seem too intelligent [6]. Therefore, the level of machine competence, and whether or not the machine seems threatening, may matter. Second, the quality of the voice is also likely to matter when evoking anthropomorphism. Prior research suggests that only humanlike speech with voices that naturalistically vary in pitch, amplitude, and rate of

Proceedings of the 51st Hawaii International Conference on System Sciences | 2018
URI: http://hdl.handle.net/10125/49948
ISBN: 978-0-9981331-1-9
(CC BY-NC-ND 4.0)
Page 472