Authors:
(1) Katrin Fischer, Annenberg School for Communication at the University of Southern California, Los Angeles (Email: katrinfi@usc.edu);
(2) Donggyu Kim, Annenberg School for Communication at the University of Southern California, Los Angeles (Email: donggyuk@usc.edu);
(3) Joo-Wha Hong, Marshall School of Business at the University of Southern California, Los Angeles (Email: joowhaho@marshall.usc.edu).
Abstract— As social and socially assistive robots are becoming more prevalent in our society, it is beneficial to understand how people form first impressions of them and eventually come to trust and accept them. This paper describes an Amazon Mechanical Turk study (n = 239) that investigated trust and its antecedents, trustworthiness and first impressions. Participants evaluated the social robot Pepper’s warmth and competence as well as the trustworthiness characteristics ability, benevolence, and integrity, followed by their trust in and intention to use the robot. Mediation analyses assessed to what degree participants’ first impressions affected their willingness to trust and use the robot. Known constructs from user acceptance and trust research were introduced to explain the pathways in which one perception predicted the next. Results showed that trustworthiness and trust, in serial, mediated the relationship between first impressions and behavioral intention.
I. INTRODUCTION & RELATED WORK
Trust plays an important role in human relationships and is similarly important in establishing relationships between humans and robots. It is “the confidence that one will find what is desired from another, rather than what is feared” [1] and is crucial in the presence of (perceived) risk, e.g., when another’s abilities or actions cannot be foreseen [2]. Consistent and predictable behaviors, as well as trust-building characteristics such as dependability, are cornerstones of trust in human relationships [3]. Research on human-robot trust has generated a sizeable body of literature in which multiple definitions and frameworks of trust exist. For instance, Lee & See applied Rempel et al.’s [3] dimensions of human trust to interactions with robots in terms of their performance, their behavior-determining operations (algorithms), and the degree to which they are used within their designer’s intent [4]. Other research has considered the multidimensional nature of trust, with a focus on gains and losses in human-robot trust due to robot behavior change [5]. We follow the model of Mayer et al. [6], who argued that trust is built through evaluating trustworthiness characteristics of the trustee; this model has been shown to apply to HRI contexts by evaluating the robot’s ability, integrity, and benevolence [7].
Trustworthiness is also connected to social dimensions of human perceptions of robots, such as warmth and competence, which vary based on robot appearance [8]. According to the stereotype content model [9], these first impressions help us determine whether a new acquaintance is likely to be a friend or a foe. Warmth judgments (trustworthiness, helpfulness, perceived intent) occur first and determine affective and behavioral reactions, whereas competence judgments (perceived ability, efficiency, intelligence) ascertain to what extent the other can act on their motives. Competence and warmth stereotypes predict emotions, which directly predict behaviors [10], and they apply to people as well as to social robots [11]. Warmth and trust are related in that somebody perceived as warm is simultaneously considered trustworthy, while somebody perceived as cold is considered untrustworthy [9].
Trust plays a crucial role in HRI [12], [13] and has been recognized as a factor that predicts not only the quality of the interaction but also how willing people are to use social robots for certain tasks [14]–[16]. Among the most commonly employed models to assess the use and acceptance of new technologies are TAM (Technology Acceptance Model) [17] and UTAUT (Unified Theory of Acceptance and Use of Technology) [18]. TAM studies can explain approximately 50% of the variance in technology acceptance outcomes, while UTAUT has been found to produce an adjusted R² of over 69% [18]. The outcome of both models, intention to use, is relevant to HRI research as a predictor of robot acceptance, on which, together with trust, the success of socially assistive robots depends [19]. Past research has looked at trust and acceptance in HRI but has not explored the specific pathways of their relationship; the present study probes these pathways with a serial mediation analysis (sketched after the research questions below).
RQ1. How do the constructs identified above, trust, trustworthiness, warmth, competence, and intention to use social robots, interrelate?
RQ2. What are the antecedents of trust? Of social robot acceptance?
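To make the hypothesized pathway concrete, the following is a minimal sketch of the serial mediation logic (first impressions → trustworthiness → trust → intention to use) in Python with statsmodels. The variable names, composite scoring, and percentile bootstrap are illustrative assumptions rather than the authors’ actual analysis pipeline; serial mediation models of this kind are often estimated with dedicated tools such as Hayes’ PROCESS macro.

```python
# Illustrative serial mediation sketch: X -> M1 -> M2 -> Y.
#   X  = first impressions (e.g., a warmth/competence composite)
#   M1 = trustworthiness (ability, benevolence, integrity composite)
#   M2 = trust
#   Y  = intention to use
# Column names are hypothetical, not taken from the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def serial_indirect_effect(df: pd.DataFrame) -> float:
    """Estimate the serial indirect effect a1 * d21 * b2 via three OLS models."""
    # a1: first impressions -> trustworthiness
    a1 = smf.ols("trustworthiness ~ impressions", data=df).fit().params["impressions"]
    # d21: trustworthiness -> trust, controlling for first impressions
    d21 = smf.ols("trust ~ trustworthiness + impressions", data=df).fit().params["trustworthiness"]
    # b2: trust -> intention, controlling for the earlier predictors
    b2 = smf.ols("intention ~ trust + trustworthiness + impressions", data=df).fit().params["trust"]
    return a1 * d21 * b2


def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, seed: int = 0) -> np.ndarray:
    """Percentile bootstrap 95% CI for the serial indirect effect."""
    rng = np.random.default_rng(seed)
    estimates = [
        serial_indirect_effect(df.sample(frac=1.0, replace=True, random_state=rng))
        for _ in range(n_boot)
    ]
    return np.percentile(estimates, [2.5, 97.5])
```

A bootstrap confidence interval that excludes zero would indicate that first impressions shape behavioral intention indirectly, through trustworthiness and then trust, which is the serial pattern the abstract reports.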
This paper is available on arxiv under CC 4.0 license.