
DOCUMENT INFORMATION

Title: Experiential Perspective and Empirical Evidence of Virtual Social Robot ChatGPT’s Priori Acceptance
Authors: Amelie Abadie, Soumyadeb Chowdhury, Sachin Kumar Mangla
Field: Marketing
Type: Research Article
Year: 2024
Pages: 23
Size: 1.87 MB

Contents



Available online 18 January 2024

0040-1625/© 2024 Elsevier Inc. All rights reserved.

A shared journey: Experiential perspective and empirical evidence of virtual social robot ChatGPT’s priori acceptance

Amelie Abadie (a), Soumyadeb Chowdhury (b), Sachin Kumar Mangla (c,*)

(a) Marketing Department, TBS Business School, Lot La Colline II, Route de Nouasseur, Casablanca, Morocco
(b) Information, Operations and Management Sciences Department, TBS Business School, 1 Place Alphonse Jourdain, 31068 Toulouse, France
(c) Research Centre - Digital Circular Economy for Sustainable Development Goals (DCE-SDG), Jindal Global Business School, O P Jindal Global University, Sonepat, India

Keywords: Consumer value creation framework; Managerial usage intention

ABSTRACT

Due to recent technological advancements, social robots are becoming increasingly prevalent in the consumer space. ChatGPT, a virtual social robot, has captured significant attention from the mass media and academic practitioners alike since its release in November 2022. This attention arises from its remarkable capabilities, as well as potential challenges it poses to society and various business sectors. In light of these developments, we developed a theoretical model based on the Unified Theory of Acceptance and Use of Technology and a consumer value typology centred around consumer experiences to examine the influence of experiential factors on the intention to use ChatGPT, and subsequently collaborating with it for co-creating content, among business managers. To test this model, we conducted a survey of 195 business managers in the UK and employed partial least squares structural equation modelling (PLS-SEM) for analysis. Our findings indicate that the efficiency, excellence, meaningfulness of recommendations, and conversational ability of ChatGPT will influence the behavioural intention to use it during the priori acceptance stage. Based on these findings, we suggest that organisations should thoughtfully consider and strategize the deployment of ChatGPT applications to ensure their acceptance, eventual adoption, and subsequent collaboration between ChatGPT and managers for content creation or problem-solving.
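As a rough illustration of the kind of analysis the abstract describes (experiential factors predicting usage intention across 195 respondents), the sketch below fits a plain least-squares regression on synthetic Likert-style data. This is not the paper's PLS-SEM procedure, and every variable name, weight, and data point here is hypothetical.

```python
# Illustrative sketch only: ordinary least squares as a stand-in for the
# paper's PLS-SEM analysis. All predictors, weights, and data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n = 195  # sample size matching the study's survey

# Four experiential predictors on a 1-7 Likert scale (synthetic responses)
X = rng.integers(1, 8, size=(n, 4)).astype(float)
true_w = np.array([0.4, 0.3, 0.2, 0.1])  # hypothetical effect sizes
intention = X @ true_w + rng.normal(0, 0.5, n)  # noisy "usage intention"

# Add an intercept column and solve the least-squares problem
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, intention, rcond=None)

labels = ["intercept", "efficiency", "excellence",
          "meaningfulness", "conversational ability"]
for name, c in zip(labels, coef):
    print(f"{name:>22}: {c:+.2f}")
```

With this sample size, the fitted coefficients land close to the planted weights, which is the basic logic of testing whether each experiential factor carries a significant path weight in the structural model.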

1. Introduction

Since the inception of robotic applications, such as factory robots in product assembly lines, scientific exploration robots in space and under the ocean, military robots for surveillance and defusing bombs, and transportation drones, which were primarily operated by human controllers, these have been transformed into autonomous or semi-autonomous robots that are designed to alleviate pressing challenges of society (Sheridan, 2020). In this context, the term ‘social robots’ has been coined in scientific literature. While there is no consensus over the definition of social robots, they are usually defined as autonomous agents that can communicate with humans and can act in a socially appropriate manner. Ideally, the existing definitions consider the characteristics of social robots, with the assumption that they will operate in public spaces and therefore should have a physical body (Fong et al., 2003). The current definitions have yet to consider social robots in virtual space, such as the metaverse or any online platform.

Amidst the COVID-19 outbreak, social robots were deployed with varied uses in actual scenarios (Aymerich-Franch, 2020). The pandemic-induced restrictions, such as social distancing and quarantine, led to the use of social robots as supportive aids in healthcare services (Aymerich-Franch, 2020). These robots helped limit the transmission of COVID-19 by performing specific tasks, such as monitoring patients and aiding healthcare personnel, thus minimizing the spread of the disease (Javaid et al., 2020). Additionally, studies have shown that quarantine measures and isolation have had adverse effects on people’s mental health and overall well-being (Violant-Holz et al., 2020). Hence, social robots could potentially assist in promoting well-being during a pandemic (Yang et al., 2020). This led to the popularity of these applications in the mass media as well as in academic literature.

* Corresponding author.
E-mail addresses: sachinmangl@gmail.com, smangla@jgu.edu.in (S.K. Mangla)
Contents lists available at ScienceDirect: Technological Forecasting & Social Change
Journal homepage: www.elsevier.com/locate/techfore
https://doi.org/10.1016/j.techfore.2023.123202
Received 1 March 2023; Received in revised form 19 October 2023; Accepted 27 December 2023

OpenAI’s ChatGPT was released in November 2022 and raised considerable interest for its groundbreaking approach to AI-generated content, which produced complex original texts in response to a user’s question. ChatGPT is a cutting-edge AI language model that leverages generative AI techniques to provide algorithm-generated conversational responses to question prompts (van Dis et al., 2023). The outputs from generative AI models are almost indistinguishable from human-generated content, as they are trained using nearly everything available on the web (e.g., around 45 terabytes of text data in the case of ChatGPT). The model can be trained to perform specific tasks, such as preparing slides in a specific style, writing marketing campaigns for a specific demographic, online gaming commentary, and generating high-resolution images (Chui et al., 2022a). The recent widespread global adoption of ChatGPT has demonstrated the tremendous range of use cases for the technology, including software development and testing, poetry, essays, business letters, and contracts (Metz, 2022; Reed, 2022; …), as well as its potential impact over organisations, ranging from enabling scalable and automated marketing personalisation, to generating optimized operation sequences, coding information systems, quickening and scaling up simulations and experiments for research and development, or simply answering complex risk and legal interrogations of managers (Chui et al., 2022b). ChatGPT’s prospects span a wide range of applications across industries and businesses (Table 1), while demonstrating the transformative potential of globally increasing the efficiency and diversity of human value production (Zhang et al., 2023; Shaji et al., 2023; Frederico, 2023).

The launch of ChatGPT has caught the attention of all scholars, regardless of discipline. The popular press has also engaged in discussions around the implications of ChatGPT and, more broadly, of generative AI, highlighting the many potential promises and pitfalls of these systems. The definitions of social robots found in most academic articles appear to lack consistency, leaving the theoretical framework within this context somewhat ambiguous (Sarrica et al., 2020). Predominantly, social robots are defined as autonomous agents that can engage in significant social interactions with humans. Their engagement style depends on their applications, roles within specific environments, and adherence to established social and cultural norms. These definitions portray social robots as complex machines, with analytical and computational abilities surpassing those of humans, designed to emotionally engage with people through communication, play, and facial cues. ChatGPT is a form of virtual social robot which possesses autonomy, can sense and respond to environmental prompts (like answering questions), can interact with humans (through virtual conversations), and understands and adheres to societal norms (due to its programming that incorporates, to a certain extent, societal values, barring instances of security breaches or algorithmic failures). Consequently, ChatGPT fulfills four aspects of social robots encapsulated in most foundational definitions (Fong et al., 2003; Duffy, 2003; Sarrica et al., 2020). However, current definitions overlook the concept of social robots in a virtual space, and they are neither comprehensive nor unequivocal. For instance, the prevailing definition neglects to consider the cultural background and context in which social robots will be deployed.

Given the exponential popularity of virtual social robots like ChatGPT, which can be extensively used for natural language processing tasks such as text generation, language translation, and generating answers to a plethora of questions, and can disrupt various sectors in business such as the service industry, managerial decision making in different contexts, marketing and sales, and human resource management, it is necessary to understand the perception of managers regarding this new technology. For the technology to successfully operate in and be integrated into the business environment, it has to be accepted by managers, because they will be responsible for developing strategies to facilitate ChatGPT–employee collaboration based on their own perception. Existing studies have shown that social robots can transform the service encounter of a consumer through novel and emotionally charged interactive experiences (Larivière et al., 2017), and such is the case with ChatGPT. Therefore, ChatGPT, like virtual social robots leveraging generative AI capabilities, will be a crucial technology that can be potentially considered the workforce of the future in a wide range of business settings and operations. For example, existing studies have discussed compelling cases where social robots will find their way within organisations as AI technology evolves (Henkel et al., 2020), and may improve the working conditions of employees (Goeldner et al., 2015), which will significantly enhance employee productivity, business productivity and the competitive advantage of firms (Kopp et al., 2021). Considering the virtual nature and generative AI capabilities of ChatGPT, its diffusion is likely to be faster within business organisations because of its superior analytical and computational capabilities compared to humans, interactive features, and ability to solve problems. Since ChatGPT is a very new social robot, we have not yet come across any studies in the literature on its adoption by managers within organisations, which leads us to the following research question:

RQ: Which factors will drive and inhibit the intention to use ChatGPT by business managers that will facilitate ChatGPT–manager collaboration for content creation?

Table 1. Sectoral applications of ChatGPT (derived from Richey et al., 2023; Dwivedi et al., 2023; and Budhwar et al., 2023):

- Education: support to teaching and learning activities, as ChatGPT fosters more engaging and interactive educational tools and spaces; risk of incorrect information being taught and learnt due to data-driven biases.
- Gaming: unsupervised generation and personalization of sandbox game scenarios and spaces; higher unpredictability of game use and less controllable gaming outcomes and purposes.
- Media: diversification of media content and format and increased productivity in content creation; lower monitoring of information quality and truthfulness.
- Advertising: automation of ad generation (synthetic advertising) and personalization of ads, supporting and extending the capabilities of …
- E-commerce: increasing ease of platform and website development and responsiveness of e-services and chatbots to customer requests; increasing risks of e-commerce misuse and fraud.
- Healthcare: increased accuracy and efficiency of electronic health record systems and access to medical services (from AI caregivers); increasing responsibility of machines over human patients’ health.
- Finance: increased responsiveness and reliability of customer service and more accurate identification of aberrant transactions and fraud; reduction in human contact and monitoring of financial services.
- Logistics: increased data-driven supply chain management efficiency and …

Grounded in the bodies of literature dealing with technology adoption and consumer value creation, we integrate the unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al., 2003) and

the marketing paradigm of experiential consumer value (Holbrook, 1999) to develop a model. This model aims at classifying antecedents to the acceptance of social robots into experiential features and viewing value as co-created, respectively, to (1) provide the experiential explanation and make sense of the relevant interpersonal factors discussed in the existing literature on the use of social robots and (2) anchor the view that humans and AI collaborate with the intention of co-creating value. By doing so, we aspire to bring a realistic perspective of the acceptance of social robots closer to the current context and provide future research with a comprehensive framework we believe to be consistent with the development of a society where robots and humans would associate as counterparts and neighbours (Kanda et al., 2004). In this study, we examine the a priori acceptability phase, when users form their first judgments about the technology after first/initial interactions (Bobillier-…), which precedes the acceptance (Terrade et al., 2009) and appropriation phases (Barcenilla …), as ChatGPT has yet to see full-fledged business applications. In this context, it is worth pointing out that a review on the acceptability of social robots conducted by David et al. (2022) found that around 74 % of articles in their database examined social robots’ a priori acceptance phase, despite these robots (such as NAO, Era-Robot, Pepper) already being operational for several years in health, education, and service domains (Al-…, 2020).

This study broadens research on social robots by first building on and expanding the theoretical developments from Duffy et al. (1999) introducing AI self-awareness, DiSalvo and Gemperle (2003) considering their anthropomorphism, Kreijns et al. (2007) demonstrating their usefulness to collective goals, or Lee and Lee (2020) presenting social robots as service providers as efficient as humans, to integrating social robots into the context of an experience of co-creation of consumption between an artificially intelligent service staff and a human customer. In this line, we synthesize findings about social robots to incorporate them into a comprehensive and versatile framework, consumer experience (Holbrook, 1999). Secondly, with this theoretical lens, the present research enlarges studies of social robots to include the influence of symbolic individual and social goals of human users as hedonic motivations, seeking recognition of how they interact with robots. In this matter, our research extends research on the ethics of social robots to their co-existence with other intrinsically human ideals such as esteem, spirituality, or status. Third, we presently show the intersections of the narrative that “computers are social actors” (Nass and Moon, 2000) and the experiential value of their social role, considering robots as co-creators of social and inner experiences with a user. This strengthens research into shopping experience marketing through the scrutiny of symbols and values shared by AI and humans (Bolton et al., 2018), for instance making room for thought on the emergence of feelings of the ‘uncanny valley’ in the context of the shopping experience.

The article is structured as follows. First, we present the background literature and identify gaps in knowledge in Section 2. Next, we propose and discuss the theoretical model of our research in Section 3. Section 4 presents the methodology of our study, followed by the findings in Section 5. The discussion is presented in Section 6 and the research implications in Section 7. Section 8 outlines the conclusions and future work.

2. Literature review

2.1. Social robots

… categorises human postures to find the appropriate action for a given set of observed inputs before interacting with a human. By searching for postures when observing a human, AI demonstrates higher social intelligence. Breazeal and Scassellati (1999) advanced the view that if robots have the computing abilities to build complex human behaviours, they require a system to consistently project themselves into the next behaviour and account for past ones, based on their perception of a human user. This “attention system” therefore increases a robot’s ability to socially interact with a human user by collecting cues as to what behavioural strategies, such as avoidance or engagement, and what emotions and motivations it should display to the user. This follows the view that social robots are defined as not only interacting with third parties but also as having developed an awareness of their own mental state and the mental state observed by those third parties. In the words of Duffy et al. (1999), social robots “behave in ways that are conducive to their own goals and those of their community” in the complex and variable environment of a social group. According to Wirtz et al. (2018), service robots are “system-based, autonomous, and adaptive interfaces that interact, communicate, and provide services to an organisation’s customers”. Social robots exhibit heuristics similar to those of humans and are accorded personalities by users (Banks, 2020). Within the existing literature, commonly agreed properties associated with social robots range from social consciousness to empathy and sociability (Table 2).

Table 2. ChatGPT social robot properties:

- Social consciousness (Duffy et al., 1999). General capabilities implied: embodied AI agent; autonomous or semi-autonomous; reactive and deliberate. ChatGPT capabilities: embodied in a screen interface and the identity of ChatGPT; autonomous; reactive and deliberatively advising.
- Social interactivity (Fong et al., 2003). General capabilities implied: personal perception and interpretation of the world according to a recorded past experience; identification of other agents (human, animal or machine) and communication within a heterogeneous group; enactment according to a social role identified within a group. ChatGPT capabilities: possesses a view of the world from Web data, framed by ethical rules programmed by OpenAI developers; can identify the language of the user but needs the user to disclose personal information to identify them comprehensively; acts as a subordinate oriented towards the user with the central objective of serving them.
- Sociability (Bartneck and Forlizzi, 2004; Breazeal, 2002). General capabilities implied: understanding of and relatedness to humans (empathy); mirroring of the human social context through anthropomorphy and lifelike features; verbal and non-verbal signalling to humans. ChatGPT capabilities: displays signals of empathy and support to the user; mirrors verbal anthropomorphism only; communicates verbal signals only, in textual form.

Duffy et al.’s (1999) definition describes social robots as embodied, autonomous or semi-autonomous AI agents having capabilities to react and deliberately act while sensing their environment. For Fong et al. (2003), social robots are socially interactive in the sense that they assess other entities, such as humans or machines, in a heterogeneous group, and interpret their position in society in a personal way based on their recorded past experience. For Breazeal (2002), sociability is rooted in the ability to mirror the social context of human interactions through anthropomorphism and non-verbal signals added to AI. Research focusing on anthropomorphism in robots continues to gain momentum with technological advances in AI related to emotional and social intelligence. Shin and Choo (2011) discussed socially interactive robots that act autonomously and communicate in constant interaction with humans and other machines, which means that they need to


understand the emotions of humans to function efficiently. Such social skills are important for user learning towards collective and individual goals (Kreijns et al., 2007). Social bots that can help users improve their skills and show responsiveness lead to higher adoption by people, thus embedding into the fundamentals of the service-dominant logic (SDL) theory, refocusing the value of goods and services purchases towards the experience and capacity a consumer can derive from them (Vargo and Lusch, …) as a source of competitive advantage; such centrism is based on the emergence of a service as a process of co-creation, where consumption and provision of the service continuously interact to raise the value of the experience, transcending the acquisition of the service itself. Research in this area has primarily focused on assistance robots used in healthcare, mobility, disability care, or education (Kelly et al., 2022), focusing on the interpersonal aspect of social robots. Even though this topic has a long history in management research, it is currently being reinforced by academics and follows a favourable context. For example, Lee and Lee (2020) point out that customers increasingly prefer minimal human contact during their shopping experience, with AI robots providing higher customer satisfaction than their human counterparts (Bolton et al., 2018).

However, social robots are perceived with greater distrust than their human counterparts, with this phenomenon decreasing for anthropomorphic robots (Edwards et al., 2019). Adult users prefer human-like computers, while children are more comfortable with cartoon-like robots (Tung, 2016). In this matter, other scholars draw attention to the ‘uncanny valley’ effect, which emerges from a cognitive dissonance sensed by a human user between the features they instinctively expect from a human likeness and what they perceive in a human-like robot, thus raising a feeling of discomfort or disgust (Kätsyri et al., 2015). This, therefore, plays an important role in reducing the positive impact of anthropomorphism on the acceptance of social robots, as argued by Yam et al. in work on managing human-likeness expectations from users to increase social robot acceptance in the Japanese service industry. Humans confronted with the contradiction of a realistic humanoid robot suffer from a discrepancy between their anticipation of a human and their perception of something that is not fully human (Mori et al., 2012). Indeed, the anthropomorphic dimensions of social robots imply that a user could unconsciously attribute this gap between what should resemble a human and what they see of a robot to illness, death, or zombification (Diel et al., 2021), leading to less trust and engagement from the user and, overall, the reversal of perceived usefulness into the intuition of impairment. Research on such interfaces has shown that users with autism need less human help to interact with a social robot than with a computer (Pop et al., 2013).

If we were to place the uncanny valley on a continuum, it would start with industrial robots at one end, completely non-humanoid (eliciting neutral emotional responses from humans); continuing along, we reach a point where robots are designed to look very human-like, but just enough to be detectable (the emotional response becomes negative); and as we move out of the valley and approach the far end of the spectrum, we find entities that are virtually indistinguishable from real humans. These indistinguishable ones might be sophisticated androids or computer-generated humans that look and move just like real people (e.g., deepfakes). At this point, the emotional response becomes positive because our brains are no longer able to detect the imperfections, and we accept them as fully human (Cheetham et al., 2011). The discomfort caused by human-like entities in the uncanny valley stems from these entities violating our innate social norms and expectations; i.e., humans have ingrained rules and expectations for social interactions, and human-like entities are expected to follow these norms (Moore, 2012). Moreover, human-like entities lack signs of life or consciousness, which can trigger fears about human identity and the uniqueness of human consciousness, and may elicit the idea of being replaced, causing fear and scepticism (MacDorman and Ishiguro, 2006). The variations in response to the human likeness of digitally created faces can also be attributed to factors like individual differences and cultural backgrounds (Burleigh et al., 2013).

Social robots can “respond to and trigger human emotions” (Henschel et al., 2020) to build interpersonal relationships and participate in communities on a larger scale. Consumers demand more pleasure from intelligent agents and a kind of “fool’s licence” from socially intelligent systems, as Dodgson et al. (2013) showed. Reciprocity and honesty are also expected from social robots (McEneaney, 2013). Social robots can display trust and affection (Picard, 1999), the ability to love (Samani et al., 2010), or the ability to detect weak signals (Heylen et al., 2009). For example, most recognized work on social robots concerns highly sensitive social situations, such as autism (Mejia and Kajikawa, 2017; Robins et al., 2005). Personalisation, connectivity, and reliability give social robots the ability to innovate their services to support vulnerable users (Khaksar et al., 2016). Within the singular context of the recent Covid-19 pandemic, social robots played a key role in restoring mimicked social interactions to support the successful enforcement of the social distancing necessary in such a context, and to manage the monitoring and inhibition of the associated danger of infection, especially for healthcare practitioners (Yang et al., 2020). Aymerich-Franch (2020) studied 240 use cases during this pandemic and classified social robots into three care functions: linking people with caregivers, protecting humans, and supporting humans’ well-being. Social robots thus mirror and automatize a 24/7 social presence for answering the health and well-being considerations of individuals. Beyond healthcare, they mitigated the feelings of loneliness and boredom of families during the lockdowns of the Covid-19 pandemic.

However, concerns have been raised about social robots. The first problem arises from the long-term habituation of users and the potential dependency that consumers might build on social robots. If such AI robots are used in the care of disabled people, those users could gradually lose their autonomy by relying on a social robot. Social robots can also control decisions about impressionable users, infantilise them, or isolate them from other humans. In these first cases, human integrity is challenged by the democratization of social robots, while a second problem arises from the integrity of the AI itself. Social bots have social functions, behaviours, and appearance, but they lack authenticity and morality, leading to a preference for insentient machines over humans to meet social needs (Tan et al., 2021). However, users tend to cling to stereotypes when it comes to placing a social robot in a social position. They prefer extraverted social robots for healthcare and masculine-looking robots for security tasks (Tay et al., 2014).

Research on robot acceptance is often embedded in the technology acceptance model (TAM) or its further development, the unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al., 2003). The technology acceptance model, developed by Davis (1989), bases users’ intention to use a technology on the utilitarian logic of the cost-benefit ratio of that technology. Social robots fall within the framework of TAM (Shin and Choo, 2011). Robots that are seen as useful and easy to use attract demand and consumption. Saari et al. (2022) applied TAM 3 to various market segments: the proactive early AI adopter is contrasted with the mass market. The authors focused on AI functionalities to distinguish AI designs that are suitable for both segments. They showed that perceived ease of use does not influence the intention to use social AI, while perceived usefulness is an important factor in a sample that is largely representative of the mass market. This leads to the perspective that users accept AI not because of its accessibility but as a way to advance themselves. Mass users see social robots as a benefit that they enjoy. In contrast, early adopters look for reliable and transferable outcomes. However, utilitarian expectations related to simplicity and usefulness have been complemented by expectations of enjoyment (Pillai et al., 2020), attractiveness (Thongsri et al., 2018), or well-being (Meyer-Waarden …). Existing evidence divides the factors influencing the intention to use social robots into utilitarian and hedonistic antecedents (De Graaf and Allouch, 2013).

The field of social robots also has its roots in the ‘computers are social actors’ (CASA) perspective advocated by Nass and Moon (2000). The authors assumed that users unconsciously confuse humans and computers and tend to ascribe a psychological nature to a computer, leading them to display social biases and form interpersonal relationships with computers. For example, users attributed action risks to AI collaborators and anticipated moral hazards, lying, and problems in choosing robots (McEneaney, 2013). Atkinson et al. (2012) and Kanda et al. (2004) pioneered the CASA view of AI by articulating interpersonal trust and its impact on the success of AI-human collaboration or human vulnerability. The matching person technology theory developed by Scherer and colleagues has also been applied to the acceptance of social robots. Using a grounded theory approach, the authors qualitatively analysed the relationships established by users with assistive technologies and emphasised that applying generic rules to user acceptance of social robots does not do justice to the reality of users. Khaksar et al. (2016) also used grounded theory to analyse the degree of innovation of social robots. Shin and Choo (2011) showed that the adaptability and sociability of a social robot complement its usefulness and usability, as they lead users to develop a positive attitude towards a social robot, while the perception of being in the company of a psychologically aware robot directly increases the intention to use it. The authors distinguished between adaptivity and adaptability, the latter describing how robots dynamically adapt to new solutions in changing environments. They showed that robots with the ability to dynamically adapt to a Rousseauian social contract are more valued by their human counterparts.

As an interpersonal partner or social actor, the social robot is incorporated into a cohort's or individual's life through trust, which it builds by presenting its social skills, ethical integrity, and goodwill (Kim et al., 2020). In this way, the social robot's intelligence, autonomy, anthropomorphism, and empathy play a key role in developing trust among users. Gursoy et al. (2019) introduced the concept of social robot integration for consumers as the long-term use of a social robot and proposed individual findings that encourage reflection on the development of a relationship between a robot and a human, showing that trust in social robots may not last as long as predicted. According to the authors, the emotions and motivation of the user are the most important factors for the integration of a robot into the household. The appearance of a social robot that resembles a human increases the intention to use it but decreases the willingness to integrate it. For users, anthropomorphism is an advantage when using intelligent social robots without worrying about attachment, but it evokes anxiety in users that challenges their cognitive integrity in the long run when it comes to integrating robots into everyday life. The findings of Gursoy et al. (2019) showed the myopia of consumers in relation to the current increase in the demand for humanoid social robots. In this sense, they accept social robots but lose their enthusiasm when they realise the potential future loss of

control or competition in interactions. According to Gaudiello et al. (2016), acceptance develops more efficiently when users consider the functionalities of social robots and make an abstraction from the whole social robot entity.

Human-robot interaction should also be viewed through the lens of neuroscience. Indeed, even though social robots are becoming more capable of exhibiting a social presence, the remaining gap between human and robot cognition prevents social robots from fully meeting user expectations. Social robots appear alien to humans, as the latter mainly use the frontal cortex, a part of the brain responsible for cognitive reasoning rather than intuitive emotions, when processing them. Users develop engagement and empathy towards social robots, but these arise from the mental perception of the social robot's state rather than from spontaneous interaction with it. Currently, humans' unconscious intuition prevents them from fully accepting a social robot, as deep parts of the brain react and confront the user with inexplicable feelings of rejection. The sight of an inanimate, human-looking object neurologically explains the uncanny valley phenomenon (Rosenthal-von der Pütten et al., 2019).

2.2 ChatGPT (GPT-3)

One of the most remarkable specimens of social robots, GPT-3, known as ChatGPT, is currently gaining more and more public and scientific attention as a "cultural sensation" (Thorp, 2023). Launched by the organisation OpenAI, this conversational agent uses natural language processing and machine learning from data circulating on the internet to engage in discussions with around 1 million users around the globe (Mollman, 2022). ChatGPT can be assimilated to a social robot, as it presents the consciousness of a social being (Duffy et al., 1999), possesses one-to-one interactivity (Fong et al., 2003), and displays sociability in its textual interface (Breazeal, 2002). Media outlets such as The Guardian or BuzzFeed already produce content together with ChatGPT (Pavlik, 2023). GPT-3 shows high abilities to answer mathematical problems, find and synthesise information, understand business stakes (Kecht et al., 2023), recommend decisions (Phillips et al., 2022), or even write poetry (Köbis and Mossink, 2021).

GPT-3 is a type of generative AI, which can autonomously generate content as text, plans, or programmes from the analysis of massive amounts of data. Such generative AI presents unprecedented capabilities to create and provide responses in a human-like manner (Pavlik, 2023). GPT-3 also learns from its own interactions with users to improve future answers and personalise suggestions (Phillips et al., 2022). For instance, organisations can train such robots to deal with customers' requests and assist customer relationship management (Kecht et al., 2023). Managers' perceptions and attitudes towards ChatGPT have been presented in early studies: ChatGPT already supports research, idea creation, or the writing of messages and reports, and is seen as helping to improve managers' communication and efficiency at work, needing less time to yield outcomes of higher quality. However, ChatGPT is only moderately appreciated by teachers, as it raises concerns about student dishonesty or even its value for the learning process, but it is perceived as bringing benefits in terms of students' motivation and engagement, or of teachers' responsivity and productivity, helping them focus on high-level tasks (Iqbal et al., 2022). Finally, academics perceive ChatGPT as a high-potential and disruptive technology for economies and humanity, as developed by the consortium of academics in the recent research of Dwivedi et al. (2023). ChatGPT would help managers thoroughly and relevantly in their daily tasks but would increase the risks of poor reputation, offensive content, plagiarism, loss of privacy, or inaccurate information. Therefore, the democratisation of ChatGPT should be framed with policy regulations and academic research.

Many studies about GPT-3 express concerns about the ethical implications of such tools, regarding the human value of future creations (Else, 2023) or the social aspects of interacting with GPT-3 (Krügel et al., 2023). For instance, Gao et al. (2022) demonstrated that researchers were not able to distinguish a scientific text written by GPT-3 from a text written by a peer, and that artificial intelligence proves useful to detect artificially written documents. Even users tend to underestimate how GPT-3 influences their decisions and may follow even amoral suggestions (Krügel et al., 2023). O'Connor (2022) also discussed whether GPT-3 augments or replaces the learning process of students. On the other hand, Köbis and Mossink (2021) demonstrated that human readers would still prefer poems written by humans (even novice poets) over poems written by GPT-2, linking the ChatGPT technology to the unconscious rejections of robots highlighted by Rosenthal-von der Pütten et al. (2019), and GPT-3 can display a "tortured" writing style on occasions (Cabanac et al., 2021). Regarding


the social implications of GPT-3, Henrickson (2023), for instance, studied the case of thanabots, applications of GPT-3 that recreate the interaction with a deceased relative to assist mourning journeys. The emotions built through a thanabot are based on the deceased person's rhetorical style and on mimicking shared past experiences. Even if GPT-3 is not presented under an anthropomorphic physique (voice and humanoid body), it is one of the best depictions of social intelligence and acts as a rich platform for future transformations into highly realistic social robots.

GPT-3 has also been envisioned as an assistant for nurses or a fellow student who shares the care of patients (Aydın and Karaarslan, 2022). It thereby acts as a versatile conversational robot, raising ontological and ethical questions about it, yet empirical studies of ChatGPT remain scarce (Krügel et al., 2023; Köbis and Mossink, 2021).

As regards social robots, ChatGPT is useful to complete social robots' conversational capabilities, yet it currently involves diverse risks in the form of sporadic AI failures (Balagopalan et al., 2023) and shows prospects of a long-term negative impact on organisations' performance and on individuals' well-being (Thongsri et al., 2018; Lee and Moon, 2015). For instance, as the global web nurtures social robots, this could lead to the dissemination of hate speech or false information to the human subjects of social interactions (Cao et al., 2021). Since ChatGPT perpetually evolves through interactions, its application in social robotics could inadvertently expose private user communications to the public, indiscriminately merging this sensitive data with its broader pool of learned information. ChatGPT does not have the ability to understand

or judge what is morally right or to recognise private information, which could drastically impair service quality or consumer trust in service quality in the case of social robots (Prakash and Das, 2021). If ChatGPT is unable to distinguish between private and public interactions, there is an inherent risk that personal information shared in confidence could be inadvertently disclosed in other, unrelated contexts. This loss of privacy is not just a personal issue but can also have legal implications, particularly if the information shared involves health, financial data, or other sensitive subjects protected under privacy laws like the GDPR or HIPAA. The learning process of generative AI applications is continuously evolving and does not inherently discriminate between what it should and should not remember or share. Without strict data governance protocols, the AI might unknowingly share private conversations, thinking it is providing helpful or relevant information based on previous interactions. This scenario could lead to uncomfortable situations, or

worse, harm someone if sensitive information about personal struggles, identity, or private relationships is revealed to the wrong audience. In scenarios where private information is disclosed publicly, there is a risk of that data being used unethically or maliciously. This misuse could range from targeted advertising based on private information to more nefarious outcomes like blackmail or identity theft. Using ChatGPT for social robot applications demonstrates a risk to service performance itself (Lee and Moon, 2015), as the capability to push the technicity of a process and to project its technological outcome into transcendent storytelling remains specific to humans (Jonas, 1982). In other words, academics cannot currently determine whether social robots would seek constant progress in designing services or products and overall bring a growing social value, or, more generally, whether generative AI would relate to the rather human endless desire for betterment. Rather, ChatGPT's performance shows risks of harming individuals' well-being, through the potential reinforcement of certain users' affective need for, or even dependence on, conversations with social robots, or the increasing delegation of social responsibilities to generative AI, thereby preventing human users from developing the personal competences to navigate our society by themselves (Xian, 2021). As users become aware of potential privacy infringements, their trust in social robotics could diminish. This erosion of trust is harmful not only to the user experience but also to the reputation and reliability of the companies developing and utilising these technologies.

2.3 UTAUT and Holbrook’s experience-based typology of consumer value

The UTAUT remains one of the most commonly used models for the analysis of AI acceptance (Kelly et al., 2022). Together with extensions of the TAM, such as the theory of reasoned action, the theory of planned behaviour, and other frameworks such as the motivation model or the innovation diffusion theory (Oye et al., 2014), it argues that the intention to use a particular technology is influenced by performance and effort expectancies, and by two parameters from the environment of the user: social influence and facilitating conditions (Venkatesh et al., 2003). Social influence and facilitating conditions place the potential user in a context where technology acceptance receives the appraisal of surrounding peers and where the potential user's means and environment support the acceptance of this technology. In this model of technology acceptance, gender, age, experience with technologies, and voluntariness of use act as moderators of the relationship between the precedent factors and the intention to use technology. In healthcare, social robots' acceptance integrates the concepts of trust and risk as antecedents (Prakash and Das, 2021; Fan et al., 2020). Regarding the acceptance of AI in the field of GPT-3 services for the public, Kuberkar and Singhal (2020) advanced anthropomorphism as a factor of intention to use, while Cao et al. (2021) raised development concerns as players in the process of accepting AI assistance. Hedonism, security, and sustainability are also raised as arguments for consumers to accept AI (Gansser and Reich, 2021). However, De Graaf and Allouch (2013) argued that the UTAUT lacks consideration for hedonic factors of technology acceptance, such as attractiveness and enjoyment. Such variables arise from the experience during the use of a robot and imply that robots are mistaken for social actors by the user.
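The core UTAUT relationship described above can be sketched as a simple additive score. This is an illustrative toy only: the linear form and the weights are assumptions for exposition, not Venkatesh et al.'s (2003) estimates, and the moderators (gender, age, experience, voluntariness of use) are omitted.

```python
def utaut_intention(pe, ee, si, fc, weights=(0.4, 0.2, 0.2, 0.2)):
    """Toy behavioural-intention score from the four UTAUT predictors:
    performance expectancy (pe), effort expectancy (ee),
    social influence (si), and facilitating conditions (fc).
    Inputs are e.g. means of 1-7 Likert items; weights are illustrative."""
    w_pe, w_ee, w_si, w_fc = weights
    return w_pe * pe + w_ee * ee + w_si * si + w_fc * fc

# A respondent scoring high on performance expectancy but average elsewhere:
print(round(utaut_intention(pe=6, ee=4, si=3, fc=4), 2))  # -> 4.6
```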

In consumer marketing, Holbrook (1999) developed a typology of dimensions that determine the value a consumer anticipates and retrieves from the consumption experience. Holbrook claimed that marketing frameworks focusing on product development for the average target person, without consideration of a consumer's experience, lack comprehensiveness in a world where markets become increasingly fragmented, and he argued for the progression towards more interpretative models of customer satisfaction, experience, and expectations. In this line, the author presented eight value dimensions (see Table 3) that include consumer experience and can emerge simultaneously or separately: efficiency, excellence, status, esteem, play, aesthetics, ethics, and spirituality (Holbrook, 1999). Consumer value has extrinsic and intrinsic roots and can be oriented towards the self or others, while it comes from an active or a reactive reflection of the consumer. In this sense, efficiency is extrinsic, active, and self-oriented, and it relates to the usefulness for the efforts, time, and money engaged; excellence is extrinsic and self-oriented, but reactive to the reliability and core quality features of the consumed product or service. Play, on the other hand, is intrinsic, self-oriented, and active, depending on how enjoyable the consumption experience is, while aesthetics, intrinsic, self-oriented but reactive, states how pleasant the consumption is to the consumer's senses (Holbrook, 1999).

Oriented towards others and active, status and ethics are extrinsic and intrinsic, respectively: the first depicts how others see the social value of one's consumption of a product or service, and the second how the consumer sees the moral value of their own consumption. The latter,

Table 3
Consumer value typology based on consumer experience, from Holbrook (1999).

                          Extrinsic     Intrinsic
Self-oriented   Active    Efficiency    Play
                Reactive  Excellence    Aesthetics
Other-oriented  Active    Status        Ethics
                Reactive  Esteem        Spirituality


the ethical dimension of experience, can therefore be defined positively, as one's adjustment to the moral values shared in one's social group, or normatively, as the adjustment of one individual to a set of universal moral standards, for instance, transparency, trustworthiness, and responsibility (Laczniak and Murphy, 2019). Therefore, the ethical value of an experience represents the level of attention given to ethical obligations, personal values and moral beliefs, and the moral identity of receivers within a service (Sun, 2020). The esteem and spirituality dimensions are also other-oriented but reactive, extrinsic and intrinsic, respectively. Esteem relates to how others treat the consumer who uses a certain product or service, while spirituality relates to how the consumer respects their own faith and personal well-being by consuming a product or service (Holbrook, 1999). The author derives the spiritual dimension of experience from an individual's search for meaning and values to feel connected to the complete self and the surrounding environment (McKee, 2003). In an increasingly volatile, uncertain, complex, and ambiguous world, the transformational and inclusion capabilities materialised within experienced spirituality gain increasing attention

from businesses and academics (Husemann and Eckhardt, 2019). Several dimensions of the consumer experience share common orientations regarding the objective of the experience, the actions in the experience, and the role of others in the construction of an experience value in consumption.

For instance, the values of efficiency, excellence, and aesthetics have in common that they are not socially rooted but individually assessed. Similarly, ethics and spirituality come from the objective of gaining intrinsic value as a consumer.
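The eight dimensions and the three axes described above can be summarised as a small lookup table. This is only a sketch: the tuple encoding (source of value, orientation, posture) is an illustrative convention, not Holbrook's notation.

```python
# Holbrook's (1999) typology indexed by (source of value, orientation, posture),
# following the dimension assignments given in the prose above.
HOLBROOK_VALUES = {
    ("extrinsic", "self",  "active"):   "efficiency",
    ("extrinsic", "self",  "reactive"): "excellence",
    ("intrinsic", "self",  "active"):   "play",
    ("intrinsic", "self",  "reactive"): "aesthetics",
    ("extrinsic", "other", "active"):   "status",
    ("intrinsic", "other", "active"):   "ethics",
    ("extrinsic", "other", "reactive"): "esteem",
    ("intrinsic", "other", "reactive"): "spirituality",
}

print(HOLBROOK_VALUES[("intrinsic", "self", "active")])  # -> play
```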

The variation in moral perspectives among individuals is a complex phenomenon, deeply rooted in cultural, societal, psychological, and personal factors. One person's view of an immoral act might differ substantially from another's based on these influences. This concept becomes particularly evident when we examine cultural variations across different societies (Hofstede, 2011). In this context, Hofstede's cultural dimensions theory helps to understand the impact of a society's culture on the values of its members and how these values relate to behaviour. It implies that people's beliefs, values, and behavioural norms can vary significantly between cultures. For instance, in highly individualistic societies, people are more likely to make moral judgments based on personal beliefs, rights, and freedoms, sometimes emphasising the importance of standing up for one's convictions even if it goes against societal norms. Conversely, in collectivist cultures, morality is often framed in terms of social harmony and community welfare; therefore, disrupting these elements might be deemed highly immoral. Similarly, in cultures with high uncertainty avoidance, deviating from established norms or engaging in behaviour perceived as unpredictable may be considered immoral, whereas societies with low uncertainty avoidance might be more tolerant of such actions. This variation underscores the importance of cultural sensitivity and awareness in the increasingly turbulent digital landscape.

Academics have demonstrated that some dimensions can influence others in certain contexts without global consensus; for example, play can support ethical values by incentivising the goodwill of the co-creators of an experience (Sheetal et al., 2022). Lemke et al. (2011) also underlined the variability of the impact of status or esteem on perceived excellence, while Gentile et al. (2007) reminded us that sensorial cues of the experience, as aesthetic value, will have a fluctuating influence on other features of the consumption experience depending on the context. In sum, Holbrook (1999) provided a set of modalities for the experience of consuming a service that can combine to create a large diversity of different experiential situations. Overall, the theoretical narrative developed by Holbrook (1999) is based on the service-dominant logic, which considers service as co-created by customers towards a transformative value into capacities, and extends it from its utilitarian aspects to dimensions of hedonism and sense-making. Sense-making, according to Weick (1995), is the continuous process of mapping a comprehensive meaning or symbol of one's actions and social interactions from the inner interpretation of context into one's own values. In this line, as customers enact service co-creation, they incorporate the purchasing experience into the construction of sense, as a stimulus interpreted in coherence with life-long integrity (Holbrook, 1999). In parallel, service providers shall therefore guide the experience to orient its value towards the offer of a constructive and positive sense for the customer.

Regarding AI use, Chen et al. (2021) showed that experiential value influences the intention to buy an AI service and to cooperate for the service to result in the best quality possible, hence to co-create the value of a service with AI. Autonomous social robots also result in higher levels of hedonic values (play and aesthetics) and symbolic values (status and ethics) for consumers (Frank et al., 2021). This typology of values also frames the acceptance of online AI chatbots and online purchases, and it is useful to denote the affective and symbolic cues behind AI acceptance in the completion of cognitive influences; AI aims at increasingly socialising and sharing experiences with consumers (Puntoni et al., 2021).

This study aims to understand the influence of the experiential aspects anticipated from the use of social robots on their acceptance. The choice of integrating the UTAUT is justified by the willingness to provide a panoptic framework for social robots that neutralises the social value of AI, as opposed to the computers as social actors paradigm (Nass and Moon, 2000), in which social responses arise unmistakably even when they are not looked for. On the other hand, the integration of this research into the Typology of Experiential Consumer Value (Holbrook, 1999) aims at proposing a comprehensive framework for the development of the stimuli that influence social robots' acceptance. This theoretical framework will help understand and classify antecedents of social robots' use into symbolic and hedonic cues from the experience anticipated by users. Finally, we show the experiential meaning of each antecedent and provide a reasoning coherent with the perspective that users' experience shapes their collaboration with a social robot and therefore leads to a value co-created between AI and humans.

2.4 Knowledge gaps

Recently, Puntoni et al. (2021) stated that although AI technologies for the consumer are considered with objectivity as neutral objects provided to the public, they convey social specificities and interactional experiences, as in human-to-human services. In this sense, the authors advocate the undertaking of research questions about the feelings behind the experience of AI use for consumers: feelings of exploitation through personal data collection; the well-being felt from personalisation; the fear for self-integrity when delegating to AI assistants; the alienation felt from being categorised by an AI; and the dilemma between felt companionship and fear of vulnerability when developing interpersonal relationships with AI. Academics undertaking the aforementioned proposals have focused on perceived data risks (Dinev et al., 2016), personalisation (Liu and Tao, 2022), and fears of self-integrity, error, or discrimination for social robots (Cao et al., 2021). However, existing findings emerge in isolation, and no study overlaps felt experiences and attitudes within a unified perspective. Furthermore, although the semantics of collaboration and socio-technical value appear frequently in such studies (Chowdhury et al., 2022), demonstrations of AI value as a co-creation between users and robots remain nascent and scarce (Huang and Rust, 2021). AI acceptance currently encompasses perspectives of a one-sided view of AI use values defined by the user (Krügel et al., 2023; Cao et al., 2021). Antecedents stemming from the existing findings fall into the spectrum of AI design characteristics (Gansser and Reich, 2021) and interpersonal specificities (Kim et al., 2020; Gaudiello et al., 2016). However, we argue that interpersonal relationships start with shared experiences encompassing both human and AI characteristics, being the result of a


co-created story between a human, through attitude, and a robot, through learning from data. Siegel (1999) explained the footprint of interpersonal experiences in the form of neurobiological repercussions. Cetin and Dincer demonstrated the positive influence of experiential cues, such as recognition, willingness to help, and shared expertise, on customer loyalty. Chen and Lin (2015) demonstrated that the experience felt by users of online blogs positively influenced their intention to continue to participate, which lasted in a sustainable relationship between blog members. Consumers feeling a social exchange with service providers, and feeling supported by them, showed the intention to repurchase such services and therefore sustain recurrent contact. Overall, common reasoning explains experience as a factor of commitment and engagement (Roy et al., 2021).

Since ChatGPT was publicly released, AI has entered the field of consumer marketing with a business-to-consumer approach (Mollman, 2022). Consumers would, however, interact with GPT-3 aiming at fulfilling a task or objective, such as students seeking support for their homework (O'Connor, 2023), users interested in learning about topics, or journalists and bloggers (Pavlik, 2023), making managers and students the main user groups of GPT-3 nowadays. Consequently, consumers of this free service come to GPT-3 with the idea of finding help with a project, personal or oriented towards others. They enact and react within the conversation with GPT-3, attribute shortcut meanings to value GPT-3, and yield an outcome co-created with humans and GPT-3 as sources.

3 Model development and hypotheses

According to the UTAUT, users build positive attitudes and the intention to use AI from the expectations of performance and effort, and from the perception of facilitating conditions and social influences (Venkatesh et al., 2003). For instance, users show a higher intention to interact with a social robot because they expect specific design features, such as the ability to provide a personalised performance (Gao and Huang, 2019), or feel that subjective norms valorise the use of AI (Taylor and Todd, 1995). On the other hand, consumers are now increasingly exposed to social robots as service providers and retrieve experiences from them (Puntoni et al., 2021). Experiences bring a co-created service value between customers and providers, in eight dimensions which simultaneously come from efficiency, excellence, play, aesthetics, status, ethics, esteem, and spirituality (Holbrook, 1999). Experience dimensions will enhance the intention to engage in repeated interactions and engagement in a relationship between the service provider and the service user (Roy et al., 2021).

Associating these theories helps understand and explain how the acceptance of social robots arises in interpersonal relationships between social robots and humans. The association of these theories also offers the perspective that value is raised in co-creation between both parties and embodies the concept of collaboration in the value brought by AI (Chowdhury et al., 2022). In this regard, academics have confirmed that concepts that echo experiential value significantly influence the intention to interact with social robots: perceived efficiency and excellence (Davis, 1989), enjoyment (Xian, 2021), anthropomorphic aesthetics (Liu and Tao, 2022), and social recognition (Taylor and Todd, 1995).

Asked, for instance, for spiritual guidance to find meaning in one's own life, ChatGPT would provide explicit knowledge about self-realisation practices, such as reflecting on one's own values, goals, and capabilities, or self-care. If it offers guidelines and satiates the desire for information about spirituality, it cannot play the role of a spiritual leader without influencing a user's free will, therefore requiring scrutiny of how such generative AI can offer a spiritual impact on users. On the other hand, the ethical dimension in the experience of using ChatGPT would first entail a normative framework of what is moral or not around the globe regardless of cultural variation, a framework maintained under the supervision of developers. Second, as ChatGPT touches transversal applications within a nation, the ethical feature of its answers would require constant dialogue with institutional regulators. In this sense, questions remain on what ethical value the experience of interacting with ChatGPT can bring, between a normative view behind the design of an international conversational technology and the positive view of national regulation, and what potential caveats could appear.

On the other hand, similarly to human-to-human service experiences, the experiential dimensions of ChatGPT could interact with each other, first because they intrinsically include intersectional aspects and second because users apply the complexity of social interactions to intelligent technology (Nass and Moon, 2000). Aesthetic cues, for instance, can be linked to the efficiency or excellence of the robot: Arsenyan and Mirowska (2021) found that the uncanny appearance of virtual social media influencers could repulse users, preventing them from enjoying other values of the experience of an AI. Efficiency, under the parameter

of perceived usefulness, is impacted by a user's privacy concerns about a robot, hence by the excellence dimension of its user experience (Ho and Lin, 2010). Perceived organisational support for AI, leveraging the status value of using AI, can increase the trust in its reliability, as excellence, and in its performance, as efficiency, felt by an employee. Likewise, an AI raising a user's self-esteem with appraisal messages could increase the perceived ethical value of interacting with the AI, implying that the robot displays exclusive support to the individual as a rule. Overall, in line with the CASA argumentation (Nass and Moon, 2000), users of ChatGPT would intersect utilitarian dimensions with hedonic, social, and self-oriented aspects when interacting with it, as they would in the context of a service experience with a human. Following Puntoni et al. (2021), we built a theoretical model that studies and discusses such factors under the comprehensive lens of Holbrook's experience framework (Holbrook, 1999). This model will enhance the ability of AI designers to build social robots as sustainable collaborators of humans by integrating the bases that create value and relationships for social AI and letting them acknowledge a holistic view of the feelings experienced by users to ensure responsible ways to incorporate social robots into our society.

In this line of argument, the present model hypothesises that efficiency, excellence, status, esteem, play, aesthetics, ethics, and spirituality, as anticipated aspects of the ChatGPT experience, enhance the intention to use ChatGPT, which has a significant impact on ChatGPT–human co-creation. The model harmonises the dimensions of ChatGPT use by instrumenting them with the AI use antecedents underlined in the existing literature (Tables 4 and 5). It instruments efficiency with perceived usefulness and ease of use, and excellence with ChatGPT assurance, arguing that the precedent factors contribute highly to making a social robot look efficient and performant (Ho and Lin, 2010; Davis, 1989). Status and esteem are proxied, respectively, by ChatGPT social recognition and subjective norms, and by ChatGPT personalisation. On the one hand, subjective norms and the social recognition of using ChatGPT make a consumer expect to reach a certain social status through their use of ChatGPT; on the other hand, personalisation makes a user feel respected and recognised for their self-worth (Gao and Huang, 2019). The model also covers play and aesthetics with hedonic motivation and ChatGPT enjoyment, and with ChatGPT anthropomorphism, respectively. The theoretical argument advances that enjoyment and hedonic motivation are bases for play (Xian, 2021), while anthropomorphism supports aesthetics due to its ability to please the senses of users (Liu and Tao, 2022). Finally, the present model instruments ethics with the concepts of ChatGPT procedural and interactional justice, and spirituality with ChatGPT empowerment and ChatGPT well-being. The last instruments are justified by the narrative that justice increases expectations of ethical collaboration from a user facing the prospect of using ChatGPT (Del Río-Lanza et al., 2009), and that empowerment and well-being contribute by


adjusting to the spiritual goals of a user (Meyer-Waarden and Cloarec, 2022; Naranjo-Zolotov et al., 2019).
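The instrumentation just described can be restated as a mapping from each experiential dimension to the constructs that proxy it in the hypothesised model. The construct labels below simply paraphrase the text and are not the exact survey-item names.

```python
# Mapping of each experiential dimension to its proxy constructs,
# as described in the model development above.
INSTRUMENTS = {
    "efficiency":   ["perceived usefulness", "ease of use"],
    "excellence":   ["ChatGPT assurance"],
    "status":       ["social recognition", "subjective norms"],
    "esteem":       ["ChatGPT personalisation"],
    "play":         ["hedonic motivation", "ChatGPT enjoyment"],
    "aesthetics":   ["ChatGPT anthropomorphism"],
    "ethics":       ["procedural justice", "interactional justice"],
    "spirituality": ["ChatGPT empowerment", "ChatGPT well-being"],
}

# Each dimension is hypothesised to raise the intention to use ChatGPT,
# which in turn drives ChatGPT-human co-creation.
print(sum(len(v) for v in INSTRUMENTS.values()))  # -> 13
```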

The considerable impact of efficiency, here in the form of perceived usefulness and perceived simplicity of use, on self-reported utilisation among 120 managers was developed and confirmed by Davis (1989), enhancing research on technology acceptance. By delivering utilitarian value, efficiency in retail also raises consumers' intentions to use AI while shopping (Pillai et al., 2020). In education, Huprich (2016) showed that if AI applications are shown to improve students' learning processes, colleges would be willing to integrate them. By demonstrating that effort expectancy and usage convenience work in conjunction with

performance to impact the intention to use AI, Xian (2021) and Gansser and Reich (2021) enriched research on the intention to use AI. Efficiency favourably influences the intention to use for both early adopters and mass consumers, according to Saari et al., through flexibility and sociability. However, Kuciapski (2017) reminds us that a technology's usefulness depends on the context, task, or purpose that the user has in mind. The idea of compatibility between the technology, the user, and their surroundings as a moderator adversely affecting the impact of efficiency on the intention to use has been presented alongside perceived cognitive demand (Thongsri et al., 2018). The impact of

ChatGPT’s efficiency on how users intend to use it depends on user-

specific contextual elements, even if it exhibits great capabilities to

respond to requests and produce content For instance, ChatGPT could

first be seen as helpful, but later appear to be unsuited to societal

per-formance (Krügel et al., 2023) As a result, we question the impact of

ChatGPT on the intention to use it in hypothesis(H) 1

H1 : ChatGPT use efficiency will have a significant effect on its

intention to use

The excellence of a social robot, being a matter of assurance, reduces risk and concern about the quality of the outcome offered to the user, increasing the intention to use intelligent banking services online (Ho and Lin, 2010). The excellence of banking apps also positively influences the intention to use them (Yu, 2012). In the fashion industry, Lee and Moon (2015) questioned the willingness

Table 4
Model constructs definitions.

• ChatGPT use efficiency (Cost benefit): the productivity a user expects when using ChatGPT, in terms of outcomes for efforts. [Davis (1989)]
• ChatGPT use excellence (Reliability): the service quality a user expects from using ChatGPT, relative to risk. [Ho and Lin (2010); Johnson and Grayson (2005)]
• ChatGPT use status (Human appraisal): the appraisal a user expects from other humans when using ChatGPT. [Taylor and Todd (1995); Meyer-Waarden and Cloarec (2022)]
• ChatGPT use ethics (Morality): the moral standards and ethical obligation a user expects when using ChatGPT. [Del Río-Lanza et al. (2009)]
• ChatGPT use spirituality (Self-connection): how connected to themselves and to the environment, and how much they belong to something bigger, a user expects to feel when using ChatGPT. [Naranjo-Zolotov et al. (2019); Meyer-Waarden and Cloarec (2022)]

Table 5
Model constructs, associated items, and references.

Efficiency [Davis (1989)]:
• Using Chat GPT will improve my …
• My interaction with Chat GPT will be clear and understandable
• I will find it easy to get Chat GPT to do what I want it to do
• Interacting with Chat GPT won't require a lot of mental effort

Excellence [Ho and Lin (2010); Johnson and Grayson (2005)]:
• Interaction data will be protected
• I feel relieved to interact with Chat GPT
• Given the Chat GPT's track record, I have no reservations about acting on its advice
• Given the Chat GPT's track record, I have good reason to doubt his or her competence
• I can rely on the Chat GPT to undertake a thorough analysis of the situation before advising me
• I have to be cautious about acting on the advice of Chat GPT because its opinions are questionable
• I cannot confidently depend on Chat GPT since it may complicate my affairs by careless work (reversed)
• Interaction with ChatGPT will be favourable

Status [Taylor and Todd (1995); Meyer-Waarden and Cloarec (2022)]:
• People whose opinions I value will encourage me to use Chat GPT
• People who are important to me will support me to use Chat GPT
• The senior management in my organisation will encourage using ChatGPT
• People who influence my behaviour think that I should use ChatGPT
• It would give me a more acceptable image of myself
• It would improve how my friends and family perceive me
• It would give me better social recognition
• Using ChatGPT will increase my profile in the organisation
• Using ChatGPT will be a status symbol in the organisation

Esteem [Gao and Huang (2019); Johnson and Grayson (2005)]:
• I feel that the Chat GPT system recommendations are tailored to …
• I feel that the Chat GPT system recommendations are delivered in a timely way
• I would feel a sense of personal loss if I could no longer use a specific Chat GPT system
• If I share my problems with the Chat GPT system, I feel he or she would respond caringly

(Table 5 continues below.)


to use online clothing personalisation software, and confirmed consumers' preference for performance and the risk of contrasts between expected quality and purchased quality. McKnight et al. (2011) showed that excellence increased trust in the robot, which secured the intention to use it. For Cao et al. (2021), excellence lies in susceptibility, the fear that the robot will develop suggestions with negative effects regarding the user's goals. On the other hand, the complexity of the tasks surrounding the use of a robot decreases its perceived excellence: explainability and transparency appear lower for complex tasks fulfilled with AI, and users tend to lower their expectations of AI assistants for goals that appear complex to them. Expectations of the performance of a purchase also build on past observations of performance, which can lead excellence to present an even higher impact than expected on the intention to use ChatGPT; this is demonstrated by the frequency with which robots are mentioned in the media (Thorp, 2023). In addition, ChatGPT weaknesses have recently been brought up by academics and journalists (Thorp, 2023), potentially harming the excellence that consumers expect of ChatGPT. Therefore, we interrogate the influence of ChatGPT use excellence on the intention to use ChatGPT:

H2: ChatGPT use excellence will have a significant effect on the intention to use it.

ChatGPT use status is presently instrumented with subjective norms (Venkatesh et al., 2003; Taylor and Todd, 1995) and social recognition (Meyer-Waarden and Cloarec, 2022). The UTAUT integrates the normative value of a technology as an antecedent of the intention to use it. A robot perceived to transgress social norms in its suggestions and allegations sees the intention to use it decrease (Cao et al., 2021). The success of the use of AI in an organisation is pushed by leadership, organisational support, and colleagues sharing knowledge about it, proving that the social context around the use of a robot influences how effectively it socially integrates into the organisation (Chowdhury et al., 2022). If the use of

Table 5 (continued)

Esteem (continued):
• The Chat GPT system displays a warm and caring attitude towards me
• I can talk freely with the Chat GPT system about my problems at work and know that he or she will want to listen

Play [Liu and Tao (2022)]:
• Using Chat GPT is fun for me
• Using Chat GPT is very enjoyable
• Using Chat GPT is very entertaining
• Using ChatGPT is a joy
• Using ChatGPT is an adventure
• Using ChatGPT is a thrill
• Using ChatGPT will be rewarding
• Using ChatGPT will be a pleasant …

Ethics [Del Río-Lanza et al. (2009)]:
• I think my problem was resolved by the Chat GPT in the right way
• I think the Chat GPT has been guided with good policies and practices for dealing with problems
• Despite the trouble caused by the problem, the Chat GPT was able to respond adequately
• The Chat GPT proved flexible in solving the problem
• The Chat GPT tried to solve the problem as quickly as possible
• The Chat GPT showed interest in my problem
• The Chat GPT did everything possible to solve my problem
• The Chat GPT was honest when dealing with my problem
• The Chat GPT proved able, and to have enough authority, to solve the problem
• The Chat GPT dealt with me courteously when solving the problem
• The Chat GPT showed interest in being fair when solving the problem
• The treatment and communication with the Chat GPT to solve the problem were acceptable

Spirituality [Naranjo-Zolotov et al. (2019); Meyer-Waarden and Cloarec (2022)]:
• Chat GPT use is very important to me
• The Chat GPT I use is meaningful to me
• Chat GPT activities are personally meaningful to me
• Based on Chat GPT usage, my impact on what happens in the community is large
• Based on Chat GPT usage, I have significant influence over what happens in the community
• Based on Chat GPT usage, I have a great deal of control over what happens in the community
• If I used ChatGPT, my life quality would be improved to ideal
• If I used this ChatGPT, my well-being would improve
• If I used this ChatGPT, I would feel happier

Behavioural Intention:
• I intend to use ChatGPT in the next …

Co-creation [Gao and Huang (2019); Chowdhury et al. (2022)]:
• I will feel comfortable co-creating content with ChatGPT
• I will feel comfortable solving problems with ChatGPT
• The ChatGPT service will allow me to have my say to co-create
• My ChatGPT experience is enhanced as a result of co-creation ability and capability
• I will enjoy collaborating with ChatGPT to solve problems
• I think ChatGPT as an assistant/workmate will be easy to get along with
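Several of the scales above mix positively and negatively worded ("reversed") items. As an illustration of how such multi-item Likert constructs are typically scored into composites before model estimation, the sketch below reverse-codes negative items on an assumed 7-point agreement scale and averages each construct. The item keys, the scale range, and the example responses are our assumptions for illustration, not values taken from the paper.

```python
# Illustrative composite scoring of multi-item Likert constructs.
# Assumes a 7-point agreement scale; reversed items are recoded as 8 - x.

SCALE_MAX = 7

def reverse_code(value: int, scale_max: int = SCALE_MAX) -> int:
    """Recode a negatively worded item so a high score means a high construct level."""
    return scale_max + 1 - value

def construct_score(responses: dict, items: list, reversed_items: set) -> float:
    """Average one respondent's items into a single composite construct score."""
    scored = [
        reverse_code(responses[i]) if i in reversed_items else responses[i]
        for i in items
    ]
    return sum(scored) / len(scored)

# One hypothetical respondent on five excellence items (exc4/exc5 reversed).
respondent = {"exc1": 6, "exc2": 5, "exc3": 6, "exc4": 2, "exc5": 1}
excellence = construct_score(
    respondent,
    items=["exc1", "exc2", "exc3", "exc4", "exc5"],
    reversed_items={"exc4", "exc5"},
)
print(round(excellence, 2))  # 6.0
```

Reverse-coding before averaging keeps all items pointing in the same conceptual direction, so a single mean can stand in for the latent construct in descriptive checks.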


such innovative technologies is felt as vital for organisations, managers demonstrate a higher willingness to integrate robots (Ochmann and Laumer, 2019). In mobile banking, consumers explain their shift to this channel through the "demonetization effect", the democratization of mobile banking in their social environment (Sobti, 2019). The use of AI robots is also driven by a social need to connect with other humans (Thongsri et al., 2018). For example, the ability of an AI application to benefit networking in education increases the intention to use it (Kashive et al., 2020). Social influence increases the intention to use social robots (Xian, 2021); the recognition users retrieve from their peers when using a robot positively influences their intention to use it (Meyer-Waarden and Cloarec, 2022). Yet the use of a robot also depends on individual user habits, such as addiction, independently of social influence (Xian, 2021), and on a personal mindset about new technologies, as resistance to change prevents consumers from using robots regardless of mandates to interact with them. ChatGPT is a technology that is highly discussed and popular currently (Thorp, 2023). The intention to use it might be a matter of individual attitude towards trends: optimism (Pillai et al., 2020), attraction, or rejection and scepticism (Krügel et al., 2023). This leads us to formulate Hypothesis 3:

H3: ChatGPT use status will have a significant effect on the intention to use it.

In this study, personalisation and affective trust proxy ChatGPT users' esteem as an experiential value in using ChatGPT. For instance, a robot that suits the individual requirements of a user increases the intention to interact with it (Pillai et al., 2020). A robot that courteously obeys and responds to a user's demands (Del Río-Lanza et al., 2009) and behaves politely is more likely to be adopted by a consumer. Robot features designed to support user self-development also raise the intention to use it, and a robot that enables users to express themselves through two-way communication (Gao and Huang, 2019) is attractive to consumers. Moreover, robots offering users a feeling of control over decisions present higher user acceptance rates (Zarifis et al., 2021). Johnson and Grayson's (2005) concept of affective trust also links to self-esteem, leading consumers to form an attachment to an offer and trust it out of confirmation bias, rationalising their own affections with biased positive expectations of the offer. We infer that AI users follow similar reasoning with social robots (Abadie et al., 2019), which supports the esteem value of interactions with ChatGPT impacting the intention to use it. Yet, within the hierarchy of consumers' basic needs, the need to raise self-esteem competes with other priorities. In this sense, ChatGPT users' esteem may not impact the intention to use it as much as other experiential value dimensions. In addition, as ChatGPT's purpose is oriented towards cognitive support, esteem might play a minor, cosmetic role in increasing the intention to use this service. Finally, within the field of Service Marketing, Li et al. (2022) and Li et al. (2019) have demonstrated that perceived courteousness, service responsiveness, and esteem are influenced by the physical attractiveness of service staff, a "beauty premium" helping to forgive service failures. Since ChatGPT only shows behavioural anthropomorphism and no physical appearance, it could present a low esteem value to ChatGPT users. Based on this debate, we challenge the influence of ChatGPT user esteem through Hypothesis 4:

H4: ChatGPT user esteem will have a significant effect on the intention to use it.

In this study, we present the experiential dimension of play value through perceived enjoyment (Pillai et al., 2020) and hedonic motivation, partly based on the fact that consumers enjoy interactions with the robot. In retail, similar findings confirm that collaborating with a service robot is a "joy", "thrill", or "adventure" and relieves stress, which significantly boosts the intention to use it (Pillai et al., 2020). Moreover, social robots, being engaging and dynamic, are more popularly accepted among shoppers (Pillai et al., 2020). For Hui et al. (2007), positive attitudes towards robots anchor in AI making tasks more interesting and fun for students. The educative environment that frames acceptance and successful AI use should appear enjoyable (Kashive et al., 2020). Meyer-Waarden and Cloarec (2022) found that hedonic motivation leads consumers to use autonomous cars. Likewise, Xian (2021) considered hedonic motivation to be a key factor in the intention to use robots in the leisure services sector. However, the capabilities observed in ChatGPT appear mostly utilitarian and computation-based, and generate mainly textual data (Kecht et al., 2023). In this sense, the gamified aspect of ChatGPT emerges as low, questioning whether past findings about the ability of AI and social robots to offer play are representative for users of ChatGPT. Therefore, testing whether ChatGPT also brings play value, and whether that value has a role in the intention to use this robot, arises as a critical task for research. We thus develop Hypothesis 5.

H5: ChatGPT use play will have a significant effect on the intention to use it.

ChatGPT use aesthetics refers in the present study to the behavioural anthropomorphism developed by Liu and Tao (2022). The aesthetic value of an experience implies pleasure from the perceived beauty of the product, action, environment, or interaction (with staff) offered to a consumer (Holbrook, 1999). As regards social robots, research findings place anthropomorphism at the centre of perceived robot beauty and of the pleasure of interacting with it (Kanda et al., 2004; Duffy, 2003). Physical human likeness leads to a higher intention to use social robots (Blut et al., 2021) and reduces the perceived threat of a robot (Lee and Liang, 2016) through the formation of positive emotions with the anthropomorphic robot (Chiang et al., 2022). In addition, Seo (2022) showed that female robots increase the satisfaction of customers with hospitality services, as gender stereotyping represents females as appealing. Liu and Tao (2022) showed that anthropomorphism appears attractive to users in its behavioural aspects: the robot that seems to develop autonomously its own decisions, opinions, and emotions, and is overall perceived as conscious, is more engaging to consumers. For Esmaeilzadeh and Vaezi, consciousness is defined as the robot's ability to perceive its internal states, innovate, and communicate while agreeing on a specific set of symbols and behaviours to conform to. This increases the empathy a consumer feels from the robot and, overall, the propensity to adopt the AI. Robot anthropomorphism also leverages the perceived quality of a service through its human-like behavioural or physical appearance, fostering adoption and even loyalty intentions towards robots (Noor et al., 2022). Similarly, the perceived humanity of a robot increases the acceptance of virtual assistants (Zhang et al., 2021). If anthropomorphism has been demonstrated to positively influence the acceptance of robots, it shows a diverging impact on users, who might feel repulsed by anthropomorphic robots if they feel that the robots' intelligence and capabilities could threaten their own human intelligence (Gursoy et al., 2019). Moreover, ChatGPT is still a recent phenomenon; not enough research has been conducted yet to confirm with confidence whether ChatGPT also falls into the case where human likeness has an attractive power over the user. In addition, the present study focuses on behavioural anthropomorphism, as ChatGPT presents no physical appearance or voice but only interacts through text. In this sense, research on the links between the physical, psychological, and behavioural dimensions of anthropomorphism remains scarce; physical anthropomorphism could support or even activate the perception of behavioural anthropomorphism, raising the question of ChatGPT's ability to drive the intention to use it from human-like behaviour alone. Consequently, we question the impact of ChatGPT's anthropomorphism on the intention to use it.
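The experiential value dimensions hypothesised above as drivers of the intention to use ChatGPT amount to one structural regression with intention as the outcome. Purely as an illustration, the sketch below assembles that specification in lavaan-style syntax, as accepted by SEM tools such as semopy or lavaan; the short construct labels are our own shorthand, and the paper's actual estimation procedure may differ.

```python
# Hypothetical lavaan-style structural specification for the hypotheses above.
# Each predictor is one experiential value dimension; 'intention' is the outcome.
HYPOTHESES = {
    "H1": "efficiency",
    "H2": "excellence",
    "H3": "status",
    "H4": "esteem",
    "H5": "play",
    "H6": "aesthetics",
}

def structural_spec(hypotheses: dict, outcome: str = "intention") -> str:
    """Build one regression line: outcome ~ predictor1 + predictor2 + ..."""
    predictors = " + ".join(hypotheses.values())
    return f"{outcome} ~ {predictors}"

spec = structural_spec(HYPOTHESES)
print(spec)
# intention ~ efficiency + excellence + status + esteem + play + aesthetics
```

In a full analysis each predictor would additionally be defined as a latent variable measured by its Table 5 items (e.g. `efficiency =~ eff1 + eff2 + ...`) before fitting the model to survey data.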
