Technological Forecasting & Social Change 201 (2024) 123202

A shared journey: Experiential perspective and empirical evidence of virtual social robot ChatGPT's priori acceptance

Amelie Abadie a, Soumyadeb Chowdhury b, Sachin Kumar Mangla c,*

a Marketing Department, TBS Business School, Lot La Colline II, Route de Nouasseur, Casablanca, Morocco
b Information, Operations and Management Sciences Department, TBS Business School, 1 Place Alphonse Jourdain, 31068 Toulouse, France
c Research Centre - Digital Circular Economy for Sustainable Development Goals (DCE-SDG), Jindal Global Business School, O P Jindal Global University, Sonepat, India
* Corresponding author. E-mail addresses: sachinmangl@gmail.com, smangla@jgu.edu.in (S.K. Mangla)

Keywords: Social robot; ChatGPT; Priori acceptance; UTAUT; Consumer value creation framework; Managerial usage intention

Abstract

Due to recent technological advancements, social robots are becoming increasingly prevalent in the consumer space. ChatGPT, a virtual social robot, has captured significant attention from the mass media and academic practitioners alike since its release in November 2022. This attention arises from its remarkable capabilities, as well as the potential challenges it poses to society and various business sectors. In light of these developments, we developed a theoretical model based on the Unified Theory of Acceptance and Use of Technology and a consumer value typology centred around consumer experiences to examine the influence of experiential factors on the intention to use ChatGPT and subsequently collaborate with it for co-creating content among business managers. To test this model, we conducted a survey of 195 business managers in the UK and employed partial least squares structural equation modelling (PLS-SEM) for the analysis. Our findings indicate that the efficiency, excellence, meaningfulness of recommendations, and conversational ability of ChatGPT will influence the behavioural intention to use it during the priori acceptance stage. Based on these findings, we suggest that organisations should thoughtfully consider and strategize the deployment of ChatGPT applications to ensure their acceptance, eventual adoption, and subsequent collaboration between ChatGPT and managers for content creation or problem-solving.
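The analysis summarised above (experiential antecedents of ChatGPT use driving behavioural intention, which in turn supports manager-ChatGPT co-creation) was estimated with PLS-SEM on the survey data. As a purely illustrative, non-authoritative sketch of how such a latent-variable model could be specified and estimated in Python, the snippet below uses the semopy package, which fits covariance-based SEM rather than PLS-SEM, so it approximates the analytical approach rather than reproducing the authors' procedure. The construct subset, item names (eff1, int1, ...) and the survey file are hypothetical placeholders, not the paper's actual questionnaire.

```python
# Hedged sketch only: covariance-based SEM (semopy), approximating the
# PLS-SEM analysis described in the abstract. Item and file names are
# hypothetical placeholders, not the authors' instrument.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# Measurement model: each latent construct is measured by survey items.
Efficiency  =~ eff1 + eff2 + eff3
Excellence  =~ exc1 + exc2 + exc3
Ethics      =~ eth1 + eth2 + eth3
Intention   =~ int1 + int2 + int3
CoCreation  =~ coc1 + coc2 + coc3

# Structural model: experiential antecedents -> intention -> co-creation.
Intention  ~ Efficiency + Excellence + Ethics
CoCreation ~ Intention
"""

def main() -> None:
    # Hypothetical file of Likert-scale responses, one row per respondent.
    data = pd.read_csv("chatgpt_survey_responses.csv")
    model = Model(MODEL_DESC)
    model.fit(data)           # maximum-likelihood estimation
    print(model.inspect())    # loadings, path coefficients, p-values

if __name__ == "__main__":
    main()
```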
1 Introduction

Since the inception of robotic applications, such as factory robots in product assembly lines, scientific exploration robots in space and under the ocean, military robots for surveillance and diffusing bombs, and transportation drones, which were primarily operated by human controllers, these have been transformed into autonomous or semi-autonomous robots that are designed to alleviate pressing challenges of society (Sheridan, 2020). In this context, the term 'social robots' has been coined in the scientific literature. While there is no consensus over the definition of social robots, they are usually defined as autonomous agents that can communicate with humans and can act in a socially appropriate manner. Ideally, the existing definitions consider the characteristics of social robots, with the assumption that they will operate in public spaces and therefore should have a physical body (Fong et al., 2003). The current definitions have yet to consider social robots in virtual space, such as the metaverse or any online platform.

Amidst the COVID-19 outbreak, social robots were deployed with varied uses in actual scenarios (Aymerich-Franch, 2020). The pandemic-induced restrictions, such as social distancing and quarantine, led to the use of social robots as supportive aids in healthcare services (Aymerich-Franch, 2020). Thus, social robots played a vital role in containing the transmission of COVID-19 by performing specific tasks, such as monitoring patients and aiding healthcare personnel, thus minimizing the spread of the disease (Javaid et al., 2020). Additionally, studies have shown that quarantine measures and isolation have had adverse effects on people's mental health and overall well-being (Violant-Holz et al., 2020). Hence, social robots could potentially assist in promoting well-being during a pandemic (Yang et al., 2020). This led to the popularity of these applications in the mass media as well as in the academic literature.

OpenAI's ChatGPT was released in November 2022 and raised considerable interest for its groundbreaking approach to AI-generated content, which produced complex original texts in response to a user's question. ChatGPT is a cutting-edge AI language model that leverages generative AI techniques to provide algorithm-generated conversational responses to question prompts (van Dis et al., 2023). The outputs from generative AI models are almost indistinguishable from human-generated content, as they are trained using nearly everything available on the web (e.g., around 45 terabytes of text data in the case of ChatGPT). The model can be trained to perform specific tasks, such as preparing slides in a specific style, writing marketing campaigns for a specific demographic, online gaming commentary, and generating high-resolution images (Chui et al., 2022a). The recent widespread global adoption of ChatGPT has demonstrated the tremendous range of use cases for the technology, including software development and testing, poetry, essays, business letters, and contracts (Metz, 2022; Reed, 2022; Tung, 2023; Dowling and Lucey, 2023). ChatGPT is having a transversal impact over organisations, ranging from enabling scalable and automated marketing personalisation, to generating optimized operation sequences, coding information systems, quickening and scaling up simulations and experiments for research and development, or simply answering complex risk and legal interrogations of managers (Chui et al., 2022b). ChatGPT prospects take a wide space of applications across industries and businesses (Table 1) while demonstrating the transformative potential of globally increasing the efficiency and diversity of human value production (Zhang et al., 2023; Shaji et al., 2023; Frederico, 2023).
The launch of ChatGPT has caught the attention of scholars regardless of discipline. The popular press has also engaged in discussions around the implications of ChatGPT and, more broadly, of generative AI, highlighting the many potential promises and pitfalls of these systems. The definitions of social robots found in most academic articles appear to lack consistency, leaving the theoretical framework within this context somewhat ambiguous (Sarrica et al., 2020). Predominantly, social robots are defined as autonomous agents that can engage in significant social interactions with humans. Their engagement style depends on their applications, roles within specific environments, and adherence to established social and cultural norms. These definitions portray social robots as complex machines, with analytical and computational abilities surpassing those of humans, designed to emotionally engage with people through communication, play, and facial cues. ChatGPT is a form of virtual social robot which possesses autonomy, can sense and respond to environmental prompts (like answering questions), can interact with humans (through virtual conversations), and understands and adheres to societal norms (due to its programming that incorporates, to a certain extent, societal values, barring instances of security breaches or algorithmic failures). Consequently, ChatGPT fulfills four aspects of social robots encapsulated in most foundational definitions (Fong et al., 2003; Duffy, 2003; Sarrica et al., 2020). However, it lacks a physical presence. We argue that the current definitions overlook the concept of social robots in a virtual space, and they are neither comprehensive nor unequivocal. For instance, the prevailing definition neglects to consider the cultural background and context in which social robots will be deployed.

Given the exponential popularity of virtual social robots like ChatGPT, which can be extensively used for natural language processing tasks such as text generation, language translation, and generating answers to a plethora of questions, and can disrupt various sectors in business such as the service industry, managerial decision making in different contexts, marketing and sales, and human resource management, it is necessary to understand the perception of managers regarding this new technology. For the technology to successfully operate in and be integrated into the business environment, it has to be accepted by managers, because they will be responsible for developing strategies to facilitate ChatGPT–employee collaboration based on their own perception.

Existing studies have shown that social robots can transform the service encounter of a consumer through novel and emotionally charged interactive experiences (Larivière et al., 2017), and such is the case with ChatGPT. Therefore, ChatGPT, like virtual social robots leveraging generative AI capabilities, will be a crucial technology that can potentially be considered the workforce of the future in a wide range of business settings and operations. For example, existing studies have discussed compelling cases where social robots will find their way within organisations as AI technology evolves (Henkel et al., 2020), and may improve the working conditions of employees (Goeldner et al., 2015), which will significantly enhance employee productivity, business productivity and the competitive advantage of firms (Kopp et al., 2021).

Considering the virtual nature and generative AI capabilities of ChatGPT, its diffusion is likely to be faster within business organisations because of its superior analytical and computational capabilities compared to humans, its interactive features, and its ability to solve problems. Since ChatGPT is a very new social robot, we have not yet come across any studies in the literature on its adoption by managers within organisations, which leads us to the following research question:

RQ: Which factors will drive and inhibit the intention to use ChatGPT by business managers that will facilitate ChatGPT–manager collaboration for content creation?
Grounded in the bodies of literature dealing with technology adoption and consumer value creation, we integrate the unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al., 2003) and the marketing paradigm of experiential consumer value (Holbrook, 1999) to develop a model. This model aims at classifying antecedents to the acceptance of social robots into experiential features and viewing value as co-created, respectively, to (1) provide the experiential explanation and make sense of the relevant interpersonal factors discussed in the existing literature on the use of social robots and (2) anchor the view that humans and AI collaborate with the intention of co-creating value. By doing so, we aspire to bring a realistic perspective of the acceptance of social robots closer to the current context and provide future research with a comprehensive framework we believe to be consistent with the development of a society where robots and humans would associate as social counterparts and neighbours (Kanda et al., 2004). In this study, we examine the a priori acceptability phase, when users form their first judgments about the technology after first/initial interactions (Bobillier-Chaumon and Dubois, 2009). We cannot objectively measure the acceptance (Terrade et al., 2009) and appropriation phases (Barcenilla and Bastien, 2009) because the technology is still evolving, and we are yet to see full-fledged business applications. In this context, it is worth pointing out that a review on the acceptability of social robots conducted by David et al. (2022) found that around 74 % of articles in their database examined social robots' a priori acceptance phase, despite these robots (such as NAO, Era-Robot, Pepper) already being operational for several years in the health, education, and service domains (Al-Taee et al., 2016; Bietz, 2018; Niemelä et al., 2019; Cormons et al., 2020).

Table 1
Sectoral applications of ChatGPT (derived from Richey et al., 2023; Dwivedi et al., 2023 and Budhwar et al., 2023).

Education. Benefits: support to teaching and learning activities, as ChatGPT fosters more engaging and interactive educational tools and spaces. Risks: incorrect information taught and learnt due to data-driven biases.
Gaming. Benefits: unsupervised generation and personalization of sandbox game scenarios and spaces. Risks: higher unpredictability of game use and less controllable gaming outcomes and purposes.
Media. Benefits: diversification of media content and format and increasing productivity in content creation. Risks: lower monitoring of information quality and truthfulness.
Advertising. Benefits: automation of ad generation (synthetic advertising) and personalization of ads, supporting and extending the capabilities of ad creators. Risks: blurring responsibility and accountability of businesses over advertising content.
Software engineering. Benefits: automation and extension of data engineers' capabilities. Risks: increasing weight of algorithmic organisation of social networks, social media and communication software (black-box information and user visibility, filter bubbles, data privacy risks).
E-commerce. Benefits: increasing ease of platform and website development and responsivity of e-services and chatbots to customer requests. Risks: increasing risks of e-commerce misuse and fraud.
Healthcare. Benefits: increase in accuracy and efficiency of electronic health record systems and access to medical services (from AI caregivers). Risks: increasing responsibility of machines over human patients' health.
Finance. Benefits: increase in responsiveness and reliability of customer service; more accurate identification of aberrant transactions and fraud. Risks: reduction in human contact and monitoring of financial services.
Logistics. Benefits: increase in data-driven supply chain management efficiency and agility; more efficient integration of customer expectations and demand in operations. Risks: ChatGPT cannot currently replace human operations managers.
This study broadens research on social robots by, first, building on and expanding the theoretical developments of Duffy et al. (1999) introducing AI self-awareness, DiSalvo and Gemperle (2003) considering their anthropomorphia, Kreijns et al. (2007) demonstrating their usefulness to collective goals, or Lee and Lee (2020) presenting social robots as service providers as efficient as humans, to integrate social robots into the context of an experience of co-creation of consumption between an artificially intelligent service staff and a human customer. In this line, we synthesize findings about social robots to incorporate them into a comprehensive and versatile framework, consumer experience (Holbrook, 1999). Secondly, with this theoretical lens, the present research enlarges studies of social robots to include the influence of symbolic individual and social goals of human users as hedonic motivations, seeking recognition of how they interact with robots. In this matter, our research extends research on the ethics of social robots to their co-existence with other intrinsically human ideals such as esteem, spirituality, or status. Third, we show the intersections of the narrative that "computers are social actors" (Nass and Moon, 2000) and the experiential value of their social role, considering robots as co-creators of social and inner experiences with a user. This strengthens research into shopping experience marketing through the scrutiny of symbols and values shared by AI and humans (Bolton et al., 2018), for instance making room for thought on the emergence of feelings of the 'uncanny valley' in the context of the shopping experience.

The article is structured as follows. First, we present the background literature and identify gaps in knowledge in Section 2. Next, we propose and discuss the theoretical model of our research in Section 3. Section 4 presents the methodology of our study, followed by the findings in Section 5. The discussion is presented in Section 6 and the research implications in Section 7. Section 8 outlines the conclusions and future work.
2 Literature review

2.1 Social robots

Breazeal and Scassellati (1999) developed an algorithmic system that categorises human postures to find the appropriate action for a given set of observed inputs before interacting with a human. By searching for postures when observing a human, AI demonstrates higher social intelligence. Breazeal and Scassellati (1999) advanced the view that if robots have the computing abilities to build complex human behaviours, they require a system to consistently project themselves into the next behaviour and account for past ones, based on their perception of a human user. This "attention system" therefore increases a robot's ability to socially interact with a human user by collecting cues to what behavioural strategies, such as avoidance or engagement, and what emotions and motivations it should display to the user. This follows Duffy et al.'s (1999) introduction of the concept of social robots, which are defined as not only interacting with third parties but also as having developed an awareness of their own mental state and the mental state observed by those third parties. In the words of Duffy et al. (1999), social robots "behave in ways that are conducive to their own goals and those of their community" in the complex and variable environment of a social group. According to Wirtz et al. (2018), service robots are "system-based, autonomous, and adaptive interfaces that interact, communicate, and provide services to an organisation's customers". Social robots exhibit heuristics similar to those of humans and are accorded personalities from users (Banks, 2020). Within the existing literature, commonly agreed properties associated with social robots range from social consciousness to empathy and sociability (Table 2). Duffy et al.'s (1999) perspective first considered them to possess social capabilities as having capabilities to react and deliberately act while sensing their environment. For Fong et al. (2003), social robots are socially interactive in the sense that they assess other entities, such as humans or machines, in a heterogeneous group, and interpret their position in society in a personal way based on their recorded past experience. For Breazeal (2002) and Bartneck and Forlizzi (2004), the nature of social robots is rooted in sociability, as is the ability to mirror the social context of human interactions through anthropomorphism and non-verbal signals added to AI.

Table 2
ChatGPT social robot properties.

Social consciousness (Duffy et al., 1999). General capabilities implied: embodied AI agent; autonomous or semi-autonomous; reactive and deliberate. ChatGPT capabilities: embodied into a screen interface and the identity of ChatGPT; autonomous; reactive and deliberatively advising.
Social interactivity (Fong et al., 2003). General capabilities implied: personal perception and interpretation of the world according to a recorded past experience; identification of other agents, human, animal or machine, and communication within a heterogeneous group; enactment according to a social role identified within a group. ChatGPT capabilities: possesses a view of the world from web data, framed into ethical rules programmed by Open AI developers; can identify the language of the user but needs the user to disclose personal information to identify them comprehensively; acts as a subordinate oriented towards the user with the central objective to serve them.
Sociability (Bartneck and Forlizzi, 2004; Breazeal, 2002). General capabilities implied: understanding of and relatedness to humans (empathy); mirroring of human social context through anthropomorphy and lifelike features; verbal and non-verbal signalling to humans. ChatGPT capabilities: displays signals of empathy and support to the user; mirrors verbal anthropomorphic features only; verbal signals only, in textual form.
DiSalvo and Gemperle (2003) made a seminal contribution by focusing on anthropomorphism in robots, which continues to gain momentum with technological advances in AI related to emotional and social intelligence. Shin and Choo (2011) discussed socially interactive robots that act autonomously and communicate in constant interaction with humans and other machines, which means that they need to understand the emotions of humans to function efficiently. Such social skills are important for user learning towards collective and individual goals (Kreijns et al., 2007). Social bots that can help users improve their skills and show responsiveness lead to higher adoption by people, thus embedding into the fundamentals of the service-dominant logic (SDL) theory, which refocuses the value of goods and services purchases towards the experience and capacity a consumer can derive from them (Vargo and Lusch, 2004). The SDL advocates for customer-centrism to build competitive advantage; such centrism is based on the emergence of a service as a process of co-creation, where consumption and provision of the service continuously interact to raise the value of the experience, transcending the acquisition of the service itself. Research in this area has primarily focused on assistance robots used in healthcare, mobility, disability care, or education (Kelly et al., 2022), focusing on the interpersonal aspect of social robots. Even though this topic has a long history in management research, it is currently being reinforced by academics and follows a favourable context. For example, Lee and Lee (2020) point out that customers increasingly prefer minimal human contact during their shopping experience, with AI robots providing higher customer satisfaction than their human counterparts (Bolton et al., 2018).

However, social robots are perceived with greater distrust than their human counterparts, with this phenomenon decreasing for anthropomorphic robots (Edwards et al., 2019). Adult users prefer human-like computers, while children are more comfortable with cartoon-like robots (Tung, 2016). In this matter, other scholars draw attention to the 'uncanny valley' effect, which emerges from a cognitive dissonance sensed by a human user between the features they instinctively expect from a human likeness and what they perceive in a human-like robot, thus raising a feeling of discomfort or disgust (Kätsyri et al., 2015). This, therefore, plays an important role in reducing the positive impact of anthropomorphism on the acceptance of social robots, as argued by Yam et al. (2021), who advocate for the dehumanization of robots to decrease human-likeness expectations from users and increase social robot acceptance in the Japanese service industry. Humans confronted with the contradiction of a realistic humanoid robot suffer from a discrepancy between their anticipation of a human and their perception of something that is not fully human (Mori et al., 2012). Indeed, the anthropomorphic dimensions of social robots imply that a user could unconsciously attribute this gap between what should resemble a human and what they see of a robot to illness, death, or zombification (Diel et al., 2021), leading to less trust and engagement from the user and, overall, the reversal of perceived usefulness into the intuition of impairment (Destephe et al., 2015). A comparison between social robots and computer interfaces has shown that users with autism need less human help to interact with a social robot than with a computer (Pop et al., 2013).
If we were to place the uncanny valley on a continuum, it would start with industrial robots at one end, completely non-humanoid (eliciting neutral emotional responses from humans); continuing along, we reach a point where robots are designed to look very human-like, but just enough to be detectable (the emotional response becomes negative); and as we move out of the valley and approach the far end of the spectrum, we find entities that are virtually indistinguishable from real humans. These indistinguishable ones might be sophisticated androids or computer-generated humans that look and move just like real people (e.g., deepfakes). At this point, the emotional response becomes positive because our brains are no longer able to detect the imperfections, and we accept them as fully human (Cheetham et al., 2011). The discomfort caused by human-like entities in the uncanny valley stems from these entities violating our innate social norms and expectations, i.e., humans have ingrained rules and expectations for social interactions, and human-like entities are expected to follow these norms (Moore, 2012). Moreover, human-like entities lack signs of life or consciousness, which can trigger fears about human identity and the uniqueness of human consciousness, and may elicit the idea of being replaced, causing fear and scepticism (MacDorman and Ishiguro, 2006). The variations in response to the human likeness of digitally created faces can also be attributed to factors like individual differences and cultural backgrounds (Burleigh et al., 2013).

Social robots can "respond to and trigger human emotions" (Henschel et al., 2020) to build interpersonal relationships and participate in communities on a larger scale. Consumers demand more pleasure from intelligent agents and a kind of "fool's licence" from social intelligent systems, as Dodgson et al. (2013) showed. Reciprocity and honesty are also expected from social robots (McEneaney, 2013). Social robots have trust, affection (Picard, 1999) or the ability to love (Samani et al., 2010), or detect weak signals (Heylen et al., 2009). For example, the most recognized work on social robots concerns highly sensitive social situations, such as autism (Mejia and Kajikawa, 2017; Robins et al., 2005). Personalisation, connectivity, and reliability give social robots the ability to innovate their services to support vulnerable users (Khaksar et al., 2016). Within the singular context of the recent Covid-19 pandemic, social robots have played a key role in restoring mimicked social interactions to support the successful enforcement of the social distancing necessary in such a context, and to manage the monitoring and inhibition of the associated danger of infection, especially for healing practitioners (Yang et al., 2020). Aymerich-Franch (2020) studied 240 use cases during this pandemic and classified social robots into three care functions: linking people with caregivers, protecting, and supporting humans' well-being. Therefore, social robots mirror and automatize a 24/7 social presence for answering health and well-being considerations of individuals. Further to healthcare, they mitigate the feelings of loneliness and boredom of families during the lockdowns of the Covid-19 pandemic.

Tan et al. (2021) discuss the ethical implications of humans using social robots. The first problem arises from the long-term habituation of users and the potential dependency that consumers might build on social robots. If such AI robots are used in the care of disabled people, they could gradually lose their autonomy by relying on a social robot. Social robots can also control decisions about impressionable users, infantilise them, or isolate them from other humans. In these first cases, human integrity is challenged by the democratization of social robots, while a second problem arises from the integrity of the AI itself. Social bots have social functions, behaviours, and appearance, but they lack authenticity and morality, leading to a preference for insentient machines over humans to meet social needs (Tan et al., 2021). However, users tend to cling to stereotypes when it comes to placing a social robot in a social position. They prefer extraverted social robots for healthcare and masculine-looking robots for security tasks (Tay et al., 2014).
Research on robot acceptance is often embedded in the technology acceptance model (TAM) (Venkatesh et al., 2003) or its further development, the unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al., 2003). The technology acceptance model developed by Davis (1989) bases users' intention to use a technology on the utilitarian logic of the cost-benefit ratio of that technology. Social robots fall within the framework of TAM (Shin and Choo, 2011). Robots that are seen as useful and easy to use attract demand and consumption. Saari et al. (2022) applied TAM 3 to various market segments: the proactive early AI adopter is contrasted with the mass market. The authors focused on AI functionalities to distinguish AI designs that are suitable for both segments. They showed that perceived ease of use does not influence the intention to use social AI, while perceived usefulness is an important factor in a sample that is largely representative of the mass market. This leads to the perspective that users accept AI not because of its accessibility but as a way to advance themselves. Mass users see social robots as a benefit that they enjoy. In contrast, early adopters look for reliable and transferable outcomes. However, utilitarian expectations related to simplicity and usefulness have been complemented by expectations of enjoyment (Pillai et al., 2020), attractiveness (Thongsri et al., 2018), or well-being (Meyer-Waarden and Cloarec, 2022) by a large number of authors. The majority of the existing evidence divides the factors influencing the intention to use social robots into utilitarian and hedonistic antecedents (De Graaf and Allouch, 2013).

The field of social robots also has its roots in the 'computers are social actors' (CASA) perspective advocated by Nass and Moon (2000). The authors assumed that users unconsciously confuse humans and computers and tend to ascribe a psychological nature to a computer, leading them to display social biases and form interpersonal relationships with computers. For example, users attributed action risks to AI collaborators and anticipated moral hazards, lying, and problems in choosing robots (McEneaney, 2013). Atkinson et al. (2012) and Kanda et al. (2004) pioneered the CASA view of AI by articulating interpersonal trust and its impact on the success of AI-human collaboration or human vulnerability. The matching person technology theory developed by Scherer and Craddock (2002) is representative of the interpersonal view expressed in the acceptance of social robots. Using a grounded theory approach, the authors qualitatively analysed the relationships established by users with assistive technologies and emphasised that applying generic rules to user acceptance of social robots does not do justice to the reality of users. Khaksar et al. (2016) also used grounded theory to analyse the degree of innovation of social robots. Shin and Choo (2011) showed that the adaptability and sociability of a social robot complement its usefulness and usability, as they lead users to develop a positive attitude towards a social robot, while the perception of being in the company of a psychologically aware robot directly increases the intention to use it. The authors distinguished between adaptivity and adaptability, the latter describing how robots dynamically adapt to new solutions in changing environments. They showed that robots with the ability to dynamically adapt to a Rousseauian social contract are more valued by their human counterparts.
As an interpersonal partner or social actor, the social robot is incorporated into a cohort or individual's life through trust (Kim et al., 2020; Gaudiello et al., 2016). Social robots show their trustworthiness by presenting their social skills, ethical integrity, and goodwill (Kim et al., 2020). In this way, the social robot's intelligence, autonomy, anthropomorphism, and empathy play a key role in developing trust among users. Gursoy et al. (2019) introduced the concept of social robot integration for consumers as a long-term use of a social robot and proposed individual findings that encourage reflection on the development of a relationship between a robot and a human, showing that trust in social robots may not last as long as predicted. According to the authors, the emotions and motivation of the user are the most important factors for the integration of a robot into the household. The appearance of a social robot that resembles a human increases the intention to use it but decreases the willingness to integrate it. For users, anthropomorphism is an advantage when using intelligent social robots without worrying about attachment, but it evokes anxiety in users that challenges their cognitive integrity in the long run when it comes to integrating them into everyday life. The findings of Gursoy et al. (2019) showed the myopia of consumers in relation to the current increase in the demand for humanoid social robots. In this sense, they accept social robots but lose their enthusiasm when they realise the potential future loss of control or competition in interactions. According to Gaudiello et al. (2016), perceived control is a key factor for acceptance, and trust develops more efficiently when users consider the functionalities of social robots and make an abstraction from the whole social robot entity.

Henschel et al. (2020) argued that social robotics and human–robot interaction should be viewed through the lens of neuroscience. Indeed, even though social robots are becoming more capable of exhibiting a social presence, the remaining gap between human and robot cognition prevents social robots from fully meeting user expectations. Social robots appear alien to humans because the latter mainly use the frontal cortex, a part of the brain responsible for cognitive reasoning rather than intuitive emotions. Users develop engagement and empathy towards social robots, but these arise from the mental perception of the social robot's state rather than spontaneous interaction with it. Currently, humans' unconscious intuition prevents them from fully accepting a social robot, as deep parts of the brain react and confront the user with inexplicable feelings of rejection. The sight of an inanimate, human-looking object neurologically explains the uncanny valley phenomenon (Rosenthal-von der Pütten et al., 2019).
2.2 ChatGPT (GPT-3)

One of the most remarkable specimens of social robots, GPT-3, known as ChatGPT, is currently gaining more and more public and scientific attention as a "cultural sensation" (Thorp, 2023). Launched by the organisation Open AI, this conversational agent uses natural language processing and machine learning from data circulating on the internet to engage in discussions with around 1 million users around the globe (Mollman, 2022). ChatGPT can be assimilated into a social robot as it presents the consciousness of a social being (Duffy et al., 1999), possesses one-to-one interactivity (Fong et al., 2003), and shows sociability in its textual interface (Breazeal, 2002), as presented in Table 2. Journalists have also experimented with writing articles together with ChatGPT, such as at The Guardian or BuzzFeed (Pavlik, 2023). GPT-3 shows high abilities to answer mathematical problems, find and synthesize information, understand business stakes (Kecht et al., 2023), recommend decisions (Phillips et al., 2022), or even write poetry (Köbis and Mossink, 2021).

GPT-3 is a type of generative AI, which can generate content autonomously as text, plans, or programmes from the analysis of massive amounts of data. Such generative AI presents unprecedented capabilities to create and provide responses in a human-like manner (Pavlik, 2023). GPT-3 also learns from its own interactions with users to improve future answers and personalise suggestions (Phillips et al., 2022). For instance, organisations can train such robots to deal with customers' requests and assist customer relationship management (Kecht et al., 2023). Managers' perceptions of and attitudes towards ChatGPT have been presented by Cardon et al. (2023) as mainly positive and enthusiastic, as ChatGPT already supports research, idea creation, or writing messages and reports, and is seen as helping to improve their communication and efficiency at work, needing less time to yield outcomes of higher quality. However, ChatGPT is only moderately appreciated by teachers, as it raises concerns about student dishonesty or even its value for the learning process, but it is perceived as bringing benefits in terms of students' motivation and engagement, or of teachers' responsivity and productivity, helping them focus on high-level tasks (Iqbal et al., 2022). Finally, academics perceive ChatGPT as a high-potential and disruptive technology for economies and humanity, as developed by the consortium of academics in the recent research of Dwivedi et al. (2023). ChatGPT would help managers thoroughly and relevantly in their daily tasks but would increase risks of poor reputation, offensive content, plagiarism, loss of privacy, or inaccurate information. Therefore, the democratization of ChatGPT should be framed with policy regulations and academic research.
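The paper does not prescribe an implementation, but the customer-request scenario mentioned above (Kecht et al., 2023) can be illustrated with a minimal sketch against OpenAI's public chat-completions API, the successor interface to the GPT-3 family discussed in this section. The model name, system prompt, and ticket text are placeholders chosen for the example, not details taken from the study.

```python
# Minimal, illustrative sketch: drafting a reply to a customer request with
# OpenAI's chat-completions API. Model name, prompt wording and the ticket
# are placeholders, not part of the study.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

ticket = "My order #4821 arrived damaged. Can I get a replacement?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any available chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You are a polite customer-support assistant. "
                "Apologise, confirm the issue, and propose a concrete next step."
            ),
        },
        {"role": "user", "content": ticket},
    ],
    temperature=0.3,
)

print(response.choices[0].message.content)  # draft for a human agent to review
```

Keeping a human agent in the loop to review such drafts is consistent with the manager–ChatGPT collaboration for content creation that this study examines.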
Many studies about GPT-3 express concerns about the ethical implications of such tools, regarding the human value of future creations (Else, 2023) or the social aspects of interacting with GPT-3 (Krügel et al., 2023). For instance, Gao et al. (2022) demonstrated that researchers were not able to distinguish a scientific text written by GPT-3 from a text written by a peer, and that artificial intelligence proves useful to detect artificially written documents. Even users tend to underestimate how GPT-3 influences their decisions and may follow even amoral suggestions (Krügel et al., 2023). O'Connor (2022) also discussed whether GPT-3 augments or replaces the learning process of students, and Castelvecchi (2022) asked whether GPT-3 could replace programmers. Huh (2023) compared Korean students' and GPT-3's performance. On the other hand, Köbis and Mossink (2021) also demonstrated that human readers would still have a preference for poems written by humans (even novice poets) rather than poems written by GPT-2, linking the ChatGPT technology to the unconscious rejection of robots highlighted by Rosenthal-von der Pütten et al. (2019). Generative AI still presents inconsistencies and a "tortured" writing style on occasions (Cabanac et al., 2021).

Regarding the social implications of GPT-3, Henrickson (2023), for instance, studied the case of thanabots, applications of GPT-3 to recreate the interaction with a deceased relative to assist mourning journeys. The emotions built through a thanabot are based on the deceased person's rhetorical style and on mimicking shared past experiences. Although GPT-3 does not present an anthropomorphic physique (voice and humanoid body), it is one of the best depictions of social intelligence and acts as a rich platform for future transformations into highly realistic social robots (Henrickson, 2023). In healthcare, GPT-3 plays the role of a colleague for nurses or a fellow student who shares the care of patients (Aydın and Karaarslan, 2022). Currently, most research on GPT-3 describes the conversational robot, raising ontological and ethical questions about it (Thorp, 2023; Henrickson, 2023; O'Connor, 2022; Castelvecchi, 2022), but studies on how consumers react to and experience ChatGPT remain scarce (Krügel et al., 2023; Köbis and Mossink, 2021).

As regards social robots, ChatGPT is useful to complete social robots' conversational capabilities, yet it currently involves diverse risks in the form of sporadic AI failures (Balagopalan et al., 2023) and shows prospects of a long-term negative impact on organisations' performance and on individuals' well-being (Thongsri et al., 2018; Lee and Moon, 2015). For instance, as the global web nurtures social robots, this could lead to the dissemination of hate speech or false information to the human subjects of social interactions (Cao et al., 2021). Since ChatGPT perpetually evolves through interactions, its application in social robotics could inadvertently expose private user communications to the public, indiscriminately merging this sensitive data with its broader pool of learned information. ChatGPT does not have the ability to understand or judge what is morally right or to recognize private information, which could drastically impair service quality or consumer trust in service quality in the case of social robots (Prakash and Das, 2021). If ChatGPT is unable to distinguish between private and public interactions, there is an inherent risk that personal information shared in confidence could be inadvertently disclosed in other, unrelated contexts. This loss of privacy is not just a personal issue but can also have legal implications, particularly if the information shared involves health, financial data, or other sensitive subjects protected under privacy laws like the GDPR or HIPAA. The learning process of generative AI applications is continuously evolving and does not inherently discriminate between what it should and should not remember or share. Without strict data governance protocols, the AI might unknowingly share private conversations, thinking it is providing helpful or relevant information based on previous interactions. This scenario could lead to uncomfortable situations or, worse, harm someone if sensitive information about personal struggles, identity, or private relationships is revealed to the wrong audience. In scenarios where private information is disclosed publicly, there is a risk of that data being used unethically or maliciously. This misuse could range from targeted advertising based on private information to more nefarious outcomes like blackmail or identity theft. Using ChatGPT for social robot applications demonstrates a risk to service performance itself (Lee and Moon, 2015), as the capability to push the technicity of a process and to project its technologic outcome into transcendent storytelling remains specific to humans (Jonas, 1982). In other words, academics cannot currently determine whether social robots would seek constant progress in designing services or products and overall bring a growing social value, or more generally whether generative AI would relate to the rather human endless desire for betterment. Rather, ChatGPT's performance shows risks of harming individuals' well-being, of reinforcing certain users' affective need for, or even dependence on, conversations with social robots, or of increasing the delegation of social responsibilities to generative AI, thereby preventing human users from developing personal competences to navigate throughout our society by themselves (Xian, 2021). As users become aware of potential privacy infringements, their trust in social robotics could diminish. This erosion of trust is harmful not only to the user experience but also to the reputation and reliability of the companies developing and utilizing these technologies.
2.3 UTAUT and Holbrook's experience-based typology of consumer value

The UTAUT remains one of the most commonly used models for the analysis of AI acceptance (Kelly et al., 2022). Extensions of the TAM, such as the theory of reasoned action, the theory of planned behaviour, and other frameworks such as the motivation model or the innovation diffusion theory (Oye et al., 2014), argue that the intention to use a particular technology is influenced by performance and effort expectancies, and by two parameters from the environment of the user: social influence and facilitating conditions (Venkatesh et al., 2003). Social influence and facilitating conditions place the potential user in a context where technology acceptance receives the appraisal of surrounding peers and where the potential user's means and environment support the acceptance of this technology. In this model of technology acceptance, gender, age, experience with technologies, and voluntariness of use act as moderators of the relationship between the precedent factors and the intention to use technology. In healthcare, social robots' acceptance integrates the concepts of trust and risk as antecedents (Prakash and Das, 2021; Fan et al., 2020). Regarding acceptance of AI in the field of GPT-3 services for the public, Kuberkar and Singhal (2020) advanced anthropomorphism as a factor of intention to use, while Cao et al. (2021) shed light on personal well-being and personal development concerns as players in the process of accepting AI assistance. Hedonism, security, and sustainability are also raised as arguments for consumers to accept AI (Gansser and Reich, 2021). However, De Graaf and Allouch (2013) and Gansser and Reich (2021), for instance, argued that the UTAUT lacks consideration for hedonic factors of technology acceptance, such as attractiveness and enjoyment. Such variables arise from the experience during the use of a robot and imply that robots are mistaken for social actors by the user.
consumption is to the consumer’s demics cannot currently determine whether social robots would seek senses (Holbrook, 1999) constant progress in designing services or products, and overall bring a growing social value, or more generally whether generative AI would Oriented towards others and active, status and ethics are extrinsic relate to the rather human endless desire for betterment Rather, and intrinsic, respectively, the first depicting how others see the social ChatGPT’s performance shows risks of harming individuals’ well-being value of the consumption of one product or service and the second how or the potential reinforcement of cases of certain users in affective need the consumer sees the moral value of their own consumption The latter, or even dependence on conversations with social robots, or the increasing delegation of social responsibilities to generative AI, thereby Table 3 preventing human users from developing personal competences to Consumer value typology based on consumer experience, from Holbrook (1999) navigate throughout our society by themselves (Xian, 2021) As users become aware of potential privacy infringements, their trust in social Extrinsic Intrinsic robotics could diminish This erosion of trust is harmful not only to the user experience but also to the reputation and reliability of the com Self-oriented Active Efficiency Play panies developing and utilizing these technologies Other-oriented Reactive Excellence Aesthetics Active Status Ethics Reactive Esteem Spirituality 6 A Abadie et al Technological Forecasting& Social Change 201 (2024) 123202 the ethical dimension of experience, can therefore be defined positively process of mapping a comprehensive meaning or symbol of their actions as one’s adjustment to the moral values shared in one’s social group or and social interactions from the inner interpretation of context into their normatively, as the adjustment of one individual to a set of universal own values In this line, as customers enact service co-creation, they moral standards, for instance, transparency, trustworthiness, and re incorporate the purchasing experience into the construction of sense as a sponsibility (Laczniak and Murphy, 2019) Therefore, the ethical value stimulus interpreted in coherence with life-long integrity (Holbrook, of an experience represents the level of attention given to ethical obli 1999) In parallel, service providers shall therefore guide the experience gations, personal values and moral beliefs and the moral identity of to orient its value to the offer of a constructive and positive sense for the receivers within a service (Sun, 2020) Esteem and spirituality di customer mensions are also other-oriented but reactive, extrinsic and intrinsic, respectively Esteem relates to how others treat the consumer who uses a Regarding AI use, Chen et al (2021) showed that experiential value certain product or service, while spirituality relates to how the consumer influences the intention to buy an AI service and to cooperate for the respects their own faith and personal well-being by consuming a product service to result in the best quality possible, hence to co-create the value or service (Holbrook, 1999) The author derives the spiritual dimension of a service with AI Autonomous social robots also result in higher levels of experience from an individual’s search for meaning and values to feel of hedonic values (play and aesthetics) and symbolic values (status and connected to the complete self and the surrounding environment 
Oriented towards others and active, status and ethics are extrinsic and intrinsic, respectively, the first depicting how others see the social value of the consumption of one product or service and the second how the consumer sees the moral value of their own consumption. The latter, the ethical dimension of experience, can therefore be defined positively as one's adjustment to the moral values shared in one's social group or, normatively, as the adjustment of one individual to a set of universal moral standards, for instance, transparency, trustworthiness, and responsibility (Laczniak and Murphy, 2019). Therefore, the ethical value of an experience represents the level of attention given to ethical obligations, personal values and moral beliefs, and the moral identity of receivers within a service (Sun, 2020). The esteem and spirituality dimensions are also other-oriented but reactive, extrinsic and intrinsic, respectively. Esteem relates to how others treat the consumer who uses a certain product or service, while spirituality relates to how the consumer respects their own faith and personal well-being by consuming a product or service (Holbrook, 1999). The author derives the spiritual dimension of experience from an individual's search for meaning and values to feel connected to the complete self and the surrounding environment (McKee, 2003). In an increasingly volatile, uncertain, complex, and ambiguous world, the transformational and inclusion capabilities materialized within experienced spirituality gain increasing attention from businesses and academics (Husemann and Eckhardt, 2019; Santana and Botelho, 2019). In Holbrook's (1999) theoretical setting, all eight dimensions of the consumer experience share common orientations about the objective of the experience and the actions in the experience (Table 3). Therefore, none of these dimensions is perfectly isolated from the others in the construction of an experience value in consumption. For instance, the values of efficiency, excellence, and aesthetics have in common that they are not socially rooted but individually assessed. Similarly, ethics and spirituality come from the objective of gaining intrinsic value as a consumer.

The variation in moral perspectives among individuals is a complex phenomenon, deeply rooted in cultural, societal, psychological, and personal factors. One person's view of an immoral act might differ substantially from another's based on these influences. This concept becomes particularly evident when we examine cultural variations across different societies (Hofstede, 2011). In this context, Hofstede's cultural dimensions theory helps to understand the impact of a society's culture on the values of its members and how these values relate to behaviour. It implies that people's beliefs, values, and behavioural norms can vary significantly between cultures. For instance, in highly individualistic societies, people are more likely to make moral judgments based on personal beliefs, rights, and freedoms, sometimes emphasizing the importance of standing up for one's convictions even if it goes against societal norms. Conversely, in collectivist cultures, morality is often framed in terms of social harmony and community welfare; therefore, disrupting these elements might be deemed highly immoral. Similarly, in cultures with high uncertainty avoidance, deviating from established norms or engaging in behaviour perceived as unpredictable may be considered immoral, whereas societies with low uncertainty avoidance might be more tolerant of such actions. This variation underscores the importance of cultural sensitivity and awareness in the increasingly turbulent digital landscape.
Academics have demonstrated that some dimensions can influence others in certain contexts without global consensus; for example, play can support ethical values by incentivizing the goodwill of the co-creators of an experience (Sheetal et al., 2022). Lemke et al. (2011) also underlined the variability of the impact of status or esteem on perceived excellence, while Gentile et al. (2007) reminded us that sensorial cues of the experience, as aesthetical value, will have a fluctuating influence on other features of the consumption experience depending on the context. In sum, Holbrook (1999) provided a set of modalities for the experience of consuming a service that can combine to create a large diversity of different experiential situations. Overall, the theoretical narrative developed by Holbrook (1999) is based on the service-dominant logic, which considers service as co-created by customers towards a transformative value into capacities, and extends it from its utilitarian aspects to dimensions of hedonism and sense-making. Sense-making defines, according to Weick (1995), the continuous process of mapping a comprehensive meaning or symbol of one's actions and social interactions from the inner interpretation of context into one's own values. In this line, as customers enact service co-creation, they incorporate the purchasing experience into the construction of sense as a stimulus interpreted in coherence with life-long integrity (Holbrook, 1999). In parallel, service providers shall therefore guide the experience to orient its value towards the offer of a constructive and positive sense for the customer.

Regarding AI use, Chen et al. (2021) showed that experiential value influences the intention to buy an AI service and to cooperate for the service to result in the best quality possible, hence to co-create the value of a service with AI. Autonomous social robots also result in higher levels of hedonic values (play and aesthetics) and symbolic values (status and ethics) for consumers (Frank et al., 2021). This typology of values also frames the acceptance of online AI chatbots and online purchases (Yin et al., 2023). Overall, the experiential value framework is regarded as useful to denote affective and symbolic cues behind AI acceptance in the completion of cognitive influences; AI aims at increasingly socializing and sharing experiences with consumers (Puntoni et al., 2021).

This study aims to understand the influence of the experiential aspects anticipated from the use of social robots on their acceptance. The choice of integrating UTAUT is justified by the willingness to provide a panoptic framework for social robots that neutralizes the social value of AI, as opposed to the computers-are-social-actors paradigm (Nass and Moon, 2000). By doing so, we aim to show that social attributes emerge unmistakably even when they are not looked for. On the other hand, the integration of this research into the Typology of Experiential Consumer Value (Holbrook, 1999) aims at proposing a comprehensive framework for the development of stimuli that influence social robots' acceptance. This theoretical framework will help understand and classify antecedents of social robots' use into symbolic and hedonic cues from the experience anticipated by users. Finally, we show the experiential meaning of each antecedent and provide a reasoning coherent with the perspective that users' experience shapes their collaboration with a social robot and therefore leads to a value co-created between AI and humans.

2.4 Knowledge gaps

Recently, Puntoni et al. (2021) stated that although AI technologies for the consumer are considered with objectivity as neutral objects provided to the public, they convey social specificities and interactional experiences, as in human-to-human services. In this sense, the authors advocate the undertaking of research questions about feelings behind the experience of AI use for consumers: feelings of exploitation through personal data collection; the well-being felt from personalisation; the fear for self-integrity when delegating to AI assistants; the alienation felt from being categorized by an AI; and the dilemma between felt companionship and fear of vulnerability when developing interpersonal relationships with AI. Academics undertaking the aforementioned proposals have focused on perceived data risks (Dinev et al., 2016), personalisation (Liu and Tao, 2022), and fears of self-integrity, error, or discrimination for social robots (Cao et al., 2021). However, existing findings emerge in isolation, and no study overlaps felt experiences and attitudes associated with a unified perspective. Furthermore, although semantics of collaboration and socio-technical value appear frequently in such studies (Chowdhury et al., 2022), demonstrations of AI value as a co-creation between users and robots remain nascent and scarce (Huang et al., 2020; Kaartemo and Helkkula, 2018). Research on social robot acceptance currently encompasses perspectives of a one-sided view of AI use values defined by the user (Krügel et al., 2023; Cao et al., 2021).
ethical feature of its an learning from data Siegel (1999) explained the interpersonal experi swers would require constant dialogue with institutional regulators In ences footprint into neurobiological repercussions Cetin and Dincer this sense, questions remain on what ethical value the experience of (2014) showed the positive impact of service experience, such as interacting with ChatGPT can bring, between a normative view behind recognition, willingness to help, and shared expertise, on customer the design of international conversational technology, and the positive loyalty Chen and Lin (2015) demonstrated that the experience felt by view of national regulation, and what potential caveats could appear users of online blogs positively influenced their intention to continue to participate, which lasted in a sustainable relationship between blog On the other hand, similarly to the human-to-human applications of members Consumers feeling a social exchange with service providers, Holbrook’s (1999) typology, these eight dimensions of experience with feeling supported by them, showed intention to repurchase such services ChatGPT could interact with each other, first because they intrinsically and therefore sustain a recurrent contact Overall, common reasoning include intersectional aspects and second because users apply the explains experience as a factor of commitment and engagement (Roy complexity of social interactions to intelligent technology (Nass and et al., 2021) Moon, 2000), implying indirect links For instance, as in Abadie et al (2019), trust in an AI counterpart can mitigate the attention of one user Since ChatGPT was publicly released, AI entered the field of con to the efficiency or excellence of the robot Arsenyan and Mirowska sumer marketing with a business-to-consumer approach (Mollman, (2021) also discussed that the lack of ethicality in AI-generated social 2022) Consumers would, however, interact with GPT-3 aiming at ful media influencers could repulse users, preventing them from enjoying filling a task or objective, such as students to support their homework other values of the experience of an AI Efficiency, under the parameter (O’Connor, 2023), users interested in learning about topics, or jour of perceived usefulness, is impacted by a user’s privacy concerns about a nalists and bloggers (Pavlik, 2023), making managers and students the robot, hence the excellence dimension of its user experience (Ho and Lin, main user groups of GPT-3 nowadays Consequently, consumers of this 2010) Perceived organizational support for AI, leveraging the status free service come to GPT-3 with the idea of finding help with a project, value of using AI, can increase trust in its reliability, such as excellence, personal, or oriented towards others They enact and react within the and in its performance, such as efficiency, felt by an employee (Park and conversation with GPT-3 and attribute shortcut meanings to value GPT- Jung, 2021) Also, Laitinen et al (2016) suggested that social robots 3 and yield an outcome co-created with humans and GPT-3 as sources raising a user’s self-esteem with appraisal messages could increase the perceived ethical value of interacting with the AI, implying that the 3 Model development and hypotheses robot displays exclusive support to the individual as a rule Overall, in line with the CASA argumentation (Nass and Moon, 2000), users of According to the UTAUT, users build positive attitudes and intention ChatGPT would intersect utilitarian dimensions with hedonic, 
social, to use AI from the expectation of performance and effort, and from the and self-oriented aspects when interacting with it as they would in the perception of facilitating conditions and social influences (Venkatesh context of a service experience with a human Following Puntoni et al et al., 2003) For instance, users show higher intention to interact with a (2021), we built a theoretical model that studies and discusses such social robot because they expect specific design features such as the factors under the comprehensive lens of Holbrook’s experience frame ability to provide a personalized performance (Gao and Huang, 2019) or work (Holbrook, 1999) This model will enhance the ability of AI de feel subjective norms valorise the use of AI (Taylor and Todd, 1995) On signers to build social robots as sustainable collaborators of humans by the other hand, consumers are now increasingly exposed to social robots integrating bases that create value and relationships for social AI and as service providers and retrieve experiences from them (Puntoni et al., letting them acknowledge a holistic view of the feelings experienced by 2021) Experiences bring a co-created service value between customers users to ensure responsible ways to incorporate social robots in our and providers, in eight dimensions which simultaneously come from society efficiency, excellence, play, aesthetics, status, ethics, esteem, and spiri tuality (Holbrook, 1999) Experience dimensions will enhance the In this line of arguments, the present model hypothesises that effi intention to engage in repeated interactions and engagement in a rela ciency, excellence, status, esteem, play, aesthetics, ethics, and spiritu tionship between the service provider and the service user (Roy et al., ality, as anticipated aspects of the ChatGPT experience, enhance the 2021; Chen and Lin, 2015) UTAUT and experiential value help under intention to use ChatGPT, which has a significant impact on stand and explain how acceptance of social robots arises in interpersonal ChatGPT–human co-creation The model harmonizes the dimensions of relationships between social robots and humans The association of ChatGPT use by instrumenting with AI use antecedents underlined in the these theories also offers the perspective that value is raised in co- existing literature (Tables 4 and 5) It instruments efficiency with creation between both parts and embodies the concept of collabora perceived usefulness and ease of use and excellence with ChatGPT tion in the value brought by AI (Chowdhury et al., 2022) In this regard, assurance, arguing that precedent factors contribute highly to making a academics confirmed that concepts that echo experiential value signif social robot look efficient and performant (Ho and Lin, 2010; Davis, icantly influence the intention to interact with social robots: perceived 1989) Status and esteem are, respectively proxied by ChatGPT social efficiency and excellence (Davis, 1989), enjoyment (Xian, 2021), recognition, subjective norms and ChatGPT personalisation On the one anthropomorphic aesthetics (Liu and Tao, 2022), social recognition hand, subjective norms and social recognition of using ChatGPT make a (Meyer-Waarden and Cloarec, 2022), ethical concerns (Del Río-Lanza consumer expect to reach a certain social status for their use of ChatGPT et al., 2009), individualisation (Gao and Huang, 2019), and well-being (Meyer-Waarden and Cloarec, 2022; Taylor and Todd, 1995) On the (Meyer-Waarden and Cloarec, 2022) other hand, 
personalisation makes a user feel respected and recognized for their self-worth (Gao and Huang, 2019) The model also covers play Asked, for instance, for spiritual guidance to find meaning in one’s and aesthetics with hedonic motivation and ChatGPT enjoyment and own life, ChatGPT would provide explicit knowledge about self- with ChatGPT anthropomorphism The theoretical argument advances realization practices as reflecting one’s own values, goals, and capabil that enjoyment and hedonic motivation are bases for play (Xian, 2021; ities, or self-care If it offers guidelines and satiates the desire for in Pillai et al., 2020) and that anthropomorphism can be viewed as beauty formation about spirituality, it cannot play the role of a spiritual leader due to its ability to please the senses of users (Liu and Tao, 2022) without influencing a user’s free will, therefore requiring scrutiny on Finally, the present model instruments ethics with concepts of ChatGPT how such generative AI can offer a spiritual impact on users On the procedural and interactional justice and spirituality with ChatGPT other hand, the ethical dimension in the experience of using ChatGPT empowerment and ChatGPT well-being The last instruments are justi would first entail a normative framework of what is moral or not around fied with the narrative that justice increases expectations of ethical the globe regardless of cultural variation, and this normative framework collaboration from a user facing the prospect of using ChatGPT (Del Río- under the supervision of developers Second, as ChatGPT touches Lanza et al., 2009) and that well-being and empowerment contribute to 8 A Abadie et al Technological Forecasting& Social Change 201 (2024) 123202 Table 4 Table 5 Model constructs definitions Model constructs, associated items, and references Construct Definition Adapted from Construct Associated items Adapted references Davis (1989) ChatGPT use Cost benefit – The productivity a Davis (1989) Efficiency • Using Chat GPT will improve my efficiency user expects when using ChatGPT, in Excellence work performance Ho and Lin (2010) terms of outcomes for efforts Ho and Lin (2010) Johnson and Grayson ChatGPT use Reliability – The service quality a Johnson and Grayson Status • Using Chat GPT will enhance my (2005) excellence user expects from using ChatGPT, (2005) Esteem productivity relative to risk, hence reliability Taylor and Todd (1995) Taylor and Todd (1995) ChatGPT use Human appraisal – The appraisal a Meyer-Waarden and • Chat GPT will be a useful tool in my Meyer-Waarden and status user expects from other humans Cloarec (2022) work Cloarec (2022) when using ChatGPT Gao and Huang (2019) ChatGPT use Robot appraisal – The recognition Johnson and Grayson • My interaction with Chat GPT will Gao and Huang (2019) esteem a user expects from ChatGPT when (2005) be clear and understandable Johnson and Grayson using ChatGPT Xian (2021) Pillai et al (2005) ChatGPT use Gaming features – The (2020) • I will find it easy to get Chat GPT to play entertainment a user expects when do what I want it to do (continued on next page) using ChatGPT Liu and Tao (2022) ChatGPT use Beauty – The sensory exaltation a • Interacting with Chat GPT won’t aesthetics user expects to feel when using Del Río-Lanza et al require a lot of mental effort ChatGPT (2009) ChatGPT use Morality – The moral standards and • Interactions with Chat GPT will be ethics ethical obligation a user expects Naranjo-Zolotov et al reliable when using ChatGPT (2019) Meyer-Waarden ChatGPT use 
Self-connection – How connected and Cloarec (2022) • Interaction with ChatGPT will be spirituality to themselves and to the credible environment and how belonging to something bigger a user expects to • Interaction data will be protected feel when using ChatGPT • I feel relieved to interact with Chat adjusting to the spiritual goals of a user (Meyer-Waarden and Cloarec, GPT 2022; Naranjo-Zolotov et al., 2019) • Given by the Chat GPT’s track The considerable impact of efficiency, here in the form of perceived record, I have no reservations about usefulness and perceived simplicity of use, on self-reported utilization acting on its advice among 120 managers was developed and confirmed by Davis in Davis, • Given the Chat GPT’s track record, I 1989, enhancing research on technology acceptance By delivering have good reason to doubt his or utilitarian value, efficiency in retail also raises consumers’ intentions to her competence use AI while shopping (Pillai et al., 2020) In education, Huprich (2016) • I can rely on the Chat GPT to showed that if AI applications are shown to improve students’ learning undertake a thorough analysis of processes, colleges would be willing to integrate them By demonstrating the situation before advising me that effort expectancy and usage convenience work in conjunction with • I have to be cautious about acting performance to impact the intention to use AI, Xian (2021) and Gansser on the advice of Chat GPT because and Reich (2021) supported the importance of efficiency as a cause for its opinions are questionable the intention to use AI Efficiency favourably influences the intention to • I cannot confidently depend on use for both early adopters and mass consumers, according to Saari et al Chat GPT since it may complicate (2022) The same criteria are important for intention to use, according my affairs by careless work to Shin and Choo (2011), because social robot efficiency depends on (reversed) flexibility and sociability However, Kuciapski (2017) reminds us that a • Interaction with ChatGPT will be technology’s usefulness depends on the context, task, or purpose that the favourable user has in mind The idea of compatibility between technology and the • People whose opinions I value will user, as well as their surroundings, as a moderator adversely affecting encourage me to use Chat GPT the impact of efficiency on the intention to use, was presented by Kar • People who are important to me ahanna et al (2006) The use of social robots will be based on a user’s will support me to use Chat GPT perceived cognitive demand (Thongsri et al., 2018) The impact of • The senior management in my ChatGPT’s efficiency on how users intend to use it depends on user- organisation will encourage using specific contextual elements, even if it exhibits great capabilities to ChatGPT respond to requests and produce content For instance, ChatGPT could • People who influence my first be seen as helpful, but later appear to be unsuited to societal per behaviour think that I should use formance (Krügel et al., 2023) As a result, we question the impact of ChatGPT ChatGPT on the intention to use it in hypothesis(H) 1 • It would give me a more acceptable image of myself H1 : ChatGPT use efficiency will have a significant effect on its • It would improve how my friends intention to use and family perceive me • It would give me better social The excellence of a social robot, being a matter of assurance, reduced recognition risk and concern about the quality of the outcome offered to the user, 
• Using ChatGPT will increase my increasing the intention to use intelligent banking services online (Ho profile in the organisation and Lin, 2010) The credibility of the service provided by AI mobile • Using ChatGPT will be a status banking apps also positively influences the intention to use (Yu, 2012) symbol in the organisation In the fashion industry, Lee and Moon (2015) questioned the willingness • I feel that the Chat GPT system recommendations are tailored to my interests • I feel that the Chat GPT system recommendations are personalized • I feel that the Chat GPT system recommendations are personalized for my use • I feel that the Chat GPT system recommendations are delivered in a timely way • I would feel a sense of personal loss if I could no longer use a specific Chat GPT system • If I share my problems with the Chat GPT system, I feel he or she would respond caringly 9 A Abadie et al Technological Forecasting& Social Change 201 (2024) 123202 Table 5 (continued ) Table 5 (continued ) Construct Associated items Adapted references Construct Associated items Adapted references Xian (2021) Pillai et al Venkatesh et al., 2003 Play • The Chat GPT system displays a (2020) B´ehavioral • I intend to use ChatGPT in the next Aesthetics warm and caring attitude towards Liu and Tao (2022) Intention 6 months Gao and Huang, 2019 Ethics me Del Río-Lanza et al Chowdhury et al., 2022 (2009) Co-creation • I anticipate i would use ChatGPT in Spirituality • I can talk freely with the Chat GPT the next 6 months system about my problems at work Naranjo-Zolotov et al and know that he or she will want (2019) Meyer-Waarden • I plan to continue using ChatGPT in to listen and Cloarec (2022) the next 6 months • Using Chat GPT is fun for me • I intend to recommend ChatGPT • Using Chat GPT is very enjoyable use • Using Chat GPT is very • I intend to use ChatGPT to leverage entertaining superior analytical and • Using ChatGPT is a joy computational capabilities • Using ChatGPT is an adventure • Using ChatGPT is a thrill • I will feel comfortable co-creating • Using ChatGPT will be rewarding content with ChatGPT • Using ChatGPT will be a pleasant • I will feel comfortable to solve experience problems with ChatGPT • Chat GPT services have • The ChatGPT service will allow me consciousness to have my say to co-create • Chat GPT services have a mind of • My ChatGPT experience is their own enhanced as a result of co-creation • Chat GPT services have their own ability and capability free will • I will enjoy collaborating with • Chat GPT services will experience ChatGPT to solve problems/ complete tasks emotions • I think my problem was resolved by • I see ChatGPT as an officemate/ teammate the Chat GPT in the right way • I think the Chat GPT has been • I do not have any issues working with ChatGPT and leveraging its guided with good policies and superior analytical capabilities practices for dealing with problems • I think ChatGPT as an assistant/ • Despite the trouble caused by the workmate will be easy to get along problem, the Chat GPT was able to with respond adequately • The Chat GPT proved flexible in to use online clothing personalisation software and confirmed con solving the problem sumers’ preference on performance and the risks of contrasts between • The Chat GPT tried to solve the expected quality and purchased quality Mcknight et al (2011) showed problem as quickly that excellence increased trust in the robot, which secured the intention • The Chat GPT showed interest in to use it For Cao et al (2021), 
excellence lies in susceptibility, and fear my problem that the robot will develop suggestions with negative effects regarding • The Chat GPT did everything the user’s goals On the other hand, the complexity of the tasks sur possible to solve my problem rounding the use of a robot decreases its perceived excellence, first in • The Chat GPT was honest when terms of explainability and transparency, which appear lower for com dealing with my problem plex tasks fulfilled with AI, and also as users tend to lower their ex • The Chat GPT proved able and to pectations of AI assistants for goals that appear complex to them have enough authority to solve the (Larsson and Heintz, 2020) In contrast, cognitive trust (Johnson and problem Grayson, 2005) is the overestimation of present expectations for the • The Chat GPT dealt with me performance of a purchase based on past observation of performance courteously when solving the and can lead excellence to present an even higher impact than expected problem on the intention to use ChatGPT This is demonstrated by the frequency • The Chat GPT showed interest in with which robots are mentioned in the media (Thorp and H H., 2023) being fair when solving the In addition, ChatGPT weaknesses have recently been brought up by problem academics and journalists (Thorp and H H., 2023), potentially harming • The treatment and communication the expected excellence that consumers view in ChatGPT Therefore, we with the Chat GPT to solve the interrogate the influence of the excellence of ChatGPT use on the problem were acceptable intention to use ChatGPT: • Chat GPT use is very important to me H2 : ChatGPT use excellence will have a significant effect on the • Chat GPT I use is meaningful to me intention to use it • Chat GPT activities are personally meaningful to me ChatGPT’s use status is presently instrumented with subjective • Based on Chat GPT usage, my norms (Venkatesh et al., 2003; Taylor and Todd, 1995) and social impact on what happens in the recognition (Meyer-Waarden and Cloarec, 2022) The UTAUT integrates community is large the normative value of a technology as an antecedent of intention to use • Based on Chat GPT usage, I have it (Venkatesh et al., 2003) For instance, the fear that a robot would significant influence over what transgress social norms in its suggestions and allegations decreases the happens in the community intention to use it (Cao et al., 2021) The success of the use of AI in an • Based on Chat GPT usage, I have a organisation is pushed by leadership, organizational support and col great deal of control over what leagues sharing knowledge about it, proving that the social context happens in the community regarding the use of a robot influences its success to effectively socially • If I used ChatGPT my life quality integrate into the organisation (Chowdhury et al., 2022) If the use of would be improved to ideal • If I used this ChatGPT my well- 10 being would improve • If I used this ChatGPT, I would feel happier A Abadie et al Technological Forecasting& Social Change 201 (2024) 123202 such innovative technologies is felt as vital for organisations, managers robot is a “joy”, “thrill”, or “adventure” and relieves stress, which will demonstrate a higher willingness to integrate robots (Ochmann and significantly boost the intention to use it (Pillai et al., 2020) Moreover, Laumer, 2020) For instance, users of AI mobile banking services justify social robots, being engaging and dynamic, are more popularly accepted their shift to this channel 
through the “demonetization effect”, the among shoppers (Pillai et al., 2020) For Hui et al., 2007, positive atti democratization of mobile banking in their social environment (Sobti, tudes towards robots anchor into AI, making tasks more interesting and 2019) The use of AI robots is caused by a social need to connect with fun for students The educative environment that frames acceptance and other humans (Thongsri et al., 2018) For example, the ability of an AI successful AI use should appear enjoyable (Kashive et al., 2020) Meyer- application to benefit networking in education increases the intention to Waarden and Cloarec (2022) confirmed that hedonism persuades con use it (Kashive et al., 2020) Social influence increases the intention to sumers to use autonomous cars Likewise, Xian (2021) considered he use social robots (Xian, 2021); recognition retrieved by users from their donic motivation to be a key factor of the intention to use robots in the peers using a robot positively influences their intention to use it (Meyer- leisure services sector However, the capabilities observed in ChatGPT Waarden and Cloarec, 2022) Despite these findings, the intention to use appear majorly utilitarian, computation-based, and generate mainly a robot also depends on individual user habits, such as addiction, textual data (Kecht et al., 2023) In this sense, the gamified aspect of independently of social influence (Xian, 2021) and on a personal ChatGPT emerges as low, questioning the representativity of past find mindset about new technologies, as resistance to change prevents con ings about AI and social robots in their ability to offer play to users of sumers from using robots regardless of mandates to interact with them ChatGPT Therefore, testing whether ChatGPT also brings the play value (Prakash and Das, 2021) In addition, ChatGPT is a “cultural sensation” and whether it has a role in the intention to use this robot arises as a that is highly discussed and popular currently (Thorp and H H., 2023) critical task for research We thus develop Hypothesis 5, which suggests Intention to use it might be a matter of individual attitude towards ChatGPT’s use of play significantly influences the intention to use it trends: optimism (Pillai et al., 2020), attraction, or rejection and scep ticism (Krügel et al., 2023) This leads us to formulate Hypothesis 3: H5 : ChatGPT’s use of play will have a significant effect on the intention to use it H3 : ChatGPT use status will have a significant effect on intention to use it ChatGPT’s use of aesthetics refers in the present study to behavioural anthropomorphism developed by Liu and Tao (2022) The aesthetical In this study, personalization and affective trust proxy ChatGPT value of an experience implies pleasure from the perceived beauty of a users’ esteem as an experiential value in using ChatGPT For instance, consumer product, action, environment, or interaction (with staff) the fact that a robot suits the individual requirements of a user increases offered (Holbrook, 1999) As regards social robots, research findings the intention to interact with it (Pillai et al., 2020) A robot that cour place anthropomorphism at the centre of perceived robot beauty or the teously obeys and responds to a user’s demands (Del Río-Lanza et al., pleasure to interact with it (Kanda et al., 2004; Duffy, 2003) Physical 2009) and behaves politely is more likely to be adopted by a consumer human likeness leads to a higher intention to use social robots (Blut (Kanda et al., 2004) The 
profiling capabilities demonstrated by a robot et al., 2021) and to follow their recommendations (Liu and Tao, 2022), to support user self-development also cause the intention to use it and reduces the perceived threat of a robot (Lee and Liang, 2016) (Kashive et al., 2020) Overall, interactions where the social robot en through the formation of positive emotions with the anthropomorphic ables users to express themselves through two-way communication (Gao robot (Chiang et al., 2022) In addition, Seo (2022) showed that female and Huang, 2019) and acknowledge users’ preferences are more robots increase the satisfaction of customers with hospitality services, as attractive to consumers Moreover, robots offering users a feeling of gender stereotyping represents females as appealing Liu and Tao (2022) control over decisions present higher user acceptance rates (Zarifis et al., showed that anthropomorphism appears attractive to users in its 2021) Johnson and Grayson’s (2005) concept of affective trust also behavioural aspects: the robot that seems to develop autonomously its links to self-esteem, leading consumers to form an attachment to an offer own decisions, own opinions and emotions, and overall is perceived as and trust it out of confirmation bias, rationalizing their own affections conscious, is more engaging to consumers For Esmaeilzadeh and Vaezi with biased positive expectations of the offer For social robots, we infer (2022), AI consciousness is a critical matter for the service industry similar reasoning is followed by AI users (Abadie et al., 2019), and Consciousness is defined as the robot’s ability to perceive its internal supports the esteem value of interactions with ChatGPT to impact the states, innovate, and communicate while agreeing on a specific set of intention to use it Yet, within the hierarchy of consumers’ basic needs symbols and behaviours to conform to This increases the empathy of the (Maslow, 1943), the needs for well-being, security, and social belonging robot felt by a consumer, and overall increases the propensity to adopt are priorities relative to the need to raise self-esteem In this sense, the AI Robot anthropomorphism also leverages the perceived quality of ChatGPT users’ esteem may not impact the intention to use it as much as a service through its human-like behavioural or physical appearance, other experiential value dimensions In addition, as ChatGPT’s purpose and fosters adoptions and even loyalty intentions of robots (Noor et al., is oriented towards cognitive support, esteem might play a minor 2022) Similarly, the perceived humanity of a robot increases the cosmetic role in increasing the intention to use this service Finally, acceptance of virtual assistants (Zhang et al., 2021) If Anthropomor within the field of Service Marketing, Li et al (2022) and Li et al (2019) phism has been demonstrated as a positive influence to accept robots, it have demonstrated that perceived courteousness, service responsive shows a diverging impact on users, who might feel repulsed by ness, and esteem are influenced by the physical attractiveness of service anthropomorphic robots if they felt that the robots’ intelligence and staff, as a “beauty premium” helping to forgive service failures Since capabilities could threaten their own human intelligence (Gursoy et al., ChatGPT only shows behavioural anthropomorphism and no physical 2019) Moreover, ChatGPT is still a recent phenomenon; not enough appearance; it could show a low esteem value to ChatGPT users Based research 
has been conducted yet to confirm with confidence whether on this debate, we challenge the influence of ChatGPT user esteem ChatGPT also falls into the case where human likeness has an attractive through Hypothesis 4: power on the user In addition, the present study focuses on behavioural anthropomorphism, as ChatGPT presents no physical appearance or H4 : ChatGPT user esteem will have a significant effect on the inten voice, but only interacts through text In this sense, research on physical, tion to use it psychological, and behavioural links between dimensions of anthropo morphism remains scarce In this line, physical anthropomorphism In this study, we present the experiential dimension of play value could support or even activate the perception of behavioural anthro with perceived enjoyment (Pillai et al., 2020) and hedonic motivation pomorphism, allowing interrogations on ChatGPT’s ability to raise the (Xian, 2021) According to Huang et al (2019), perceived AI value is intention to use it from human-like behaviour Consequently, we ques partly based on the fact that consumers enjoy interactions with the tion the impact of ChatGPT’s anthropomorphism on the intention to use robot In retail, similar findings confirm that collaborating with a service 11 A Abadie et al Technological Forecasting& Social Change 201 (2024) 123202 ChatGPT in Hypothesis 6 expectations, and potential consequences Hence, we propose Hypoth esis 7 on the influence of ChatGPT’s use of ethics on the intention to use H6 : ChatGPT’s use of aesthetics will have a significant effect on the ChatGPT intention to use it H7 : ChatGPT’s use of ethics will have a significant effect on the ChatGPT’s use of ethics is considered from the perspective of inter intention to use it action and procedural justice in interactions with an AI (Del Río-Lanza et al., 2009) The two concepts refer, respectively, to the perception that This study instrument uses spirituality as an experiential value with a robot respects with honesty and fairness the interests, integrity, and the meaning and impact found in humans use in Naranjo-Zolotov et al.’s feelings of a consumer, and the perception that the robot’s response to (2019) work and with user well-being in AI measured by Meyer-Waar the request of a consumer applies morally good, “right” and flexible den and Cloarec (2022) First, spirituality in consumption refers to the processes (Del Río-Lanza et al., 2009) This sense of justice from a robot introspective process of sense-making and self-development made by oriented towards society and the user triggers positive emotions in consumers from their purchase (Holbrook, 1999) The impact and consumers Cao et al (2021) also studied the impact of perceived meaning attributed to the use of AI have a positive effect on the intention severity, such as the propensity of a robot to harm society with amoral to integrate robots (Naranjo-Zolotov et al., 2019) On the other hand, the behaviours, and showed it is one of the reasons why managers would fact that a robot could improve self-satisfaction and happiness increases fear the use of robots in decision making On the other hand, social ro the prospect of using it (Meyer-Waarden and Cloarec, 2022) Personal bots, which are perceived as helping users to build a positive impact on well-being in terms of pleasing the psychological state through in society, are more accepted Naranjo-Zolotov et al (2019) showed that AI teractions with robots fosters the intention to collaborate with them empowerment capabilities 
increased human willingness to use AI within among managers (Cao et al., 2021) As a corollary, these managers put their political activities, and Gansser and Reich (2021) extended the their own spirituality as a priority, which can also reduce the intention UTAUT2 to the factor of AI sustainability, demonstrating it emerges as a to use the robot if they feel the robot would act as a threat to their own significant factor in the intention to use the robot In addition, concerns personal development Social robots can support users’ willingness to about the impact on individual privacy, development (Liu and Tao, self-accomplish or to have faith, as Tan (2020) argued the helpfulness of 2022; Taneja et al., 2014; Hui et al., 2007), personal information privacy AI “guides” in spiritual Chinese education, or as Cheong (2020) analysed (Zarifis et al., 2021; Dinev et al., 2016), and the human job market religious social robots’ popularity In contrast, Herna´ndez (2021) ana (Prakash and Das, 2021; Vu and Lim, 2022) prevent AI adoption from lysed AI spirituality and concluded that robots’ capabilities are currently materializing in a larger part of the population However, ChatGPT’s use insufficient to share spiritual values with a human: they appear to be of ethics appears currently as the focus of practitioners’ and researchers’ moral and conscientious but cannot intrinsically project themselves into attention, being questioned about its impact on the usefulness of human the world or to have faith Similarly, spirituality is not the initial purpose work (O’Connor, 2023) or the credulity of consumers socializing with of ChatGPT, which has been designed as generalised knowledge support such a robot (Krügel et al., 2023) Therefore, with organisation leaders, for the public In this sense, ChatGPT can offer spiritual value by helping institutions, and the press questioning the ethical value of ChatGPT users with their personal goals outside of the scope of college studies or (Else, 2023), consumers might be sceptical about the ethics displayed by the organisation by acting as a fitness coach (Thorp and H H., 2023), but ChatGPT, rendering it ineffective to boost the intention to use and co- we currently do not know the weight of such spiritual requests in the use creation of ChatGPT Therefore, we interrogate the impact of ChatGPT’s use of spirituality on the intention to use it in Hypothesis 8 Analysing Twitter data, Haque et al (2022) demonstrated that the public views ChatGPT as lacking critical thinking capabilities, thus not H8 : ChatGPT’s use of spirituality will have a significant effect on the matching the nuances required for ethical decision making In this sense, intention to use it an AI assistant cannot block a user from pursuing an unethical goal with its help Overall, consumers, managers, and students face the perceived Intention to use ChatGPT is adapted from Wang et al (2021), and dilemma of using an easy and efficient service while knowing it might ChatGPT–Human co-creation is adapted from Gao and Huang (2019) not prevent others from engaging in plagiarism-like work or contrib Morosan and DeFranco (2019) viewed the intention to use interactive uting to future job losses In other words, Paul et al (2023) found the robots in the hospitality sector as a premise of value co-creation, public knows that ChatGPT cannot render a user accountable for un considering not the intention to use but the intention to interact, and ethical behaviour, leading them to leverage its performance towards the 
linking this interactional behaviour to both human and robot efforts realization of negative outcomes in the short term, and distortion of Galdolage (2021) confirmed this relationship by integrating the inten morality in the long However, consumers already use web searches for tion to co-create value between a robot and a human into the UTAUT2 intimate and non-socially desirable queries (Davidowitz, 2017) Ac model, also considering the possibility of co-destruction of value in AI- cording to Bonsu and Baffour-Koduah (2023), concerns about ethics do assisted self-service In addition, Balaji and Roy (2016) considered not prevent students from trying ChatGPT, as the perceived prospect of perceived value co-creation as a factor for the intention to keep using a yielding better results and progress through use has, on average, more technology The prospect of being able to participate in the development influence on them People often gauge the appropriateness of their ac of a shared value is the latent meaning justifying the lasting use of tions based on situational variables, societal norms, and the explicit and technologies Similarly, using service-dominant logic (Vargo and Lusch, implicit rules that govern different environments (Liebrenz et al., 2023) 2004), Zhu et al (2022) showed that the continuous use of a For instance, students are typically allowed to consult materials that the sustainability-oriented platform is caused by users’ co-creative behav exam authority has deemed permissible in an open book exam How iours for the environment; hence, co-creation habits lead to lasting in ever, using an external agent like ChatGPT that has the ability to provide teractions with suitable robots Regarding ChatGPT, the preceding answers beyond the allowed resources would be considered cheating question appears highly relevant, as ChatGPT’s usefulness lies in the because it undermines the purpose of the exam and devalues the effort of conversational and informational support to humans requesting sug others who are complying with the set guidelines However, utilizing gestions and advice in order to create value, such value materializing as ChatGPT in a professional environment, especially during informal co-created between ChatGPT and the human It is precisely the under discussions could be seen as resourceful and proactive, as there are no lying co-creation between authors or students and ChatGPT that stems predefined rules being violated, and the action aligns with the contex from controversy in education, research, or journalism (Pavlik, 2023) tual norms of the setting This dichotomy illustrates that ethics are not through the difficulty of distinguishing between human and robot con necessarily static principles; rather, they are applied based on a variety tributions (Gao et al., 2022), and the unethical objective of some users to of factors that include environment, cultural norms, stakeholder save effort by interacting with ChatGPT (O’Connor, 2023; Else, 2023) 12 A Abadie et al Technological Forecasting& Social Change 201 (2024) 123202 Through the reverse causality stemming from the above findings, we Table 6 Key figures question whether the intention to interact with a social robot can Sample population data materialize into co-creation behaviour as a factor, also because co- 300 creation could also be an underlying goal of the intention to use a so Items 60 cial robot Moreover, the nature of the service offered by ChatGPT re Target sample size 45 inforces this debate, implying that shared 
knowledge and shared Incomplete responses Non-meaningful response to subjective cognitive load can be the outcome of its use Consequently, we devel Unusable responses questions; time taken to complete the survey oped Hypothesis 9 asking whether the intention to use ChatGPT has a Response validation criteria (not less than the threshold 10 min); significant impact on ChatGPT–Human co-creation: inconsistent responses to reverse coded (Checking the internal response items; contradictory responses to same item H9 : Intention to use ChatGPT will have a significant impact on consistency) rephrased and asked twice within the ChatGPT–Human co-creation survey; Eligible Responses 195 (sample adequacy statistically tested 4 Methodology and is greater than the required sample) Gender Diversity according to G power test Considering our research objectives, we employed a survey-based Country Male: 100 Female: 95 methodology to empirically test the hypotheses stemming from our Managerial Role United Kingdom (all) proposed theoretical model (see Fig 1) Survey-based primary research Industry Sector All [business manager] methodology has been extensively reported in the contemporary liter Hospitality and Tourism: 42 ature investigating the adoption of emerging technologies in business Years of employments (mean) Retail and Marketing: 49 organisations, the impact of the adoption on business productivity and Years of managerial experience Finance and Insurance: 32 firm performance, and factors impacting the adoption of these tech Employment status Manufacturing: 34 nologies from the managers’ and employees’ perspectives (Chowdhury Mean age Education and Training: 38 et al., 2022; Rodríguez-Espíndola et al., 2022) Used some form of AI-based 6.8 years 3.2 years We collected data by administering a survey that was designed to automated tools Full-time (all) examine the relationships between different constructs derived from the Use ChatGPT personally 39.2 years [30–45 years] hypotheses The items to measure each construct variable were derived Used ChatGPT for some form of All from the research literature discussed in the previous section, and are presented in Table 6 The items were measured using a 5-point Likert office task All scale (1 = completely disagree; 2 = disagree; 3 = neither agree nor All disagree; 4 = agree; 5 = completely agree) The initial instrument was pre-tested with five academic experts and eight business managers The involved in a managerial decision-making role and managing staff) and pilot testing aimed to (1) choose a comprehensive set of items that would had to have used ChatGPT-like social robots Criteria for inclusion were accurately measure the constructs and which are applicable in the case presented as screening questions at the beginning of the survey of ChatGPT; (2) examine that the statements used to measure constructs administered through the Prolific platform to ensure that all respondents are clear, easy to interpret and meaningful; (3) ensure that the proxies possessed: (1) at least three years of experience within the same orga could sufficiently measure each construct; and (4) ensure the statements nisation and sector (i.e., they had a comprehensive knowledge and un representing outcome variable (co-creation) correctly reflected co- derstanding of digital technologies being adopted and deployed with creation between ChatGPT and managers in organisations in different their organisation and in the sector) and comprehensive knowledge and contexts understanding of existing 
organizational strategies, processes, and decision making in the context of using and adopting AI applications such as ChatGPT; (2) knowledge of organizational resources, capabilities, and capacities that will help facilitate the adoption and deployment of AI-based tools such as ChatGPT; (3) comprehensive understanding and expertise regarding the adoption, implementation, use, evolution, and business practices related to AI adoption in their organisation; (4) experience related to executive decision making and strategy meetings (being aware of the company's strategy, mission, vision, goals and professionals); (5) knowledge of AI-based robotic applications being implemented in the organisation and the implications for business performance; (6) experience with an organisation that has implemented AI-based tools to automate business processes; (7) experience within an organisation that uses analytics, AI-based systems, or similar digital computing tools to leverage data-driven decision making and augment human decision making; (8) experience within an organisation that features a digital data management platform and strategy to consolidate data captured from various sources; (9) sufficient knowledge and understanding about ChatGPT, its capabilities and limitations (tested through multiple choice questions); (10) sufficient understanding of how ChatGPT can be used within their own job and broadly within the sector in different business applications and decision-making contexts (open question); (11) had the opportunity to test the ChatGPT free tool provided by OpenAI, and then use it for any professional reasons. The purpose of the inclusion criteria was to select respondents who could provide meaningful information for our analysis (i.e., have rich first-hand knowledge and the full capacity to complete the survey).

Fig. 1. Theoretical model.

The key research question examined in the study was to understand the experiential factors affecting adoption of ChatGPT-like social robots, and therefore we were required to recruit managers with specific skills, knowledge, and understanding of ChatGPT-like social robots and generative AI applications. The sample had to have sufficient knowledge of organizational strategies and decision-making processes (i.e., they are involved in a managerial decision-making role and managing staff).

Partial Least Squares Structural Equation Modelling (PLS-SEM) is an increasingly popular tool in social sciences for several reasons listed below (Hair et al., 2019), which made it a suitable choice for our study. For example, it can efficiently handle complex models with multiple latent variables and many indicators, which can be a challenge for other methods like covariance-based SEM (CB-SEM). Moreover, our sample size of 195 business managers, while adequate, is not sufficiently large; PLS-SEM is effective in scenarios where the sample size is not very large, making it an apt choice for our study. Additionally, we not only aim to understand the relationships between the variables, but also to predict the behavioural intention to use ChatGPT. PLS-SEM is a component-based approach that also places emphasis on prediction, which aligns well with our research objectives.
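To retrace the structure of the analysis, a minimal sketch of the structural model is given below. It is only an approximation of the estimation described above: it replaces the iterative PLS-SEM weighting scheme with unweighted composite scores and ordinary least squares, and the file name and item labels (e.g., chatgpt_survey.csv, eff_1) are hypothetical placeholders rather than the instrument used in this study.

```python
# A minimal sketch, not the analysis used in the paper: unweighted composite
# scores plus OLS approximate the structural paths H1-H9; real PLS-SEM would
# additionally estimate indicator weights and bootstrap path significance.
# File name and item labels are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("chatgpt_survey.csv")  # one row per respondent, 5-point Likert items

# Hypothetical item-to-construct mapping mirroring the ten model constructs.
blocks = {
    "efficiency":   ["eff_1", "eff_2", "eff_3"],
    "excellence":   ["exc_1", "exc_2", "exc_3"],
    "status":       ["sta_1", "sta_2", "sta_3"],
    "esteem":       ["est_1", "est_2", "est_3"],
    "play":         ["pla_1", "pla_2", "pla_3"],
    "aesthetics":   ["aes_1", "aes_2", "aes_3"],
    "ethics":       ["eth_1", "eth_2", "eth_3"],
    "spirituality": ["spi_1", "spi_2", "spi_3"],
    "intention":    ["int_1", "int_2", "int_3"],
    "co_creation":  ["coc_1", "coc_2", "coc_3"],
}

# Composite score per construct: standardized mean of its items.
scores = pd.DataFrame({lv: df[items].mean(axis=1) for lv, items in blocks.items()})
scores = (scores - scores.mean()) / scores.std(ddof=0)

def ols_paths(y, X):
    """Standardized path coefficients and R2 of one structural regression."""
    X1 = np.column_stack([np.ones(len(X)), X.to_numpy()])
    beta, *_ = np.linalg.lstsq(X1, y.to_numpy(), rcond=None)
    r2 = 1 - (y.to_numpy() - X1 @ beta).var() / y.to_numpy().var()
    return pd.Series(beta[1:], index=X.columns), r2

# H1-H8: experiential value dimensions -> intention; H9: intention -> co-creation.
experiential = ["efficiency", "excellence", "status", "esteem",
                "play", "aesthetics", "ethics", "spirituality"]
paths_int, r2_int = ols_paths(scores["intention"], scores[experiential])
paths_coc, r2_coc = ols_paths(scores["co_creation"], scores[["intention"]])

print("Paths to intention (H1-H8):")
print(paths_int.round(3), "\nR2 =", round(r2_int, 3))
print("Path intention -> co-creation (H9):", round(paths_coc.iloc[0], 3),
      "R2 =", round(r2_coc, 3))
```

Dedicated PLS-SEM software would, in addition, report outer loadings, reliability statistics and bootstrapped significance levels for each hypothesised path.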
The acceptance of a nascent technology is generally assessed by investigating the factors that facilitate or deter its use by prospective users. This can be extended to the understanding of how well the technology integrates into everyday managerial activities, that is, its ability to meet the needs of the managers employing it (Terrade et al., 2009). However, to ensure the technology being used serves its intended purpose, its acceptance is usually evaluated in three stages along a temporal continuum. The stages include: a priori acceptability – at this stage, the user forms an opinion about the technology based on the provided descriptions, without much use of the technology in a specific context to attain their desired goals (Bobillier-Chaumon and Dubois, 2009); acceptance – this stage pertains to studying user behaviours when they truly experience using the technology, either in a controlled environment or in their regular setting. The goal here is to understand user perceptions after using the technology, which can encompass influencing factors, judgments about the technology's characteristics, user experience, and satisfaction (Terrade et al., 2009). However, in this phase, the perception is based on a short-term usage period and may be in controlled experimental settings, which does not genuinely represent the user's actual environment; appropriation – this phase involves studying the prolonged use of the technology, potentially through longitudinal studies, to gain a better understanding of how the technology is adopted in the users' own environment (Barcenilla and Bastien, 2009). The objective is to scrutinize how the user employs the technology, the factors impacting the user experience, and how the technology can evolve within its current application and be redesigned to accommodate emerging applications.

In this research, due to the novelty of ChatGPT, we are investigating its priori acceptability. This entails that managers should possess a basic understanding and awareness of the technology, and form their perception about the intention to use, and subsequently co-create with AI, based on minimal usage, which aligns with the literature articulated above and the comprehensive screening criteria employed in our study. Recent literature, such as Budhwar et al. (2023) and Dwivedi et al. (2023), has thoroughly explored the capabilities, limitations, and potential applications of ChatGPT. These works suggest that the way in which managers and consumers will utilize this novel application not only depends on the contextual environment, which is expected to vary, but is also linked to the inherent features of the bot, as well as individual human perceptions and judgments regarding the bot's usage. Considering these insights and taking into account the initial acceptability context of our study, we propose that in these nascent stages of ChatGPT (while the technology is still in a phase of evolution), factors that shape managers' experiential perspective of the application will either drive or inhibit the intention to use it. This would be true regardless of the sector, and these factors would also influence knowledge acquisition and content co-creation. While the context of application and specific sectors play crucial roles in any study, the experiential factors analysed in this paper should remain largely consistent across various applications within a given sector. This is due to our focus on users' perceptions of the technology, in the absence of any extended period of usage for a designated purpose in order to achieve particular goals. Furthermore, we are not evaluating the performance of the technology within any specific sector or application context. As such, we believe that the sample selected for this research is adequately equipped to address our research question.

5 Findings

We used the G*Power tool to test the sample adequacy by executing an F-test for a linear multiple regression fixed model with R2 deviation from zero. The test returned a required sample size of 107 for a medium effect size (0.15), an alpha value of 0.05, and power set at 0.95, given five predictor constructs in our model, showing that our sample size (195) is sufficient to validate the findings (Table 7).

Table 7
G*Power parameters and output
Test family: F tests
Statistical test: linear multiple regression: fixed model, R2 deviation from zero
Type of power analysis: a priori: compute required sample size
Input parameters: effect size f2 = 0.15; α err prob = 0.05; power (1-β err prob) = 0.95; number of predictors = 2
Output: noncentrality parameter λ = 16.05; critical F = 3.08; numerator df = 2; denominator df = 104; total sample size = 107; actual power = 0.95
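The a priori power analysis summarised in Table 7 can be retraced with a short script. The sketch below is not the G*Power software itself; it assumes the same parametrization as the reported run (noncentrality λ = f2·N for the regression F-test) and should return the same required sample size of 107.

```python
# Minimal sketch of the a priori power computation in Table 7 (not G*Power
# itself): F-test for linear multiple regression, R2 deviation from zero,
# with f2 = 0.15, alpha = 0.05, target power = 0.95 and 2 predictors.
from scipy import stats

def achieved_power(n, n_predictors=2, f2=0.15, alpha=0.05):
    """Power of the regression F-test at total sample size n (lambda = f2 * n)."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    lam = f2 * n
    f_crit = stats.f.ppf(1 - alpha, df1, df2)       # critical F under H0
    power = 1 - stats.ncf.cdf(f_crit, df1, df2, lam)  # noncentral F under H1
    return power, f_crit, lam

def required_n(target_power=0.95, n_predictors=2, f2=0.15, alpha=0.05):
    """Smallest total sample size whose achieved power reaches the target."""
    n = n_predictors + 2
    while achieved_power(n, n_predictors, f2, alpha)[0] < target_power:
        n += 1
    return n

n = required_n()
power, f_crit, lam = achieved_power(n)
print(f"N = {n}, critical F = {f_crit:.2f}, lambda = {lam:.2f}, power = {power:.3f}")
# Expected output matching Table 7: N = 107, critical F of about 3.08,
# lambda = 16.05, power of about 0.95.
```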
The reliability of the constructs was tested using Cronbach's alpha. The results presented in Table 8 show very desirable values of Cronbach's alpha (above 0.8), indicating a high level of reliability for the scales used. We followed the recommendations of Schmiedel et al. (2014) to verify the validity of the proxy items measuring each construct (i.e., construct validity). Exploratory factor analysis (EFA) employing principal component analysis (PCA) showed that all the proxies used in the study were formed into ten factors (i.e., equal to the number of constructs in our study). The cross-loading proxies were discarded, and the PCA was executed until parsimonious factors were obtained.

To test common methods bias, we used Harman's single factor test (Fuller et al., 2016) by performing confirmatory factor analysis (CFA). The results of this analysis showed that the common methods variance (CMV) is 0.35, which is less than the threshold value (0.50) reported in the literature. Second, we applied the marker variable technique (Lindell and Whitney, 2001), which showed an insignificant relationship (r = 0.021, p > 0.41, non-significant) between the original theoretical model and the revised marker variable-based model. Therefore, we considered the potential impact of CMB calculated using CMV to be non-substantial.

Table 8 (Construct reliability: alpha, CR, AVE) and Table 10 (Goodness-of-fit of the SEM model: CFI, GFI, TLI, RMSEA, χ2/df ratio): 0.901 0.021 2.371; Efficiency EF 0.934 0.972 0.833; Model 0.916 0.928 0.922 0.936 0.650; Excellence EX 0.849 0.940 0.531; 0.998 0.999 0.987; Status ST 0.898 0.970 0.624; 0.938 0.942 0.738; Esteem ES 0.867 0.906 0.529; 0.885 0.902 0.518; Play PL 0.900 0.926 0.709; 0.908 0.985 0.724; Aesthetics AE; Ethics ET; Spirituality SO; Intention to use IU; Co-creation CO.
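As an illustration of the reliability and common-method-variance checks reported above, the sketch below computes Cronbach's alpha per construct and the share of variance captured by a single unrotated factor. The paper's Harman test is CFA-based, so the principal-component version here is only a simpler stand-in, and the data file and the "construct_item" column naming are again hypothetical.

```python
# Illustrative reliability and common-method-variance checks; the survey file
# and "<construct>_<item>" column naming are hypothetical, and the Harman-style
# check below uses an unrotated principal component rather than the CFA-based
# test reported in the paper.
import numpy as np
import pandas as pd

df = pd.read_csv("chatgpt_survey.csv")
blocks = {}
for col in df.columns:                       # group items by construct prefix
    blocks.setdefault(col.rsplit("_", 1)[0], []).append(col)

def cronbach_alpha(items):
    """Cronbach's alpha for one construct's set of items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print("Cronbach's alpha per construct (benchmark > 0.8):")
for construct, cols in blocks.items():
    print(f"  {construct}: {cronbach_alpha(df[cols]):.3f}")

# Harman-style single-factor check: variance share of the first unrotated
# component across all items pooled together (benchmark < 0.50).
eigenvalues = np.linalg.eigvalsh(np.corrcoef(df.to_numpy(), rowvar=False))
print(f"First-factor share of variance: {eigenvalues.max() / eigenvalues.sum():.3f}")
```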
Following the recommendations outlined in Kock (2022), we examined the nonlinear bivariate causality direction ratio (NLBCDR) to test endogeneity. The acceptable value is >0.7 for non-significant endogeneity. For our model, the value of NLBCDR was 0.81, which is greater than the threshold value; therefore, we concluded that endogeneity is not a major concern to test our proposed model.

The convergent validity of each dimension was tested using average variance extracted (AVE) calculations and composite reliability coefficients (SCR). Table 6 shows the matrix of correlations of the main constructs, and the diagonal shows the square root of AVE. The AVE for each construct is >0.5 and CR is >0.7 (Tables 8 and 9), which are acceptable, showing the validity of the constructs. We also found that the square root of the AVE is greater than all the inter-construct correlations, providing evidence of sufficient discriminant validity.

The model was further verified by examining the coefficient of determination (R2) values, which showed the predictive power of the two endogenous (predictor) variables. The values indicated that the full model explained 59 % of the variance for intention to use ChatGPT, 66 % of the variance for AI-powered decision support systems, 56 % of the variance for supply chain resilience, 61 % of the variance for co-creation, and met threshold requirements (>0.36) to demonstrate large predictive power according to Wetzels et al. (2009). We also examined the effect size (f2) to determine the contribution of an exogenous construct to an endogenous construct. We found that all the direct values were >0.15 but this

experiential values (Holbrook, 1999) and the UTAUT (Venkatesh et al., 2003), and to extend the theoretical and empirical underpinnings of social robot acceptance (Duffy, 2003) to symbols and senses experienced by the user when interacting with a social robot. Empirical outcomes demonstrated that the efficiency, excellence, status, esteem, ethics, and spirituality expected by a consumer from their experience with ChatGPT are decisive factors for their intention to use ChatGPT, resulting in ChatGPT–human co-creation. These findings proved consistent with reasoning on AI acceptance in the field of service marketing, which depicts the AI interaction journey of a consumer as a progression from experienced stimuli to attitude towards the robot, which materialized into the co-created value of the service between the consumer and AI producer to the service-dominant logic (Pillai et al., 2020; Gao and Huang, 2019). Stimuli anticipated by the consumer of robots for activities of customer service (Huang et al., 2019) and retail shopping (Pillai et al., 2020) are associated with symbolic interpretations such as feeling respected (Del Río-Lanza et al., 2009), well-being (Meyer-Waarden and Cloarec, 2021), or convenience (Xian, 2021; Gansser and Reich, 2021), and lead to consumer attitudes such as trust (Liu and Tao, 2022), acceptance of robots (Zhang et al., 2021) or concern (Cao et al., 2021). The following attitudes will trigger co-creation behaviours (Gao and Huang, 2019). Our findings confirmed and applied this literature to the use of ChatGPT. We have broadened the empirical knowledge about robots as service providers to an extensive technology which helps consumers in their willingness to develop their personal knowledge, for studies, management decisions or lifestyle aspirations, and contributed to reinforcing the robustness of theoretical frameworks of experiential value (Holbrook, 1999) and service-dominant logic (Vargo and Lusch, 2004) for understanding technology acceptance and use (Venkatesh et al., 2003) of social robots (Duffy et al., 1999).

In this context, our study has demonstrated that expectations of an efficient experience with a social robot, here expected usefulness and ease of use, increases the intention to use it and leads to co-creation of value with it. This is consistent with the technology acceptance model (Davis, 1989) and helps raise a better understanding of the logic behind the significance of perceived usefulness and perceived ease of use, as we showed that consumers make sense of such basic traits of technology by associating them with efficiency. Understanding the consumer meaning behind experiencing technologies that are perceived as useful and offer have been appropriated by late users as their own
knowledge 0.130 0.08 Rejected evolved and influenced their trusting abilities Intention 0.274 0.01 Accepted 0.213 0.41 Rejected The significance of ChatGPT use status on the intention to use 2 Excellence- > 0.290 0.28 Rejected ChatGPT confirms advances that subjective norms (Taylor and Todd, 1995) and social recognition (Meyer-Waarden and Cloarec, 2022) frame Intention 0.359 0.001 Accepted the acceptance of technologies, strengthening the idea that technologies 0.381 0.001 Accepted also provide a topic upon which users can fulfill a social need to interact 3 Status - > Intention with other humans (Thongsri et al., 2018) This aligns in addition with 0.652 0.001 Accepted studies on the relationship between innovativeness and AI acceptance 4 Esteem - > Intention (Gansser and Reich, 2021; Sobti, 2019; Parasuraman and Colby, 2015) and enriches the existing research by showing that AI acceptance is not 5 Play - > Intention only a question of grasping the opportunity of innovations but also ap pears positive to the collective unconscious: its adoption is influenced by 6 Aesthetics - > the willingness to appeal to human peers and desirability biases Our results, demonstrating the significant weight of ChatGPT user esteem on Intention the intention to use ChatGPT, strengthen the congruity of person alisation for AI adoption (Liu and Tao, 2022; Gao and Huang, 2019) 7 Ethics - > Intention They contribute to enlarging its analysis by applying personalisation to the use of a generalist social robot such as ChatGPT and demonstrate 8 Spirituality- > data analysis appears expected to serve the user in a service profiling which suits them accurately (Kashive et al., 2020) This also demon Intention strates that affective trust pertains to the intention and productive collaboration with a social robot (Johnson and Grayson, 2005), the 9 Intention - > Co- attachment being a source of blind trust despite other factors This could explain potential user dependence (Xian, 2021) and echoes concerns creation about ChatGPT’s influence on consumer behaviour (Else, 2023) We provide empirical results narrating that users not only need reassurance the knowledge and decision support offered by ChatGPT Moreover, our about their own integrity at use (Huang et al., 2019), but also require the findings about ChatGPT use excellence to encapsulate and explain approbation of the social robot about their self, based on ego and research about perceived risks and performance of AI technologies as attachment factors of its acceptance Ethics were shown to significantly influence the intention to use The present study views excellence as the assurance and credibility a ChatGPT, aligning with the results of Del Río-Lanza et al (2009) and consumer attributes to ChatGPT in balancing potential achievements extending them to advanced social AI technologies more than a decade and liabilities The expected excellence of use for ChatGPT reinforced after the publication of their findings We helped show that willingness the intention to use and collaborative behaviour with the robot This for ethical values extends to immaterial nonhuman service providers first confirms our results linked to the findings about the perceived risk such as ChatGPT We further contributed to the proposition that users of AI, such as fear of exposure of one’s property to a data analytic apprehend the use of ChatGPT with ethical concerns in mind, precisely technology: rights, time, its profile (Huang et al., 2019), or its misuse valuing justice, and brought statistical 
The significance of ChatGPT use status on the intention to use ChatGPT confirms prior advances showing that subjective norms (Taylor and Todd, 1995) and social recognition (Meyer-Waarden and Cloarec, 2022) frame the acceptance of technologies, strengthening the idea that technologies also provide a topic upon which users can fulfill a social need to interact with other humans (Thongsri et al., 2018). This aligns in addition with studies on the relationship between innovativeness and AI acceptance (Gansser and Reich, 2021; Sobti, 2019; Parasuraman and Colby, 2015) and enriches the existing research by showing that AI acceptance is not only a question of grasping the opportunity of innovations but also appears positive to the collective unconscious: its adoption is influenced by the willingness to appeal to human peers and by desirability biases.

Our results, demonstrating the significant weight of ChatGPT user esteem on the intention to use ChatGPT, strengthen the congruity of personalisation for AI adoption (Liu and Tao, 2022; Gao and Huang, 2019). They contribute to enlarging its analysis by applying personalisation to the use of a generalist social robot such as ChatGPT and demonstrate that data analysis is expected to serve the user through a service profile that suits them accurately (Kashive et al., 2020). This also demonstrates that affective trust pertains to the intention and productive collaboration with a social robot (Johnson and Grayson, 2005), the attachment being a source of blind trust despite other factors. This could explain potential user dependence (Xian, 2021) and echoes concerns about ChatGPT's influence on consumer behaviour (Else, 2023). We provide empirical results narrating that users not only need reassurance about their own integrity at use (Huang et al., 2019), but also require the approbation of the social robot about their self, based on ego and attachment factors of its acceptance.

Ethics were shown to significantly influence the intention to use ChatGPT, aligning with the results of Del Río-Lanza et al. (2009) and extending them to advanced social AI technologies more than a decade after the publication of their findings. We helped show that willingness for ethical values extends to immaterial, nonhuman service providers such as ChatGPT. We further contributed to the proposition that users apprehend the use of ChatGPT with ethical concerns in mind, precisely valuing justice, and brought statistical clues to the debate on the fairness of ChatGPT use in education and organisations (O'Connor, 2023). We demonstrated that if doubts remain about ChatGPT being a potential source of adverse selection among writers, students, or managers, such users incorporate ethics as bringing more value to ChatGPT, making room for optimism about the future use and ethical implications of powerful generative AI.

On the other hand, we showed that spirituality in ChatGPT use plays a significant role in convincing users to co-create value with the social robot. We presently brought results to a relationship that remains under-researched even in the field of social robots. We confirmed that AI is seen as a tool to self-empower, acquire meaning, positively impact society (Naranjo-Zolotov et al., 2019) and feel mentally better (Meyer-Waarden and Cloarec, 2021). This result emphasizes the emerging generative AI technologies not only as a means for the functional acquisition of knowledge but also as helpers for wondering about one's own self-directions, faith, or metaphysical prospects.
Moreover, the present research chose to consider managers, technicians, academics, or individuals using ChatGPT as consumers of the service of a social robot through the lenses of experience value (Holbrook, 1999) and UTAUT (Venkatesh et al., 2003) in order to provide future researchers with a panoptic view of co-creation between humans and social, generative AIs. However, this trade-off hinders the comprehensiveness of the findings, as co-creation between users and ChatGPT varies considerably across industries and objectives. Under the theories of service consumption, experience and co-created values are highly volatile (Zeithaml et al., 1985), encouraging future research integrating characteristics about the context and purpose of ChatGPT use. Under the broader theoretical lenses of engagement and consumption behaviour, use varies across personalities, features of the product, and purchasing environment (Vargo and Lusch, 2004), making room for further study on the weight of user identity and psychology on ChatGPT use, and on the technical environment in which ChatGPT integrates, as broader features associated with this AI. Indeed, as ChatGPT appears instrumental to extrinsic consumption matters rather than an end in itself, taking into account the purpose, technology, or task enveloping the use of ChatGPT would prove fulfilling for future studies on generative AI. Finally, as ChatGPT is a conversational agent, integrating variables about the direct social context would extend the present research and strengthen our understanding of the disruptive technology behind ChatGPT. For instance, Abadie et al. (2019) provided a framework to understand the engagement of users with another example of social AI, narrating that transparency in the information exchanged, the level of user control, the potential consequences of the co-created task, and trust are critical contextual factors to apprehend in the study of acceptance of AI assistants.

7 Research implications

7.1 Theoretical implications

Our undertaking of explanations of the role of consumer experiential value within the UTAUT for co-creation of value between humans and ChatGPT, precisely to understand the impact of efficiency, excellence, status, esteem, play, aesthetics, ethics, and spirituality on the intention to use, which leads to co-creation, is a perspective that has been scarcely analysed in the literature, although it appears critical to understanding the implications of the democratization of social and generative robots among mass markets (Puntoni et al., 2021). Research preceding this work partially includes experience attributes in the study of AI acceptance (Gansser and Reich, 2021) or overlooks them to focus on interpersonal characteristics of the social robot–human relationship (Kim et al., 2020). Existing research also apprehends value co-creation within the use of technologies as only human to human (Chen and Lin, 2015), while no theoretical and empirical outcomes have undertaken the perspective of co-creation between a robot and a human.

From a theoretical viewpoint, our study first integrated marketing and AI literature through the fundamental frameworks of UTAUT and experiential consumer value to understand and explain the influence of expected use experience on the intention to use an AI. We illustrated how such theoretical frameworks, the first emerging from immaterial and information technology-based research and the second arising from sensorial and physically based research, can articulate to extend the collective understanding of social robots' acceptance and a better understanding of the symbols, cues, and values that are created in the mind of a user considering using a social robot. In this combination, we showed that technology use acts as a source for consumers to actively, reactively, and through storytelling create intrinsic or extrinsic value for themselves and others. Second, we integrated the concept of co-creation between humans and robots, illustrating a phenomenon latent to social robots but never analysed. While many studies have reported that users and robots can build interpersonal relationships, we presented with our model the basis for such an observed interpersonal bond between AI and users. Our argument showed that aspects of the experience felt with a robot will motivate users to interact with it and start co-creating value. We argued that shared experience is the premise of co-construction of either material or subjective value, and such co-construction pertains to the development of an interpersonal relationship between users and AI. This research combined interdisciplinary perspectives to enrich the fields of AI technologies, psychology, and marketing.

Third, we provided an analysis of ChatGPT as a recently emerging technology that is at the heart of discussions and debates among the public (Else, 2023). However, only a few studies have looked at GPT-3, and no empirical result has currently been identified in the existing literature. In this sense, we bring through the present study a considerable contribution by analysing the implications of ChatGPT with the robustness of scientific methods, while knowledge about ChatGPT currently remains at the stage of being communicated by early adopters or being scientifically discussed without empirical evidence. We step forward to provide empirical answers and knowledge to the controversy and divergence occurring about this robot, contributing to framing the use of social and generative AI to encompass the perspective of technologies not only protecting the basic needs of the consumer, such as security and belonging, as social robots already do, but also understanding the metaphorical cues that they can convey to users to optimize the benefits of increasingly conscious and empathetic robots.
Various media outlets have widely discussed the benefits of new AI tools for businesses. However, it is crucial to understand the limitations of generative AI models, which can lead to legal and reputation risks, such as the use of offensive or copyrighted content, loss of privacy, fraudulent transactions, and the spreading of false information. Therefore, techniques need to be developed to enhance the transparency and explainability of generative and self-adaptive AI models and to facilitate the explainability of outcome responses. For a tool view (Kim et al., 2021), a few key areas of interest will be how transparency and explainability can impact the competitiveness and productivity of organisations adopting ChatGPT, how responsible and ethical policies, practices, and regulations can help in the diffusion of generative AI applications, and how consolidating risk management frameworks and theoretical perspectives on ethics can impact ChatGPT adoption within organisations.

The ensemble view should explore ChatGPT's impact on different contexts of use, ethical and moral judgement perspectives, and societal contexts. Responsible development, deployment, and evolution of ChatGPT can promote human and societal well-being according to the United Nations' sustainable development goals 4, 5, 8, 9, and 10. Finally, the skills view highlights the importance of understanding ChatGPT's limitations within different contexts of use and promoting ethical decision making; the role of government policies, training providers, higher education, and technology developers in helping develop these skills among the human workforce is also examined. The proxy view acknowledges threats posed by ChatGPT and similar AI bots, such as black-box algorithms, discrimination and biases, vulgarity, copyright infringement, plagiarism, fabricated unauthentic textual content, and fake media. Ethical reviews and bias screening should complement periodic risk assessments, and the AI risk management framework (AI RMF 1.0) can guide organisations in systematically assessing, understanding, and managing risks. The review proposes that ethical models, such as utilitarianism, can be used to determine a path forward in resolving conflicts through a flexible, result-oriented lens for formulating and testing policies at each stage of the risk management cycle. In conclusion, consolidating AI risk management frameworks and ethical theory perspectives can help make socially responsible decisions and ensure the purposeful, cautious, reasoned, and ethical use of generative AI models such as ChatGPT. Utilitarianism can guide decision making that does the least harm or promotes the most good for society.

7.2 Practical implications

The conclusions of this research imply that companies must meticulously scrutinize the integration of ChatGPT applications. Gaining an understanding of the elements influencing users' intent to employ ChatGPT - such as productivity, superiority, the value of suggestions and interactions, and conversational skills - can assist firms in pinpointing suitable circumstances and cases for ChatGPT deployment. A thoughtful and strategic roll-out will ensure that corporate leaders not only accept but eventually adopt ChatGPT.

The research points out that ChatGPT can be productively utilized for co-creating content and solving problems. In order to exploit its capabilities, companies should foster a collaborative environment between ChatGPT and business managers. By involving managers in content creation or employing ChatGPT to tackle business issues, companies can tap into its potential for creating value and fostering innovation. Moreover, companies should put in place measures to track and assess the use of ChatGPT to ensure its successful implementation. Gathering feedback from business managers regarding their experience with ChatGPT and pinpointing areas for improvement will enable firms to continually boost its acceptance and utilization within their business environments.

When contemplating the use of generative AI applications like ChatGPT, the intent to use depends on the application's ability to deliver precise responses in an efficient manner without creating misleading information. Consequently, developers of generative applications should augment the transparency of the underlying model responsible for producing responses. This involves sharing information about the rationale and sources behind a response generated for a particular prompt. With a clear understanding of the reasoning behind the responses, managers can place their trust in and rely on ChatGPT-like applications for their decision-making duties.
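One way to picture the disclosure recommended here is a response object that carries its own provenance. The sketch below is an assumed, illustrative structure; the class and field names do not correspond to any actual ChatGPT API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SourceNote:
    """A pointer to the material or reasoning behind one claim in a response."""
    claim: str
    basis: str          # e.g. a document, dataset description, or reasoning step
    confidence: float   # model- or reviewer-assigned confidence, 0..1


@dataclass
class ExplainedResponse:
    """A generated answer bundled with the rationale a manager can inspect."""
    prompt: str
    answer: str
    model_version: str
    sources: List[SourceNote] = field(default_factory=list)

    def low_confidence_claims(self, threshold: float = 0.6) -> List[str]:
        return [s.claim for s in self.sources if s.confidence < threshold]


# Hypothetical example values, for illustration only.
response = ExplainedResponse(
    prompt="Summarise the main risks of launching product X in Q3.",
    answer="Three risks stand out: supply delays, pricing pressure, ...",
    model_version="assumed-model-2024-01",
    sources=[
        SourceNote("Supply delays are likely", "Q2 supplier lead-time report", 0.8),
        SourceNote("Pricing pressure from rival Y", "News summary, unverified", 0.4),
    ],
)
print(response.low_confidence_claims())   # -> ['Pricing pressure from rival Y']
```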
From a risk management standpoint, managers should evaluate the risks associated with dependence on answers provided by generative AI applications. They should especially consider the potential impact of these risks on their personal reputation and that of the organisation. To systematically assess, understand, and manage the risks arising from uncritically accepting ChatGPT outputs, the NIST AI risk management framework can be utilized. Managers should also corroborate the responses with their own domain-specific knowledge and cross-check them with dependable sources. Through a systematic approach to examining the implications of ChatGPT responses, managers can make unbiased moral decisions that prevent both intentional and unconscious threats to individuals or organisations.
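As one illustration of how such a systematic review could be operationalised, the sketch below groups review questions for a ChatGPT output under the four core functions of the NIST AI RMF 1.0 (Govern, Map, Measure, Manage). The questions and identifiers are illustrative assumptions, not items prescribed by the framework or by this study.

```python
from dataclasses import dataclass, field

# Illustrative review questions grouped under the four core NIST AI RMF 1.0
# functions; the questions themselves are assumed examples only.
RISK_REVIEW = {
    "Govern": [
        "Is a named owner accountable for decisions based on this output?",
        "Does using the output comply with data-privacy and IP policies?",
    ],
    "Map": [
        "Which business decision will this output feed into?",
        "Who could be harmed (reputation, finances, fairness) if it is wrong?",
    ],
    "Measure": [
        "Has the output been cross-checked against a dependable source?",
        "Does it contradict the manager's own domain knowledge?",
    ],
    "Manage": [
        "Is a human sign-off recorded before the output is acted upon?",
        "Is there a channel to report and correct harmful outputs?",
    ],
}


@dataclass
class OutputReview:
    """Tracks which review questions have been answered for one ChatGPT output."""
    output_id: str
    answers: dict = field(default_factory=dict)  # question -> bool

    def unresolved(self) -> dict:
        """Questions not yet answered affirmatively, grouped by RMF function."""
        return {
            function: [q for q in questions if not self.answers.get(q)]
            for function, questions in RISK_REVIEW.items()
        }


review = OutputReview(output_id="draft-market-brief-001")
review.answers["Has the output been cross-checked against a dependable source?"] = True
for function, open_items in review.unresolved().items():
    print(f"{function}: {len(open_items)} open question(s)")
```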
The relevance of the responses within a specific context is vital for these technologies to be adopted in business applications and decision-making processes. While large language models (LLMs) and reinforcement learning with human feedback play a part in reducing misalignment in ChatGPT, the quality of the output responses largely hinges on the quality of the prompts given by managers and the pertinence of the training data used in these models. Therefore, managers should have access to trustworthy and pertinent training data that corresponds with the application's context to enhance the efficiency, reliability, and consistency of the responses. Furthermore, managers should formulate their prompts carefully to prevent misinterpretation by the AI bot, which might necessitate training in effective prompt engineering techniques.
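As a concrete illustration of the prompt discipline implied here, the sketch below assembles a structured prompt that states the business context, asks for sources and reasoning, and requests that low-confidence points be flagged, so that responses can be corroborated as recommended above. The template wording, field names, and example values are assumptions for illustration only.

```python
def build_manager_prompt(task, context, constraints):
    """Assemble a structured prompt that requests sources, reasoning and caveats,
    making the response easier to corroborate before it informs a decision."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: assistant to a business manager.\n"
        f"Task: {task}\n"
        f"Business context: {context}\n"
        f"Constraints:\n{constraint_lines}\n"
        "In your answer:\n"
        "1. State the key assumptions you are making.\n"
        "2. List the sources or reasoning behind each recommendation.\n"
        "3. Flag any point where your confidence is low or data may be outdated.\n"
    )


# Hypothetical usage: the task, context and constraints are illustrative only.
prompt = build_manager_prompt(
    task="Draft three options for reducing customer churn next quarter.",
    context="UK mid-size retail bank; churn concentrated in the 18-25 segment.",
    constraints=[
        "No use of customers' personal data beyond aggregates",
        "Each option must include an estimated cost range",
    ],
)
print(prompt)
```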
7.3 Policy implications

The advent of social robotic AI tools such as ChatGPT necessitates the formation of concise rules and regulations for maintaining ethicality and accountability in their use. Factors like data privacy, transparency, liability, and bias reduction should be taken into consideration by policymakers when dealing with generative AI technologies. These guidelines can aid organisations and individuals in understanding their duties when deploying and interacting with ChatGPT, fostering trust and minimizing possible risks.

Considering the potential threats presented by ChatGPT to society and various business sectors, policymakers should take a proactive stance in examining the potential societal implications of these technologies (Budhwar et al., 2023). This should include an analysis of their impact on work, employment, social engagement and divide, digital anxiety, and the economy. With a broader understanding of these consequences, policymakers can enact measures to alleviate any negative outcomes and exploit the advantages of social AI tools like ChatGPT for societal benefit.

It is necessary for policymakers to co-work and co-create strategies with business practitioners, researchers, and technology experts, which will help in attaining a holistic understanding of the capabilities and consequences of social robotic AI tools like ChatGPT. Collaborative endeavors can shape policies and regulations that strike a balance between promoting innovation and ensuring responsible implementation. Routine discussions and information exchanges can guide informed decision-making and address emerging challenges effectively.

As ChatGPT employs generative AI to produce new content, there is a requirement for transparency and clarity regarding its operations. Policymakers can prompt organisations that develop and implement such generative AI tools to reveal information about the underlying models, data sources, and decision-making processes. These initiatives will help to support accountability, enabling users to better comprehend the reasoning behind ChatGPT's recommendations and thus identify and resolve potential biases or ethical issues. To assure the reliability, safety, and quality of social AI tools, policymakers can strive to develop industry standards and certification processes. These standards can encompass facets such as data management, system reliability, clarification, and security. Certification processes can confirm that social AI tools fulfill the necessary standards, building confidence in their use and minimizing potential risks. Given the swift progress in social robotics, policymakers should continuously develop mechanisms to monitor the development and utilization of ChatGPT and similar technologies. This encompasses regularly evaluating the effectiveness of policies and regulations, recognizing gaps, and making suitable adjustments. A flexible regulatory framework can ensure that policies stay in step with technological advancements and effectively address emerging concerns.
8 Conclusion

We combined usage and social factors from the UTAUT model and value-oriented factors from Holbrook's experience-based typology of consumer values, which integrates the features, attributes, and benefits that provide maximum consumer value, to formulate a theoretical model that helped us to assess the experiential factors impacting business managers' behavioural intention to use ChatGPT, which would subsequently lead to robot–human co-creation. This is the first study in the business and management literature to understand the factors that will drive and inhibit the use of ChatGPT, which has been coined as a new disrupter for organisations, irrespective of the application domain and sector. Our results showed that the efficiency and relevance of the ChatGPT application, in other words its context of use, will be key to its adoption, irrespective of the social robot automating or augmenting industrial applications. The recommendations provided by ChatGPT and its ability to empower managers for meaningful work will also shape managerial perception towards its adoption and, eventually, its use for co-creating content or decision making where human analytical and computational capabilities are limited compared to ChatGPT. In this context, our results showed that whether ChatGPT is used in a purposeful and meaningful way to assist human decision-makers in their work will be pivotal to its adoption, irrespective of scepticism, the negative publicity that AI-based tools have received in the past, and the media rhetoric that continues to add fuel to these discussions.

While the existing literature has shown that the aesthetics and playfulness of social robots are essential for their acceptance and eventual adoption by humans, from a managerial perspective these characteristics seem not to impact the intent to adopt. This can be attributed to the fact that managers perceive AI-based robots, irrespective of their limited cognitive and affective capabilities, as tools that will augment and assist human intelligence rather than replace it. In other words, managers are interested in business applications of ChatGPT, as tools and social robots that will help humans and employees focus more on non-trivial tasks where creative, intuitive, and emotional intelligence and higher-order thinking skills are pivotal. From a decision-making perspective, ChatGPT-like tools can deal with complexity, for example big data processing and analytical capabilities, whereas humans can focus on uncertainty (showing situational and contextual awareness), sense-making (interpreting and making a judgement on the outputs produced by AI tools), and sense-giving (articulating the information and persuading).

The quality of recommendations and reliable interaction with the tool are pivotal to developing a competitive advantage in the era of digitalisation and technological turbulence, where digital systems are emerging at a very rapid rate. In this context, managers view ChatGPT-like tools as a way to enhance their own productivity and that of the organisation. They have come to understand the importance of using such tools, and therefore the quality of the information generated by such tools and the way it is generated (i.e., the interaction mode) are necessary dimensions for making the usage effective, which in turn will lead to agility in decision making. Finally, our study showed that quality, value, and satisfaction are key dimensions that will impact the adoption of ChatGPT-like tools and social robots within organisations by managers, similar to the research on these dimensions in the consumer space. While quality is related to the purpose of use and interaction with ChatGPT, value lies in the recommendations provided by the tool to achieve managerial goals in a specific application, and satisfaction lies in the ease of using the tool to co-create content (co-complete tasks). While prior technological advancements focused on altering or replacing routine manual tasks, ChatGPT-like tools involve cognitive, relational, and structural complexities, which will significantly impact task design, human roles, and responsibilities in digital business organisations. The problem field has changed from how generative AI systems can facilitate human intelligence, augmentation, and decision making to what design features and capabilities within these tools will facilitate their adoption by managers and employees. While studies on collective intelligence and employee-AI collaboration are sparse, the emergence of ChatGPT-like tools warrants a new research stream concerning the purposeful and intelligent application of these tools, which will enhance employees' and managers' capabilities and, subsequently, business growth. Our results put the focus back on designing these systems and applications, moving away from the structural aspects of job roles and responsibilities.
The results of our study should be read in light of their limitations. First, we conducted a perceptional survey to understand the experiential factors that will either drive or inhibit ChatGPT adoption. This is a fairly new technology; therefore, case studies and longitudinal studies with both employees and managers would provide a more comprehensive understanding of the relevance and usefulness of the tool, which will shape human perception. Secondly, we studied a sample of managers from the UK, which may limit the generalizability of the results. Therefore, future studies can be conducted with managers in both emerging and developed economies and in specific sectors to better understand the acceptance of ChatGPT. Third, future studies can focus on specific applications of ChatGPT in specific sectors to provide a comprehensive understanding of the context and environment of use, which will help to develop effective and responsible strategies to deploy it within organisations. Fourth, there is no universally accepted definition of social robots, and current definitions do not consider that social robots can exist in virtual ecosystems without having a physical presence. While we considered ChatGPT a social robot due to its interaction capabilities, autonomy, conversational nature, affective component, human-like responses, understanding, and observation of social rules (the way it has been programmed to operate online), irrespective of its not having a physical body, its classification will depend on how it is accepted within the social fabrics of society and organisation dynamics. Finally, future studies can extend our model by integrating constructs from the technology acceptance model, the unified theory of acceptance and use of technology, the theory of planned behaviour, the socio-technical systems framework, the organisation socialisation framework, and dynamic capability theory to better understand the capabilities required within organisations for its effective integration.

Future research should investigate the long-term effects and outcomes of organisations adopting ChatGPT and similar social robots. This can include examining the impact on productivity, job roles, employee satisfaction, and overall organizational performance. Longitudinal studies can provide insights into the evolving nature of ChatGPT adoption and its implications for businesses. Understanding the influence of contextual factors on ChatGPT acceptance and usage is essential. Future studies can explore how factors such as industry-specific requirements, organizational culture, and task complexity affect the intention to use and the actual usage of ChatGPT. By examining different contexts, researchers can identify the conditions under which ChatGPT is most effective and beneficial. Research studies are needed to investigate the ethical and social implications of deploying ChatGPT in various domains. This can include examining issues such as privacy concerns, human-robot interaction dynamics, and the impact on social relationships. Ethical frameworks and guidelines can be developed to ensure responsible and ethical use of social robots like ChatGPT. Research should also focus on developing methods and techniques to enhance the explainability and transparency of ChatGPT's decision-making processes. This can involve exploring interpretable AI models, designing user interfaces that provide insights into ChatGPT's reasoning, and developing mechanisms for users to understand how and why certain recommendations or responses are generated.

Future studies can focus on optimizing the user experience of ChatGPT by examining factors such as personalization, user interface design, and adaptability to user preferences. Understanding how to enhance the efficiency, reliability, and meaningfulness of ChatGPT's recommendations and interactions can lead to improved user acceptance and satisfaction. It is also essential to investigate the application of ChatGPT in specific domains beyond content co-creation and problem-solving. For example, exploring its use in operations and supply chain management, the virtual metaverse, healthcare, hospitality, and tourism can uncover novel ways in which ChatGPT can add value and improve organisationally valued outcomes. Understanding the domain-specific challenges and opportunities will contribute to the effective utilization of ChatGPT in diverse contexts. Researchers should investigate potential biases and fairness concerns in ChatGPT's recommendations and responses. This involves evaluating the representation of diverse perspectives, addressing algorithmic biases, and ensuring fairness across different user groups. Developing methods to mitigate bias and promote fairness will enhance the trust and reliability of ChatGPT. Future studies can also explore the dynamics and collaboration processes between ChatGPT and human collaborators in more depth. Understanding how humans perceive and interact with ChatGPT, the division of labour, and the challenges and benefits of human-robot collaboration can inform the development of effective collaborative frameworks.
Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

The data that has been used is confidential.

References

Abadie, A., Carillo, K., Fosso Wamba, S., Badot, O., 2019. Is Waze joking? Perceived irrationality dynamics in user-robot interactions. In: Proceedings of the 52nd Hawaii International Conference on System Sciences.
Al-Taee, M.A., Kapoor, R., Garrett, C., Choudhary, P., 2016. Acceptability of robot assistant in management of type 1 diabetes in children. Diabetes Technol. Ther. 18 (9), 551–554.
Arsenyan, J., Mirowska, A., 2021. Almost human? A comparative case study on the social media presence of virtual influencers. Int. J. Hum. Comput. Stud. 155, 102694.
Atkinson, D., Hancock, P., Hoffman, R., Lee, J.D., Rovira, E., Stokes, C., Wagner, A., 2012. Trust in computers and robots: the uses and boundaries of the analogy of interpersonal trust. In: Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting, Boston, MA.
Aydın, Ö., Karaarslan, E., 2022. OpenAI ChatGPT generated literature review: digital twin in healthcare. In: Aydın, Ö. (Ed.), Emerging Computer Technologies 2. İzmir Akademi Dernegi, pp. 22–31.
Aymerich-Franch, L., 2020. Why it is time to stop ostracizing social robots. Nat. Mach. Intell. 2 (7), 364.
Balagopalan, A., Madras, D., Yang, D.H., Hadfield-Menell, D., Hadfield, G.K., Ghassemi, M., 2023. Judging facts, judging norms: training machine learning models to judge humans requires a modified approach to labeling data. Sci. Adv. 9 (19), eabq0701.
Balaji, M.S., Roy, S.K., 2016. Value co-creation with internet of things technology in the retail industry. J. Mark. Manag. 33 (1–2), 7–31.
Banks, J., 2020. Theory of mind in social robots: replication of five established human tests. Int. J. Soc. Robot. 12 (2), 403–414.
Barcenilla, J., Bastien, J.M.C., 2009. L'acceptabilité des nouvelles technologies: quelles relations avec l'ergonomie, l'utilisabilité et l'expérience utilisateur? Trav. Hum. 72 (4), 311–331.
Bartneck, C., Forlizzi, J., 2004. A design-centred framework for social human-robot interaction. In: RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No. 04TH8759), Kurashiki, Japan, pp. 591–594.
Bietz, J.R., 2018. A Comparative Study of the Perceptions of K-12 Teachers on the Use of Socially Assistive Robots in the Classroom. ProQuest LLC.
Blut, M., Wang, C., Wünderlich, N.V., Brock, C., 2021. Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. J. Acad. Mark. Sci. 49, 632–658.
Bobillier-Chaumon, M., Dubois, M., 2009. L'adoption des technologies en situation professionnelle: quelles articulations possibles entre acceptabilité et acceptation? Trav. Hum. 72 (4), 355–382.
Bolton, R.N., McColl-Kennedy, J.R., Cheung, L., Gallan, A., Orsingher, C., Witell, L., Zaki, M., 2018. Customer experience challenges: bringing together digital, physical and social realms. J. Serv. Manag. 29 (5), 776–808.
Bonsu, E.M., Baffour-Koduah, D., 2023. From the consumers' side: determining students' perception and intention to use ChatGPT in Ghanaian higher education. J. Educ. Soc. Multicult. 4 (1), 1–29.
Breazeal, C., 2002. Regulation and entrainment in human–robot interaction. Int. J. Robot. Res. 21 (10–11), 883–902.
Breazeal, C., Scassellati, B., 1999. A context-dependent attention system for a social robot. In: IJCAI '99: Proceedings of the 16th International Joint Conference on Artificial Intelligence, Vol. 2, pp. 1146–1151.
Budhwar, P., Chowdhury, S., Wood, G., Tung, R.L., Varma, A., 2023. Human resource management in the age of generative artificial intelligence: perspectives and research directions on ChatGPT. Hum. Resour. Manag. J. 1–54.
Burleigh, T.J., Schoenherr, J.R., Lacroix, G.L., 2013. Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Comput. Hum. Behav. 29 (3), 759–771.
Cabanac, G., Labbé, C., Magazinov, A., 2021. Tortured phrases: a dubious writing style emerging in science. Evidence of critical issues affecting established journals. arXiv preprint arXiv:2107.06751.
Cao, G., Duan, Y., Edwards, J.S., Dwivedi, Y.K., 2021. Understanding managers' attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation 106, 102312.
Cardon, P.W., Getchell, K., Carradini, S., Fleischmann, C., Stapp, J., 2023, March 18. Generative AI in the workplace: employee perspectives of ChatGPT benefits and organizational policies. SocArXiv. Retrieved May 10, 2023, from https://doi.org/10.31235/osf.io/b3ezy.
Castelvecchi, D., 2022. Are ChatGPT and AlphaCode going to replace programmers? Nature. Retrieved February 8, 2023, from https://www.nature.com/articles/d41586-022-04383-z.
Cetin, G., Dincer, F.I., 2014. Influence of customer experience on loyalty and word-of-mouth in hospitality operations. Anatolia 25 (2), 181–194.
Cheetham, M., Suter, P., Jäncke, L., 2011. The human likeness dimension of the "uncanny valley hypothesis": behavioral and functional MRI findings. Front. Hum. Neurosci. 5, 126.
Chen, S.C., Lin, C.P., 2015. The impact of customer experience and perceived value on sustainable social relationship in blogs: an empirical study. Technol. Forecast. Soc. Chang. 96, 40–50.
Chen, J.S., Le, T.T.Y., Florence, D., 2021. Usability and responsiveness of artificial intelligence chatbot on online customer experience in e-retailing. Int. J. Retail Distrib. Manag. 49 (11), 1512–1531.
Cheong, P.H., 2020. Robots, religion and communication: rethinking piety, practices and pedagogy in the era of artificial intelligence. In: Religion in the Age of Digitalization. Routledge, pp. 86–96.
Chiang, A.H., Trimi, S., Lo, Y.J., 2022. Emotion and service quality of anthropomorphic robots. Technol. Forecast. Soc. Chang. 177, 121550.
Chowdhury, S., Budhwar, P., Dey, P.K., Joel-Edgar, S., Abadie, A., 2022. AI-employee collaboration and business performance: integrating knowledge-based view, socio-technical systems and organisational socialisation framework. J. Bus. Res. 144, 31–49.
Chui, M., Hall, B., Mayhew, H., Singla, A., 2022, December 6. The state of AI in 2022—and a half decade in review. Retrieved February 8, 2023, from https://tinyurl.com/33j62ssd.
Chui, M., Roberts, R., Yee, L., 2022b, December 20. How generative AI and ChatGPT will change business. McKinsey. Retrieved May 10, 2023, from https://tinyurl.com/yvp2sty7.
Cohen, J., Cohen, P., West, S.G., Aiken, L.S., 2013. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Routledge.
Cormons, L., Poulet, C., Pellier, D., Pesty, S., Fiorino, H., 2020, February. Testing social robot acceptance: what if you could be assessed for dementia by a robot? A pilot study. In: 2020 6th International Conference on Mechatronics and Robotics Engineering (ICMRE). IEEE, pp. 92–98.
David, D., Thérouanne, P., Milhabet, I., 2022. The acceptability of social robots: a scoping review of the recent literature. Comput. Hum. Behav. 107419.
Davidowitz, S., 2017. Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. HarperCollins, New York, NY.
Davis, F.D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 319–340.
De Graaf, M.M., Allouch, S.B., 2013. Exploring influencing variables for the acceptance of social robots. Robot. Auton. Syst. 61 (12), 1476–1486.
Del Río-Lanza, A.B., Vázquez-Casielles, R., Díaz-Martín, A.M., 2009. Satisfaction with service recovery: perceived justice and emotional responses. J. Bus. Res. 62 (8), 775–781.
Destephe, M., Brandao, M., Kishi, T., Zecca, M., Hashimoto, K., Takanishi, A., 2015. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner. Front. Psychol. 6, 204.
Diel, A., Weigelt, S., MacDorman, K.F., 2021. A meta-analysis of the uncanny valley's independent and dependent variables. ACM Trans. Hum.-Robot Interact. 11 (1), 1–33.
Dinev, T., Albano, V., Xu, H., D'Atri, A., Hart, P., 2016. Individuals' attitudes towards electronic health records: a privacy calculus perspective. In: Advances in Healthcare Informatics and Analytics, pp. 19–50.
DiSalvo, C., Gemperle, F., 2003, June. From seduction to fulfillment: the use of anthropomorphic form in design. In: Proceedings of the 2003 International Conference on Designing Pleasurable Products and Interfaces, pp. 67–72.
Dodgson, M., Gann, D.M., Phillips, N., 2013. Organizational learning and the technology of foolishness: the case of virtual worlds at IBM. Organ. Sci. 24 (5), 1358–1376.
Dowling, M., Lucey, B., 2023. ChatGPT for (finance) research: the Bananarama conjecture. Financ. Res. Lett. 53, 103662.
Duffy, B.R., 2003. Anthropomorphism and the social robot. Robot. Auton. Syst. 42 (3–4), 177–190.
Duffy, B.R., Rooney, C., O'Hare, G.M., O'Donoghue, R., 1999. What is a social robot? In: 10th Irish Conference on Artificial Intelligence & Cognitive Science, University College Cork, Ireland, 1–3 September 1999.
Dwivedi, Y.K., Kshetri, N., Hughes, L., Slade, E.L., Jeyaraj, A., Kar, A.K., Baabdullah, A.M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M.A., Al-Busaidi, A.S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Wright, R., 2023. "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71, 102642.
Edwards, A., Edwards, C., Westerman, D., Spence, P.R., 2019. Initial expectations, interactions, and beyond with social robots. Comput. Hum. Behav. 90, 308–314.
Else, H., 2023. Abstracts written by ChatGPT fool scientists: researchers cannot always differentiate between AI-generated and original abstracts. Nature 613 (7944), 423. Retrieved February 8, 2023, from https://tinyurl.com/at6f4pf9.
Esmaeilzadeh, H., Vaezi, R., 2022. Conscious empathic AI in service. J. Serv. Res. 25 (4), 549–564.
Fan, W.J., Liu, J.N., Zhu, S.W., Pardalos, P.M., 2020. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann. Oper. Res. 294 (1–2), 567–592.
Fong, T., Nourbakhsh, I., Dautenhahn, K., 2003. A survey of socially interactive robots. Robot. Auton. Syst. 42 (3–4), 143–166.