Author: Mátyás Jenő Hartyándi
Doctoral School of Business and Management, Corvinus University of Budapest, Budapest, Hungary

ORCID: https://orcid.org/0000-0003-3177-2663
Open access

Abstract

Millions have adopted tools like ChatGPT in recent years, yet indifference and resistance among employees remain. This qualitative study employs monodramatic projective techniques to explore employees' hidden assumptions and unconscious beliefs in a division attempting to integrate Generative Artificial Intelligence (GAI). Through pretensive work, soliloquy, symbolic representation, modeling with intermediate objects, concretization, and role reversal techniques, the interviewees' internal representations of GAI and trust were materialized in physical artifacts, such as a ball of straw or a potted plant. The study identified three principal themes: GAI's appearance as a Janus-faced presence, unmet performance promises, and avoided proximity. Findings highlight ambiguities in acceptance and show that adoption was driven more by industry hype and normative pressures than genuine organizational needs, leading to disorganized implementation dependent on individual employee characteristics, mistrust, and disenchantment. The study's main contributions lie in refining human-robot interaction (HRI) models and psychodrama methods for GAI, emphasizing the significance of physicality and embodiment in technology-mediated relationships, identifying trust as a complex phenomenon with potential reciprocal causation, and underscoring the importance of affective attitudes by illustrating how adoption projects can falter despite cognitive openness – all insights crucial for understanding self-driven, bottom-up GAI adaptation in an organizational context.


1 Introduction

Organizational change is often met with resistance or confusion (Bao 2009; Nilsen et al. 2016; Choi et al. 2020; Kateb et al. 2022; Kalmus – Nikiforova 2024). As more and more companies adopt algorithmic, machine learning, or artificial intelligence-based solutions, businesses that utilize externally developed technology in their operations face technology adoption challenges similar to those of customer end-users, including integration issues (Familoni – Onyebuchi 2024), a lack of adequate training (Pinski – Benlian 2024), and security and privacy concerns (Yao et al. 2024).

Over the last few years, millions of employees have learned to use various Generative Artificial Intelligence (GAI) tools like ChatGPT. These technologies involve computational methods that use training data to produce seemingly original, meaningful output, such as writing, pictures, or audio (Feuerriegel et al. 2024). This study aims to explore and understand employees' hidden assumptions and unconscious beliefs toward GAI, focusing on their trust in the technology.

2 Background

Quantitative methodology has long dominated technology acceptance research, often focusing on latent attitudes measured indirectly through structured surveys and statistical models (Lee – Baskerville 2003). Despite their minority role, qualitative studies have investigated the impact of leadership (Kavanagh – Ashkanasy 2006), personality (Ouadahi 2008), age (Renaud – Van Biljon 2008), tool anthropomorphism (Gursoy et al. 2019), emotions (Gkinko – Elbanna 2022), and organizational tactics emphasizing trustworthiness (Hasija – Esper 2022) on technology acceptance.

A recent systematic review of 60 studies on AI acceptance (Kelly et al. 2023) showed that the intention, willingness, and use behavior of AI were positively predicted by perceived usefulness, performance expectancy, attitudes, trust, and effort expectancy across multiple industries. This paper reviews only the most relevant studies on Generative AI (GAI) adoption.

2.1 AI in widespread models

The Diffusion of Innovation (DOI, Rogers 1962; 1983) model describes how social systems adopt technology through stages of awareness, persuasion, decision, implementation, and confirmation, progressing from learning about the technology to evaluating its effectiveness for continued use. To highlight recent trends, various studies underline that trust dynamics within social networks significantly influence the diffusion of innovations (Fujii 2022; Leon et al. 2022).

The Technology Acceptance Model (TAM) explains why technology is accepted and used. Its original version focuses on perceived usefulness (PU) and perceived ease of use (PEOU) as key drivers of behavioral intention and actual usage (Davis et al. 1989). TAM 2 expands on these with factors like subjective norm, job relevance, and result demonstrability (Venkatesh – Davis 2000), while TAM 3 further details the influences on PEOU, such as computer self-efficacy, anxiety, and enjoyment (Venkatesh – Bala 2008). Renowned for its simplicity and clarity, TAM remains a foundational framework, recently expanded to include trust and transparency in intelligent systems like AI and robots (Vorm – Combs 2022).

The Unified Theory of Acceptance and Use of Technology (UTAUT) synthesizes multiple acceptance theories, including TAM and DOI (Venkatesh et al. 2003). Its latest version applies to both consumer and workplace settings, identifying eleven variables influencing behavioral intention and use behavior, including performance expectancy, effort expectancy, social influence, price value, hedonic motivation, and habit, offering high explanatory power but significant complexity (Blut et al. 2022). Recently, trust has emerged as a key potential component in adapting UTAUT to AI (Chatterjee et al. 2023).

2.2 The AI Device Use Acceptance model

While these widely used models have integrated (G)AI and emphasized the critical role of trust and transparency in its acceptance and use, they were not designed explicitly for GAI. Recently, an AI-specific model, the AI Device Use Acceptance (AIDUA) model, was proposed by Dogan Gursoy and his colleagues (2019) and further expanded by Xiaoyue Ma and Yudi Huo (2023). While this model focuses on customer acceptance, particularly of chatbots in customer services, it incorporates the unique features of AI technologies, including the specific characteristics of perceived usefulness and ease of use in GAI technologies (Lu et al. 2019). It therefore offers valuable insights for a better understanding of employee acceptance.

AIDUA is founded on Cognitive Dissonance Theory and Cognitive Appraisal Theory. Cognitive Dissonance Theory (Festinger 1962) posits that individuals experience psychological discomfort (dissonance) when holding two or more contradictory beliefs or attitudes, leading them to alter their cognitions or behaviors to reduce the dissonance. Cognitive Appraisal Theory (Lazarus 1991a; 1991b) suggests that emotional responses are triggered by an individual's cognitive evaluation (appraisal) of a situation, which determines whether it is perceived as a threat or a challenge and influences the resulting emotional and behavioral responses. Based on these theories, AIDUA theorizes that users go through several stages of appraisal when making decisions (Fig. 1). First, they assess the importance of the stimulus (primary appraisal), then examine their behavioral options (secondary appraisal), and finally form emotions toward the stimulus that lead to actual behavior (outcome stage). Primary appraisal consists of:

  • Social influence: The extent to which a person thinks influential others think they should use new technology (see UTAUT; Venkatesh et al. 2003).

  • Hedonic motivation: The fun or pleasure derived from using a technology (see UTAUT 2; Venkatesh et al. 2012).

  • Novelty value: The degree to which a product's freshness and originality make it stand out from competitors (Im et al. 2015).

  • Perceived humanness: The extent to which technology has human characteristics, such as appearance, self-awareness, and mood (Kim – McGill 2018).

Fig. 1. The AI device use acceptance (AIDUA) model

Source: author, based on Ma and Huo (2023).

These variables impact performance and effort expectancy, shaping users' cognitive and affective attitudes and, ultimately, their willingness or objection to use GAI technology. This outcome stage is further moderated by the control variable of age: younger users are less likely to reject ChatGPT and demonstrate greater openness to new technology (Ma – Huo 2023).
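To make the model's staged logic concrete, the following minimal Python sketch traces one pass through the three appraisal stages. It is an illustration only: AIDUA specifies directions of influence, not numeric parameters, so all weights, thresholds, and the age adjustment below are assumptions of this sketch rather than estimates from Gursoy et al. (2019) or Ma and Huo (2023).

```python
from dataclasses import dataclass

@dataclass
class PrimaryAppraisal:
    """Stage 1: how important is the GAI stimulus to the user? (All values 0..1.)"""
    social_influence: float      # pressure from influential others
    hedonic_motivation: float    # fun or pleasure of use
    novelty_value: float         # perceived freshness and originality
    perceived_humanness: float   # human-like traits of the tool

def secondary_appraisal(p: PrimaryAppraisal) -> dict:
    """Stage 2: benefit-cost evaluation of behavioral options.
    The linear weights are illustrative assumptions."""
    performance = 0.4 * p.social_influence + 0.3 * p.novelty_value + 0.3 * p.perceived_humanness
    effort = 0.5 * p.hedonic_motivation + 0.5 * p.perceived_humanness
    return {"performance": performance, "effort": effort}

def outcome_stage(appraisal: dict, generation: str) -> str:
    """Stage 3: emotions toward the stimulus lead to actual behavior.
    Age enters as a control variable; the thresholds are assumed."""
    emotion = appraisal["performance"] - 0.5 * (1 - appraisal["effort"])
    threshold = 0.2 if generation in ("Y", "Z") else 0.35  # younger users assumed more open
    return "willingness to use" if emotion >= threshold else "objection to use"

# Example: low hedonic motivation and perceived humanness, as observed in the studied division.
p = PrimaryAppraisal(social_influence=0.3, hedonic_motivation=0.2,
                     novelty_value=0.4, perceived_humanness=0.1)
print(outcome_stage(secondary_appraisal(p), generation="X"))  # -> objection to use
```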

2.3 Technology-Organization-Environment (TOE)

Rather than examining technology in isolation, some authors suggest viewing GAI as a component of a socio-technical system (Cooper – Foster 1971) made up of diverse technological, economic, and political structures, all of which can only be understood by additionally accounting for macro-level consequences (Bengel 2020; Saetra 2021).

By looking at three crucial factors, the Technology-Organization-Environment (TOE) framework, originally developed by Louis G. Tornatzky and his colleagues (1990) and further enhanced since (Baker 2012; Awa et al. 2017), can be used to explain how employees come to accept GAI technology in organizations (Smit et al. 2024):

  • Technology: The acceptance of GAI technology depends on its perceived simplicity, compatibility with existing systems, and the expectation of enhancing task performance and productivity.

  • Organization: Employee adoption of GAI is affected by management support, the enterprise's size, and the scope of its operations.

  • Environment: Normative and mimetic pressures, such as industry trends and competitor initiatives, drive GAI adoption, with employees more likely to accept it when they perceive it as essential for staying competitive.

Although TOE is only a general framework, it draws attention to the fact that organizational adoption processes differ from individual consumer acceptance. For example, one study (Awa et al. 2017) found that the role of hedonic motivation in employee acceptance is generally insignificant and that adoption is driven more by TOE factors than by individual ones. The TOE framework's applicability to research on AI adoption is also supported by another study (Radhakrishnan et al. 2022), which identified organizational elements affecting the adoption of AI, including corporate culture, strategic roadmaps, senior management support, and the availability of trained personnel. Other findings show that organizational culture, personal habits, and job insecurity influence employees' intention to use AI (Dabbous et al. 2022); the authors found that the relationship between job insecurity and intention to use is fully mediated by perceived self-image and perceived usefulness, while the relationship between habit and intention to use is only partially mediated by them.

2.4 Human-robot interaction

Finally, it may be valuable to review a few relevant studies from a parallel corner of human-machine interaction research: the field of human-robot interaction (HRI). Over the last century, robots have developed from mere tools into cooperative companions in several industries, including manufacturing and health (Baker et al. 2018). AI requires an interface to interact with its environment, often relying on embodied agents, for example, physical robots that enable software to engage with their surroundings (Duan et al. 2022). Bojan Obrenovic and his colleagues (2024) further highlight the convergence between GAI tools and robots, emphasizing their shared capacity for human-like interaction. GAI mirrors HRI's collaborative and intuitive dynamics with its conversational style, adaptive learning, and context recognition. Both domains leverage anthropomorphism, enabling systems to mimic social and emotional intelligence and fostering trust and engagement. These factors blur the distinction between physical robots and digital AI tools, extending the relevance of HRI models to AI acceptance research.

Several authors have identified trust as a crucial prerequisite for HRI (Sanders et al. 2011; Hancock et al. 2011). Based on previous research (Billings et al. 2012) that identified the appearance, performance, and proximity of robots as the key aspects of trust in them, Olga Simon and her colleagues (2020) asked participants to use Lego bricks to visualize dimensions of trust toward robots, constructing representations of their internal models. They concluded that trust-relevant robot appearance can be broken down into aspects of color, gender, shape, and size, while multifunctionality, predictability, and control constitute the performance dimension. They operationalized proximity as the robot's mobility plus the interconnectivity between robots and humans. For a summary, see Table 1.

Table 1.

Key aspects of trust in human-robot interactions

Dimensions | Aspects
Appearance | Color, Gender, Shape, Size
Performance | Multifunctionality, Predictability & Control
Proximity | Mobility, Interconnectivity

Source: author, based on Simon et al. (2020).
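Because this model later serves as the study's coding framework (see Section 3.4), it is worth noting how compactly it can be expressed as a data structure. The following snippet merely transcribes Table 1 into Python; the dictionary layout is the only addition.

```python
# Trust-relevant dimensions and aspects in human-robot interaction,
# transcribed from Simon et al. (2020); see Table 1.
TRUST_IN_ROBOTS = {
    "Appearance":  ["Color", "Gender", "Shape", "Size"],
    "Performance": ["Multifunctionality", "Predictability", "Control"],
    "Proximity":   ["Mobility", "Interconnectivity"],
}
```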

Many GAI applications are well-suited for assistive and interlocutor roles, leading users to anthropomorphize them and perceive their outputs as akin to human intelligence (Gursoy et al. 2019). Whether GAI is viewed as a teammate or a tool, trust toward it is critically important for its acceptance, and it has become one of the most frequently included variables in recent AI acceptance studies (Kelly et al. 2023). Trust was identified as a significant predictor of behavioral intentions in the extended TAM and was found to influence use behavior when considering other UTAUT model variables. In the eyes of the public, trustworthiness is one of the critical issues determining whether AI is treated as humanity's boon or bane (Gerlich 2023). For this reason, it may be worthwhile to utilize the results of HRI research as a lens to investigate the aspect of trust in GAI.

3 Research design

To gain a deeper understanding of employees' hidden assumptions and unconscious beliefs toward Generative AI (GAI), this study investigates how employees (co-)construct the notion of GAI related to work. In particular, the research seeks to qualitatively explore this topic through the perspective of trust in GAI technology among employees.

3.1 Research methodology

The limitations of self-administered questionnaire surveys and qualitative methods based on self-report (in-depth interviews and focus groups) have long been recognized (Alvesson 2003; De Leeuw 2008; Gauntlett 2007). The complexity of human-machine interaction, especially trust in GAI, is imbued with many underlying tensions and paradoxes, making it a sensitive topic to address (Liamputtong 2007). A methodology was needed that allowed participants to explore the topic safely.

To counter these challenges, this study chose a projective qualitative research methodology (Donoghue 2000). Projective techniques assume that their indirect nature encourages respondents to move beyond good-sounding or unconsidered answers and reveal unconscious parts of their personality in response to stimuli (Will et al. 1996; Steinman 2009). Among projective techniques, expressive ones are especially suitable for exploring the respondent's inner world (Krugman 1960; Donoghue 2000). These building or modeling techniques (Lane 1999) are gaining popularity in research; for example, a rising number of articles have utilized Lego Serious Play® in connection with business, management, and organization studies (Schwabenland – Kofinas 2003). Another expressive projective method, psychodrama, has long been used successfully in various fields, including marketing (Dichter 1943) and leadership development (Lippitt 1943). These methods are all founded on an extended version of constructivist learning theory (Piaget 1955), claiming that 'everything [is] understood by being constructed' (Papert – Harel 1991: 3), and on the evolutionary capability of humans to imagine (co-)pretended realities (Kapitany et al. 2022).

The psychodramatic method, rooted in the works of Jacob L. Moreno (1934; 1949), employs pretensive work to uncover hidden assumptions and unconscious beliefs, representing or modeling them within the dramatic space or surplus reality (Moreno et al. 2000). While psychodrama was initially developed as a group format, its monodramatic version (Sacks – Fonseca 2004; Giacomucci – Giacomucci 2021) is highly suitable for individual processes such as (executive) coaching (Schaller 2022). Consequently, this study could utilize it as a monodramatic, expressive, and projective qualitative technique. Relevant psychodrama techniques include (Cruz et al. 2018):

  • Soliloquy or monologue: Thinking out loud to express thoughts, feelings, and intentions.

  • Symbolic representation: Depicting complex situations and relationships in an emotionally authentic way.

  • Intermediate objects: Using physical items to symbolically model internal objects, such as emotions, relationships, or aspects of the self.

  • Concretization: Representing abstract concepts or bodily sensations as external forms through props or role-playing.

  • Role reversal: Physically and psychologically placing oneself in the position of another element in the dramatic space, thereby adopting a different perspective.

As participants' internal representations become visible, tangible, and plastic in the dramatic space, this partially physical and partially (co-)pretended model of inner reality can be studied independently and regarded as an artifact output. For specific examples, see Supplementary materials 1 and 3.

The psychodrama director's techniques help participants shift their focus from the formalities of the interview to a more introspective exploration. The interview situation gradually transforms into a self-awareness opportunity as they warm up. Instead of merely articulating self-confessed opinions, interviewees confront their hidden assumptions and unconscious beliefs, often bringing unarticulated thoughts and repressed emotions to the surface.

Although the psychodramatic method is inherently diagnostic and interventionist, this study specifically employed it to uncover the interviewees' unconscious internal models of GAI. By prioritizing this exploratory aspect, the research aimed to reveal deep-seated perspectives that might not emerge through conventional qualitative methods.

3.2 Data collection and sample

The theoretical sampling approach employed here was a combined one: we sought a case that is typical, normal, and average, yet can also serve as a critical case, enabling potential transferability and applicability to other cases – as it is possible to generalize from a single case (Flyvbjerg 2006). The following criteria were established:

  1. Completeness Criterion: The research must encompass an entire organizational unit or a cohesive working group.

  2. Content Criteria:

    a. The organizational unit must be technologically competent and regularly utilize digital tools in its work processes.
    b. The unit should not be involved in developing GAI or similar tools, ensuring it does not consist of data scientists or AI professionals.
    c. The unit should experiment with integrating GAI tools without management directives, forced implementation, or 'hard' system rollouts, facing similar technology adoption challenges as customer end-users, making its members comparable to them.

Opportunistic and convenience considerations also influenced the final selection of the organization, as the research was conducted in the first organization that met these primary criteria.

Data was collected from a small Hungarian organization with 55–60 employees, positioned at the boundary between small and medium-sized enterprises, operating within the professional service (tertiary) sector, offering consulting services. Without significant mimetic pressures from their competitors, one of their divisions decided to experiment with integrating GAI tools by setting up a small project team to test them, fitting all the content criteria. Their project team experimented with tasks such as text generation and transformation, image creation, and presentation production, utilizing tools like OpenAI (ChatGPT, DALL-E), Designs.ai, Gamma, and Canva, and shared their lessons with the division. This typical, non-specialist organizational unit, engaging in self-driven, bottom-up, organic GAI exploration, seemed highly relevant as a critical case for deriving insights into organizational technology adoption challenges, particularly those resembling the experiences of customer end-users.

Contact was established through professional relationships. Nine one-hour interviews were conducted between December 2023 and July 2024, covering all eight full-time members of a divisional team and one other member of their GAI project team. Whether the research reached theoretical saturation (Klenke 2008) remained an open question, although the data collection included every member of the targeted division and other involved stakeholders, such as Interviewee #4 (see Table 2). The final three interviews were scheduled after the organization conducted internal GAI workshops, for three key reasons. First, the research aimed to better understand employees' experiences and evolving relationships with GAI rather than merely capturing a snapshot of the division. Second, it enabled an assessment of the impacts of in-house learning and development interventions. Third, this approach facilitated the collection of more informed and experience-based mental representations from the remaining participants, addressing the limited GAI familiarity and motivation observed in earlier interviews.

Table 2.

Interviewee data

Order | Generation | Experience | Gender | Self-declared AI maturity | At the time of the interview
#1 | Z | Junior | Woman | Negligible experience | –
#2 | X | Senior | Man | Theoretical knowledge | GAI project member
#3 | Z | Junior | Woman | Some experience | GAI project member
#4 | Z | Assistant | Woman | Moderate experience | GAI project member
#5 | X | Senior | Woman | Negligible experience | –
#6 | X | Senior | Woman | Moderate experience | –
#7 | Z | Junior | Woman | Some experience | Participated in the GAI workshop
#8 | Y | Medior | Woman | Significant experience | Participated in the GAI workshop
#9 | Y | Medior | Man | Negligible experience | –

Source: author.

The interviewees are presented in the order of their interviews (see Table 2) and categorized by generation, professional experience, gender, GAI maturity, and other relevant information.

  • Generation: Interviewees span Generation X (born 1965–1980), Generation Y (1981–1996), and Generation Z (1997–2012).

  • Experience levels: Seniors have over ten years of industry experience, juniors have a maximum of a few years, and mediors fall between these extremes. Interviewee #4 was an assistant from another support division.

  • Gender distribution: Seven interviewees identified as women, and two as men.

  • GAI maturity (self-reported, retrospectively categorized):

    • Negligible: (Close to) zero experience with GAI.

    • Theoretical Knowledge: Minimal practical experience but considerable subject knowledge.

    • Some: Basic knowledge with limited experimentation.

    • Moderate: Intermediate-level experience beyond the basics.

    • Significant: Advanced knowledge, wide-ranging tools, and daily use routines.

  • Other: Interviewees #2–4 comprised the organization's GAI project team. Of the final three interviewees, only #7 and #8 participated in the in-house GAI workshops; #9 did not.

The study used the expertise of a psychodrama-trained director, who conducted the interviews while the researcher concentrated on making observations and taking notes.

3.3 Monodramatic interview structure

While projective qualitative methods are often understood as unstructured (Steinman 2009), psychodramatic processes follow specific guidelines. For interview questions, see Supplementary material 2. Before the monodramatic exploration started, as a warm-up, the interviewer asked interviewees about their expectations for the interview and their personal understanding of GAI, encouraging them to define it in their own terms.

The monodramatic interview followed the general outline of a short psychodrama session (Moreno 1949). Interviewees were asked to visualize things in the dramatic space using themselves, furniture, or objects available in the room and to voice their feelings and thoughts. If the interviewees had placed something in the dramatic space, they were not only asked to talk about it but also to reverse roles and speak from the point of view of that party or thing (Cruz et al. 2018).

  1. The interviewees started by representing themselves in their work environment. Some participants took up a specific position in the interview room; others used intermediate objects to model their professional position or identity.

  2. GAI technology was then positioned within the dramatic space, represented by a selected intermediate object chosen to symbolize it.

  3. The third step examined the relationship between the interviewee's position and the GAI.

  4. Finally, the broader organizational and sociopolitical context could also be represented in the dramatic space if the interviewee independently indicated them as subjectively relevant actors. For instance, if they mentioned specific stakeholders or external socio-economic forces influencing their GAI relationship, they materialized these parties using intermediate objects.

At the end of the interview, the interviewer asked the interviewee to reflect on the monodramatic interview experience briefly and allowed comments beyond dramatic exploration.

3.4 Data analysis

To build up a data corpus, photographs of the intermediate objects were converted into summary visual diagrams (see Supplementary material 1), observations of the interviewees' behavior were compiled into a separate database, and the audio recordings of the interviews were transcribed into text using the AlRite program, with manual corrections and the addition of verbal gestures such as laughter (see Supplementary material 3).

Since this research examines the interviewees' internal representations rather than their professed opinions, the data analysis focused primarily on the created artifacts. These externalized internal models, comprising physical intermediate objects, behavioral cues, and verbal expressions, were interpreted using information from the behavioral observation records and interview transcripts, thus achieving methodological triangulation (Denzin 1989). To keep the amount of data manageable, the data corpus was reduced to a research-relevant data set using the method of meaning condensation (Kvale 1996). The data set was then analyzed using thematic analysis (Holloway – Todres 2003), which allows patterns and themes to be searched for, analyzed, and interpreted across the entire data set. The deductive, top-down form of the method (theoretic thematic analysis, TTA) allows the data to be coded against a pre-existing theoretical framework (Braun – Clarke 2006), enabling a detailed analysis of the interviewees' externalized representations. This thematic coding strategy is structurally related to the open–axial–selective coding steps of Strauss and Corbin (1990) but is more general and easier to fit into theoretical frameworks.
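To illustrate the deductive logic of this coding step, consider the minimal sketch below. The actual coding was performed manually on the condensed data set; the codebook keywords, the code_segment helper, and the example segment are all invented here purely for illustration.

```python
# Illustrative-only sketch of the deductive (TTA) coding step: each condensed
# data segment is checked against a pre-defined codebook. The keywords are invented.
CODEBOOK = {
    ("Appearance", "Size"):     ["small", "large", "huge"],
    ("Appearance", "Shape"):    ["shape", "form", "branches"],
    ("Performance", "Control"): ["control", "uncontrollable", "opaque"],
    ("Proximity", "Mobility"):  ["fast", "expanding", "unstoppable"],
}

def code_segment(segment: str) -> list[tuple[str, str]]:
    """Return every (dimension, aspect) code whose keywords occur in the segment."""
    text = segment.lower()
    return [code for code, keywords in CODEBOOK.items()
            if any(keyword in text for keyword in keywords)]

print(code_segment("Small then very large, and totally uncontrollable."))
# -> [('Appearance', 'Size'), ('Performance', 'Control')]
```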

The process of externalization during the monodramatic interviews gave digital GAI a visible, tangible, machine-like presence. This trend, coupled with the mentioned convergence between GAI and robotics (Obrenovic et al. 2024), blurred the distinction between digital GAI tools and physical robots. Consequently, the 'trust in robots' model (Simon et al. 2020; Table 1) emerged as a fitting theoretical framework for this study, particularly as it aligned with our constructivist approach.

In the first step of coding, a pre-defined codebook based on the theoretical framework was developed and applied to the data set following the TTA approach (see Supplementary material 4). It was observed that gender, an aspect of Appearance, had no supporting data in the data set, while a significant portion of the uncoded data could be summarized under a new aspect of distance. During the intermediate axial coding phase, the gender code was therefore removed, and a new distance code was introduced, merging data on the physical distances between symbolic objects with expressed emotional distances.
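The axial revision described above amounts to two small operations on the codebook. The snippet below is again only a sketch under the same assumptions; it restates the Table 1 structure and applies the two changes.

```python
# Sketch of the axial coding revision: the unused 'Gender' code is dropped and a
# new 'Distance' code is added, merging physical and emotional distance data.
codebook = {
    "Appearance":  ["Color", "Gender", "Shape", "Size"],
    "Performance": ["Multifunctionality", "Predictability", "Control"],
    "Proximity":   ["Mobility", "Interconnectivity"],
}
codebook["Appearance"].remove("Gender")   # gender: no supporting data in the data set
codebook["Proximity"].append("Distance")  # physical + emotional distances merged
```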

In the third round, codes were grouped into themes, and a thematic map was created to illustrate the alignment between the theoretical framework and the data. The results from these rounds were iteratively reviewed and refined, with the final themes labeled to emphasize how they support, refine, or challenge the dimensions in the theoretical framework.

Significant effort was invested in reporting the data in an analytical manner that does not conceal inconsistencies, to ensure reliability and gain a comprehensive understanding. This involved immersion in the data, repeatedly reading the transcripts, and comparing them with other data sources. To further ensure reliability, the study paid particular attention to the latent layers of meaning in the data set (Boyatzis 1998). For example, the Appearance dimension contains not only codes related to the color, shape, and size of the intermediate objects used but also codes derived from information physically or verbally expressed by the interviewees, providing a more precise depiction of their mental representations (e.g., 'Small then very large', #1a). Combining three data sources to cross-verify findings strengthened the study's validity. For member checking, participants were consulted to verify the accuracy of the interpreted data, ensuring that their experiences were authentically represented (Thomas 2016).

4 Findings

Figure 2 summarizes the thematic map of the theoretical thematic analysis.

Fig. 2. Thematic map of trust toward Generative AI (GAI)

Source: author.

4.1 Appearance

The appearance of the internal models materialized in the artifacts seems contradictory. One notable characteristic is that, while the term Generative AI (GAI) implies a form of intelligence, it was never depicted as human-like within the dramatic space. Participants represented GAI as a Non-human vitality: devoid of any genderedness in all interviews, challenging to define in shape (#1a), held together by walls (#1b) while expanding (#2; #5), ‘small then very large’ (#1a), and capable of penetration (#7) and multiplication (#8).

GAI was often represented by non-figurative decorative objects (#1a; #1b; #5) or larger potted plants (#2; #8; #9). Representing GAI with a plant may seem paradoxical, yet like a plant, GAI needs proper nurturing (data, training, and fine-tuning) to thrive. Just as a plant enhances its environment when well-maintained, properly integrated GAI can significantly boost productivity and creativity; without adequate care, both fail to reach their full potential.

Apart from the houseplants, all intermediate objects were of a manageable size and easy to handle. Most of these served practical purposes, such as writing, like a pen (#3) or a board marker (#6), or opening things as they rotate, like a can opener (#4) or a pinecone reinterpreted as a drill head (#7). This pattern highlights another recurring theme: the portrayal of GAI as a Handy tool.

The appearance of GAI can be summarized as a Janus-faced presence for the division under study. A clear example of this duality is Interviewee #1, who, without any prior instruction, immediately distinguished between two meanings of GAI: ‘On a feeling level, it's behind me. Like, I can't see it, I can't even see what it’s like. It's distant. But in reality, I think about it as something… somewhere over there.’ (#1; see Supplementary material 3 for context). These themes showcase the distinct conceptual framing of GAI as inherently non-human.

4.2 Performance

The ‘multifaceted’ (#7) nature of GAI is widely acknowledged in the interviews, with metaphors such as ‘has many branches’ (#7; #8) and ‘can be opened in thirty different ways’ (#4) reflecting its perceived versatility. GAI is described as ‘full of possibilities’ (#1b) and ‘features’ (#3), yet there is uncertainty, as ‘It's not known what it really knows’ (#5). Its capacity is compared to the full potential of the human brain (#6) or to ‘unfathomable depths’ (#1b). However, users express disappointment when the outputs fall short of expectations, being merely ‘usable’ (#3) or ‘rough’ (#4). As one user commented, ‘I have to touch it to make it work!’ (#1b). GAI proves valuable only when integrated with human effort. This Blunted potential is evident in at least two areas: the functions are often seen as ‘gimmicks’ with ‘no solid business use case yet’ (#2), and while they enhance tasks ‘I don't like to do’ (#9), there is little interest in learning to use them.

Interviewees expressed a mix of cautious optimism and significant concerns regarding GAI's predictability and control. On the one hand, GAI was described as ‘relatively credible, reliable’ (#8), reflecting a degree of trust and alignment with user needs in its current capabilities, as ‘for now, [GAI is] at the service of the people’ (#6). On the other hand, many participants emphasized that it is ‘unsolvable’ (#1a), ‘opaque’ (#1b), and challenging to manage, being ‘too fast’ (#1b) and ‘uncontrollable’ (#1b), which elicited uncertainty (#2) and concern (#5). The greatest fear was that, as an independent force, it ‘makes way for itself’ (#7) and ‘we cannot precisely control its development’ (#9), as ‘the powerful own and control it’ (#3), leading to unease and a Fear of losing control over its trajectory and ultimate impact. While GAI's current state generated more disappointment than optimism among the group, its wild potential and the lack of control fostered distrust, highlighting Unmet promises about the performance of GAI.

4.3 Proximity

None of the interviewees placed the intermediate object representing GAI close to their professional selves. It was consistently positioned behind their back (#1a; #5), on the periphery (#2; #4), far away (#6; #7; #9), hidden (#3), or at best visible but not easily reachable (#1b). This distancing was particularly evident in their roles as employees (#8), with GAI described as ‘outside the world of work’ (#9) and rarely used by even experienced users outside of private life (#6). This reflects a sense of professional disinterest, an attitude of Only when we have to, which is surprising, especially for the project team. As one member summarized, ‘I could go there, but as long as I have a pen, what for?’ (#3).

The group's passivity and detachment become even more pronounced when contrasted with GAI's perceived interconnectivity, mobility, and dynamism. It was described as an unstoppable force connected to everything (#1a), 'slowly entwines everything' (#2), and 'always hits a wall, but drills through every obstacle.' (#7). While some interviewees adopted a wait-and-see approach (#7), delaying integration but expecting GAI to reach them eventually, others expressed concern, keeping a watchful eye on it even from a distance (#2), indicating conscious and unconscious Avoidance. These themes highlight a growing tension between GAI's rapid expansion and employees' hesitant, reserved attitude and emotional detachment, revealing a disconnect between technological advancement and human readiness to engage with it.

5 Discussion

The emerging themes describe why and how the observed organization's technology exploration and acceptance process was plagued with multiple issues. This study also confirms and nuances many findings of previous research.

All three dimensions of trust in robots (Billings et al. 2012) appeared relevant for Generative AI (GAI), confirming the model's relevance and applicability. The most significant aspect of Appearance was the GAI representation's shape, followed by its size. However, the relevance and impact of color and gender on trust need further research. Performance was confirmed as a key dimension influencing trust in GAI. Results suggest that the self-managed emotional and physical distance between GAI and the self may be a new Proximity aspect (Simon et al. 2020). In line with previous studies (Fujii 2022; Leon et al. 2022; Vorm – Combs 2022; Chatterjee et al. 2023; Kelly et al. 2023), this study shows that for successful workplace implementation, employees need to perceive AI as an opportunity and not a threat (Bhargava et al. 2021). Analyzing human–GAI interactions through an HRI framework with monodramatic techniques also revealed the significance of physicality and embodiment in technology-mediated relationships.

The two distinct GAI representations in Interview #1, and more broadly, GAI's Janus-faced presence, exemplify the psychological defense mechanism of splitting as described in Melanie Klein's (1996) object relations theory. Splitting involves mentally dividing objects or experiences into extreme components to manage conflicting emotions, often leading to polarized perceptions. Klein posits that the subject (individual) develops their psyche and relationships based on early interactions with significant others (objects), where the quality of these early relationships profoundly shapes the subject's internal world and their capacity to form and maintain relationships throughout life. Object-relations theory has been utilized to explore problematic digital behavior in the workplace (Whitty – Carr 2006). Its close relative, attachment theory (Bowlby 1982), demonstrates that attachment style can be used to predict how people feel about AI, as attachment anxiety is associated with less trust in AI (Gillath et al. 2021). Although the latter research was based on hypothetical AI scenarios, it remains relevant, as two of its three studies focused on applications (AI-based medical diagnostic aids and personal relationship aids) that GAI tools, such as ChatGPT, can also produce from available data. These further reinforce the conclusions of other studies (Greiner et al. 2021), emphasizing the importance of emotional factors.

Regarding acceptance models, the central theme of Unmet performance promises indicates low perceived usefulness, performance expectancy, and job relevance, as well as a lack of result demonstrability in the division's work-related tasks, all relevant factors in TAM 2 (Venkatesh – Davis 2000). GAI-experienced interviewees' preference to use it in private life emphasizes compatibility issues, mirroring the extension of UTAUT (Blut et al. 2022). They felt that while GAI cannot currently replace humans at work, its rapid development and unpredictability raise concerns, touching on job insecurity (Dabbous et al. 2022). These findings support a study in which the top three reasons against using GAI tools were perceived risk, language barrier, and technological anxiety (Pillai et al. 2024).

In AIDUA terms, while the Handy tool theme suggests high performance expectancy combined with low effort expectancy, GAI's conceptualization as a Non-human vitality severely impacted perceived humanness (Ma – Huo 2023). Only Interviewee #4 mentioned that using GAI is somewhat fun and exciting, showing low hedonic motivation and limited novelty value in the division (Im et al. 2015). Social influence also remained weak, as management encouraged but did not prescribe GAI usage. All these factors led to a very limited primary appraisal that negatively influenced behavioral options (secondary appraisal), reducing the chances of developing a habit of using GAI for work (Blut et al. 2022; Dabbous et al. 2022).

Contrary to the AIDUA model, age and seniority did not appear to impact GAI adoption directly; however, male interviewees (#2; #9) heavily criticized GAI's usefulness, aligning with findings that gender influences users' willingness to reject ChatGPT (Ma – Huo 2023). While the AIDUA model operationalizes trust as the result of a primary appraisal, the findings, in line with Object-Relations Theory and Attachment Theory, suggest that trust in GAI is a complex phenomenon involving both prior and posterior phases, likely with bidirectional causality. These indicate that a more nuanced AIDUA model is needed.

The findings also highlight that hedonic motivation often holds little significance in organizational contexts, and successful adoption requires that the scope of operations significantly benefit from GAI use (Awa et al. 2017). The lack of organizational pressure and drive further hindered progress. “Soft” management efforts, such as forming a GAI project team or hosting in-house workshops, may be necessary but insufficient (Radhakrishnan et al. 2022). These results indirectly confirm previous research suggesting that contextual factors like technological, organizational, and environmental aspects (Awa et al. 2017; Radhakrishnan et al. 2022) are often more relevant for organizational technology adoption than traditional customer-based models.

While the hype around GAI might create high normative pressure to adopt (Leaver – Srdarov 2023), the low mimetic (competitor) pressure in their industry diminished its impact. This was likely one of the primary explanations for the entire undertaking. Not wanting to miss out on the GAI hype, the division tested its implementation. However, the initiative remained disorganized due to a lack of clear organizational needs and goals, as well as a lack of internal and external pressure. Thus, adoption depended almost entirely on the characteristics of individual employees and their respective teams.

The division showed a distinct, avoidant affective attitude toward GAI, exemplified by the unenthusiastic GAI project team, where the most eager member was an assistant temporarily assigned from a different division (Interviewee #4). Though cognitively open to GAI, the division's strong distrust and disillusionment at the affective level illustrate why and how a technology adoption project can be ineffective, even when stakeholders appear open and optimistic at a cognitive level. This might be transferable to many other cases.

Despite the study's focus on a relatively small firm in the professional service sector, with only 55–60 employees, the findings may also apply to other industries and medium- and large-sized enterprises in cases where the technology adoption attempt involves voluntary, self-driven, bottom-up, organic GAI exploration and adaptation within a smaller unit, such as the observed division of eight full-time members. New management models for GAI adoption should address these situations and challenges, particularly in the SME sector. Additionally, organizations can utilize the results of this research to enhance their GAI implementation process.

6 Conclusions

This qualitative study examined the internal representations of employees from a Hungarian organizational division in the professional service sector that attempted to integrate Generative AI (GAI) technologies from various providers, including OpenAI (ChatGPT, DALL-E), Designs.ai, Gamma, and Canva, for functions such as text generation and transformation, image creation, and presentation production. The interviewees' internal models of GAI and trust were externalized by deploying monodramatic projective techniques taken from psychodrama, such as soliloquy, symbolic representation, concretization, intermediate objects, and role reversal (Cruz et al. 2018). These revealed hidden assumptions and unconscious beliefs in a tangible format, offering insights into employee perspectives on organizational technology acceptance.

The thematic codes of this research, the appearance of GAI as a Janus-faced presence (simultaneously a Handy tool and a Non-human vitality), the Unmet promises of its performance (marked by Blunted potential yet coupled with Fear of losing control), and its Avoided proximity (characterized by a reluctance to engage unless necessary, contrasted with the perception of GAI as an Unstoppable force) highlight an inherent ambiguity toward GAI, reflecting organizational acceptance gone awry.

The study's key contributions include fine-tuning HRI models and monodramatic methods for GAI, highlighting the significance of physicality and embodiment in technology-mediated relationships. It identifies trust in GAI as a complex phenomenon with potential reciprocal causality and emphasizes the importance of affective attitudes over cognitive acceptance. Additionally, it confirms the relevance of contextual factors over traditional customer-based models in organizational technology acceptance and illustrates how adoption projects can fail despite cognitive openness.

While the hype around GAI might have driven many companies to attempt GAI integration, gradual, “organic” organizational adaptation strategies face many hurdles. Without a clear strategy and strong management support, the individual characteristics of employees, particularly perceived trust, become the key determinants of adoption.

Results imply that technological change management should prioritize organizational factors and emotional dynamics to achieve a smoother and more humane adoption of new technologies. Organizations must establish precise business needs and strategic goals to align employee efforts and maintain relevance. To address concerns about reliability and perceived lack of control, decision-makers should implement transparent policies, provide targeted training, and establish open channels for employee feedback. Additionally, identifying and supporting early adopters within the organization can help them serve as role models and advocates, fostering greater acceptance and effective use of GAI technologies.

The study's limitation as a single, albeit critical, case study conducted in a Hungarian organization's division highlights the need for further research. Future studies should explore the complex nature of trust in GAI, particularly its reciprocal causality, and how contextual and socio-technical factors interact in different organizational settings. Management perspectives, affective dimensions of technology acceptance, and longitudinal approaches can reveal how trust, usability, and GAI integration evolve over time. A follow-up study could investigate how monodramatic interviewing fosters individual and organizational change through heightened awareness, offering more profound insights into the role of embodiment in employee attitudes. Additionally, examining GAI's impact on work identities, role definitions, and team dynamics could provide a broader understanding of its influence. The innovative application of Morenian dramatic techniques (Cruz et al. 2018) demonstrates this methodology's potential for uncovering hidden assumptions and unconscious beliefs, offering valuable insights for research and practice in other contexts.

Acknowledgment

Supported by the ÚNKP-23-3-II-Corvinus-51 New National Excellence Program of the Ministry of Culture and Innovation from the source of the National Research, Development and Innovation Fund. The author also sincerely appreciates the support of his supervisors, Sándor Takács and András Gelei.

Supplementary material

Supplementary data to this article can be found online at https://doi.org/10.1556/204.2025.00002.

References

  • Alvesson, M. (2003): Beyond Neopositivists, Romantics, and Localists: A Reflexive Approach to Interviews in Organizational Research. Academy of Management Review 28(1): 1333. https://doi.org/10.5465/amr.2003.8925191.

    • Search Google Scholar
    • Export Citation
  • Awa, H. O.Ukoha, O.Igwe, S. R. (2017): Revisiting Technology-Organization-Environment (TOE) Theory for Enriched Applicability. The Bottom Line 30: 222. https://doi.org/10.1108/BL-12-2016-0044.

    • Search Google Scholar
    • Export Citation
  • Baker, J. (2012): The Technology–Organization–Environment Framework. In: Dwivedi, Y.Wade, M. – Schneberger, S. (eds): Information Systems Theory. Integrated Series in Information Systems 28. New York: Springer, pp. 231245. https://doi.org/10.1007/978-1-4419-6108-2_12.

    • Search Google Scholar
    • Export Citation
  • Baker, A.Phillips, E.Ullman, D.Keebler, J. R. (2018): Toward an Understanding of Trust Repair in Human-Robot Interaction: Current Research and Future Directions. The ACM Transactions on Interactive Intelligent Systems 8(4): 130. https://doi.org/10.1145/3181671.

    • Search Google Scholar
    • Export Citation
  • Bao, Y. (2009): Organizational Resistance to Performance‐enhancing Technological Innovations: A Motivation‐Threat‐Ability Framework. Journal of Business & Industrial Marketing 24(2): 119130. https://doi.org/10.1108/08858620910931730.

    • Search Google Scholar
    • Export Citation
  • Bengel, D. (2020): Organizational Acceptance of Artificial Intelligence: Identification of AI Acceptance Factors Tailored to the German Financial Services Sector .Wiesbaden: Springer.

    • Search Google Scholar
    • Export Citation
  • Bhargava, A.Bester, M.Bolton, L. (2021): Employees’ Perceptions of the Implementation of Robotics, Artificial Intelligence, and Automation (RAIA) on Job Satisfaction, Job Security, and Employability. Journal of Technology in Behavioral Science 6: 106113. https://doi.org/10.1007/s41347-020-00153-8.

    • Search Google Scholar
    • Export Citation
  • Billings, D. R.Schaefer, K. E.Chen, J. Y. C.Hancock, P. A. (2012): Human-Robot Interaction: Developing Trust in Robots. 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 109110. https://doi.org/10.1145/2157689.2157709.

    • Search Google Scholar
    • Export Citation
  • Blunt, M.Chong, A. Y. L.Tsigna, Z.Venkatesh, V. (2022): Meta-Analysis of the Unified Theory of Acceptance and Use of Technology (UTAUT): Challenging its Validity and Charting a Research Agenda in the Red Ocean. Journal of the Association for Information Systems 23(1): 1395. https://doi.org/10.17705/1jais.00719.

    • Search Google Scholar
    • Export Citation
  • Bowlby, J. (1982): Attachment and Loss , Vol. 1. London: Random House.

  • Boyatzis, R. E. (1998): Transforming Qualitative Information: Thematic Analysis and Code Development .Thousand Oaks: Sage Publications.

  • Braun, V.Clarke, V. (2006): Using Thematic Analysis in Psychology. Qualitative Research in Psychology 3(2): 77101. https://doi.org/10.1191/1478088706qp063oa.

    • Search Google Scholar
    • Export Citation
  • Chatterjee, S.Rana, N. P.Khorana, S.Mikalef, P.Sharma, A. (2023): Assessing Organizational Users’ Intentions and Behavior to AI Integrated CRM Systems: A Meta-UTAUT Approach. Information Systems Frontiers 25: 12991313. https://doi.org/10.1007/s10796-021-10181-1.

    • Search Google Scholar
    • Export Citation
  • Choi, D.Chung, C. Y.Seyha, T.Young, J. (2020): Factors Affecting Organizations’ Resistance to the Adoption of Blockchain Technology in Supply Networks. Sustainability 12(21): 8882. https://doi.org/10.3390/su12218882.

    • Search Google Scholar
    • Export Citation
  • Cooper, R.Foster, M. (1971): Sociotechnical Systems. American Psychologist 26(5): 467474. https://doi.org/10.1037/h0031539.

  • Cruz, A.Sales, C. M.Alves, P.Moita, G. (2018): The Core Techniques of Morenian Psychodrama: A Systematic Review of Literature. Frontiers in Psychology 9: 1263. https://doi.org/10.3389/fpsyg.2018.01263.

    • Search Google Scholar
    • Export Citation
  • Dabbous, A.Aoun Barakat, K.Merhej Sayegh, M. (2022): Enabling Organizational Use of Artificial Intelligence: An Employee Perspective. Journal of Asia Business Studies 16(2): 245266. https://doi.org/10.1108/JABS-09-2020-0372.

    • Search Google Scholar
    • Export Citation
  • Davis, F. D.Bagozzi, R. P.Warshaw, P. R. (1989): Technology Acceptance Model. Journal of Management Science 35(8): 9821003. https://doi.org/10.1007/978-3-030-45274-2.

    • Search Google Scholar
    • Export Citation
  • De Leeuw, E. (2008): Self-Administered Questionnaires and Standardized Interviews. In: Alasuutari, P.Bickman, L.Brannen, J. (eds.): Handbook of Social Research Methods .London: SAGE Publications, pp. 313327. https://doi.org/10.4135/9781446212165.

    • Search Google Scholar
    • Export Citation
  • Denzin, N. K. (1989): The Research Act: A Theoretical Introduction to Sociological Methods, 3rd ed. Englewood Cliffs: Prentice Hall.

  • Dichert, E. (1943): Raw Material for the Copywriter’s Imagination Through Modern Psychology. Printers Ink, 5 March, 6368.

  • Donoghue, S. (2000): Projective Techniques in Consumer Research. Journal of Consumer Sciences 28: 4753.

  • Duan, J.Yu, S.Tan, H. L.Zhu, H.Tan, C. (2022): A Survey of Embodied AI: From Simulators to Research Tasks. IEEE Transactions on Emerging Topics in Computational Intelligence 6(2): 230244. https://doi.org/10.1109/TETCI.2022.3141105.

    • Search Google Scholar
    • Export Citation
  • Familoni, B. T.Onyebuchi, N. C. (2024): Advancements and Challenges in AI Integration for Technical Literacy: A Systematic Review. Engineering Science & Technology Journal 5(4): 14151430. https://doi.org/10.51594/estj.v5i4.1042.

    • Search Google Scholar
    • Export Citation
  • Festinger, L. (1962): Cognitive Dissonance. Scientific American 207(4): 93106. http://www.jstor.org/stable/24936719.

  • Feuerriegel, S.Hartmann, J.Janiesch, C.Zschech, P. (2024): Generative AI. Business & Information Systems Engineering 66: 111126. https://doi.org/10.1007/s12599-023-00834-7.

    • Search Google Scholar
    • Export Citation
  • Flyvbjerg, B. (2006): Five Misunderstandings About Case Study Research. Qualitative Inquiry 12(2): 219245. https://doi.org/10.1177/1077800405284363.

    • Search Google Scholar
    • Export Citation
  • Fujii, M. (2022): Simulations of the Diffusion of Innovation by Trust–Distrust Model Focusing on the Network Structure. Review of Socionetwork Strategies 16: 527544. https://doi.org/10.1007/s12626-022-00113-z.

    • Search Google Scholar
    • Export Citation
  • Gauntlett, D. (2007): Creative Explorations: New Approaches to Identities and Audiences. London: Routledge. https://doi.org/10.4324/9780203961407.

    • Search Google Scholar
    • Export Citation
  • Gerlich M. (2023): Perceptions and Acceptance of Artificial Intelligence: A Multi-Dimensional Study. Social Sciences 12(9): 502. https://doi.org/10.3390/socsci12090502.

    • Search Google Scholar
    • Export Citation
  • Giacomucci, S.Giacomucci, S. (2021): Other Experiential Approaches Similar to Psychodrama. In: Giacomucci, S. (ed.): Social Work, Sociometry, and Psychodrama: Experiential Approaches for Group Therapists, Community Leaders, and Social Workers, pp. 291308. https://doi.org/10.1007/978-981-33-6342-7_15.

    • Search Google Scholar
    • Export Citation
  • Gillath, O. – Ai, T. – Branicky, M. S. – Keshmiri, S. – Davison, R. B. – Spaulding, R. (2021): Attachment and Trust in Artificial Intelligence. Computers in Human Behavior 115: 106607. https://doi.org/10.1016/j.chb.2020.106607.

  • Gkinko, L. – Elbanna, A. (2022): Hope, Tolerance and Empathy: Employees' Emotions When Using an AI-enabled Chatbot in a Digitalised Workplace. Information Technology & People 35(6): 1714–1743. https://doi.org/10.1108/ITP-04-2021-0328.

  • Greiner, C. – Jovy-Klein, F. – Peisl, T. (2021): AI as Co-workers: An Explorative Research on Technology Acceptance Based on the Revised Bloom Taxonomy. In: Arai, K. – Kapoor, S. – Bhatia, R. (eds.): Proceedings of the Future Technologies Conference (FTC) 2020. Advances in Intelligent Systems and Computing. Cham: Springer, pp. 27–35. https://doi.org/10.1007/978-3-030-63128-4_3.

  • Gursoy, D. – Chi, O. H. – Lu, L. – Nunkoo, R. (2019): Consumers Acceptance of Artificially Intelligent (AI) Device Use in Service Delivery. International Journal of Information Management 49: 157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008.

  • Hancock, P. A. – Billings, D. R. – Oleson, K. E. – Chen, J. Y. C. – De Visser, E. – Parasuraman, R. (2011): A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. The Journal of the Human Factors and Ergonomics Society 53(5): 517–527. https://doi.org/10.1177/0018720811417254.

  • Hasija, A. – Esper, T. L. (2022): In Artificial Intelligence (AI) We Trust: A Qualitative Investigation of AI Technology Acceptance. Journal of Business Logistics 43: 388–412. https://doi.org/10.1111/jbl.12301.

  • Holloway, I. – Todres, L. (2003): The Status of Method: Flexibility, Consistency and Coherence. Qualitative Research 3: 345–357. https://doi.org/10.1177/1468794103033004.

  • Im, S. – Bhat, S. – Lee, Y. (2015): Consumer Perceptions of Product Creativity, Coolness, Value and Attitude. Journal of Business Research 68(1): 166–172. https://doi.org/10.1016/j.jbusres.2014.03.014.

  • Kalmus, J. – Nikiforova, A. (2024): To Accept or Not to Accept? An IRT-TOE Framework to Understand Educators’ Resistance to Generative AI in Higher Education. arXiv preprint arXiv:2407.20130.

  • Kapitany, R. – Hampejs, T. – Goldstein, T. R. (2022): Pretensive Shared Reality: From Childhood Pretense to Adult Imaginative Play. Frontiers in Psychology 13: 1–9. https://doi.org/10.3389/fpsyg.2022.774085.

  • Kateb, S. – Ruehle, R. C. – Kroon, D. P. – van Burg, E. – Huber, M. (2022): Innovating Under Pressure: Adopting Digital Technologies in Social Care Organizations During the COVID-19 Crisis. Technovation 115: 102536. https://doi.org/10.1016/j.technovation.2022.102536.

  • Kavanagh, M. H. – Ashkanasy, N. M. (2006): The Impact of Leadership and Change Management Strategy on Organizational Culture and Individual Acceptance of Change during a Merger. British Journal of Management 17: 81–103. https://doi.org/10.1111/j.1467-8551.2006.00480.x.

  • Kelly, S. – Kaye, S-A. – Oviedo-Trespalacios, O. (2023): What Factors Contribute to the Acceptance of Artificial Intelligence? A Systematic Review. Telematics and Informatics 77: 101925. https://doi.org/10.1016/j.tele.2022.101925.

  • Kim, H. Y. – McGill, A. L. (2018): Minions for the Rich? Financial Status Changes How Consumers See Products with Anthropomorphic Features. Journal of Consumer Research 45(2): 429–450. https://doi.org/10.1093/jcr/ucy006.

  • Klein, M. (1996): Notes on Some Schizoid Mechanisms. The Journal of Psychotherapy Practice and Research 5(2): 160–179.

  • Klenke, K. (2008): Qualitative Research in the Study of Leadership. Bingley: Emerald Group.

  • Krugman, H. E. (1960): The “Draw a Supermarket” Technique. Public Opinion Quarterly 24(1): 148–149. https://doi.org/10.1086/266939.

  • Kvale, S. (1996): InterViews: An Introduction to Qualitative Research Interviewing. London: SAGE.

  • Lane, D. C. (1999): Social Theory and System Dynamics Practice. European Journal of Operational Research 113(3): 501–527. https://doi.org/10.1016/S0377-2217(98)00192-1.

  • Lazarus, R. S. (1991a): Cognition and Motivation in Emotion. American Psychologist 46(4): 352–367. https://doi.org/10.1037/0003-066X.46.4.352.

  • Lazarus, R. S. (1991b): Progress on a Cognitive-Motivational-Relational Theory of Emotion. American Psychologist 46(8): 819–834. https://doi.org/10.1037/0003-066X.46.8.819.

  • Leaver, T. – Srdarov, S. (2023): ChatGPT Isn't Magic: The Hype and Hypocrisy of Generative Artificial Intelligence (AI) Rhetoric. M/C Journal 26(5). https://doi.org/10.5204/mcj.3004.

  • Lee, A. S. – Baskerville, R. L. (2003): Generalizing Generalizability in Information Systems Research. Information Systems Research 14(3): 221–243. https://doi.org/10.1287/isre.14.3.221.16560.

  • Leon, V. – Etesami, S. R. – Nagi, R. (2022): Diffusion of Innovation under Limited-Trust Equilibrium. In: Proceedings of the 61st IEEE Conference on Decision and Control (CDC), pp. 3145–3150. https://doi.org/10.1109/CDC51059.2022.9992669.

  • Liamputtong, P. (2007): Researching the Vulnerable: A Guide to Sensitive Research Methods. London: SAGE. https://doi.org/10.4135/9781849209861.

  • Lippitt, R. (1943): The Psychodrama in Leadership Training. Sociometry 6(3): 286–292. https://doi.org/10.2307/2785182.

  • Lu, L. – Cai, R. – Gursoy, D. (2019): Developing and Validating a Service Robot Integration Willingness Scale. International Journal of Hospitality Management 80: 36–51. https://doi.org/10.1016/j.ijhm.2019.01.005.

  • Ma, X. – Huo, Y. (2023): Are Users Willing to Embrace ChatGPT? Exploring the Factors on the Acceptance of Chatbots from the Perspective of AIDUA Framework. Technology in Society 75: 102362. https://doi.org/10.1016/j.techsoc.2023.102362.

  • Moreno, J. L. (1934): Who Shall Survive? A New Approach to the Problem of Human Interrelations. Washington: Nervous and Mental Disease Publishing Co.

  • Moreno, J. L. (1949): Psychodrama Vol. 1. New York: Beacon.

  • Moreno, Z. T. – Blomkvist, L. D. – Rützel, T. (2000): Psychodrama, Surplus Reality and the Art of Healing. London: Routledge.

  • Nilsen, E. R. – Dugstad, J. – Eide, H. – Gullslett, M. K. – Eide, T. (2016): Exploring Resistance to Implementation of Welfare Technology in Municipal Healthcare Services – A Longitudinal Case Study. BMC Health Services Research 16(1): 1–14. https://doi.org/10.1186/s12913-016-1913-5.

  • Obrenovic, B. – Gu, X. – Wang, G. – Godinic, D. – Jakhongirov, I. (2024): Generative AI and Human–Robot Interaction: Implications and Future Agenda for Business, Society, and Ethics. AI & Society. https://doi.org/10.1007/s00146-024-01889-0.

  • Ouadahi, J. (2008): A Qualitative Analysis of Factors Associated with User Acceptance and Rejection of a New Workplace Information System in the Public Sector: A Conceptual Model. Canadian Journal of Administrative Sciences 25(3): 201–213. https://doi.org/10.1002/cjas.65.

  • Papert, S. – Harel, I. (1991): Situating Constructionism. Constructionism 36(2): 1–11.

  • Piaget, J. (1955): The Child's Construction of Reality. London: Routledge.

  • Pillai, R. – Ghanghorkar, Y. – Sivathanu, B. – Algharabat, R. – Rana, N. P. (2024): Adoption of Artificial Intelligence (AI) Based Employee Experience (EEX) Chatbots. Information Technology & People 37(1): 449–478. https://doi.org/10.1108/ITP-04-2022-0287.

  • Pinski, M. – Benlian, A. (2024): AI Literacy for Users – A Comprehensive Review and Future Research Directions of Learning Methods, Components, and Effects. Computers in Human Behavior: Artificial Humans 2(1): 100062. https://doi.org/10.1016/j.chbah.2024.100062.

  • Radhakrishnan, J. – Gupta, S. – Prashar, S. (2022): Understanding Organizations’ Artificial Intelligence Journey: A Qualitative Approach. Pacific Asia Journal of the Association for Information Systems 14(6): 43–77. https://doi.org/10.17705/1pais.14602.

  • Renaud, K. – Van Biljon, J. (2008): Predicting Technology Acceptance and Adoption by the Elderly: A Qualitative Study. In: Proceedings of the 2008 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries: Riding the Wave of Technology, pp. 210–219. https://doi.org/10.1145/1456659.145668.

  • Rogers, E. M. (1962): Diffusion of Innovations, 1st ed. New York: Free Press of Glencoe.

  • Rogers, E. M. (1983): Diffusion of Innovations, 3rd ed. New York: Free Press of Glencoe.

  • Sacks, J. M. – Fonseca, J. (2004): Contemporary Psychodrama: New Approaches to Theory and Technique. New York: Routledge.

  • Sætra, H. S. (2021): AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System. Sustainability 13(4): 1738. https://doi.org/10.3390/su13041738.

  • Sanders, T. L. – Oleson, K. E. – Billings, D. R. – Chen, J. Y. C. – Hancock, P. A. (2011): A Model of Human-Robot Trust: Theoretical Model Development. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 55: 1432–1436. https://doi.org/10.1177/1071181311551298.

  • Schaller, R. (2022): Crucial Moments of Change in Monodrama: The Creative Contradiction. Zeitschrift für Psychodrama und Soziometrie 21(1): 7–18. https://doi.org/10.1007/s11620-021-00645-6.

  • Schwabenland, C. – Kofinas, A. (2023): Ducks, Elephants and Sharks: Using LEGO® Serious Play® to Surface the ‘Hidden Curriculum’ of Equality, Diversity and Inclusion. Management Learning 54(3): 318–337. https://doi.org/10.1177/13505076231166850.

  • Simon, O. – Neuhofer, B. – Egger, R. (2020): Human-Robot Interaction: Conceptualising Trust in Frontline Teams Through LEGO® Serious Play®. Tourism Management Perspectives 35: 100692. https://doi.org/10.1016/j.tmp.2020.100692.

  • Smit, D. – Eybers, S. – van der Merwe, A. – Wies, R. – Human, N. – Pielmeier, J. (2024): Integrating the TOE Framework and DOI Theory to Dissect and Understand the Key Elements of AI Adoption in Sociotechnical Systems. South African Computer Journal 36(2): 123–145.

  • Steinman, R. B. (2009): Projective Techniques in Consumer Research. International Bulletin of Business Administration 5: 37–45.

  • Strauss, A. L. – Corbin, J. (1990): Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park: SAGE.

  • Thomas, D. R. (2016): Feedback from Research Participants: Are Member Checks Useful in Qualitative Research? Qualitative Research in Psychology 14(1): 23–41. https://doi.org/10.1080/14780887.2016.1219435.

  • Tornatzky, L. G. – Fleischer, M. – Chakrabarti, A. K. (1990): The Processes of Technological Innovation. Lexington: DC Heath & Company.

  • Venkatesh, V. – Bala, H. (2008): Technology Acceptance Model 3 and a Research Agenda on Interventions. Decision Sciences 39(2): 273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x.

  • Venkatesh, V. – Davis, F. D. (2000): A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science 46(2): 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926.

  • Venkatesh, V. – Morris, M. – Davis, G. – Davis, F. (2003): User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly 27(3): 425–478. https://doi.org/10.2307/30036540.

  • Venkatesh, V. – Thong, J. Y. – Xu, X. (2012): Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Quarterly 36(1): 157–178. https://doi.org/10.2307/41410412.

  • Vorm, E. S. – Combs, D. J. Y. (2022): Integrating Transparency, Trust, and Acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM). International Journal of Human–Computer Interaction 38(18–20): 1828–1845. https://doi.org/10.1080/10447318.2022.2070107.

  • Whitty, M. T. – Carr, A. N. (2006): New Rules in the Workplace: Applying Object-Relations Theory to Explain Problem Internet and Email Behaviour in the Workplace. Computers in Human Behavior 22(2): 235–250. https://doi.org/10.1016/j.chb.2004.06.005.

  • Will, V. – Eadie, D. – MacAskill, S. (1996): Projective and Enabling Techniques Explored. Marketing Intelligence and Planning 14: 38–43. https://doi.org/10.1108/02634509610131144.

  • Yao, Y. – Duan, J. – Xu, K. – Cai, Y. – Sun, Z. – Zhang, Y. (2024): A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly. High-Confidence Computing 4(2): 100211. https://doi.org/10.1016/j.hcc.2024.100211.

