With the increasing use of personal AI assistants in everyday life, understanding the psychological factors that drive their sustained use, especially among the highly tech-savvy Generation Z, is necessary. The present research investigates the roles of perceived AI intelligence, anthropomorphism, and trust in AI in shaping the sustained adoption of personal AI assistants by Gen Z users. Survey data were collected from 170 Indian Gen Z respondents, of which 131 usable responses were retained for analysis. The research is grounded in the Stimulus–Organism–Response (S-O-R) model, and partial least squares structural equation modeling (PLS-SEM) was used to test the proposed relationships. The findings indicate that anthropomorphism and perceived intelligence significantly enhance trust in AI, which in turn significantly predicts the intention to continue using it. While perceived intelligence has both direct and indirect effects on usage, trust fully mediates the effect of anthropomorphism. This work adds to the AI adoption literature by highlighting trust as a key psychological process linking user perceptions to long-term behavior. It also offers developers practical guidance on designing AI systems that are emotionally and cognitively engaging, particularly in rapidly growing digital economies.
Introduction
1. Background and Context
Artificial Intelligence (AI), especially through Personal AI Assistants (PAIAs) like Siri, Alexa, Google Assistant, and ChatGPT, has become central to everyday life. These systems have evolved from simple voice tools to intelligent, adaptive agents, largely due to advances in machine learning and natural language processing.
Generation Z (born 1997–2012), having grown up with smart technology, perceives AI as a natural part of life. Over 90% of Gen Z interact with AI tools weekly, indicating a normalized reliance on digital assistants.
2. Research Gap and Aim
Traditional models such as the Technology Acceptance Model (TAM) do not fully explain Gen Z's engagement with AI. Instead, deeper psychological factors, such as perceived intelligence (PI) and perceived anthropomorphism (PA), influence trust, which is a key driver of continued AI use.
The study addresses two questions:
How do perceived intelligence and anthropomorphism influence trust in AI?
How does trust mediate the relationship between these perceptions and continued usage intention (CUAI)?
The research applies the Stimulus–Organism–Response (S-O-R) framework:
Stimuli: Perceptions of AI (PI and PA)
Organism: Trust in AI (TAI)
Response: Continued usage intention
3. Key Hypotheses
H1: PI → Trust in AI
H2: PA → Trust in AI
H3: PI → Continued Usage
H4: PA → Continued Usage
H5: Trust → Continued Usage
H6: Trust mediates PI → Continued Usage
H7: Trust mediates PA → Continued Usage
4. Methodology
Design: Cross-sectional, quantitative study using PLS-SEM (SmartPLS v4.1.1.2)
Participants: 131 Gen Z users of AI assistants (e.g., ChatGPT, Siri)
Sampling: Purposive, based on recent use and age group
Measures: 5-point Likert scales adapted from validated sources
Analysis: Structural model and mediation tested via bootstrapping
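Since the mediation hypotheses are tested via bootstrapping, a minimal sketch of the procedure may help. The code below runs a percentile-bootstrap test of an indirect effect using plain OLS regressions; this is a simplified stand-in for the latent-variable bootstrapping that SmartPLS performs, and the variable names (pi, tai, cuai, echoing the paper's constructs) and simulated data are illustrative only, not the study's data.

```python
import random

random.seed(42)

# Simulated data standing in for the study's constructs; the true path
# coefficients (0.6, 0.5, 0.2) are chosen arbitrarily for illustration.
n = 131
pi = [random.gauss(0, 1) for _ in range(n)]
tai = [0.6 * x + random.gauss(0, 1) for x in pi]            # a-path
cuai = [0.5 * t + 0.2 * x + random.gauss(0, 1)
        for t, x in zip(tai, pi)]                           # b- and c'-paths

def slope(x, y):
    """Simple OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def partial_slope(y, x1, x2):
    """Slope of y on x1 controlling for x2 (two-predictor OLS)."""
    n = len(y)
    center = lambda v: [u - sum(v) / n for u in v]
    yc, c1, c2 = center(y), center(x1), center(x2)
    s11 = sum(v * v for v in c1)
    s22 = sum(v * v for v in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    s1y = sum(a * b for a, b in zip(c1, yc))
    s2y = sum(a * b for a, b in zip(c2, yc))
    return (s22 * s1y - s12 * s2y) / (s11 * s22 - s12 * s12)

def indirect_effect(x, m, y):
    """a * b: effect of x on m, times effect of m on y controlling for x."""
    return slope(x, m) * partial_slope(y, m, x)

def bootstrap_ci(x, m, y, reps=1000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the indirect effect."""
    n, est = len(x), []
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]
        est.append(indirect_effect([x[i] for i in idx],
                                   [m[i] for i in idx],
                                   [y[i] for i in idx]))
    est.sort()
    return est[int(reps * alpha / 2)], est[int(reps * (1 - alpha / 2)) - 1]

lo, hi = bootstrap_ci(pi, tai, cuai)
print(f"indirect effect: {indirect_effect(pi, tai, cuai):.3f}, "
      f"95% CI: [{lo:.3f}, {hi:.3f}]")
```

Mediation is supported when the bootstrap confidence interval for the indirect effect excludes zero; full versus partial mediation is then judged by whether the direct path remains significant.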
5. Results
Measurement Model:
Most constructs met reliability and validity thresholds.
Perceived Intelligence had some weak items (e.g., PI2 loading = 0.484), which lowered convergent validity (AVE = 0.453, below the conventional 0.50 threshold), suggesting the scale needs refinement.
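To make the convergent-validity check concrete, the sketch below computes AVE and composite reliability from standardized outer loadings, the standard PLS-SEM formulas. The loading values are hypothetical except PI2 = 0.484, which the study reports; they merely illustrate how one weak item can pull AVE under 0.50.

```python
def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability (rho_c), assuming standardized indicators
    with error variances 1 - l^2."""
    s = sum(loadings)
    e = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + e)

# Hypothetical loadings for the PI construct; only PI2 = 0.484 is reported.
pi_loadings = [0.75, 0.484, 0.70, 0.72]
print(f"AVE = {ave(pi_loadings):.3f}")                      # below 0.50
print(f"rho_c = {composite_reliability(pi_loadings):.3f}")
```

With these illustrative values the AVE falls just under the 0.50 cutoff even though composite reliability clears 0.70, which is the pattern the study reports for Perceived Intelligence.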
Structural Model:
Trust partially mediated the effect of PI on continued usage intention (PI also retained a significant direct effect) and fully mediated the effect of PA.
These results support trust as the psychological link between how intelligent or human-like an AI is perceived to be and Gen Z's intention to keep using it.
6. Contributions
Theoretical: Extends S-O-R model to AI adoption among digital natives.
Practical: Offers insights for designing AI that builds trust via intelligence and human-like cues.
Geographic Relevance: Addresses a research gap by focusing on digitally emerging economies like India.
Conclusion
The study confirms that trust is not static but a dynamic psychological process shaped by perceptions of intelligence and anthropomorphism. Future work should:
Improve measurement scales for perceived intelligence
Expand sample size across multiple countries
Examine long-term behavioral changes