Abstract
The rapid growth of artificial intelligence (AI), algorithmic governance, automation systems, and digital surveillance has significantly reshaped how institutions evaluate, categorize, and regulate individuals. Decisions related to hiring, performance evaluation, financial approval, and risk assessment are increasingly mediated by computational systems that prioritize efficiency, scalability, and predictive accuracy. This shift represents more than technological progress; it transforms the structural conditions under which individuals are recognized and valued. While technology ethics scholarship has focused on fairness, bias, and accountability, it has paid limited attention to the psychological processes through which digital systems may influence moral perception.
Drawing on dehumanization theory—particularly Haslam’s distinction between mechanistic and animalistic dehumanization—this paper proposes the Technology–Psychology–Ethics (TPE) framework. The framework posits that technological systems function as structural antecedents that reduce empathy and perceived autonomy, thereby fostering dehumanization processes that influence ethical outcomes such as diminished human dignity, reduced moral concern, and perceptions of injustice. By integrating psychological theory with algorithmic governance research, the TPE framework provides an interdisciplinary explanation of how technological exposure may reshape moral recognition and offers guidance for developing more human-centered digital systems.
Introduction
Modern technological systems such as artificial intelligence, automation, predictive analytics, and digital surveillance increasingly influence how individuals are evaluated and governed in areas like hiring, policing, healthcare, education, and finance. While these technologies are often considered objective and efficient, they may also reshape how people are perceived and treated within institutions. Instead of being viewed as unique individuals, people may increasingly be assessed through data, performance metrics, and algorithmic classifications, which can reduce complex human identities to simplified digital profiles.
Traditionally, dehumanization was associated with extreme situations like war and intergroup conflict. However, recent research shows that it can also occur subtly in bureaucratic and technological environments. In such contexts, individuals may feel treated as numbers, data points, or replaceable resources rather than as full human beings. Continuous monitoring, automated decision-making, and algorithmic categorization can reduce empathy, limit personal autonomy, and weaken opportunities for contextual understanding.
Existing research on this issue is divided across disciplines. Psychological studies explain how dehumanization affects perception and empathy but rarely examine technological systems. In contrast, technology ethics research focuses on fairness, bias, and transparency in algorithms but often overlooks psychological mechanisms. This separation creates a gap in understanding how technology shapes moral perception and human dignity.
To address this gap, the paper proposes the Technology–Psychology–Ethics (TPE) framework, an interdisciplinary model linking technological systems, psychological processes, and ethical outcomes. The framework suggests that exposure to technologies such as AI, automation, algorithmic classification, and surveillance can reduce empathy, autonomy, and interpersonal warmth. These psychological changes may lead to mechanistic dehumanization (treating people like objects or machines) or animalistic dehumanization (denying qualities like civility and rationality). Ultimately, these processes may result in ethical consequences such as reduced recognition of human dignity, perceived unfairness, and weakened moral responsibility.
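To make the hypothesized causal chain concrete, the illustrative Python sketch below encodes the three TPE layers as a simple path model. Every variable name, weight, and scale in it is a hypothetical placeholder chosen for exposition; the framework specifies only the direction of the proposed relationships, not their functional form or magnitude.

```python
from dataclasses import dataclass

@dataclass
class TechnologyExposure:
    """Hypothetical 0-1 intensity scores for one institutional context."""
    surveillance: float      # continuous digital monitoring
    automation: float        # share of decisions made without human review
    classification: float    # reliance on algorithmic scoring and ranking

def psychological_layer(t: TechnologyExposure) -> dict:
    """Technology -> psychology: exposure is hypothesized to depress
    empathy and perceived autonomy (1.0 = fully intact)."""
    exposure = (t.surveillance + t.automation + t.classification) / 3
    return {
        "empathy": max(0.0, 1.0 - 0.6 * exposure),       # placeholder slope
        "autonomy": max(0.0, 1.0 - 0.8 * t.automation),  # placeholder slope
    }

def ethical_layer(m: dict) -> dict:
    """Psychology -> ethics: weakened mediators raise mechanistic
    (object-like) and animalistic (denied rationality) dehumanization risk."""
    return {
        "mechanistic_risk": 0.7 * (1 - m["autonomy"]) + 0.3 * (1 - m["empathy"]),
        "animalistic_risk": 0.6 * (1 - m["empathy"]),
    }

# Example: a heavily monitored, largely automated workplace.
workplace = TechnologyExposure(surveillance=0.9, automation=0.8, classification=0.7)
mediators = psychological_layer(workplace)
print(mediators)              # -> {'empathy': 0.52, 'autonomy': ~0.36}
print(ethical_layer(mediators))
```

The point of the sketch is structural rather than numerical: each layer consumes only the previous layer’s outputs, mirroring the framework’s claim that ethical risk is mediated by psychological change rather than produced directly by the technology itself.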
Overall, the study argues that technological systems are not purely technical tools but social structures that influence how humanness is perceived and valued, highlighting the need for more human-centered governance and ethical technology design.
Conclusion
Technological systems are no longer simply tools that support institutional processes; they have become central forces shaping contemporary social life. Artificial intelligence, algorithmic governance, automation, and digital surveillance now influence how individuals are evaluated, categorized, monitored, and granted access to opportunities. Across workplaces, educational institutions, financial systems, and public governance, these digital infrastructures shape not only decisions but also the deeper conditions under which people are recognized and valued. As such systems become normalized, they increasingly influence how humanness itself is interpreted within institutional contexts.
This paper has argued that ethical challenges in digital societies cannot be fully understood through technical analysis alone. While fairness, transparency, and accountability are critical, they address only one dimension of the issue. The Technology–Psychology–Ethics (TPE) framework highlights how technological exposure can affect psychological processes such as empathy, perceived autonomy, and interpersonal warmth. When these processes are weakened, mechanistic and animalistic forms of dehumanization may emerge. These shifts are often gradual and structural rather than intentional, but over time they can reshape perceptions of dignity, fairness, and moral responsibility.
By integrating dehumanization theory with research on algorithmic governance, the TPE framework provides a structured explanation of how technological systems influence moral perception. Ethical risks do not arise solely from biased outputs or flawed datasets; they may also result from subtle changes in how individuals are recognized. When people are increasingly viewed through data profiles, productivity metrics, or risk classifications, relational and contextual dimensions of personhood may be overshadowed by computational evaluation.
Importantly, the framework does not assume that technology is inherently dehumanizing. Ethical consequences depend on design choices and governance priorities. Systems that include human oversight, transparency, contextual explanation, and empathy-aware design can help preserve dignity and relational recognition. In contrast, systems that prioritize efficiency without considering psychological effects may unintentionally normalize instrumental treatment and moral distance.
Ultimately, preserving human dignity in digital societies requires integrating psychological insight into technological governance. Digital infrastructures shape not only behavior but also perception and moral evaluation. As automation and artificial intelligence continue to expand, safeguarding humanness becomes both a psychological and ethical responsibility. The TPE framework offers a foundation for future empirical research and governance strategies aimed at ensuring that technological progress remains aligned with empathy, autonomy, and moral accountability.
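Because the TPE framework is stated as a mediation claim (technology exposure → psychological mediators → ethical outcomes), it is empirically testable with standard path-analytic tools (cf. Kline [17]). The sketch below illustrates that logic on simulated data; the variables, path coefficients, and sample size are invented for illustration, and a real study would substitute validated instruments such as the Experience of Dehumanization Measure [11].

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 500  # hypothetical sample size

# Simulated stand-ins for field measurements (all values invented):
# T = technological exposure, M = empathy/perceived-autonomy mediator,
# Y = ethical outcome (e.g., perceived dignity or moral concern).
T = rng.normal(size=n)
M = -0.5 * T + rng.normal(scale=0.8, size=n)            # assumed a-path
Y = -0.3 * T + 0.6 * M + rng.normal(scale=0.8, size=n)  # assumed c'- and b-paths

def ols(y, *xs):
    """Least-squares slopes for y ~ intercept + xs (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(M, T)[0]           # technology -> mediator (a-path)
b, c_prime = ols(Y, M, T)  # mediator -> outcome (b); direct effect (c')
indirect = a * b           # the mediated effect the TPE framework predicts

print(f"a = {a:.2f}, b = {b:.2f}, c' = {c_prime:.2f}, indirect = {indirect:.2f}")
```

In a full study these regressions would be replaced by a structural equation model with latent constructs and bootstrapped confidence intervals for the indirect effect, but the estimand is the same: support for the framework consists in a reliably nonzero a × b path, not merely a total effect of technology on the outcome.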
References
[1] Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review, 3(3), 193–209. https://doi.org/10.1207/s15327957pspr0303_3
[2] Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732.
[3] Bastian, B., & Haslam, N. (2011). Experiencing dehumanization: Cognitive and emotional effects of everyday dehumanization. Journal of Personality and Social Psychology, 101(2), 295–310. https://doi.org/10.1037/a0023658
[4] Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In Proceedings of the 2018 Conference on Fairness, Accountability and Transparency (pp. 149–159).
[5] Blauner, R. (1964). Alienation and freedom: The factory worker and his industry. University of Chicago Press.
[6] Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1–33.
[7] Coeckelbergh, M. (2015). Artificial agents, good care, and modernity. Theoretical Medicine and Bioethics, 36(4), 265–277.
[8] Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.
[9] Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology, 44(1), 113–126.
[10] Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
[11] Golossenko, A., Palumbo, H., Mathai, M., & Tran, H.-A. (2023). Am I being dehumanized? Development and validation of the Experience of Dehumanization Measure (EDHM). British Journal of Social Psychology. https://doi.org/10.1111/bjso.12633
[12] Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23(2), 101–124.
[13] Harris, L. T., & Fiske, S. T. (2014). Dehumanized perception: A psychological means to facilitate atrocities, torture, and genocide? Zeitschrift für Psychologie, 222(4), 175–181. https://doi.org/10.1027/2151-2604/a000065
[14] Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology Review, 10(3), 252–264. https://doi.org/10.1207/s15327957pspr1003_4
[15] Haslam, N., & Loughnan, S. (2014). Dehumanization and infrahumanization. Annual Review of Psychology, 65, 399–423.
[16] Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29.
[17] Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). Guilford Press.
[18] Markowitz, D. M., & Slovic, P. (2020). Social, psychological, and demographic characteristics of dehumanization toward immigrants. Proceedings of the National Academy of Sciences, 117(17), 9260–9269. https://doi.org/10.1073/pnas.1921790117
[19] Maynard, J. L., & Luft, A. (2023). Humanizing dehumanization research. Current Research in Ecological and Social Psychology, 5, 100102. https://doi.org/10.1016/j.cresp.2023.100102
[20] Moore, P., & Robinson, A. (2016). The quantified self: What counts in the neoliberal workplace. New Media & Society, 18(11), 2774–2792.
[21] O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
[22] Pasquale, F. (2015). The black box society. Harvard University Press.
[23] Rosenblat, A., & Stark, L. (2016). Algorithmic labor and information asymmetries. International Journal of Communication, 10, 3758–3784.
[24] Rubbab, U. E., Khattak, S. A., Shahab, H., & Akhter, N. (2022). Impact of organizational dehumanization on employee knowledge hiding. Frontiers in Psychology, 13, 803905. https://doi.org/10.3389/fpsyg.2022.803905
[25] Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs.