The contemporary international security environment is increasingly shaped by competition that unfolds below the threshold of open armed conflict. Commonly described as grey-zone warfare, this mode of contestation relies on ambiguity, deniability, and gradual escalation to achieve strategic objectives while avoiding direct military confrontation. At the same time, artificial intelligence has emerged as a transformative technology that reshapes how power is exercised across political, informational, cyber, and physical domains. This paper argues that artificial intelligence functions as a structural enabler of grey-zone warfare by lowering operational costs, enhancing precision and persistence, and obscuring attribution and intent. Rather than merely augmenting existing tactics, AI fundamentally alters the logic of sub-threshold conflict, making continuous competition both feasible and strategically attractive. Through conceptual analysis and an examination of real-world incidents, particularly AI-enabled information and cyber operations in the Russia–Ukraine context prior to 2022, this study demonstrates how AI intensifies grey-zone dynamics while undermining traditional deterrence, legal accountability, and strategic stability. The paper concludes by assessing the broader implications of AI-driven grey-zone warfare for global security governance and the future of conflict.
Introduction
This paper examines the rise of grey-zone warfare as the dominant form of strategic competition in the twenty-first century, occurring in the space between peace and open armed conflict. Unlike twentieth-century wars characterized by decisive battles and territorial conquest, modern rivalry is shaped by nuclear deterrence, economic interdependence, globalization, and international law, all of which constrain large-scale interstate war. As a result, states and non-state actors increasingly pursue ambiguous, deniable, and incremental actions that apply political, economic, informational, and security pressure without triggering conventional military responses.
Grey-zone warfare exploits legal and normative grey areas, blends civilian and military tools, and relies on persistence and psychological pressure rather than direct force. Its objective is not rapid victory but the gradual erosion of an adversary’s political cohesion, institutional legitimacy, economic resilience, and public trust. Common instruments include cyber operations, disinformation campaigns, economic coercion, proxy forces, lawfare, and calibrated military signaling. Real-world examples include Russian operations in Ukraine prior to 2022, China’s actions in the South China Sea, and election interference campaigns in Western democracies.
The paper highlights artificial intelligence (AI) as a key structural enabler of grey-zone warfare. AI does not merely enhance existing tactics but fundamentally transforms sub-threshold conflict by making operations more continuous, scalable, automated, persistent, and difficult to attribute. AI enables mass and personalized propaganda, automated social engineering, cyber vulnerability discovery, predictive targeting, and real-time narrative generation through machine learning and large language models. These capabilities lower costs, reduce human risk, saturate defensive systems, and weaken traditional deterrence and accountability mechanisms.
The study frames AI-enabled grey-zone conflict as a shift from episodic crises to long-term, low-visibility strategic pressure, challenging existing models of deterrence, international law, and crisis management. It also raises significant implications for attribution, legal responsibility, strategic stability, and global governance, noting that current institutional and normative frameworks are poorly equipped to manage AI-mediated competition. Overall, the paper argues that grey-zone warfare, increasingly powered by AI, has become a structural and enduring feature of contemporary international security.
Conclusion
AI-enabled grey-zone conflict undermines the credibility of deterrence, complicates attribution, and erodes long-term strategic stability, and in doing so unsettles the theoretical foundations of International Relations (IR) and of security practice. Traditional deterrence theory, rooted in rational-actor assumptions, clear attribution, and credible retaliation, proves ineffective in a world where aggressive behavior is deniable, automated, decentralized, and legally grey. AI deepens the attribution problem, a central concern of IR, by concealing intent, accelerating the tempo of operations, and penetrating the civilian digital environment, thereby weakening states' ability to assign responsibility or respond proportionately. From a realist perspective, this distorts balance-of-power signaling, since it permits sustained coercion without classical escalation. Liberal institutionalist frameworks struggle to establish accountability owing to fragmented jurisdiction and the slow pace of norm-building and enforcement in cyberspace and the information domain. Stability becomes still more elusive in constructivist terms, since AI-enabled narrative manipulation can redefine threat perceptions, legitimacy claims, and identity-based framings of conflict. Taken together, these effects increase the likelihood of miscalculation, escalation ambiguity, deterrence erosion, and strategic instability, entrenching a security environment defined by continuous low-intensity competition rather than discrete high-intensity conflict.