The Non-Human Identity Governance Crisis in Cloud Environments: A Systematic Review, Threat Taxonomy, and Governance Framework for Agentic AI Workloads
Modern cloud infrastructure has experienced a profound shift in identity composition — one that existing security governance models were not architected to handle. As of the first half of 2025, non-human identities (NHIs) — including service accounts, API keys, OAuth tokens, X.509 certificates, CI/CD pipeline credentials, and autonomous AI agents — outnumber their human counterparts by a factor of 144 to 1 across enterprise cloud environments, reflecting an annual growth rate of 56%. Yet the governance apparatus has not kept pace: nearly all NHIs (97%) operate with excess permissions, and formal decommissioning procedures for machine credentials are absent in more than four out of five organizations. This paper advances the field through three contributions. First, a systematic review of 68 peer-reviewed and industry sources maps the current state of NHI knowledge and exposes critical governance deficiencies across identity management, cloud security, and agentic AI domains. Second, a formal threat taxonomy — NHI-TT v1.0 — organizes machine identity attack vectors into five analytically distinct dimensions: lifecycle exploitation, privilege escalation, delegation chain abuse, supply chain compromise, and behavioral evasion. Third, a five-pillar NHI Governance Framework (NHI-GF) is proposed and evaluated; its pillars — Universal Discovery, Lifecycle Governance, Dynamic Least-Privilege Enforcement, Behavioral Monitoring, and Supply Chain Trust Verification — are designed specifically for the ephemeral, non-deterministic credential requirements introduced by agentic AI workloads. NHI-GF is validated against three documented breach incidents (tj-actions, March 2025; Salesloft-Drift, 2025; CircleCI, January 2023) and benchmarked against four prevailing governance approaches.
The evaluation confirms that no existing framework achieves complete coverage of the NHI threat surface, and that agentic AI introduces identity governance requirements that are structurally distinct from those addressable through conventional, human-centric IAM processes.
Introduction
Traditional identity security systems were designed assuming that users are human. However, modern cloud environments are dominated by non-human identities (NHIs) such as service accounts, API tokens, and AI agents, which now vastly outnumber human users (up to 144:1). Despite their prevalence, these identities are poorly governed, often over-privileged, and rarely monitored, making them a major security risk.
The rise of cloud automation and agentic AI systems has intensified the problem. Machine identities are created dynamically, operate at machine speed and scale, and carry permissions that change over time, none of which traditional Identity and Access Management (IAM) frameworks were designed to handle.
Key Problems with NHIs
Five structural issues make NHIs difficult to manage:
Lifecycle Opacity – Identities are created dynamically and persist without proper tracking, ownership, or deactivation.
Pervasive Over-Privilege – Roughly 97% of NHIs carry permissions well beyond what their workloads require.
Delegation Chain Abuse – Machine-to-machine delegation produces credential chains whose effective privileges are difficult to audit.
Supply Chain Exposure – Third-party integrations and dependencies embed credentials that inherit organizational trust.
Behavioral Non-Determinism – Agentic AI workloads act autonomously, so their actions cannot be validated against a fixed, predictable access pattern.
For AI agents, the framework introduces task-scoped temporary tokens and intent-action consistency monitoring, ensuring permissions are granted only for the duration of a declared task and that observed actions match that task.
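The mechanism can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual algorithm: a token is minted for a single declared task, carries only that task's permission allow-list, and expires shortly afterward, while an intent-action check denies any action outside the declared scope. All names (TaskScopedToken, issue_token, check_action, the example task and permission strings) are hypothetical.

```python
import secrets
import time
from dataclasses import dataclass

# Hypothetical per-task allow-lists an issuer might maintain.
TASK_PERMISSIONS = {
    "sync-crm-contacts": {"crm:read", "directory:write"},
    "rotate-tls-certs": {"pki:issue", "lb:update"},
}

@dataclass
class TaskScopedToken:
    token_id: str
    task: str
    permissions: frozenset
    expires_at: float

    def is_valid(self) -> bool:
        # Short-lived by construction: validity ends with the task window.
        return time.time() < self.expires_at

def issue_token(task: str, ttl_seconds: int = 300) -> TaskScopedToken:
    """Mint a short-lived token scoped to one declared task."""
    if task not in TASK_PERMISSIONS:
        raise PermissionError(f"unknown task: {task}")
    return TaskScopedToken(
        token_id=secrets.token_urlsafe(16),
        task=task,
        permissions=frozenset(TASK_PERMISSIONS[task]),
        expires_at=time.time() + ttl_seconds,
    )

def check_action(token: TaskScopedToken, action: str) -> bool:
    """Intent-action consistency: allow only actions inside the token's task scope."""
    return token.is_valid() and action in token.permissions

agent_token = issue_token("sync-crm-contacts")
assert check_action(agent_token, "crm:read")       # matches declared intent
assert not check_action(agent_token, "pki:issue")  # outside task scope: denied
```

The key design point is that the permission set is bound to the task at issuance time rather than to the agent's standing identity, so a compromised or misbehaving agent cannot exercise privileges beyond the task it declared.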
Conclusion
This paper has approached the non-human identity governance problem in cloud environments from three complementary directions: first, by identifying the structural properties that make NHIs categorically different from human identities and that cause conventional IAM frameworks to fall short; second, by constructing a formal threat taxonomy that maps the complete attack surface arising from NHI proliferation; and third, by proposing and evaluating a governance framework capable of addressing that taxonomy across both static machine credentials and the emerging class of agentic AI identities.
The evidence presented in this paper points to an urgent conclusion. With machine identities outnumbering human ones at a ratio of 144:1 and an over-privilege rate of 97%, the volume and misconfiguration of NHIs in modern cloud deployments have long surpassed what manual governance can realistically address. The arrival of agentic AI compounds this challenge by introducing credential requirements — dynamic, short-lived, and delegation-capable — that differ fundamentally from those of traditional service accounts. The comparative evaluation confirms that none of the leading existing frameworks, including NIST SP 800-207 and the CISA Zero Trust Maturity Model, provides adequate coverage of the full NHI threat surface.
NHI-GF offers a systematic response across five pillars — universal discovery, lifecycle governance, dynamic least privilege, behavioral monitoring, and supply chain trust — that together achieve full NHI-TT v1.0 coverage. Its novel contributions relative to existing frameworks are the formal definition of agentic NHI properties, the task-scoped token issuance algorithm for autonomous workloads, and the intent-action consistency monitoring approach that addresses the behavioral non-determinism of agentic AI.
The immediate implication for cloud security practitioners is clear: NHI governance must be treated as a primary security investment, not a secondary hygiene concern subordinated to perimeter and endpoint controls. The enterprises most vulnerable to machine identity breaches are not necessarily those with the weakest outer defenses — they are those carrying the largest, least-audited populations of machine credentials operating beyond any governance boundary. As autonomous AI agents become a standard feature of cloud deployments, that characterization will apply to an expanding share of organizations globally. The taxonomy, framework, and algorithms presented in this paper offer a principled and actionable foundation for closing that exposure before it becomes a systemic liability.
References
[1] Entro Security Labs, "NHI & Secrets Risk Report H1 2025: Analysis of 27M+ Non-Human Identities," Entro Security, July 2025. [Online]. Available: https://entro.security/nhi-report-2025
[2] Cloud Security Alliance & Astrix Security, "The State of Non-Human Identity Security," CSA Research Report, June 2024. [Online]. Available: https://cloudsecurityalliance.org/research
[3] IBM Security, "IBM X-Force Threat Intelligence Index 2025," IBM Corporation, 2025. [Online]. Available: https://www.ibm.com/reports/threat-intelligence
[4] M. J. Page et al., "The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews," BMJ, vol. 372, p. n71, 2021.
[5] S. Rose, O. Borchert, S. Mitchell, and S. Connelly, "Zero Trust Architecture," NIST Special Publication 800-207, Nat. Inst. Stand. Technol., Gaithersburg, MD, Aug. 2020.
[6] Cybersecurity and Infrastructure Security Agency, "Zero Trust Maturity Model, Version 2.0," U.S. CISA, Apr. 2023. [Online]. Available: https://www.cisa.gov/zero-trust-maturity-model
[7] Cloud Security Alliance, "Cloud Controls Matrix v4.0," CSA, 2021. [Online]. Available: https://cloudsecurityalliance.org/research/cloud-controls-matrix
[8] OWASP Foundation, "OWASP Top 10 Non-Human Identity Risks," OWASP, 2025. [Online]. Available: https://owasp.org/www-project-top-10-non-human-identities/
[9] D. Hardt, Ed., "The OAuth 2.0 Authorization Framework," IETF RFC 6749, Oct. 2012.
[10] N. Sakimura, J. Bradley, M. Jones, B. de Medeiros, and C. Mortimore, "OpenID Connect Core 1.0," OpenID Foundation, Nov. 2014.
[11] S. Cantor et al., "Assertions and Protocols for the OASIS SAML V2.0," OASIS Standard, Mar. 2005.
[12] M. Jones, B. Campbell, and C. Mortimore, "OAuth 2.0 Token Exchange," IETF RFC 8693, Jan. 2020.
[13] Cloud Native Computing Foundation, "SPIFFE and SPIRE: Universal Identity Control Plane for Distributed Systems," CNCF Project Specification, 2022.
[14] E. Bauer and R. Adams, "Reliability and Availability of Cloud Computing," Wiley-IEEE Press, 2012.
[15] K. Hashizume, D. G. Rosado, E. Fernandez-Medina, and E. B. Fernandez, "An analysis of security issues for cloud computing," J. Internet Serv. Appl., vol. 4, no. 1, pp. 1–13, 2013.
[16] A. Singh and K. Chatterjee, "Cloud security issues and challenges: A survey," J. Netw. Comput. Appl., vol. 79, pp. 88–115, Feb. 2017.
[17] M. Almorsy, J. Grundy, and I. Müller, "An analysis of the cloud computing security problem," in Proc. 2010 APSEC Cloud Workshop, Sydney, Australia, 2010, pp. 1–6.
[18] G. Raj, A. Arora, and A. K. Trivedi, "A survey of cloud-native security practices," Int. J. Cloud Comput., vol. 10, no. 3, pp. 201–228, 2021.
[19] R. Cole, S. Ring, and J. Fossen, "Certificate Lifecycle Management in Enterprise Environments: Patterns and Failures," IEEE Security Privacy, vol. 19, no. 4, pp. 34–42, Jul.–Aug. 2021.
[20] Nat. Inst. Stand. Technol., "The NIST Cybersecurity Framework 2.0," NIST, Gaithersburg, MD, Feb. 2024.
[21] R. Chadha, T. Bowen, C. Chiang, J. Salter, and P. Zeitz, "A Cyber Battle Management System for Conducting Cyber Warfare," in Proc. 2014 Int. Conf. Cyber Conflict, Tallinn, Estonia, 2014.
[22] CyberArk, "2025 Identity Security Threat Landscape Report," CyberArk Software Ltd., 2025.
[23] Z. He et al., "Emerged Security and Privacy of LLM Agent: A Survey with Case Studies," arXiv:2501.03462, Jan. 2025.
[24] Y. Mirsky et al., "The Threat of Offensive AI to Organizations," Comput. Secur., vol. 124, p. 103006, Jan. 2023.
[25] K. Huang, S. A. Vineeth et al., "A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control," arXiv preprint, Mar. 2025.
[26] Y. Mirsky, A. Demontis, J. Klaas et al., "A Survey of Agentic AI and Cybersecurity," arXiv:2601.05293, Jan. 2026.
[27] A. Greenberg, "The Untold Story of SolarWinds, the Boldest Supply-Chain Hack Ever," Wired, May 2021.
[28] Nat. Inst. Stand. Technol., "Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations," NIST SP 800-161r1, May 2022.
[29] G. Ladisa, H. Plate, M. Martinez, and O. Barais, "A Taxonomy of Attacks on Open-Source Supply Chains," in Proc. IEEE Symp. Security Privacy, San Francisco, CA, 2023, pp. 1509–1526.
[30] CircleCI, "CircleCI Security Alert: Rotate Any Secrets Stored in CircleCI," CircleCI Security Advisory, Jan. 2023.
[31] Obsidian Security, "Security for AI Agents: Protecting Intelligent Systems in 2025," Obsidian Security Research, Nov. 2025.
[32] Cloud Security Alliance & Strata Identity, "Securing Autonomous AI Agents: Survey Report," CSA, Feb. 2026.
[33] Nat. Inst. Stand. Technol., "Module-Lattice-Based Key-Encapsulation Mechanism Standard," Federal Information Processing Standard 203, Aug. 2024.