This paper examines the use of NeuroEvolution of Augmenting Topologies (NEAT) to design an AI-driven dynamic game environment inspired by Flappy Bird mechanics, themed around the Naruto anime. It proposes a Player vs. AI competitive framework in which AI agents represented as Naruto clones evolve across successive generations to face increasingly difficult gameplay challenges. Using NEAT, each clone starts with a unique neural network that is improved against fitness criteria such as obstacle navigation and survival time.
The game combines dynamic difficulty scaling, requiring the player to outlast evolving AI clones, with visually engaging mechanics such as changing backgrounds and responsive gameplay. The NEAT configuration used tanh activation functions, controlled mutation rates, and optimizations to ensure efficient adaptation and robust AI performance. Within 10 generations, AI agents showed a 150% improvement in survival metrics, clearly demonstrating the effectiveness of NEAT in evolving neural networks for real-time applications.
This study also identifies limitations, such as NEAT's computational overhead and reliance on simplistic inputs, and proposes future directions to enhance AI adaptability and scalability. The findings emphasize NEAT's potential for dynamic gaming, robotics, and other domains requiring real-time AI evolution. This research not only advances the application of NEAT in interactive systems but also contributes to the broader discourse on adaptive AI in competitive environments.
Introduction
Summary:
This study explores the use of the NEAT (NeuroEvolution of Augmenting Topologies) algorithm to develop adaptive AI agents for a Naruto-themed game inspired by Flappy Bird. The game, built using Python and PyGame, features AI-controlled Naruto clones that evolve over time to become more competitive against a human-controlled player character. Using NEAT, these clones improve each generation through evolutionary processes based on fitness criteria such as survival time and obstacle navigation.
Key objectives of the research include evaluating NEAT's real-time adaptability, developing an engaging Player vs. AI experience, analyzing how different NEAT parameters (e.g., mutation rates, activation functions) affect AI performance, and benchmarking AI performance across generations. The game uses a simple control mechanism (spacebar to jump) and a dynamic environment with evolving obstacles to continuously challenge both the AI and the player.
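To make these parameters concrete, the excerpt below shows what a neat-python configuration along these lines might look like. It is a minimal sketch, not the study's actual configuration file: only the tanh activation and the three-input/one-output topology come from the text, while the population size, mutation probabilities, and compatibility threshold are illustrative assumptions, and several keys a complete neat-python config requires are omitted for brevity.

```ini
# Illustrative neat-python configuration excerpt (values are assumptions,
# not the study's exact settings; a real config needs additional keys).

[NEAT]
fitness_criterion     = max
fitness_threshold     = 1000
pop_size              = 50
reset_on_extinction   = False

[DefaultGenome]
# tanh activation, as reported in the study
activation_default      = tanh
activation_mutate_rate  = 0.0
activation_options      = tanh

# controlled structural and weight mutation rates (illustrative values)
conn_add_prob           = 0.5
conn_delete_prob        = 0.5
node_add_prob           = 0.2
node_delete_prob        = 0.2
weight_mutate_rate      = 0.8
weight_mutate_power     = 0.5

# inputs: clone height, obstacle distance, gap position; output: jump/no jump
num_inputs              = 3
num_hidden              = 0
num_outputs             = 1

[DefaultSpeciesSet]
compatibility_threshold = 3.0

[DefaultStagnation]
species_fitness_func = max
max_stagnation       = 20

[DefaultReproduction]
elitism              = 2
survival_threshold   = 0.2
```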
The methodology involves:
Designing a game environment where AI agents and the player compete.
Configuring NEAT with custom inputs (height, obstacle distance) and outputs (jump/no jump), and fine-tuning parameters such as mutation rates and species compatibility.
Running a generational evolutionary process where successful AI agents reproduce and evolve (a minimal sketch of this loop appears after this list).
Integrating player interaction to create a dynamic and competitive gameplay loop.
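The sketch below illustrates the generational loop described above using the neat-python library [14]. The NEAT calls (neat.Config, neat.Population, FeedForwardNetwork.create, net.activate) follow that library's standard API, whereas the CloneGame class, its methods, the 0.5 jump threshold, and the fitness increments are hypothetical stand-ins for the actual PyGame implementation and reward values used in the study.

```python
import neat


def eval_genomes(genomes, config):
    """Evaluate one generation: each genome controls one Naruto clone."""
    for genome_id, genome in genomes:
        genome.fitness = 0.0
        net = neat.nn.FeedForwardNetwork.create(genome, config)

        # CloneGame is a hypothetical wrapper around the PyGame environment.
        game = CloneGame()
        while game.clone_alive():
            # Inputs as described in the text: clone height, horizontal
            # distance to the next obstacle, and the gap position.
            inputs = (game.clone_height(),
                      game.obstacle_distance(),
                      game.gap_position())

            # Single tanh output: jump when the activation exceeds a
            # threshold (0.5 here is an assumed value).
            output = net.activate(inputs)
            if output[0] > 0.5:
                game.jump()

            game.step()

            # Fitness rewards survival time and obstacle navigation
            # (increments are illustrative).
            genome.fitness += 0.1
            if game.passed_obstacle():
                genome.fitness += 5.0


def run(config_path, generations=10):
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         config_path)
    population = neat.Population(config)
    population.add_reporter(neat.StdOutReporter(True))

    # Generational evolutionary process: fitter genomes reproduce and evolve.
    winner = population.run(eval_genomes, generations)
    return winner
```

In this structure the fitness function is the only game-specific piece; NEAT's speciation, crossover, and mutation are handled entirely by the library according to the configuration file.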
The literature review highlights the NEAT algorithm’s success in evolving neural networks for gaming, its integration with reinforcement learning, and the technical design of game frameworks. Prior studies have shown NEAT’s flexibility and adaptability in real-time systems, making it a strong candidate for both gaming and broader applications like robotics and healthcare.
Conclusion
This research successfully demonstrated the application of the NEAT (NeuroEvolution of Augmenting Topologies) algorithm in the dynamic environment of an AI-driven, Naruto-themed Flappy Bird-style game. Key findings include:
• The NEAT algorithm supported AI agents, represented as a series of Naruto clones, whose successive generations evolved to adapt to increasingly challenging game conditions.
• Using tanh as the activation function, together with optimized mutation rates and fitness criteria, succeeded in evolving neural networks capable of navigating dynamic obstacles.
• The game design, in which the AI and the player interact directly, produced an engaging Player vs. AI competition; AI performance kept improving, making it increasingly difficult for the player to win.
• The fitness scores of AI agents increased by 150% within 10 generations, demonstrating the effectiveness of NEAT in improving AI capabilities.
• Dynamic backgrounds with increasing difficulty levels added a layer of progression and replayability to the gameplay.
While the study highlights the strengths of NEAT, it also revealed certain limitations:
• Time Complexity: The computational cost of running NEAT grows rapidly with the number of generations, which may limit scalability for larger populations or more complex environments.
• Poor Early Performance: AI agents in early generations exhibited chaotic and non-optimal behaviors, requiring several generations to stabilize and optimize.
• Player-AI Interaction: The competitive aspect relied strongly on the player's input skills, which varied widely, making it challenging to generalize AI performance metrics in Player vs. AI scenarios.
• Simple AI Inputs: The input space for AI agents was relatively simple, consisting of height, obstacle distance, and gap position. In more complex games requiring richer input features, this approach is unlikely to generalize well.
References
[1] Stanley, K.O., Miikkulainen, R. (2002). Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2), 99-127. https://doi.org/10.1162/106365602320169811
[2] Selvan, J.P., Game, P.S. (2022). Playing a 2D game indefinitely using NEAT and reinforcement learning. arXiv preprint arXiv:2207.14140. https://doi.org/10.48550/arXiv.2207.14140
[3] Kaul, S., Fayaz, S.A., Zaman, M., Butt, M.A. (2022). Is decision tree obsolete in its original form? A burning debate. Revue d'Intelligence Artificielle, 36(1), 105-113. https://doi.org/10.18280/ria.360112
[4] Rehman, A., Butt, M.A., Zaman, M. (2021). A survey of medical image analysis using deep learning approaches. In 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, pp. 1334-1342. https://doi.org/10.1109/ICCMC51019.2021.9418385
[5] Amir, S., Zaman, M., Ahmed, M. (2022). Numerical and experimental investigation of meteorological data using adaptive linear M5 model tree for the prediction of rainfall. Review of Computer Engineering Research. http://dx.doi.org/10.18488/76.v9i1.2961
[6] Fayaz, S.A., Zaman, M., Butt, M.A. (2021). An application of logistic model tree (LMT) algorithm to ameliorate prediction accuracy of meteorological data. International Journal of Advanced Technology and Engineering Exploration, 8(84), 1424. http://dx.doi.org/10.19101/IJATEE.2021.874586
[7] Altaf, I., Butt, M.A., Zaman, M. (2021). A pragmatic comparison of supervised machine learning classifiers for disease diagnosis. In 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, pp. 1515-1520. https://doi.org/10.1109/ICIRCA51532.2021.9544582
[8] Mir, N.M., Khan, S., Butt, M.A., Zaman, M. (2016). An experimental evaluation of Bayesian classifiers applied to intrusion detection. Indian Journal of Science and Technology, 9(12), 1-7. http://dx.doi.org/10.17485/ijst/2016/v9i12/86291
[9] Ashraf, M., Zaman, M., Ahmed, M. (2018). Performance analysis and different subject combinations: An empirical and analytical discourse of educational data mining. In 2018 8th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, pp. 287-292. https://doi.org/10.1109/CONFLUENCE.2018.8442633
[10] Papavasileiou, E., Cornelis, J., Jansen, B. (2021). A systematic literature review of the successors of "NeuroEvolution of Augmenting Topologies". Evolutionary Computation, 29(1), 1-73. https://doi.org/10.1162/evco_a_00282
[11] Oh, I., Rho, S., Moon, S., Son, S. (2021). Creating pro-level AI for a real-time fighting game using deep reinforcement learning. IEEE Transactions on Games, pp. 1-10. https://doi.org/10.1109/TG.2021.3049539
[12] Liu, B. (2020). Implementing game strategies based on reinforcement learning. In 2020 6th International Conference on Robotics and Artificial Intelligence (ICRAI), pp. 53-56. https://doi.org/10.1145/3449301.3449311
[13] Urtans, E., Nikitenko, A. (2018). Survey of Deep Q-Network variants in PyGame Learning Environment. In ICDLT '18: Proceedings of the 2018 2nd International Conference on Deep Learning Technologies, pp. 27-36. https://doi.org/10.1145/3234804.3234816
[14] McIntyre, A., Kallada, M., Miguel, C.G., da Silva, C.F. (n.d.). neat-python. Retrieved from https://github.com/CodeReclaimers/neat-python
[15] Kumar, S. (2020). Understand types of environments in Artificial Intelligence. Retrieved from https://www.aitude.com/understand-types-of-environments-in-artificial-intelligence/
[16] Bird, S., Ellison, N.B., Klein, D. (2020). The rise of Python: A survey of recent research. ACM Computing Surveys, 53(5), 1-36. https://doi.org/10.1145/3411764
[17] Python | Pygame (game development library). (n.d.). Retrieved from https://www.javatpoint.com/pygame
[18] Zorrilla, G. (n.d.). Pygame 1.5.5 reference manual. Retrieved from https://www.pygame.org/ftp/contrib/pygame_docs.pdf
[19] Crljenko, J. (2018). Making a simple arcade game in Python. Retrieved from https://core.ac.uk/reader/199709841
[20] Python | Displaying images with pygame. (n.d.). Retrieved from https://www.geeksforgeeks.org/python-display-images-with-pygame/
[21] Python | Playing audio file in pygame. (n.d.). Retrieved from https://www.geeksforgeeks.org/python-playing-audio-file-in-pygame/