Network Function Virtualization (NFV) is an emerging networking technology in the telecom sector that lowers operating and capital costs while accelerating network service deployment. NFV tackles the cost and rigidity of dedicated hardware by utilizing standardized IT virtualization technology to consolidate many types of network equipment onto industry-standard, high-volume servers, switches, and storage systems, which may be situated in data centers, network nodes, or at the end user's premises. Virtual Network Functions (VNFs) take a software-oriented approach, creating a highly flexible and dynamic network that can meet diverse demands, but they also raise a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency, and efficient virtual network resource allocation within the Network Function Virtualization Infrastructure (NFVI). However, because network conditions and workloads are dynamic, effective resource management remains a major difficulty in NFV systems. This survey paper presents an overview of resource allocation algorithms in NFV. We examine state-of-the-art approaches such as Deep Reinforcement Learning (DRL), the Parallel VNF Placement Framework (PVFP), an RL-based framework, and Online Coordinated Resource Allocation (OCRA), analyzing their speed, limitations, and suitability for dynamic environments. This paper has been prepared as an effort to reassess research on the relevance of machine learning techniques in the domain of Network Function Virtualization.
Introduction
Traditionally, telecom networks relied on dedicated, physical hardware devices for each network function, leading to rigid service chains, slow product cycles, and dependency on specialized equipment. Network Function Virtualization (NFV) introduces a transformative approach by virtualizing these functions as software instances running on standard hardware, improving flexibility, agility, and cost-efficiency.
NFV’s architecture decouples network functions from hardware, enabling deployment on commercial servers, switches, and storage across various locations. However, realizing NFV’s potential faces challenges such as efficient resource management, autonomous scaling, and maintaining service reliability, especially critical for 5G networks and applications requiring high dependability.
Related Research Highlights
Several studies address NFV resource allocation, VNF scheduling, and service function chain (SFC) placement using heuristics, genetic algorithms, machine learning, and reinforcement learning to optimize performance, cost, and latency.
Algorithms have been developed for dynamic VNF placement and migration, delay-aware scheduling, and coordinated resource allocation, often using mixed-integer programming, tabu search, or genetic approaches.
Machine learning and deep reinforcement learning techniques have been applied to predict resource needs, optimize scaling, and reduce latency and resource consumption; a minimal sketch of this idea appears after this list.
Advanced models like multi-agent DRL, federated learning, and graph neural networks improve adaptability and efficiency in managing VNFs and service chains.
Research also focuses on addressing real-time demands, online backup, and fault tolerance in edge and cloud environments.
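To make the RL-based placement idea from the list above concrete, the following is a minimal sketch, not taken from any of the surveyed papers: tabular Q-learning that places a sequence of VNFs onto servers so as to balance residual capacity. The server count, capacities, VNF demands, and reward shape are all illustrative assumptions.

# Minimal sketch (illustrative assumptions throughout): tabular
# Q-learning for placing a sequence of VNFs onto servers, one simple
# form of the RL-based placement idea discussed above.
import random

NUM_SERVERS = 3
CAPACITY = 10                      # CPU units per server (assumed)
VNF_DEMANDS = [4, 3, 5, 2, 6]      # CPU demand of each VNF (assumed)

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 2000

# State: (index of next VNF to place, tuple of residual server capacities)
Q = {}

def q(state, action):
    return Q.get((state, action), 0.0)

def step(state, action):
    """Place the next VNF on server `action`; return (next_state, reward, done)."""
    idx, caps = state
    demand = VNF_DEMANDS[idx]
    caps = list(caps)
    if caps[action] < demand:            # infeasible placement: penalize and stop
        return (idx, tuple(caps)), -10.0, True
    caps[action] -= demand
    # Reward balanced load: penalize the variance of residual capacity.
    mean = sum(caps) / len(caps)
    reward = 1.0 - 0.1 * sum((c - mean) ** 2 for c in caps) / len(caps)
    done = idx + 1 == len(VNF_DEMANDS)
    return (idx + 1, tuple(caps)), reward, done

for _ in range(EPISODES):
    state = (0, (CAPACITY,) * NUM_SERVERS)
    done = False
    while not done:
        if random.random() < EPSILON:    # epsilon-greedy exploration
            action = random.randrange(NUM_SERVERS)
        else:
            action = max(range(NUM_SERVERS), key=lambda a: q(state, a))
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(q(nxt, a) for a in range(NUM_SERVERS))
        Q[(state, action)] = q(state, action) + ALPHA * (
            reward + GAMMA * best_next - q(state, action))
        state = nxt

# Greedy rollout of the learned policy.
state, plan = (0, (CAPACITY,) * NUM_SERVERS), []
for _ in VNF_DEMANDS:
    action = max(range(NUM_SERVERS), key=lambda a: q(state, a))
    plan.append(action)
    state, _, done = step(state, action)
    if done:
        break
print("VNF -> server placement:", plan)

Real systems replace the Q-table with a neural network (DRL) and a far richer state, but the loop structure, placement decisions rewarded by the resulting resource balance, is the same.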
Overall, NFV research is rapidly evolving with innovative algorithms and intelligent frameworks aimed at optimizing virtual network resource usage, improving quality of service, and enabling scalable, flexible telecom infrastructure for future technologies.
Conclusion
In this paper, a comprehensive review was conducted to examine resource allocation problems in NFV and the various approaches proposed to solve them. RL, DRL, OCRA, and PVFP algorithms were compared for allocating virtual network functions across multiple servers, focusing on memory, CPU, and execution time. A synthetic dataset featuring 5 servers and 10 VNFs was utilized, creating a controlled environment for testing. The dataset encompasses a range of resource needs for the VNFs and differing capacities for the servers, replicating real-world situations with diverse resources. This variety ensures that each algorithm's strengths and weaknesses are highlighted. RL coordination offers a compromise between efficiency and resource use, making it well suited to typical workloads. DRL-based allocation achieves the highest CPU utilization, making it suitable for resource-intensive tasks. OCRA is efficient for basic allocation, but it may not fully utilize resources. PVFP performs exceptionally well in parallel execution scenarios, although it tends to have longer execution times. Choosing an algorithm must take into account the nature of the workload, the availability of resources, and the particular needs of the application setting.
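As a minimal sketch of how a synthetic testbed like the one described above might be generated, the following builds 5 servers with differing capacities and 10 VNFs with varied demands, then runs a simple first-fit placement as a reference point. The value ranges are illustrative assumptions, not the paper's actual parameters, and first-fit is only a baseline, not one of the compared algorithms.

# Minimal sketch of a synthetic testbed: 5 servers with differing
# capacities and 10 VNFs with varied CPU/memory demands. All value
# ranges are assumed for illustration.
import random

random.seed(42)

NUM_SERVERS, NUM_VNFS = 5, 10

servers = [{"id": s,
            "cpu": random.randint(8, 16),      # CPU cores (assumed range)
            "mem": random.randint(16, 64)}     # memory in GB (assumed range)
           for s in range(NUM_SERVERS)]

vnfs = [{"id": v,
         "cpu": random.randint(1, 4),
         "mem": random.randint(2, 8)}
        for v in range(NUM_VNFS)]

def first_fit(vnfs, servers):
    """Place each VNF on the first server with enough residual CPU and memory."""
    residual = [{"cpu": s["cpu"], "mem": s["mem"]} for s in servers]
    placement = {}
    for vnf in vnfs:
        for s, res in enumerate(residual):
            if res["cpu"] >= vnf["cpu"] and res["mem"] >= vnf["mem"]:
                res["cpu"] -= vnf["cpu"]
                res["mem"] -= vnf["mem"]
                placement[vnf["id"]] = s
                break
        else:
            placement[vnf["id"]] = None   # no server can host this VNF
    return placement

print(first_fit(vnfs, servers))

Holding the dataset fixed in this way lets the compared algorithms (RL, DRL, OCRA, PVFP) be evaluated on identical workloads, so differences in memory use, CPU utilization, and execution time reflect the algorithms rather than the inputs.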