Abstract
Mobile Edge Computing (MEC) enables resource-constrained devices to offload computationally intensive tasks to nearby edge servers, reducing latency and energy consumption while maintaining quality of service. However, unstable network conditions, limited edge resources, and complex decision-making requirements often lead to task backlogs and degraded performance. This paper presents TOMEC-RAS-Net, a lightweight deep reinforcement learning (DRL) framework for real-time adaptive task offloading in dynamic MEC environments. The proposed framework integrates density-based spatial clustering (DBSCAN) for intelligent task grouping, an ensemble RAS-Net actor architecture combining ResNet, AlexNet, and ShuffleNet backbones for robust feature extraction, and a priority-weighted resampling strategy that enhances learning efficiency under varying network conditions. A Markov Decision Process (MDP) formulation jointly optimises binary offloading decisions, CPU frequency allocation, and transmission power subject to queue stability and average power constraints. Experimental evaluation on an NVIDIA RTX 4090 platform demonstrates that TOMEC-RAS-Net achieves 98.74% accuracy, 98.92% precision, 98.65% sensitivity, 99.01% specificity, and a Matthews Correlation Coefficient (MCC) of 97.46%, outperforming DNN, CNN, ResNet, and LSTM baselines across all metrics. The model maintains low false positive (0.99%) and false negative (1.35%) rates and exhibits stable convergence under five-fold cross-validation, confirming its suitability for real-time task offloading at the network edge.
Introduction
The rapid growth of mobile devices and IoT applications has increased computational demand at the network edge, where traditional cloud computing struggles with latency, congestion, and bandwidth limitations. Mobile Edge Computing (MEC) addresses this by offloading computational tasks from mobile devices to nearby edge servers, reducing communication delay and enabling applications like AR, autonomous vehicles, and real-time analytics.
Task offloading is central to MEC efficiency, but conventional heuristics and standard DRL models struggle under dynamic network conditions, limited edge resources, and high computational overhead. To address these issues, this paper proposes TOMEC-RAS-Net, a lightweight ensemble DRL framework featuring:
DBSCAN-based task clustering to reduce queue backlog.
Ensemble RAS-Net actor with complementary CNN backbones for robust, lightweight feature learning.
Priority-weighted resampling for adaptability under dynamic conditions.
Unified MDP-based optimisation balancing queue stability, energy efficiency, and offloading decisions.
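To make the third component concrete, the sketch below shows one common way to implement priority-weighted resampling from a replay buffer, where transitions are drawn with probability proportional to a priority raised to a temperature exponent. This is a minimal illustration under assumed details: the function name, the `alpha` exponent, and the use of |TD error|-style priorities are generic conventions, not the paper's exact implementation.

```python
import random

def priority_weighted_sample(buffer, priorities, batch_size, alpha=0.6, rng=None):
    """Sample transition indices with probability proportional to priority**alpha.

    buffer      -- list of stored transitions (illustrative tuples here)
    priorities  -- non-negative priorities, e.g. |TD error| + epsilon
    alpha       -- temperature: 0 recovers uniform sampling, 1 is fully priority-driven
    """
    rng = rng or random.Random()
    weights = [p ** alpha for p in priorities]
    # random.choices draws with replacement, weighted by the given weights
    return rng.choices(range(len(buffer)), weights=weights, k=batch_size)

# Toy usage: the high-priority transition (index 4) is drawn far more often,
# so the learner revisits surprising experiences under shifting MEC conditions.
buffer = [("s%d" % i, "a", 0.0, "s%d'" % i) for i in range(5)]
priorities = [0.1, 0.1, 0.1, 0.1, 5.0]
idx = priority_weighted_sample(buffer, priorities, batch_size=3,
                               rng=random.Random(0))
```

Setting `alpha` between 0 and 1 trades off between uniform replay (stable but slow to react) and pure prioritisation (fast adaptation but biased), which is the adaptability lever the framework exploits under dynamic network conditions.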
The framework enables adaptive, energy-efficient, and scalable task offloading in heterogeneous MEC networks, outperforming existing static, high-overhead, or resource-intensive approaches while supporting multiple wireless devices and dynamic workloads.
Conclusion
This paper presented TOMEC-RAS-Net, a lightweight ensemble deep reinforcement learning framework for adaptive task offloading in Mobile Edge Computing. The framework integrates three novel components: (i) DBSCAN-based density clustering for task grouping that reduces queue congestion and computational redundancy; (ii) ensemble RAS-Net actor architecture combining ResNet, AlexNet, and ShuffleNet for lightweight yet robust feature extraction with reduced estimation bias and improved convergence stability; and (iii) priority-weighted resampling for adaptive learning efficiency under non-stationary MEC conditions. The system jointly optimises binary offloading decisions, CPU frequency, and transmission power within a unified MDP formulation subject to queue stability and average power constraints.
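The joint optimisation described above can be summarised in a generic per-slot form; the symbols below (queue backlog Q_i(t), binary offloading decision x_i(t), CPU frequency f_i(t), transmit power p_i(t), delay and energy costs D_i(t) and E_i(t), weights ω₁, ω₂, and power budget P̄_i) are illustrative placeholders consistent with the description, not the paper's exact notation:

```latex
\min_{\{x_i(t),\, f_i(t),\, p_i(t)\}} \;
  \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T}
  \mathbb{E}\big[\, \omega_1 D_i(t) + \omega_2 E_i(t) \,\big]
\quad \text{s.t.} \quad
x_i(t) \in \{0, 1\}, \qquad
\limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\big[Q_i(t)\big] < \infty, \qquad
\limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\big[p_i(t)\big] \le \bar{P}_i
```

The second constraint is the standard strong-stability condition for the task queues, and the third enforces the long-term average power budget noted in the text.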
Experimental results demonstrate that TOMEC-RAS-Net outperforms DNN, CNN, ResNet, and LSTM baselines across all evaluation metrics, achieving 98.74% accuracy, 98.92% precision, 98.65% sensitivity, 99.01% specificity, an MCC of 97.46%, a false positive rate of 0.99%, and a false negative rate of 1.35%. Stable convergence and consistent five-fold cross-validation performance (mean 98.74%) confirm generalisation reliability. These results establish TOMEC-RAS-Net as a viable, accurate, and efficient solution for real-time task offloading in resource-constrained MEC environments.
Future work will focus on deployment on real 5G/6G MEC infrastructure, extension to multi-agent federated DRL for scalable cooperative offloading, integration of energy harvesting and green computing strategies for battery-constrained IoT devices, and incorporation of blockchain-based security mechanisms for privacy-preserving task offloading. Adaptive lightweight neural architectures and dynamic clustering algorithms will also be explored to further reduce computational demands on severely resource-limited edge nodes.