HYBRID DEEP Q-NETWORK ARCHITECTURE FOR INTELLIGENT RESOURCE MANAGEMENT IN BIG DATA ENVIRONMENTS
DOI: https://doi.org/10.63001/tbs.2024.v19.i02.pp131-138

Keywords: Deep Reinforcement Learning (DRL), Resource Management Optimization, Big Data Processing, Deep Q-Network (DQN), Workload Prediction, Computational Efficiency, Resource Allocation

Abstract
This paper introduces an advanced Deep Reinforcement Learning (DRL) framework for optimizing resource allocation in big data processing systems, addressing the limitations of previous approaches. Whereas existing models such as Resource Manager (ResMan) and Deep Resource Allocator (DeepRA) rely primarily on static threshold-based policies, the proposed system employs a novel hybrid Deep Q-Network architecture that dynamically adapts to varying workload patterns. The system was evaluated on the Google Cluster Trace dataset (29 days, 12,500 machines) and the Alibaba Cluster Trace dataset (8 days, 4,000 machines). Comparative analysis reveals significant improvements over prior models: the proposed system outperforms ResMan by 45% in resource utilization and exceeds DeepRA's job completion efficiency by 38%. The traditional MAPE-K feedback-loop approach achieved only 58% accuracy in resource prediction, whereas the proposed model maintains 89% accuracy across diverse workload scenarios. Implemented in TensorFlow 2.8 on a 200-node Hadoop cluster, the system demonstrated 32% lower resource contention than conventional methods. The model's dual-network architecture with prioritized experience replay converges 41% faster than single-network approaches. Furthermore, when tested against sudden workload spikes, the system maintained 94% performance stability, significantly surpassing the 71% stability of threshold-based systems. These results establish a new benchmark for intelligent resource management in big data environments, particularly in handling heterogeneous workloads and dynamic resource requirements.
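The prioritized experience replay component mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the class and parameter names are hypothetical, and the code follows the standard proportional-prioritization formulation (sampling probability proportional to |TD error|^alpha, with importance-sampling weights to correct the resulting bias).

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.buffer = []            # stored transitions
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                # next write position (ring buffer)

    def add(self, transition, td_error=1.0):
        # Priority grows with the magnitude of the TD error; the small
        # constant keeps zero-error transitions sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        n = len(self.buffer)
        probs = self.priorities[:n] / self.priorities[:n].sum()
        idx = np.random.choice(n, batch_size, p=probs)
        # Importance-sampling weights compensate for the non-uniform
        # sampling; normalizing by the max keeps weights in (0, 1].
        weights = (n * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.buffer[i] for i in idx], idx, weights
```

In a full dual-network DQN, the online network's TD errors would be fed back into `add` (and refreshed after each update), while a separate target network supplies the bootstrap targets.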