SLO-Targeted Congestion Control with Deep Reinforcement Learning
Abstract
The Internet faces significant congestion control (CC) challenges due to unpredictable traffic patterns and dynamic network conditions. Traditional CC methods often struggle to consistently meet strict Service Level Objectives (SLOs) while reducing end-to-end latency, leading to suboptimal user experiences. In this paper, we introduce DRLLM, a novel congestion control algorithm that integrates Deep Reinforcement Learning (DRL) with the Lagrange multiplier method. By combining the adaptive intelligence of DRL with the constrained-optimization power of Lagrange multipliers, DRLLM dynamically adapts to network demands and delivers a reliable user experience with consistent SLO compliance across a variety of network conditions. Our extensive simulations demonstrate superior performance: under high-buffer conditions, DRLLM reduces average latency by 15% compared to BBR, 50% compared to Aurora, and 67% compared to Cubic. Moreover, it achieves the lowest 95th-percentile latency across different network conditions, with low latency jitter. These results demonstrate DRLLM's ability to deliver stable, low-latency, high-throughput performance.
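
As a minimal sketch of how DRL is typically coupled with the Lagrange multiplier method for SLO-constrained control (the symbols below are illustrative assumptions, not the paper's exact formulation), the constrained problem of maximizing expected throughput reward $R$ under policy $\pi$ subject to a latency SLO $L_{\mathrm{SLO}}$ can be relaxed into the unconstrained objective

$$\max_{\pi} \; \mathbb{E}_{\pi}[R] \;-\; \lambda \left( \mathbb{E}_{\pi}[L] - L_{\mathrm{SLO}} \right),$$

where $L$ is the observed end-to-end latency. The multiplier $\lambda \ge 0$ is updated by dual ascent on the constraint violation,

$$\lambda \;\leftarrow\; \max\!\left(0,\; \lambda + \eta \left( \mathbb{E}_{\pi}[L] - L_{\mathrm{SLO}} \right) \right),$$

with step size $\eta$, so that latency overruns raise the penalty and push the learned policy back toward SLO compliance.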

