
SLO-Targeted Congestion Control with Deep Reinforcement Learning

Accepted version
Peer-reviewed

Abstract

The Internet faces significant congestion control (CC) challenges due to unpredictable traffic patterns and dynamic network conditions. Traditional CC methods often struggle to consistently meet strict Service Level Objectives (SLOs) while reducing end-to-end latency, leading to suboptimal user experiences. In this paper, we introduce DRLLM, a novel congestion control algorithm that seamlessly integrates Deep Reinforcement Learning (DRL) with the Lagrange multiplier method. By combining the adaptive intelligence of DRL with the mathematical optimization power of Lagrange multipliers, DRLLM dynamically adapts to network demands and provides a highly reliable, guaranteed user experience across a variety of network conditions. Our extensive simulations demonstrate superior performance: under high-buffer conditions, DRLLM reduces average latency by 15% compared to BBR, 50% compared to Aurora, and 67% compared to Cubic. Moreover, it achieves the lowest 95th-percentile latency across different network conditions, with low latency jitter. These results demonstrate DRLLM's ability to deliver stable, low-latency, high-throughput performance.

Conference Name

30th IEEE Symposium on Computers and Communications

Journal ISSN

2642-7389

Rights and licensing

Except where otherwise noted, this item's license is described as Attribution 4.0 International.
Sponsorship
Horizon Europe UKRI Underwrite Innovate (10066543)
The work was partially funded by the European Union under the project EDGELESS (GA no. 101092950).