
		<paper>
			<loc>https://jjcit.org/paper/275</loc>
			<title>CUBIC-LEARN: A REINFORCEMENT LEARNING APPROACH TO CUBIC CONGESTION CONTROL</title>
			<doi>10.5455/jjcit.71-1748057293</doi>
			<authors>Ehsan Abedini,Mohsen Nickray</authors>
			<keywords>Q-learning,Reinforcement learning,CUBIC Algorithm,Network congestion</keywords>
			<citation>2</citation>
			<views>2162</views>
			<downloads>918</downloads>
			<received_date>30-May.-2025</received_date>
			<revised_date>3-Sep.-2025</revised_date>
			<accepted_date>22-Sep.-2025</accepted_date>
			<abstract>Effective congestion management enables reliable and fast data transfer over networks. CUBIC delivers reliable results under normal conditions but cannot adapt effectively to changing network scenarios. We introduce CUBIC-Learn, a reinforcement-learning (RL) approach for improving congestion control in CUBIC. The central idea is to use a Q-learning algorithm to adjust congestion window thresholds based on real-time measurements of packet loss, throughput and latency. Simulations demonstrate more efficient and reliable congestion control with CUBIC-Learn than with standard CUBIC: CUBIC-Learn achieves a 47% reduction in packet loss, over a 59% increase in bandwidth utilization, approximately a 28% decrease in retransmissions and 47% lower latency. In addition, CUBIC-Learn shows significant improvements in congestion window (cwnd) growth behavior, fairness among competing flows and stability under heterogeneous traffic and network scenarios, including gigabit-scale bandwidth conditions. Statistical analysis further confirms the robustness of these gains, while the method introduces no additional computational overhead. Overall, CUBIC-Learn outperforms PCC, Reno, Tahoe, NewReno and BBRv3 on most metrics. These findings suggest that RL can markedly improve congestion control in high-speed networks.</abstract>
		</paper>


