Abstract

Congestion control algorithms are used in computer networks to regulate and manage the flow of data traffic, preventing network congestion and ensuring efficient data transmission. Due to their conservative nature, traditional congestion control algorithms often demonstrate suboptimal performance in high-latency and lossy network environments because they do not fully leverage the available network bandwidth. This article discusses the progress made in optimizing the performance of the Data Capture and Delivery (DCD) application, a vital data management component within the Deep Space Network (DSN), over a wide-area network (WAN) characterized by high latency and loss. The methodology employed involves setting up a controlled testing environment that accurately replicates these challenging network conditions. Then, through systematic experimentation, different congestion control algorithms are evaluated on their ability to maintain optimal throughput while effectively handling random packet loss. Key performance metrics such as throughput, latency, and loss rate are measured and analyzed to assess each algorithm’s effectiveness. Preliminary results indicate that the Bottleneck Bandwidth and Round-Trip Time (BBR) algorithm exhibits superior performance in high-latency and lossy environments compared with the other algorithms tested. However, continued testing and analysis are required to validate these initial observations and draw robust conclusions, and further refinement of the algorithm is recommended for network performance optimization.
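The limitation noted above for conservative, loss-based congestion control can be illustrated with the classic Mathis model, which bounds steady-state TCP throughput by MSS/RTT · C/√p. The sketch below is only illustrative; the MSS, RTT, and loss values are hypothetical and are not taken from the article's experiments.

```python
import math

def mathis_throughput(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate upper bound on steady-state throughput (bytes/s) of a
    loss-based TCP congestion control, per the Mathis et al. model:
    throughput <= (MSS / RTT) * (C / sqrt(p)), with C = sqrt(3/2)."""
    c = math.sqrt(3.0 / 2.0)
    return (mss_bytes / rtt_s) * (c / math.sqrt(loss_rate))

# Hypothetical WAN path: 1460-byte MSS, 200 ms RTT, 1% random loss.
rate = mathis_throughput(1460, 0.200, 0.01)
print(f"{rate * 8 / 1e6:.2f} Mbit/s")  # well under 1 Mbit/s despite any available capacity
```

Because the bound scales as 1/(RTT·√p), even modest random loss on a long-RTT path caps a loss-based algorithm far below link capacity; BBR, which models bottleneck bandwidth and RTT rather than reacting to every loss, avoids this particular collapse.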

Keywords

Deep Space Network; congestion control; network congestion

Details

Volume
42-241
Published
May 15, 2025
Pages
1–7
File Size
0.2 MB