Different networks have different issues.
Flow control, whether implemented in the TCP protocol or at the application layer for UDP-based apps, is always end-to-end. These mechanisms can only react to what they can measure or predict across the end-to-end connection, such as packet loss, delay, or jitter. They have no knowledge of why certain impairments affect certain network segments differently: they cannot tell whether the trouble comes from the user being in a crowded hotspot, from broadband oversubscription, or from mid-mile congestion at a peering point. Often, by the time applications or protocols react based on end-to-end measurements or predictions, the condition has already changed. Impairments are also commonly assumed to be caused by congestion, which is not always the case, especially in wireless segments.
The problem with end-to-end protocols

The problem is that remediation measures (e.g., reducing the rate, changing the CODEC, adding FEC) may in fact make conditions worse. Often the result is applications that cannot utilize the available capacity of the underlying network. We will show in the next blog how the typical response of TCP to a packet loss rate of less than half of one percent can reduce the effective throughput of a link by over 95%. That means a 25 megabits-per-second broadband connection that is perfectly capable of supporting video, voice, and file transfers suddenly becomes, effectively, a 1.25 megabits-per-second link barely capable of voice. Since almost every user now accesses apps and services through some form of wireless network, there will almost always be some packet loss. This makes the ability
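To get an intuition for why small loss rates are so damaging, the steady-state throughput of a loss-based TCP can be approximated with the well-known Mathis model, rate ≈ (MSS × C) / (RTT × √p). The sketch below uses illustrative MSS and RTT values (assumptions for this example, not figures from this post) to show how throughput collapses as the loss rate grows:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float,
                          c: float = 1.22) -> float:
    """Approximate steady-state throughput (bits/s) of loss-based TCP,
    per the Mathis model: rate ~ MSS * C / (RTT * sqrt(p))."""
    return (mss_bytes * 8 * c) / (rtt_s * math.sqrt(loss_rate))

# Illustrative assumptions: 1460-byte MSS, 100 ms round-trip time.
for p in (0.0001, 0.001, 0.005):
    bps = mathis_throughput_bps(1460, 0.100, p)
    print(f"loss {p:.2%}: ~{bps / 1e6:.2f} Mbit/s")
```

Even at half a percent loss, the model caps this hypothetical connection at roughly 2 Mbit/s, regardless of how much capacity the underlying link actually has; the exact numbers depend on MSS and RTT, but the square-root dependence on loss is the point.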