Saturday, 25 March 2017

TCP Congestion Control

Introduction
The explanation is as follows:
Congestion control is perhaps the most important aspect of TCP; it is what makes TCP capable of
achieving high performance and avoiding congestion collapse, at least in wired and single-hop wireless
networks. Where the flow control mechanism addresses the receiver's resources, congestion control
addresses the network's resources, preventing the sender from pushing too much traffic into the network.

Senders use the acknowledgments for data sent, and the lack of these, to infer network conditions
between the sender and receiver.

The TCP congestion control algorithm has received much attention since its introduction in 1988,
and a substantial number of proposals for improving the congestion control mechanism have
been put forward. Most of these TCP variants, such as Tahoe, Reno, and Vegas, have focused on congestion control.
Congestion control concerns the entire path between the source and the destination. Even if the destination host is fine, a problem at any router along the path can cause congestion.

There are two algorithms for congestion control. The first is TCP Tahoe, the oldest method. The other is TCP Reno, which is newer.

The Tahoe Algorithm
TCP Tahoe is described as follows:
The first version of TCP with congestion control became known as TCP Tahoe.
The algorithm is as follows:
Assign a congestion window Cw:
1. The initial value of Cw is 1 (packet).
2. If transmission is successful, the congestion window is doubled. This continues until Cmax is reached.
3. After Cw ≥ Cmax, Cw = Cw + 1.
4. If a timeout occurs before the ACK, TCP assumes congestion.
If congestion occurs, the algorithm is as follows (see the sketch after this list):
TCP's response to congestion is drastic:
1. A random backoff timer disables all transmissions for the duration of the timer.
2. Cw is set to 1.
3. Cmax is set to Cmax / 2.
The congestion window can become quite small after successive packet losses.
Throughput falls dramatically as a result.
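As a rough illustration of the two lists above, here is a minimal Python sketch of my own (not from the quoted source). It ignores the random backoff timer, uses the list's Cw/Cmax notation, and advances the window one round trip at a time.

# Window rules from the lists above: exponential growth up to Cmax,
# then linear growth, and a reset with a halved Cmax on a timeout.
def next_window(cw, cmax, timed_out):
    """Return (new Cw, new Cmax) after one round trip."""
    if timed_out:                       # step 4: timeout -> assume congestion
        return 1, max(cmax // 2, 1)     # Cw back to 1, Cmax halved
    if cw < cmax:                       # steps 1-2: doubling phase
        return min(cw * 2, cmax), cmax
    return cw + 1, cmax                 # step 3: Cw >= Cmax, grow by 1

cw, cmax = 1, 16
for rtt in range(12):
    cw, cmax = next_window(cw, cmax, timed_out=(rtt == 7))
    print(f"RTT {rtt}: Cw={cw} Cmax={cmax}")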
Graphically it looks like this:

Explanations of the Algorithm
The explanation of steps 1 and 2 is as follows:
The TCP Tahoe congestion control strategy consists of multiple mechanisms. For each connection,
TCP maintains a congestion window that limits the total number of unacknowledged packets that may be in transit end-to-end. The congestion window is an extension of the sliding window that TCP uses for flow control. When a connection is initialized, and after a timeout, TCP uses a mechanism called slow start to increase the congestion window. It starts with a window of two times the Maximum Segment Size (MSS). Although the initial rate is low, the rate of increase is very rapid: for every packet acknowledged, the congestion window increases by one MSS, so that effectively the congestion window doubles every RTT.

The window is doubled as follows. If the congestion window has two packets outstanding and one packet is acknowledged, the congestion window is increased to three packets while only one packet remains outstanding, so the sender may now send two new packets. When the final packet (of the original two) is acknowledged, the sender may increase the congestion window by one MSS yet again, bringing the total congestion window to four, of which two are free. In other words, the congestion window has doubled.
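The per-ACK accounting described above can be sketched like this (my own illustration; the window is counted in MSS units and every outstanding segment is assumed to be acknowledged within the round trip):

# Slow start at ACK granularity: +1 MSS per ACK doubles the window per RTT.
MSS = 1
cwnd = 2 * MSS                        # initial window of two segments

for rtt in range(1, 5):
    acks = cwnd                       # every outstanding segment is ACKed
    for _ in range(acks):
        cwnd += MSS                   # one MSS per acknowledged segment
    print(f"after RTT {rtt}: cwnd = {cwnd} MSS")   # 4, 8, 16, 32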

The explanation of step 3 is as follows:
When the congestion window exceeds a threshold ssthresh, the algorithm enters a new state, called
congestion avoidance. In some implementations (e.g., Linux), the initial ssthresh is large, resulting
in the first slow start usually ending in a loss of a packet. The ssthresh is updated at the end of each
slow start, and will often affect subsequent slow starts triggered by timeouts.

In the state of congestion avoidance, the congestion window is additively increased by one MSS
every RTT, instead of the previous one MSS per acknowledged packet, as long as non-duplicate
ACKs are received.
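A common way to realize "+1 MSS per RTT" is to grow the window by a fraction of an MSS on each ACK; a sketch under that assumption (values are made up):

# Each ACK grows cwnd by MSS*MSS/cwnd; about cwnd/MSS ACKs arrive per RTT,
# so the window gains roughly one MSS per round trip.
MSS = 1460.0
cwnd = 10 * MSS

for _ in range(int(cwnd / MSS)):      # ACKs received during one RTT
    cwnd += MSS * MSS / cwnd          # additive increase, spread over the ACKs

print(cwnd / MSS)                     # just under 11 MSS after one RTT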
If congestion occurs, the explanation is as follows:
When a packet is lost, the likelihood of receiving duplicate ACKs is very high. (It is also possible,
though unlikely, that the stream has undergone extreme packet reordering, which would also prompt
duplicate ACKs.) Triple duplicate ACKs are interpreted in the same way as a timeout. In such a case,
Tahoe performs a "fast retransmit", reduces the congestion window to one MSS, and resets to the
slow-start state.
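A sketch of that duplicate-ACK logic (my own; retransmit() is a placeholder for the real sender machinery, and halving ssthresh follows standard Tahoe behaviour even though the quote does not spell it out):

# Count duplicate ACKs; on the third duplicate, retransmit the missing
# segment, shrink the window to one MSS, and fall back to slow start.
def retransmit(seq):
    print("fast retransmit of segment", seq)

def on_ack(state, ack_no):
    if ack_no == state["last_ack"]:               # duplicate ACK
        state["dup_acks"] += 1
        if state["dup_acks"] == 3:                # triple duplicate ACK
            retransmit(ack_no)
            state["ssthresh"] = max(state["cwnd"] // 2, 2)
            state["cwnd"] = 1                     # back to one MSS
            state["slow_start"] = True            # re-enter slow start
    else:                                         # ACK for new data
        state["last_ack"] = ack_no
        state["dup_acks"] = 0

state = {"last_ack": 0, "dup_acks": 0, "cwnd": 8, "ssthresh": 16, "slow_start": False}
for ack in (1, 2, 2, 2, 2):                       # segment 3 was lost
    on_ack(state, ack)
print(state)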
Timeout Duration
The explanation is as follows:
In order to estimate a typical RTT, it is therefore natural to take some sort of average of the SampleRTT values. TCP maintains an average, called EstimatedRTT, of the SampleRTT values. Upon obtaining a new SampleRTT, TCP updates EstimatedRTT according to the following formula:
EstimatedRTT = (1 – x) • EstimatedRTT + x • SampleRTT
The formula above is written in the form of a programming-language statement — the new value of EstimatedRTT is a weighted combination of the previous value of EstimatedRTT and the new value for SampleRTT.
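The same update, written as a runnable sketch; x = 0.125 is the commonly recommended weighting (RFC 6298 uses α = 1/8), and the sample values are made up:

# Exponentially weighted moving average of SampleRTT.
def update_estimated_rtt(estimated_rtt, sample_rtt, x=0.125):
    """EstimatedRTT = (1 - x) * EstimatedRTT + x * SampleRTT"""
    return (1 - x) * estimated_rtt + x * sample_rtt

est = 0.100                               # seconds; first sample used directly
for sample in (0.120, 0.090, 0.300):      # example SampleRTT values
    est = update_estimated_rtt(est, sample)
    print(round(est, 4))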
SampleRTT
The statement in Kurose and Ross's book "Computer Networking: A Top-Down Approach" is as follows:
Instead of measuring a SampleRTT for every transmitted segment, most TCP implementations take only one SampleRTT measurement at a time.
Some TCP implementations take only a single SampleRTT measurement at a time. This seemed odd to me too. The explanation is as follows:
As in most systems engineering topics, there's a balance between performance and accuracy. In the case of TCP, it could take a SampleRTT measurement every other segment, but that would imply a higher processing delay, and thus a higher queueing delay, etc. The single SampleRTT measurement helps TCP get an idea of the RTT while staying lean and fast.

Each node wants to spend the minimum time possible processing each segment so that it can get forwarded onto the network and move on to the next segment.

Additionally, each subsequent measurement is less valuable than the one before. The first measurement lets you know the neighborhood of RTT, the second helps you fine-tune that, etc. So there's a diminishing payoff the more RTTs you measure, especially since after each measurement you have fewer segments remaining to send. What's the point of measuring the RTT if there's only 5 segments left to send? Thus the first RTT measurement is the most important and really the only one needed.
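The "one measurement at a time" policy can be sketched like this (my own illustration; on_send/on_ack are hypothetical hooks into a sender, not real socket API calls):

import time

# Only one segment is timed at any moment; a new measurement starts only
# when the previous one has completed.
timed_seq = None                 # sequence number currently being measured
send_time = None

def on_send(seq):
    global timed_seq, send_time
    if timed_seq is None:        # nothing being timed -> time this segment
        timed_seq = seq
        send_time = time.monotonic()

def on_ack(acked_up_to):
    global timed_seq
    if timed_seq is not None and acked_up_to > timed_seq:
        sample_rtt = time.monotonic() - send_time
        timed_seq = None         # ready to start the next measurement
        return sample_rtt        # feed this into EstimatedRTT above
    return None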

Random Early Detection (RED)
This algorithm runs on routers to assist TCP. The explanation is as follows:
RED is used to prevent queues from filling up by randomly dropping queued packets. Full queues lead to tail-drop, and this can cause multiple TCP flows to become globally synchronized, simultaneously shrinking and expanding their windows, alternately starving and filling queues.
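A minimal sketch of RED's drop decision (simplified; real RED also maintains an exponentially weighted average of the queue length, and the thresholds here are arbitrary):

import random

# Never drop below min_th, always drop above max_th, and in between drop
# with a probability that grows linearly with the average queue length.
def red_should_drop(avg_queue_len, min_th=5, max_th=15, max_p=0.1):
    if avg_queue_len < min_th:
        return False
    if avg_queue_len >= max_th:
        return True
    drop_p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < drop_p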
TCP in the Gaming World
TCP is not used in the real-time gaming world. The explanation is as follows:
The model whereby one host acts as a "server" and keeps track of game state (all players, statistics and object state in a world), and all clients communicate with the server via UDP is the industry standard.
The reason is that a single lost packet also blocks the packets that come after it. The explanation is as follows:
The real problem with TCP is head-of-line blocking. If a single packet is lost on a TCP connection, then the loss must be detected, the packet resent, and the resent packet delivered to the application before any data coming after the lost packet can be delivered to the application.
...
Games where realtime isn't so critical can get away with just using TCP.
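Head-of-line blocking can be illustrated with a sketch of an in-order receive buffer: nothing after a missing segment can be handed to the application until that segment arrives (my own example, using toy sequence numbers):

# In-order delivery: segments 3 and 4 arrive but cannot be handed to the
# application while segment 2 is missing.
buffer = {}
next_expected = 1

def on_segment(seq, data):
    global next_expected
    buffer[seq] = data
    delivered = []
    while next_expected in buffer:        # only contiguous data is delivered
        delivered.append(buffer.pop(next_expected))
        next_expected += 1
    return delivered                      # empty while a gap exists

print(on_segment(1, "a"))   # ['a']
print(on_segment(3, "c"))   # []   blocked behind the missing segment 2
print(on_segment(4, "d"))   # []   still blocked
print(on_segment(2, "b"))   # ['b', 'c', 'd']   gap filled, everything drains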


