Flow Control

Remember that the hosts on each side of a TCP connection set aside a receive buffer for the connection. When the TCP connection receives bytes that are correct and in sequence, it places the data in the receive buffer. The associated application process will read data from this buffer, but not necessarily at the instant the data arrives. In fact, the receiving application may be busy with some other task and may not even attempt to read the data until long after it has arrived. If the application is comparatively slow at reading the data, the sender can very easily overflow the connection's receive buffer by sending too much data too quickly.

TCP provides a flow-control service to its applications to eliminate the possibility of the sender overflowing the receiver's buffer. Flow control is thus a speed-matching service: it matches the rate at which the sender is sending against the rate at which the receiving application is reading. As noted earlier, a TCP sender can also be throttled due to congestion within the IP network; this form of sender control is referred to as congestion control, a topic we will explore in detail in "Principles of Congestion Control" and "TCP Congestion Control". Even though the actions taken by flow and congestion control are alike (the throttling of the sender), they are taken for very different reasons. Unfortunately, various authors use the terms interchangeably, and the savvy reader would be wise to distinguish between them. Let's now discuss how TCP provides its flow-control service. So as not to lose sight of the forest for the trees, we assume throughout this section that the TCP implementation is such that the TCP receiver discards out-of-order segments.

TCP provides flow control by having the sender maintain a variable called the receive window. Informally, the receive window is used to give the sender an idea of how much free buffer space is available at the receiver. Because TCP is full-duplex, the sender at each side of the connection maintains a distinct receive window. Let's investigate the receive window in the context of a file transfer. Assume that Host A is sending a large file to Host B over a TCP connection. Host B allocates a receive buffer to this connection; denote its size by RcvBuffer. From time to time, the application process in Host B reads from the buffer. Define the following variables:

●  LastByteRead: the number of the last byte in the data stream read from the buffer by the application process in B

●  LastByteRcvd: the number of the last byte in the data stream that has arrived from the network and has been placed in the  receive buffer at B

Figure 1: The receive window (rwnd) and the receive buffer (RcvBuffer)

Because TCP is not permitted to overflow the allocated buffer, we must have

LastByteRcvd - LastByteRead  ≤  RcvBuffer

The receive window, denoted rwnd, is set to the amount of spare room in the buffer:

rwnd = RcvBuffer - [LastByteRcvd - LastByteRead]

Because the spare room changes with time, rwnd is dynamic. The variable rwnd is shown in Figure 1.
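The receiver-side bookkeeping above can be sketched in a few lines of Python. This is purely illustrative (a toy class, not a real TCP stack); the attribute names mirror the text's variables, and the method names are invented for the example.

```python
# Toy sketch of the receiver's rwnd bookkeeping described in the text.
# Hypothetical names; byte numbers are simplified to running counters.

class ReceiveBuffer:
    def __init__(self, rcv_buffer: int):
        self.RcvBuffer = rcv_buffer      # total buffer size allocated at Host B
        self.LastByteRcvd = 0            # last byte placed into the buffer
        self.LastByteRead = 0            # last byte read by the application

    def rwnd(self) -> int:
        # rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)
        return self.RcvBuffer - (self.LastByteRcvd - self.LastByteRead)

    def deliver_from_network(self, nbytes: int) -> bool:
        # In-order data is accepted only if it fits in the spare room.
        if nbytes > self.rwnd():
            return False                 # would overflow the buffer
        self.LastByteRcvd += nbytes
        return True

    def app_read(self, nbytes: int) -> int:
        # The application drains the buffer, which opens up spare room.
        available = self.LastByteRcvd - self.LastByteRead
        n = min(nbytes, available)
        self.LastByteRead += n
        return n

buf = ReceiveBuffer(4096)
buf.deliver_from_network(3000)   # 3000 bytes arrive in order
print(buf.rwnd())                # 1096 bytes of spare room remain
buf.app_read(2000)               # the application reads 2000 bytes
print(buf.rwnd())                # 3096: spare room grows as the buffer drains
```

Note how rwnd shrinks as data arrives and grows as the application reads, which is exactly why the advertised value changes over the connection's lifetime.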

How does the connection use the variable rwnd to provide the flow-control service? Host B tells Host A how much spare room it has in the connection buffer by placing its current value of rwnd in the receive window field of every segment it sends to A.  In the beginning, Host B sets rwnd = RcvBuffer. Note that to pull this off, Host B must keep track of numerous connection-specific variables.

Host A in turn keeps track of two variables, LastByteSent and LastByteAcked, which have obvious meanings. Note that the difference between these two variables, LastByteSent - LastByteAcked, is the amount of unacknowledged data that A has sent into the connection. By keeping the amount of unacknowledged data less than the value of rwnd, Host A is assured that it is not overflowing the receive buffer at Host B. Therefore, Host A makes sure throughout the connection's life that

LastByteSent - LastByteAcked  ≤  rwnd
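Host A's side of the scheme reduces to a simple check before each transmission. The following sketch (with hypothetical function and parameter names) computes how many additional bytes A may send without violating the inequality above:

```python
# Sender-side sketch: Host A limits unacknowledged data to the most
# recently advertised rwnd. Illustrative only; names are invented.

def bytes_allowed(last_byte_sent: int, last_byte_acked: int, rwnd: int) -> int:
    """How many more bytes A may send without risking overflow at B."""
    in_flight = last_byte_sent - last_byte_acked   # unacknowledged data
    return max(0, rwnd - in_flight)

# A has sent bytes up to 5000, bytes up to 3000 are acked, B advertised rwnd = 4096:
print(bytes_allowed(5000, 3000, 4096))   # 2096 more bytes may be sent
```

If the amount of data in flight already equals rwnd, the function returns 0 and A must wait for acknowledgments (carrying a fresh rwnd) before sending more.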

There is one minor technical problem with this scheme. To see this, assume Host B's receive buffer becomes full so that rwnd = 0. After advertising rwnd = 0 to Host A, also assume that B has nothing to send to A. Now examine what happens. As the application process at B empties the buffer, TCP does not send new segments with new rwnd values to Host A: in fact, TCP sends a segment to Host A only if it has data to send or if it has an acknowledgment to send. Therefore, Host A is never informed that some space has opened up in Host B's receive buffer - Host A is blocked and can transmit no more data. To solve this problem, the TCP specification requires Host A to continue to send segments with one data byte while B's receive window is zero. These segments will be acknowledged by the receiver. Eventually, the buffer will begin to empty and the acknowledgments will contain a nonzero rwnd value.
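The zero-window interaction can be sketched as a toy event loop: the sender keeps probing with one-data-byte segments, each probe is acknowledged with the receiver's current rwnd, and the deadlock ends once the application drains the buffer. This is a simplified simulation with invented names, not real TCP behavior.

```python
# Toy simulation of the zero-window problem and its fix, as described above.
# Hypothetical classes and timing; real TCP uses persist timers for probing.

class ToyReceiver:
    def __init__(self):
        self.spare = 0               # buffer currently full: rwnd = 0
        self.ticks = 0

    def rwnd(self) -> int:
        return self.spare

    def on_probe(self) -> None:
        # Each probe is ACKed; here the application drains the buffer
        # after a few probe intervals, opening the window.
        self.ticks += 1
        if self.ticks >= 3:
            self.spare = 1024

def probe_until_window_opens(receiver, max_probes: int = 100) -> int:
    """Send 1-byte probes while rwnd == 0; return how many probes were needed."""
    probes = 0
    while receiver.rwnd() == 0 and probes < max_probes:
        probes += 1
        receiver.on_probe()          # probe segment; ACK carries current rwnd
    return probes

r = ToyReceiver()
print(probe_until_window_opens(r))   # 3 probes before the window opens
print(r.rwnd())                      # 1024: the ACK now advertises nonzero rwnd
```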

Having explained TCP's flow-control service, we briefly mention here that UDP does not provide flow control. In order to understand the issue, consider sending a series of UDP segments from a process on Host A to a process on Host B. For a typical UDP implementation, UDP will append the segments to a finite-sized buffer that "precedes" the corresponding socket (that is, the door to the process). The process reads one entire segment at a time from the buffer. If the process does not read the segments fast enough from the buffer, the buffer will overflow and segments will be dropped.
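The UDP contrast can be made concrete with a small simulation, counting whole segments as the text describes. The function and parameter names here are invented for illustration; there is no feedback to the sender, so anything that does not fit in the buffer is simply lost.

```python
# Toy simulation of UDP's lack of flow control: segments that arrive while
# the finite socket buffer is full are dropped. Illustrative names only.

from collections import deque

def udp_receive(arrivals, reads_per_tick: int, buffer_capacity: int) -> int:
    """Return segments dropped; arrivals[t] = segments arriving at tick t."""
    buf = deque()
    dropped = 0
    for n in arrivals:
        for _ in range(n):
            if len(buf) < buffer_capacity:
                buf.append(object())      # enqueue the segment
            else:
                dropped += 1              # buffer full: segment is lost
        for _ in range(min(reads_per_tick, len(buf))):
            buf.popleft()                 # process reads one whole segment at a time
    return dropped

# 5 segments/tick arrive but the process reads only 2/tick; the buffer holds 4:
print(udp_receive([5, 5, 5, 5], reads_per_tick=2, buffer_capacity=4))   # 10 dropped
```

With TCP, the advertised rwnd would have throttled the sender long before any data was lost; with UDP, the mismatch between arrival and read rates translates directly into drops.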


Copyright

The contents available on this website are copyrighted by TechPlus unless otherwise indicated. All rights are reserved by TechPlus, and content may not be reproduced, published, or transferred in any form or by any means, except with the prior written permission of TechPlus.