Output Ports

Output port processing, shown in "Switching Fabric" Figure 2, takes the packets that have been stored in the output port's memory and transmits them over the outgoing link. The data link protocol processing and line termination are the send-side link- and physical-layer functionality that interacts with the input port on the other end of the outgoing link, as discussed in What's Inside a Router. The queuing and buffer management functionality is required when the switch fabric delivers packets to the output port at a rate that exceeds the output link rate; we'll cover output port queuing below.

Where Does Queuing Occur?

If we look at the input and output port functionality and the configurations shown in "Switching Fabric" Figure 1, it is evident that packet queues can form at both the input ports and the output ports. It is important to consider these queues in a bit more detail, since as these queues grow large, the router's buffer space will eventually be exhausted and packet loss will occur. Remember that in our earlier discussions, we said that packets were lost within the network or dropped at a router. It is here, at these queues within a router, where such packets are in fact dropped and lost. The actual location of packet loss (either at the input port queues or the output port queues) will depend on the traffic load, the relative speed of the switching fabric, and the line speed, as discussed below.

Assume that the input line speeds and output line speeds are all the same, and that there are n input ports and n output ports. Define the switching fabric speed as the rate at which the switching fabric can move packets from input ports to output ports. If the switching fabric speed is at least n times as fast as the input line speed, then no queuing can take place at the input ports. This is because even in the worst case, where all n input lines are receiving packets, the switch will be able to transfer n packets from input port to output port in the time it takes each of the n input ports to (simultaneously) receive a single packet. But what can happen at the output ports? Suppose again that the switching fabric is at least n times as fast as the line speeds. In the worst case, the packets arriving at each of the n input ports will be destined to the same output port. In this case, in the time it takes to receive (or send) a single packet, n packets will arrive at this output port. Since the output port can transmit only a single packet in a unit of time (the packet transmission time), the n arriving packets will have to queue (wait) for transmission over the outgoing link. Then n more packets can possibly arrive in the time it takes to transmit just one of the n packets that had previously been queued. And so on. Eventually, the number of queued packets can grow large enough to exhaust the memory space at the output port, in which case packets are dropped.
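The worst-case dynamic described above is easy to see numerically: the queue grows by n − 1 packets per time unit until the buffer fills, after which arrivals are dropped. The following is a minimal sketch; the function name and all parameter values are illustrative, not from the text.

```python
# Worst-case output-port queue growth: in each time unit, n packets
# (one per input port) arrive at the same output port, but only one
# packet can be transmitted over the outgoing link.

def simulate_output_queue(n_ports, time_units, buffer_size):
    """Return (queue length after each time unit, total packets dropped)."""
    queue = 0
    dropped = 0
    history = []
    for _ in range(time_units):
        queue += n_ports          # n packets arrive, all bound for this port
        queue -= 1                # one packet is transmitted per time unit
        if queue > buffer_size:   # buffer exhausted: excess packets are lost
            dropped += queue - buffer_size
            queue = buffer_size
        history.append(queue)
    return history, dropped

history, dropped = simulate_output_queue(n_ports=3, time_units=10, buffer_size=16)
# The queue grows by n - 1 = 2 packets per time unit until the buffer fills.
```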

Output port queuing is shown in Figure 1. At time t, a packet has arrived at each of the incoming input ports, each destined for the uppermost outgoing port. Assuming identical line speeds and a switch operating at three times the line speed, one time unit later (that is, in the time required to receive or send a packet), all three original packets have been transferred to the outgoing port and are queued awaiting transmission. In the next time unit, one of these three packets will have been transmitted over the outgoing link. In our example, two new packets have arrived at the incoming side of the switch; one of these packets is destined for this uppermost output port.

Output port queuing

Given that router buffers are required to absorb the fluctuations in traffic load, the natural question to ask is how much buffering is needed. For many years, the rule of thumb for buffer sizing was that the amount of buffering (B) should be equal to an average round-trip time (RTT, say 250 msec) times the link capacity (C). This result is based on an analysis of the queuing dynamics of a relatively small number of TCP flows [Villamizar 1994]. Thus, a 10 Gbps link with an RTT of 250 msec would need an amount of buffering equal to B = RTT · C = 2.5 Gbits of buffers. Recent theoretical and experimental efforts [Appenzeller 2004], however, suggest that when there are a large number of TCP flows (N) passing through a link, the amount of buffering required is B = RTT · C/√N. With a large number of flows normally passing through large backbone router links (see, e.g., [Fraleigh 2003]), the value of N can be large, with the decrease in required buffer size becoming quite important. [Appenzeller 2004; Wischik 2005; Beheshti 2008] provide very readable discussions of the buffer sizing problem from a theoretical, implementation, and operational standpoint.
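The two sizing rules can be compared with a quick calculation. The flow count N below is an illustrative assumption, not a figure from the text.

```python
from math import sqrt

RTT = 0.250    # round-trip time in seconds (250 msec, as in the example)
C = 10e9       # link capacity in bits per second (10 Gbps)
N = 10_000     # number of TCP flows sharing the link (illustrative)

b_classic = RTT * C              # rule of thumb: B = RTT * C
b_small = RTT * C / sqrt(N)      # [Appenzeller 2004]: B = RTT * C / sqrt(N)

# b_classic is 2.5e9 bits (2.5 Gbits); with N = 10,000 flows,
# b_small is 100x smaller: 2.5e7 bits (25 Mbits).
```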

A consequence of output port queuing is that a packet scheduler at the output port must choose one packet among those queued for transmission. This selection might be done on a simple basis, such as first-come-first-served (FCFS) scheduling, or a more sophisticated scheduling discipline such as weighted fair queuing (WFQ), which shares the outgoing link fairly among the different end-to-end connections that have packets queued for transmission. Packet scheduling plays a crucial role in providing quality-of-service guarantees. We'll thus cover packet scheduling extensively in "Multimedia Networking". [Cisco Queue 2009] provides a discussion of output port packet scheduling disciplines.
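The flavor of WFQ can be sketched with virtual finish times: each flow's packet is stamped with a finish time proportional to its length divided by the flow's weight, and the scheduler always transmits the packet with the smallest stamp. This is a deliberately simplified sketch (it assumes all packets are already queued at time zero and omits WFQ's virtual-clock machinery); the function name and flows are hypothetical.

```python
import heapq

def wfq_schedule(packets, weights):
    """Order transmissions by smallest virtual finish time.
    packets: list of (flow_id, length) in arrival order.
    weights: mapping of flow_id -> scheduling weight."""
    last_finish = {}
    heap = []
    for seq, (flow, length) in enumerate(packets):
        # a flow's next packet finishes after its previous one
        start = last_finish.get(flow, 0.0)
        finish = start + length / weights[flow]
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, seq, flow))
    return [flow for _, _, flow in sorted(heap)]

# Flow 'a' has twice the weight of flow 'b', so with equal-size packets
# it receives roughly two transmission slots for each of b's.
order = wfq_schedule(
    [('a', 100), ('a', 100), ('a', 100), ('b', 100), ('b', 100)],
    {'a': 2.0, 'b': 1.0})
```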

Likewise, if there is not enough memory to buffer an incoming packet, a decision must be made to either drop the arriving packet (a policy known as drop-tail) or remove one or more already-queued packets to make room for the newly arrived packet. In some cases, it may be beneficial to drop (or mark the header of) a packet before the buffer is full in order to provide a congestion signal to the sender. A number of packet-dropping and -marking policies (which collectively have become known as active queue management (AQM) algorithms) have been proposed and analyzed [Labrador 1999, Hollot 2002]. One of the most broadly studied and implemented AQM algorithms is the Random Early Detection (RED) algorithm. Under RED, a weighted average is maintained for the length of the output queue. If the average queue length is less than a minimum threshold, minth, when a packet arrives, the packet is admitted to the queue. On the other hand, if the queue is full or the average queue length is greater than a maximum threshold, maxth, when a packet arrives, the packet is marked or dropped. Finally, if the packet arrives to find an average queue length in the interval [minth, maxth], the packet is marked or dropped with a probability that is typically some function of the average queue length, minth, and maxth. A number of probabilistic marking/dropping functions have been proposed, and various versions of RED have been analytically modeled, simulated, and/or implemented. [Christiansen 2001] and [Floyd 2009] provide overviews and pointers to additional reading.

HOL blocking at an input-queued switch
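The three-way RED decision can be written down compactly. The sketch below assumes the common linear marking profile between the two thresholds; the function name, parameter names, and the linear profile itself are illustrative choices, not the only RED variant.

```python
import random

def red_decision(avg_qlen, min_th, max_th, max_p, queue_full=False):
    """Simplified RED admit/mark/drop decision for one arriving packet.
    avg_qlen is the weighted average queue length maintained by RED."""
    if queue_full or avg_qlen >= max_th:
        return 'drop'                 # queue full or average above maxth
    if avg_qlen < min_th:
        return 'enqueue'              # average below minth: always admit
    # average in [min_th, max_th): mark/drop with probability that
    # grows linearly from 0 (at min_th) to max_p (at max_th)
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return 'mark' if random.random() < p else 'enqueue'
```

The weighted average itself is typically maintained as an exponentially weighted moving average of the instantaneous queue length, so short bursts do not trigger drops.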

If the switch fabric is not fast enough (relative to the input line speeds) to transfer all arriving packets through the fabric without delay, then packet queuing can also take place at the input ports, as packets must join input port queues to wait their turn to be transferred through the switching fabric to the output port. To show an important consequence of this queuing, consider a crossbar switching fabric and suppose that (1) all link speeds are identical, (2) one packet can be transferred from any one input port to a given output port in the same amount of time it takes for a packet to be received on an input link, and (3) packets are moved from a given input queue to their desired output queue in an FCFS manner. Numerous packets can be transferred in parallel, as long as their output ports are different. On the other hand, if two packets at the front of two input queues are destined for the same output queue, then one of the packets will be blocked and must wait at the input queue - the switching fabric can transfer only one packet to a given output port at a time.

Figure 2 shows an example in which two packets (darkly shaded) at the front of their input queues are destined for the same upper-right output port. Assume that the switch fabric chooses to transfer the packet from the front of the upper-left queue. In this case, the darkly shaded packet in the lower-left queue must wait. But not only must this darkly shaded packet wait, so too must the lightly shaded packet that is queued behind that packet in the lower-left queue, even though there is no contention for the middle-right output port (the destination for the lightly shaded packet). This phenomenon is known as head-of-the-line (HOL) blocking in an input-queued switch - a queued packet in an input queue must wait for transfer through the fabric (even though its output port is free) because it is blocked by another packet at the head of the line. [Karol 1987] shows that due to HOL blocking, the input queue will grow to unbounded length (informally, this is equivalent to saying that significant packet loss will take place) under certain assumptions as soon as the packet arrival rate on the input links reaches only 58 percent of their capacity. A number of solutions to HOL blocking are discussed in [McKeown 1997b].
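The roughly 58 percent figure can be reproduced with a small simulation of a saturated input-queued crossbar: every input always has a head-of-line packet with a uniformly random destination, and each slot every contested output serves exactly one contender. This is a sketch under those saturation assumptions; the function name and parameters are illustrative.

```python
import random

def hol_throughput(n, slots, seed=0):
    """Estimate per-port throughput of a saturated n x n input-queued
    crossbar with FCFS input queues and uniform random destinations."""
    rng = random.Random(seed)
    heads = [rng.randrange(n) for _ in range(n)]   # HOL packet destinations
    served = 0
    for _ in range(slots):
        contenders = {}
        for i, dest in enumerate(heads):
            contenders.setdefault(dest, []).append(i)
        for dest, inputs in contenders.items():
            i = rng.choice(inputs)        # output grants one contending input
            heads[i] = rng.randrange(n)   # the next packet moves to the head;
            served += 1                   # blocked heads keep waiting
    return served / (slots * n)

# As n grows, the measured throughput approaches the [Karol 1987]
# limit of 2 - sqrt(2), i.e. about 0.586 of link capacity.
```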

