CSMA/CD: Ethernet's Multiple Access Protocol


When the nodes are interconnected with a hub (as opposed to a link-layer switch), as shown in "Ethernet" Figure 2, the Ethernet LAN is a true broadcast LAN - that is, when an adapter transmits a frame, all of the adapters on the LAN receive the frame. Because all of the adapters share this broadcast channel, Ethernet needs a multiple access protocol. Ethernet uses the celebrated CSMA/CD multiple access protocol. Recall from "Multiple Access Protocols" that CSMA/CD does the following:

1. An adapter may begin to transmit at any time; that is, there is no notion of time slots.
2. An adapter never transmits a frame when it senses that some other adapter is transmitting; that is, it uses carrier sensing.
3. A transmitting adapter aborts its transmission as soon as it detects that another adapter is also transmitting; that is, it uses collision detection.
4. Before attempting a retransmission, an adapter waits a random time that is typically small compared with the time to transmit a frame.

These mechanisms give CSMA/CD much better performance than slotted ALOHA in a LAN environment. In fact, if the maximum propagation delay between stations is very small, the efficiency of CSMA/CD can approach 100 percent. But note that the second and third mechanisms listed above require each Ethernet adapter to be able to (1) sense when some other adapter is transmitting and (2) detect a collision while it is transmitting. Ethernet adapters perform these two tasks by measuring voltage levels before and during transmission.

Each adapter runs the CSMA/CD protocol without explicit coordination with the other adapters on the Ethernet. Within a specific adapter, the CSMA/CD protocol works as follows:

1. The adapter obtains a datagram from the network layer, prepares an Ethernet frame, and puts the frame in an adapter buffer.

2. If the adapter senses that the channel is idle (that is, there is no signal energy entering the adapter from the channel for 96 bit times), it starts to transmit the frame. If the adapter senses that the channel is busy, it waits until it senses no signal energy (plus 96 bit times) and then starts to transmit the frame.

3. While transmitting, the adapter monitors for the presence of signal energy coming from other adapters. If the adapter transmits the entire frame without detecting signal energy from other adapters, the adapter is finished with the frame.

4. If the adapter detects signal energy from other adapters while transmitting, it stops transmitting its frame and instead transmits a 48-bit jam signal.

5. After aborting (that is, transmitting the jam signal), the adapter enters an exponential backoff phase. Specifically, when transmitting a given frame, after experiencing the nth collision in a row for this frame, the adapter chooses a value for K at random from {0, 1, 2, . . . , 2^m - 1} where m = min(n, 10). The adapter then waits K · 512 bit times and then returns to Step 2. (A sketch of this transmit loop appears after the list.)
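To see how these five steps fit together, here is a minimal Python-style sketch of the transmit loop. The adapter interface - the hypothetical nic object and its wait_until_idle, transmit_with_collision_detect, send_jam, and wait methods - is an assumption made purely for illustration; real adapters implement this logic in hardware.

```python
import random

BIT_TIME_US = 0.1   # one bit time on 10 Mbps Ethernet, in microseconds
IFG_BITS    = 96    # channel must be idle for 96 bit times before sending
JAM_BITS    = 48    # length of the jam signal
SLOT_BITS   = 512   # backoff is measured in units of 512 bit times

def csma_cd_send(frame, nic):
    """Transmit one frame using the CSMA/CD steps listed above."""
    collisions = 0
    while True:
        # Step 2: wait for 96 bit times of silence, then start transmitting.
        nic.wait_until_idle(idle_bits=IFG_BITS)

        # Step 3: transmit while monitoring for energy from other adapters;
        # assume this returns True if the whole frame went out collision-free.
        if nic.transmit_with_collision_detect(frame):
            return

        # Step 4: collision detected -- abort and send the 48-bit jam signal.
        nic.send_jam(JAM_BITS)

        # Step 5: exponential backoff, then return to Step 2.
        collisions += 1
        m = min(collisions, 10)
        k = random.randint(0, 2 ** m - 1)
        nic.wait(k * SLOT_BITS * BIT_TIME_US)  # wait K * 512 bit times
```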

A few comments about the CSMA/CD protocol are certainly in order. The purpose of the jam signal is to make sure that all other transmitting adapters become aware of the collision. Let's look at an example. Suppose adapter A begins to transmit a frame, and just before A's signal reaches adapter B, adapter B begins to transmit. So B will have transmitted only a few bits when it aborts its transmission. These few bits will indeed propagate to A, but they may not constitute enough energy for A to detect the collision. To make sure that A detects the collision (so that it too can abort), B transmits the 48-bit jam signal.

Next consider the exponential backoff algorithm. The first thing to notice here is that a bit time (that is, the time to transmit a single bit) is very short; for a 10 Mbps Ethernet, a bit time is 0.1 microsecond. Now let's look at an example. Suppose that an adapter attempts to transmit a frame for the first time and while transmitting it detects a collision. The adapter then chooses K = 0 with probability 0.5 or chooses K = 1 with probability 0.5. If the adapter chooses K = 0, then it immediately jumps to Step 2 after transmitting the jam signal. If the adapter chooses K = 1, it waits 51.2 microseconds before returning to Step 2. After a second collision, K is chosen with equal probability from {0,1,2,3}. After three collisions, K is chosen with equal probability from {0,1,2,3,4,5,6,7}. After 10 or more collisions, K is chosen with equal probability from {0,1,2, . . . , 1023}. Thus the size of the sets from which K is chosen grows exponentially with the number of collisions (until n = 10); it is for this reason that Ethernet's backoff algorithm is referred to as exponential backoff. The Ethernet standard imposes limits on the distance between any two nodes. These limits ensure that if adapter A chooses a lower value of K than all the other adapters involved in a collision, then adapter A will be able to transmit its frame without experiencing a new collision.
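The numbers in this paragraph follow directly from the rule m = min(n, 10). The short Python snippet below is an illustration only, using the 51.2-microsecond backoff slot of 10 Mbps Ethernet; it prints the range of K and the worst-case backoff delay after each consecutive collision.

```python
SLOT_TIME_US = 512 * 0.1  # 512 bit times at 0.1 microsecond per bit = 51.2 us

# Range of K and worst-case wait after the nth consecutive collision.
for n in range(1, 13):
    m = min(n, 10)
    k_max = 2 ** m - 1
    print(f"collision {n:2d}: K in {{0..{k_max:4d}}}, "
          f"max wait = {k_max * SLOT_TIME_US:8.1f} us")
```

After the first collision the worst-case wait is 51.2 microseconds; after the tenth, with K as large as 1023, it is roughly 52 milliseconds.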

Why use exponential backoff? Why not, for example, select K from {0,1,2,3,4,5,6,7} after every collision? The reason is that when an adapter experiences its first collision, it has no idea how many adapters are involved in the collision. If there are only a small number of colliding adapters, it makes sense to choose K from a small set of small values. On the other hand, if many adapters are involved in the collision, it makes sense to choose K from a larger, more dispersed set of values (why?). By increasing the size of the set after each collision, the adapter appropriately adapts to these different scenarios.

We also note here that each time an adapter prepares a new frame for transmission, it runs the CSMA/CD algorithm presented above, not taking into account any collisions that may have occurred in the recent past. So it is possible that an adapter with a new frame will immediately be able to sneak in a successful transmission while several other adapters are in the exponential backoff state.

Ethernet Efficiency


When only one node has a frame to send, the node can transmit at the full rate of the Ethernet technology (e.g., 10 Mbps, 100 Mbps, or 1 Gbps). However, if many nodes have frames to transmit, the effective transmission rate of the channel can be much less. We define the efficiency of Ethernet to be the long-run fraction of time during which frames are being transmitted on the channel without collisions when there is a large number of active nodes, with each node having a large number of frames to send. In order to present a closed-form approximation of the efficiency of Ethernet, let dprop denote the maximum time it takes signal energy to propagate between any two adapters. Let dtrans be the time to transmit a maximum-size Ethernet frame (approximately 1.2 msec for a 10 Mbps Ethernet). A derivation of the efficiency of Ethernet is beyond the scope of this blog. Here we simply state the following approximation:

Efficiency = 1 / (1 + 5 · dprop/dtrans)

We see from this formula that as dprop approaches 0, the efficiency approaches 1. This matches our intuition that if the propagation delay is zero, colliding nodes will abort immediately without wasting the channel. Also, as dtrans becomes very large, efficiency approaches 1. This is also intuitive because when a frame grabs the channel, it will hold on to the channel for a very long time; thus the channel will be doing productive work most of the time.
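Assuming the approximation stated above, a few representative numbers are easy to compute. The propagation delays used in the Python sketch below are illustrative values, not figures taken from the Ethernet standard.

```python
def ethernet_efficiency(d_prop, d_trans):
    """Approximation stated above: 1 / (1 + 5 * d_prop / d_trans)."""
    return 1.0 / (1.0 + 5.0 * d_prop / d_trans)

d_trans = 1.2e-3  # time to send a maximum-size frame at 10 Mbps (~1.2 ms)

# Illustrative propagation delays between the two most distant adapters.
for d_prop in (0.0, 5e-6, 25.6e-6, 100e-6):
    eff = ethernet_efficiency(d_prop, d_trans)
    print(f"d_prop = {d_prop * 1e6:6.1f} us -> efficiency ~ {eff:.3f}")
```

As expected, the efficiency falls as the propagation delay grows relative to the frame transmission time.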


Tags

nodes, network layer, ethernet frame, adapter buffer, exponential backoff
