Connectionless Transport: UDP

In this section, we'll examine UDP, how it works, and what it does. We encourage you to refer back to "Principles of Network Applications", which contains an overview of the UDP service model, and to "Socket Programming with UDP", which discusses socket programming using UDP.

To motivate our discussion of UDP, suppose you were interested in designing a no-frills, bare-bones transport protocol. How might you go about doing this? You might first consider using a vacuous transport protocol. In particular, on the sending side, you might take the messages from the application process and pass them directly to the network layer; on the receiving side, you might take the messages arriving from the network layer and pass them directly to the application process. But as we studied in the previous section, we have to do a little more than nothing. At the very least, the transport layer has to provide a multiplexing/demultiplexing service in order to pass data between the network layer and the correct application-level process.

UDP does just about as little as a transport protocol can do. Apart from the multiplexing/demultiplexing function and some light error checking, it adds nothing to IP. Indeed, if the application developer chooses UDP instead of TCP, the application is talking almost directly with IP. UDP takes messages from the application process, attaches source and destination port number fields for the multiplexing/demultiplexing service, adds two other small fields, and passes the resulting segment to the network layer. The network layer encapsulates the transport-layer segment into an IP datagram and then makes a best-effort attempt to deliver the segment to the receiving host. If the segment arrives at the receiving host, UDP uses the destination port number to deliver the segment's data to the correct application process. Note that with UDP there is no handshaking between sending and receiving transport-layer entities before a segment is sent. For this reason, UDP is said to be connectionless.
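The behavior described above can be seen directly at the socket level. The following sketch (using Python's standard socket API, with an arbitrary port number chosen for illustration) shows that a UDP sender can transmit with no prior handshake, and that the receiving side demultiplexes purely by destination port:

```python
import socket

# A receiving UDP socket: binding to a port is what enables the
# demultiplexing step -- segments addressed to this port are delivered here.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))  # port 9999 is an arbitrary choice

# A sending UDP socket: note there is no connect() or handshake of any
# kind before data is sent -- UDP is connectionless.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 9999))

# The OS's UDP implementation uses the destination port (9999) to deliver
# the segment's data to the receiver socket.
data, addr = receiver.recvfrom(2048)

sender.close()
receiver.close()
```

The source and destination port fields mentioned in the text are filled in by the operating system from the socket binding and the `sendto` destination; the application never constructs them by hand.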

DNS is an example of an application-layer protocol that normally uses UDP. When the DNS application in a host wants to make a query, it creates a DNS query message and passes the message to UDP. Without performing any handshaking with the UDP entity running on the destination end system, the host-side UDP adds header fields to the message and passes the resulting segment to the network layer. The network layer encapsulates the UDP segment into a datagram and sends the datagram to a name server. The DNS application at the querying host then waits for a reply to its query. If it doesn't receive a reply (possibly because the underlying network lost the query or the reply), either it tries sending the query to another name server, or it informs the invoking application that it can't get a reply.
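The DNS behavior just described, send a query, wait with a timeout, and fall back to another server if no reply arrives, can be sketched as follows. This is only an illustration of the pattern: the query bytes and the server list are placeholders, not a real DNS message or real name servers.

```python
import socket

def query_with_fallback(query: bytes, servers, timeout=0.2):
    """Send a UDP query to each server in turn; return the first reply,
    or None if every attempt times out (query or reply lost)."""
    for server in servers:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        try:
            s.sendto(query, server)      # no handshake: just send
            reply, _ = s.recvfrom(2048)  # block until reply or timeout
            return reply
        except (socket.timeout, ConnectionRefusedError):
            continue                     # lost query/reply: try next server
        finally:
            s.close()
    return None  # inform the invoking application: no reply obtainable

# With no server listening on this placeholder port, every attempt
# times out and the caller is told that no reply could be obtained.
result = query_with_fallback(b"fake-query", [("127.0.0.1", 50007)])
```

A real resolver would of course construct a properly formatted DNS query message and consult its configured name servers, but the retry-and-fallback logic lives in the application, exactly because UDP itself offers no reliability.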

Now you might be wondering why an application developer would ever choose to build an application over UDP rather than over TCP. Isn't TCP always preferable, since TCP offers a reliable data transfer service, while UDP does not? The answer is no, as many applications are better suited for UDP for the following reasons:

●  Finer application-level control over what data is sent, and when. Under UDP, as soon as an application process passes data to UDP, UDP will package the data inside a UDP segment and immediately pass the segment to the network layer. TCP, however, has a congestion-control mechanism that throttles the transport-layer TCP sender when one or more links between the source and destination hosts become very congested. TCP will also continue to resend a segment until the receipt of the segment has been acknowledged by the destination, regardless of how long reliable delivery takes. Since real-time applications often require a minimum sending rate, do not want to overly delay segment transmission, and can tolerate some data loss, TCP's service model is not particularly well matched to these applications' needs. As discussed below, these applications can use UDP and implement, as part of the application, any additional functionality that is required beyond UDP's no-frills segment-delivery service.

●  No connection establishment. As we'll discuss later, TCP uses a three-way handshake before it starts to transfer data. UDP just blasts away without any formal preliminaries, and therefore introduces no delay to establish a connection. This is perhaps the principal reason why DNS runs over UDP rather than TCP - DNS would be much slower if it ran over TCP. HTTP uses TCP rather than UDP, since reliability is critical for Web pages with text. But, as we briefly discussed in "The Web and HTTP", the TCP connection-establishment delay in HTTP is an important contributor to the delays associated with downloading Web documents.

●  No connection state. TCP maintains connection state in the end systems. This connection state includes receive and send buffers, congestion-control parameters, and sequence and acknowledgment number parameters. We will see in "Connection-Oriented Transport: TCP" that this state information is required to implement TCP's reliable data transfer service and to provide congestion control. UDP, however, does not maintain connection state and does not track any of these parameters. This is why a server devoted to a particular application can normally support many more active clients when the application runs over UDP rather than TCP.

●  Small packet header overhead. The TCP segment has 20 bytes of header overhead in every segment, whereas UDP has only 8 bytes of overhead.
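The last point is easy to see concretely: the entire UDP header consists of just four 16-bit fields - source port, destination port, length, and checksum - for 8 bytes in total. The sketch below packs an example header by hand (the field values are illustrative; a checksum of 0 is used here simply as a placeholder rather than a computed value):

```python
import struct

src_port, dst_port = 12345, 53   # example ports (53 is the DNS port)
payload = b"example"

# UDP length field counts the 8 header bytes plus the data bytes.
length = 8 + len(payload)

# Four 16-bit fields in network (big-endian) byte order:
# source port, destination port, length, checksum.
header = struct.pack("!HHHH", src_port, dst_port, length, 0)
```

Compare this with TCP's 20-byte header, which must additionally carry sequence and acknowledgment numbers, window size, flags, and more - the price of its richer service model.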

Figure 1 lists popular Internet applications and the transport protocols that they use. As we expect, e-mail, remote terminal access, the Web, and file transfer run over TCP - all these applications need the reliable data transfer service of TCP. However, many important applications run over UDP rather than TCP. UDP is used for RIP routing table updates. Since RIP updates are sent from time to time (normally every five minutes), lost updates will be replaced by more recent updates, thus making the lost, out-of-date update useless. UDP is used to carry network management (SNMP; see "Network Management") data. UDP is preferred to TCP in this case, since network management applications must often run when the network is in a stressed state - specifically when reliable, congestion-controlled data transfer is hard to achieve. Also, as we mentioned earlier, DNS runs over UDP, thereby avoiding TCP's connection-establishment delays.

As shown in Figure 1, both UDP and TCP are used today with multimedia applications, such as Internet phone, real-time video conferencing, and streaming of stored audio and video. We'll examine these applications in "Multimedia Networking". For now, we just mention that all of these applications can tolerate a small amount of packet loss, so that reliable data transfer is not absolutely critical for the application's success. Moreover, real-time applications, like Internet phone and video conferencing, react very poorly to TCP's congestion control. For these reasons, developers of multimedia applications may choose to run their applications over UDP instead of TCP. On the other hand, TCP is increasingly being used for streaming media transport. For instance, [Sripanidkulchai 2004] found that nearly 75% of on-demand and live streaming used TCP. When packet loss rates are low, and with some organizations blocking UDP traffic for security reasons (see "Security in Computer Networks"), TCP becomes an increasingly attractive protocol for streaming media transport.

Figure 1: Popular Internet applications and their underlying transport protocols

Though commonly done today, running multimedia applications over UDP is controversial. As we mentioned above, UDP has no congestion control. But congestion control is required to prevent the network from entering a congested state in which very little useful work is done. If everyone were to start streaming high-bit-rate video without using any congestion control, there would be so much packet overflow at routers that very few UDP packets would successfully traverse the source-to-destination path. Furthermore, the high loss rates induced by the uncontrolled UDP senders would cause the TCP senders (which, as we'll see, do decrease their sending rates in the face of congestion) to radically decrease their rates. In this way, the lack of congestion control in UDP can result in high loss rates between a UDP sender and receiver, and the crowding out of TCP sessions - a potentially serious problem [Floyd 1999]. Many researchers have proposed new mechanisms to force all sources, including UDP sources, to perform adaptive congestion control [Mahdavi 1997; Floyd 2000; Kohler 2006].

Before discussing the UDP segment structure, we mention that it is possible for an application to have reliable data transfer when using UDP. This can be done if reliability is built into the application itself (for instance, by adding acknowledgment and retransmission mechanisms, such as those we'll study in the next section). But this is a nontrivial task that would keep an application developer busy debugging for a long time. However, building reliability directly into the application allows the application to "have its cake and eat it too". That is, application processes can communicate reliably without being subjected to the transmission-rate constraints imposed by TCP's congestion-control mechanism.
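A heavily simplified sketch of this idea - building acknowledgment and retransmission into the application itself - is shown below as a stop-and-wait sender over UDP. The receiver runs in a thread of the same process purely for illustration; real applications would of course be far more elaborate (sequence numbers, duplicate detection, and so on, as we'll study in the next section):

```python
import socket
import threading

def reliable_send(sock, data, dest, max_tries=5, timeout=0.2):
    """Application-level reliability over UDP: retransmit the message
    until an acknowledgment arrives or we give up."""
    sock.settimeout(timeout)
    for _ in range(max_tries):
        sock.sendto(data, dest)
        try:
            ack, _ = sock.recvfrom(2048)
            if ack == b"ACK":
                return True              # receiver confirmed delivery
        except socket.timeout:
            continue                     # lost segment or lost ACK: resend
    return False

# An illustrative loopback receiver that acknowledges one message.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                # OS picks a free port

def ack_once():
    msg, addr = rx.recvfrom(2048)
    rx.sendto(b"ACK", addr)              # acknowledge receipt

t = threading.Thread(target=ack_once)
t.start()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
delivered = reliable_send(tx, b"payload", rx.getsockname())
t.join()
tx.close()
rx.close()
```

Note that the transmission rate here is entirely under the application's control - nothing throttles `reliable_send` - which is exactly the "have its cake and eat it too" property the text describes.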

