Non-Persistent and Persistent Connections

In many Internet applications, the client and server communicate for an extended period of time, with the client making a series of requests and the server responding to each of the requests. Depending on the application and on how the application is being used, the series of requests may be made back-to-back, periodically at regular intervals, or intermittently. When this client-server interaction is taking place over TCP, the application developer needs to make an important decision: should each request/response pair be sent over a separate TCP connection, or should all of the requests and their corresponding responses be sent over the same TCP connection? In the former approach, the application is said to use non-persistent connections; in the latter approach, persistent connections. To gain a deeper understanding of this design issue, let's look at the advantages and disadvantages of persistent connections in the context of a particular application, namely HTTP, which can use both non-persistent connections and persistent connections. Although HTTP uses persistent connections in its default mode, HTTP clients and servers can be configured to use non-persistent connections instead.

HTTP with Non-Persistent Connections


Let's walk through the steps of transferring a Web page from server to client for the case of non-persistent connections. Assume the page consists of a base HTML file and 10 JPEG images, and that all of these objects reside on the same server. Moreover, assume the URL for the base HTML file is

http://www.someSchool.edu/someDepartment/home.index

Here is what happens:

1. The HTTP client process initiates a TCP connection to the server www.someSchool.edu on port number 80, which is the default port number for HTTP. Associated with the TCP connection, there will be a socket at the client and a socket at the server.

2. The HTTP client sends an HTTP request message to the server via its socket. The request message includes the path name /someDepartment/home.index. (We will discuss HTTP messages in some detail below.)

3. The HTTP server process receives the request message via its socket, retrieves the object /someDepartment/home.index from its storage (RAM or disk), encapsulates the object in an HTTP response message, and sends the response message to the client via its socket.

4. The HTTP server process tells TCP to close the TCP connection. (But TCP doesn't actually terminate the connection until it knows for sure that the client has received the response message intact.)

5. The HTTP client receives the response message. The TCP connection terminates. The message indicates that the encapsulated object is an HTML file. The client extracts the file from the response message, examines the HTML file, and finds references to the 10 JPEG objects.

6. The first four steps are then repeated for each of the referenced JPEG objects.
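The socket-level interaction in steps 1 through 5 can be sketched in a few lines of Python. The host name and path come from the example URL; the explicit Connection: close header, the buffer size, and the header printing at the end are illustrative choices for this sketch, not requirements of HTTP.

import socket

# Step 1: the client initiates a TCP connection to the server on port 80.
host = "www.someSchool.edu"          # server from the example URL
path = "/someDepartment/home.index"  # object being requested

sock = socket.create_connection((host, 80))

# Step 2: the client sends an HTTP request message over its socket.
# "Connection: close" explicitly asks for a non-persistent connection.
request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
sock.sendall(request.encode("ascii"))

# Steps 3-5: the server sends the response and closes the connection;
# the client reads until the connection terminates (recv returns b"").
response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

print(response.split(b"\r\n\r\n", 1)[0].decode())  # print just the response headers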

As the browser receives the Web page, it displays the page to the user. Two different browsers may interpret (that is, display to the user) a Web page in somewhat different ways. HTTP has nothing to do with how a Web page is interpreted by a client. The HTTP specifications ([RFC 1945] and [RFC 2616]) define only the communication protocol between the client HTTP program and the server HTTP program.

The steps above illustrate the use of non-persistent connections, where each TCP connection is closed after the server sends the object - the connection does not persist for other objects. Note that each TCP connection transports exactly one request message and one response message. Thus, in this example, when a user requests the Web page, 11 TCP connections are generated.

In the steps described above, we were intentionally vague about whether the client obtains the 10 JPEGs over 10 serial TCP connections, or whether some of the JPEGs are obtained over parallel TCP connections. Indeed, users can configure modern browsers to control the degree of parallelism. In their default modes, most browsers open 5 to 10 parallel TCP connections, and each of these connections handles one request-response transaction. If the user prefers, the maximum number of parallel connections can be set to one, in which case the 10 connections are established serially. As we'll see in the next section, the use of parallel connections shortens the response time.
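As a rough illustration of parallel connections, the following Python sketch fetches the 10 referenced images over a small pool of worker threads, each using its own non-persistent TCP connection. The image paths, the pool size of 6, and the fetch_object helper are assumptions made for the example; setting max_workers to 1 would reproduce the purely serial case.

from concurrent.futures import ThreadPoolExecutor
import socket

HOST = "www.someSchool.edu"   # example server
MAX_PARALLEL = 6              # typical browsers default to 5-10 parallel connections

def fetch_object(path):
    """Fetch one object over its own (non-persistent) TCP connection."""
    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(
            f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()
        )
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    return path, len(data)

# Ten hypothetical JPEG references extracted from the base HTML file.
jpeg_paths = [f"/someDepartment/picture{i}.jpg" for i in range(1, 11)]

with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
    for path, size in pool.map(fetch_object, jpeg_paths):
        print(f"{path}: {size} bytes received")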

Before continuing, let's do a back-of-the-envelope calculation to estimate the amount of time that elapses from when a client requests the base HTML file until the entire file is received by the client. To this end, we define the round-trip time (RTT), which is the time it takes for a small packet to travel from client to server and then back to the client. The RTT includes packet-propagation delays, packet-queuing delays in intermediate routers and switches, and packet-processing delays. (These delays were discussed in "Delay, Loss, and Throughput in Packet-Switched Networks".) Now consider what happens when a user clicks on a hyperlink. As shown in Figure 1, this causes the browser to initiate a TCP connection between the browser and the Web server; this involves a "three-way handshake" - the client sends a small TCP segment to the server, the server acknowledges and responds with a small TCP segment, and, finally, the client acknowledges back to the server. The first two parts of the three-way handshake take one RTT. After completing the first two parts of the handshake, the client sends the HTTP request message combined with the third part of the three-way handshake (the acknowledgment) into the TCP connection. Once the request message arrives at the server, the server sends the HTML file into the TCP connection. This HTTP request/response eats up another RTT. Thus, roughly, the total response time is two RTTs plus the transmission time at the server of the HTML file.

Figure 1: Back-of-the-envelope calculation for the time needed to request and receive an HTML file
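To put numbers on this estimate, the small calculation below assumes a 100 ms RTT, a 100 KB base HTML file, and a 10 Mbps transmission rate; only the formula, total response time of roughly two RTTs plus the transmission time, comes from the discussion above, while the numbers themselves are made up for illustration.

# Hypothetical numbers for the back-of-the-envelope estimate.
rtt = 0.100                     # round-trip time in seconds (100 ms, assumed)
file_size_bits = 100_000 * 8    # 100 KB base HTML file (assumed)
link_rate_bps = 10_000_000      # 10 Mbps transmission rate (assumed)

transmission_time = file_size_bits / link_rate_bps   # time to push the file onto the link
response_time = 2 * rtt + transmission_time          # 1 RTT for handshake + 1 RTT for request/response

print(f"Transmission time: {transmission_time * 1000:.0f} ms")   # 80 ms
print(f"Total response time: {response_time * 1000:.0f} ms")     # 200 ms + 80 ms = 280 ms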

HTTP with Persistent Connections


Non-persistent connections have some shortcomings. First, a brand-new connection must be established and maintained for each requested object. For each of these connections, TCP buffers must be allocated and TCP variables must be kept in both the client and server. This can place a significant burden on the Web server, which may be serving requests from hundreds of different clients at the same time. Second, as we just described, each object suffers a delivery delay of two RTTs - one RTT to establish the TCP connection and one RTT to request and receive an object.
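Continuing with the assumed 100 ms RTT from the earlier estimate, the cumulative cost of this second shortcoming for the 11-object example page, fetched over serial non-persistent connections and ignoring transmission times, would be roughly:

rtt = 0.100          # assumed round-trip time (100 ms)
num_objects = 11     # base HTML file + 10 JPEG images

# Serial non-persistent connections: every object pays one RTT for the TCP
# handshake plus one RTT for its request/response, ignoring transmission times.
serial_non_persistent = num_objects * 2 * rtt
print(f"Serial, non-persistent: {serial_non_persistent * 1000:.0f} ms")   # 2200 ms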

With persistent connections, the server leaves the TCP connection open after sending a response. Subsequent requests and responses between the same client and server can be sent over the same connection. In particular, an entire Web page (in the example above, the base HTML file and the 10 images) can be sent over a single persistent TCP connection. Moreover, multiple Web pages residing on the same server can be sent from the server to the same client over a single persistent TCP connection. These requests for objects can be made back-to-back, without waiting for replies to pending requests (pipelining); when the server receives the back-to-back requests, it sends the objects back-to-back. Typically, the HTTP server closes a connection when it isn't used for a certain time (a configurable timeout interval). The default mode of HTTP uses persistent connections with pipelining.
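A minimal sketch of a persistent connection with pipelining, again in Python: all requests are written to a single TCP connection before any response is read, and the responses are then read back in order. The server name and object paths are the ones from the running example; the sketch also assumes each response carries a Content-Length header (that is, no chunked transfer coding), which keeps the parsing short.

import socket

HOST = "www.someSchool.edu"   # example server; paths below are illustrative
paths = ["/someDepartment/home.index",
         "/someDepartment/picture1.jpg",
         "/someDepartment/picture2.jpg"]

# One TCP connection carries every request/response pair (persistent connection).
sock = socket.create_connection((HOST, 80))
reader = sock.makefile("rb")   # file-like view for convenient line-by-line reading

# Pipelining: all requests are sent back-to-back, before any reply arrives.
# (HTTP/1.1 connections are persistent by default, so no Connection: close.)
for path in paths:
    sock.sendall(f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode())

# The server returns the responses in the same order the requests were sent.
for path in paths:
    status = reader.readline()                  # status line, e.g. HTTP/1.1 200 OK
    content_length = 0
    while (line := reader.readline()) not in (b"\r\n", b""):   # headers until blank line
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            content_length = int(value)
    body = reader.read(content_length)          # read exactly the advertised body
    print(f"{path}: {status.strip().decode()} ({len(body)} bytes)")

reader.close()
sock.close()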




