CMPU 375, Fall 2005 Lecture 2

Outline

  * Kinds of delays
  * Application protocols
  * Sockets

Kinds of delays

Delays at routers as packets are forwarded through the network are a fundamental part of the service model of the Internet.  Several components of router delay are easy to predict and understand.  One kind of delay, queuing delay, is somewhat more subtle, and is generally hard to predict.

Predictable delays

Processing delay  When a packet arrives at a router, the router must inspect its routing tables in order to figure out which outgoing interface to send the packet on.  Modern routers can do this very quickly, on the order of microseconds, so this is not a particularly significant source of delay.

Transmission delay  This is the amount of time it takes to push a packet out onto the outgoing link, and is given by L/R, where L is the length of the packet and R is the transmission rate of the link.  For example, say we have a 1500-byte Ethernet packet being sent out on a 100 Mb/s link.  1500 bytes is 12,000 bits (we will use 1 byte == 8 bits consistently).  So, the transmission delay is

12,000 bits * (1 s / 10^8 bits) = 1.2 * 10^-4 seconds = 0.12 milliseconds

Propagation delay  This is the delay due to the rate at which a single bit of information propagates through the communication medium.  The upper limit is the speed of light in a vacuum, which is about 3 * 10^8 meters/second.  Signals traveling through media typically used in computer networks, such as copper wires and optical fibers, move at about 2/3 the speed of light, or around 2 * 10^8 meters/second.  Propagation delay is simply d/s, where d is the distance and s is the speed of signals through the communication medium.  Say we have a single wire or optical fiber running from the east coast of the US to the west coast, a distance of around 3000 miles or 5000 km.  5000 km is 5 * 10^6 meters.  So, the propagation delay is

5 * 10^6 meters * (1 second / (2 * 10^8 meters)) = (5/200) seconds = 0.025 seconds = 25 milliseconds

Given the global scale of the Internet, propagation delays can be significant, especially since many protocols are sensitive to round-trip time, the amount of time it takes for a packet to reach its destination and for a response packet to come back to the original sender.  In the above example, the round-trip time will be at least 50 ms, due to propagation delay alone.
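To make these numbers concrete, here is a small Java program that reproduces the calculations above.  The packet size, link rate, and distance are just the example values from this lecture, not constants of any real network.

    public class DelayCalc {
        public static void main(String[] args) {
            // Transmission delay: L/R
            double packetBits = 1500 * 8;    // 1500-byte Ethernet packet
            double linkRate = 100e6;         // 100 Mb/s link
            double transmission = packetBits / linkRate;
            System.out.printf("Transmission delay: %.2f ms%n", transmission * 1000);

            // Propagation delay: d/s
            double distance = 5e6;           // 5000 km, coast to coast
            double signalSpeed = 2e8;        // about 2/3 the speed of light
            double propagation = distance / signalSpeed;
            System.out.printf("Propagation delay:  %.1f ms%n", propagation * 1000);

            // The round-trip time is at least twice the one-way propagation delay
            System.out.printf("Minimum RTT:        %.1f ms%n", 2 * propagation * 1000);
        }
    }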

Queuing delays

When a packet arrives at a router, it may need to wait in a queue for the outbound link to be available.

The queue is necessary because the outgoing link can only transmit packets at a fixed rate.  If too many packets arrive bound for that link, some of them must wait in the queue until the link is available.

You might ask why we can't just buy a faster outbound link.  If we increase the transmission rate of the link, then we should be able to avoid having to queue packets, right?

Unfortunately, this is not a general solution to the problem.  Network links are bidirectional, and each router will generally have more than two links to other systems.  Consider a router with three links to other networks, each of which has transmission rate r.

Say that networks 1 and 2 are both sending a large volume of data to network 3 (perhaps because user systems in network 3 are downloading files over a peer-to-peer file-sharing system).

The capacity required to carry the traffic on the outgoing link to network 3 without queuing is 2r.  We could increase the capacity of this link.  However, then another combination of flows could create a similar situation on another link.

Now the capacity we need on the congested link (to network 1) is 3r.  In general, the possibility of queuing delays and congestion can never be eliminated, unless every system on the network has a direct link to every other system on the network.

In practice, queuing delays are hard to predict.

Here is a simplistic view of the problem:

a = average rate of packet arrival (packets/second)
L = average length of a packet (bits)
R = transmission rate of the link (bits/second)

The traffic intensity is La/R; basically, it is a characterization of the utilization of a particular link.  (La is the average rate at which bits arrive, so La/R is dimensionless.)  If the traffic intensity is greater than one, then incoming packets are arriving faster than they can be forwarded.  Assume we have infinite queuing capacity on the link.  Intuitively, as the traffic intensity approaches 1, the average queuing delay grows slowly at first and then blows up, growing without bound.
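To see that intuition numerically, here is a short Java sketch.  It uses the idealized single-queue result from introductory queuing theory (average queuing delay proportional to I/(1 - I), where I is the traffic intensity); real router queues are messier, so treat the output as showing the shape of the curve, not as a prediction.

    public class TrafficIntensity {
        public static void main(String[] args) {
            double L = 12000;   // average packet length, in bits
            double R = 100e6;   // link transmission rate, in bits/second

            // Average queuing delay under the idealized model:
            // (L/R) * I / (1 - I), where I = La/R is the traffic intensity
            for (int i = 1; i <= 9; i++) {
                double intensity = i / 10.0;
                double delay = (L / R) * intensity / (1 - intensity);
                System.out.printf("I = %.1f   average queuing delay = %.4f ms%n",
                                  intensity, delay * 1000);
            }
        }
    }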

In practice, queuing delays are on the order of milliseconds, because network operators are continually adding additional capacity to their network links in order to cope with the increasing volume of data being transmitted.

Application protocols

This section discusses some of the considerations for designing and implementing a network application protocol.

Choice of transport service

Here are the transport protocols available in the TCP/IP protocol suite, broken down by the characteristics of their service models:

Transport         Reliable?   Connection-oriented?   Flow and congestion control?
TCP               yes         yes                    yes
UDP               no          no                     no
DCCP (proposed)   no          yes                    yes

For most network applications, a reliable, connection-oriented transport service, such as TCP, will be most appropriate.  For example, file transfer protocols (HTTP, FTP) and remote login protocols (SSH) need relatively long-lived connections that deliver data completely and in-order.  These applications must be at least somewhat tolerant of delay, because lost packets in the network will require some time to be retransmitted.

Applications like streaming audio and video, on the other hand, can tolerate some degree of loss, but require timely delivery of data.  Since these streams are real-time, late data is useless.  For such applications, an unreliable transport service is more appropriate.

Note, however, that the existing unreliable transport protocol, UDP, has some shortcomings: it does not support any mechanism for connections, flow control, or congestion control.  All of these features are useful in a streaming audio or video system, so the current generation of systems that use UDP as a transport service have to reinvent them at the application layer.  DCCP, a proposed transport protocol, adds connections and flow/congestion control to an unreliable service.

What messages are exchanged?

At the heart of designing any protocol lies the question of what communication the protocol is designed to carry out.  For example, HTTP is essentially a file transfer protocol, and supports messages for requesting a file's contents or meta-information (client-to-server messages), and for delivering file contents or meta-information (server-to-client messages).

Along with the messages a protocol allows comes the notion of state.  In each state of the protocol, certain kinds of messages may be either sent or received.  For example, here is a greatly simplified version of the states of the HTTP protocol.
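The state diagram itself is not reproduced in these notes.  As a rough stand-in, here is one plausible way to write down a simplified client-side view in Java; the state names are illustrative, not part of the HTTP specification.

    // A guess at a greatly simplified client-side HTTP state machine
    enum HttpClientState {
        IDLE,           // connection open; the client may send a request
        REQUEST_SENT,   // request sent; the client waits for the response
        RESPONSE_READ   // response received; close the connection, or
                        // return to IDLE to send another request
    }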

How are the messages represented?

Three main issues with message representation:

  1. Where do messages begin and end?
  2. What fields does each message contain?
  3. How are the fields encoded?

The two general approaches to encoding the message fields are text or binary.

Text encoding
  Advantages:    human-readable, machine-independent
  Disadvantages: verbose, variable size, encoding/decoding time

Binary encoding
  Advantages:    compact, can be fixed size, fast encoding/decoding
  Disadvantages: not easily human-readable, byte-order issues, alignment issues

Most of the standard application protocols (HTTP, FTP, SMTP) use text encoding.  Most (all?) of the standard transport and network-layer protocols use binary encoding.  Because of the tremendous packet volume that must be handled by routers and other networking hardware, using a binary encoding makes more sense.  At the application layer, the processing time of encoding and decoding message fields tends to be a small component of overall performance, so text encoding is a reasonable choice.

All standard protocols that use binary encoding represent integers in big-endian (most significant byte first) format.
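In Java this is easy to get right, because java.io.DataOutputStream always writes multi-byte integers high byte first.  The following sketch makes the byte order visible:

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class BigEndianDemo {
        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(0x0A0B0C0D);   // write a 4-byte integer

            // The most significant byte (0x0A) comes first on the wire
            for (byte b : buf.toByteArray()) {
                System.out.printf("%02x ", b);
            }
            System.out.println();       // prints: 0a 0b 0c 0d
        }
    }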

Sockets

A socket is a communications endpoint.  For connection-oriented transport protocols like TCP, a socket is one end of the connection.  For connectionless transport protocols like UDP, a socket is more like a mailbox: incoming data arriving at a UDP socket could be coming from anywhere.

Sockets are bidirectional: they support both sending and receiving data.

Sockets are managed by the operating system on behalf of the process that is communicating over the network.

Server sockets and ports

For connection-oriented sockets, there needs to be some way of opening a connection to another process on another network host.  A server socket allows a process to accept connections.  Note that server sockets are not used to send and receive data: their only purpose is to listen for and accept connections.

In general, a single host system can have many processes running, each of which may be using one or more network sockets.  Therefore, there needs to be a way to demultiplex incoming network packets in order to deliver them to the correct socket.  In both TCP and UDP, a port is used for this purpose.  A port is a 16-bit unsigned integer.  Note that the space of TCP port numbers is distinct from the space of UDP port numbers.
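One consequence of the separate port spaces is that a TCP socket and a UDP socket can both be bound to the same port number on the same host at the same time.  A quick Java sketch (port 4000 is an arbitrary choice):

    import java.net.DatagramSocket;
    import java.net.ServerSocket;

    public class PortSpaces {
        public static void main(String[] args) throws Exception {
            ServerSocket tcp = new ServerSocket(4000);      // TCP port 4000
            DatagramSocket udp = new DatagramSocket(4000);  // UDP port 4000
            System.out.println("Both sockets bound to port 4000");
            tcp.close();
            udp.close();
        }
    }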

When creating a server socket, a server process specifies the port number it wants to listen for connections on.  When a peer process on another host requests a connection, the server process can accept the connection.  Accepting a connection results in the creation of a new ordinary socket with the same port number as the server socket.  This new socket forms the server's end of the connection.

Note that because a server socket can accept many connections, and all connections accepted by the same server socket have the same local port number, a port number does not uniquely identify a socket or a connection.  Rather, a connection is identified by the tuple

(local host address, local port, remote host address, remote port)

Here is a concrete example.  We have two client hosts each running a web browser application, and a server host running a web server application.

The circles represent sockets.  The light blue lines represent TCP connections.  The orange socket in the web server is the server socket, which is not part of any connection.  Note that, somewhat confusingly, each of the sockets in the web server host is bound to port 80.  However, because one of them is a server socket, and the other two are TCP sockets communicating with different remote host/remote port pairs, the sockets are distinguishable.
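In Java, a connected Socket exposes all four elements of the tuple, which is handy for seeing demultiplexing at work.  A small sketch (the host name and port here are placeholders):

    import java.net.Socket;

    public class TuplePeek {
        public static void main(String[] args) throws Exception {
            Socket s = new Socket("example.com", 80);
            System.out.println("local:  " + s.getLocalAddress() + ":" + s.getLocalPort());
            System.out.println("remote: " + s.getInetAddress() + ":" + s.getPort());
            s.close();
        }
    }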

Socket Programming in Java

In Java, the java.net.Socket class represents a socket.  It has getInputStream() and getOutputStream() methods for reading data from and writing data to the socket, respectively.

The ServerSocket class represents a server socket.  The accept() method listens on the server socket for connection requests, and returns a new Socket object representing the new connection from a client.
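Before looking at that example, here is a minimal sketch of the pattern: a server that accepts a single connection and echoes one line back to the client.  This is only an illustration (the port number is arbitrary), not the code from clientserver.zip.

    import java.io.*;
    import java.net.*;

    public class EchoServer {
        public static void main(String[] args) throws IOException {
            ServerSocket server = new ServerSocket(4000);
            Socket conn = server.accept();   // blocks until a client connects

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            PrintWriter out = new PrintWriter(conn.getOutputStream(), true);

            String line = in.readLine();     // read one line from the client
            if (line != null) {
                out.println(line);           // echo it back
            }
            conn.close();
            server.close();
        }
    }

A matching client would simply construct new Socket("localhost", 4000), write a line to the socket's output stream, and read the echoed line from its input stream.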

An example of a very simple client and server implementation in Java is contained in clientserver.zip.  Some of the details to note: