After completing this unit, you will be able to:
·         Understand various Media Access Methods.
·         Understand CSMA/CD and how Collisions are Detected and Avoided in a Network.
·         Understand how a Token is Circulated and how Faults are Detected.
·         Understand the format of a Frame which Carries Data.
·         Appreciate Demand Priority.
·         Understand TCP/IP Protocol.
·         Understand IP addressing, its format and classes.
·         Understand IP Routing and other Protocols.
·         Understand ICMP and its Message Format.
·         Understand TCP, its Packet Format and Sliding Window.
·         Understand UDP.
·         Understand Application Layer Protocols.
A.1    Introduction
A.2    Media Access Methods
A.3    Internet Protocols
A.4    Internet Control Message Protocol (ICMP)
A.5    Transmission Control Protocol (TCP)
A.6    User Datagram Protocol
A.7    Application Layer Protocols
A.8    Summary
A.1    Introduction
A media access method refers to the manner in which a computer terminal on a network gains and controls access to the network's physical medium, such as a cable, and thereby governs how data packets are transferred from one terminal to another over the communication link. The most important consideration is to ensure proper and timely delivery of the data packets.
Given below are some of the common media access methods:
1.       CSMA/CD
2.       CSMA/CA
3.       Token Passing
4.       Demand Priority
The prime objective of media access is to prevent data packets from colliding when two or more computer terminals on a network try to transmit data simultaneously over a network.
The data transmitted over a network is sent one bit at a time. A bit is either a 1 or a 0 represented by a voltage change (on or off). If two terminals are transmitting at the same time, it is possible that the signals may overlap, resulting in a corruption of data. Such overlapping of signals is referred to as a “collision”.
A.2    Media Access Methods
1. Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
This is a media access method which defines how the network places data on the cable and how it takes it off. CSMA/CD specifies how bus topologies such as Ethernet handle transmission collisions. Its operation combines three elements: Carrier Sense, Multiple Access and Collision Detection.
Carrier Sense means that each station on the LAN continually listens to (tests) the cable for the presence of a signal prior to transmitting. Multiple Access means that many computers may attempt to transmit and compete for the opportunity to send data (i.e., they are in contention). Collision Detection means that when a collision is detected, the stations stop transmitting and wait a random length of time before retransmitting the data.
CSMA/CD works best in an environment where relatively fewer, longer data frames are transmitted. This is in contrast to token passing which works best with a relatively large amount of short data frames. Because CSMA/CD works to control or manage collisions rather than prevent them, network performance can be degraded with heavy data traffic. More traffic will lead to a greater number of collisions and retransmissions in a network. CSMA/CD is used on Ethernet networks.
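As an illustration, the following Python sketch mimics the CSMA/CD decision loop described above. It is not a real Ethernet driver: channel_busy and transmit_and_check are hypothetical callables standing in for the physical-layer carrier-sense and collision-detection signals, and the timing constants are arbitrary.

```python
import random
import time

def csma_cd_send(channel_busy, transmit_and_check, frame, max_attempts=16):
    """Illustrative CSMA/CD loop: sense the carrier, transmit, back off on collision.

    channel_busy() and transmit_and_check(frame) are caller-supplied stand-ins
    for the physical layer: the first reports whether another station is
    transmitting, the second sends the frame and reports whether a collision
    was detected while it was being sent.
    """
    for attempt in range(max_attempts):
        # Carrier Sense: wait until no other station is transmitting.
        while channel_busy():
            time.sleep(0.001)

        # Multiple Access: any listening station may now start sending.
        if not transmit_and_check(frame):
            return True                      # frame sent without a collision

        # Collision Detection: stop, then wait a random back-off interval
        # (binary exponential back-off) before trying again.
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        time.sleep(slots * 0.0005)

    return False                             # give up after too many collisions
```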
2. Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
CSMA/CA stands for Carrier-Sense Multiple Access with Collision Avoidance and is a media access method very similar to CSMA/CD. The difference is that the CD (collision detection) is changed to CA (collision avoidance). Instead of detecting and reacting to collisions, CSMA/CA tries to avoid them by having each computer signal its intention to transmit before actually transmitting. In effect, the transmitting computer gives a “Request” prior to transmitting.
Although CSMA/CA can prevent collisions, it comes with a cost in the form of the additional overhead incurred by having each workstation broadcast its intention prior to transmitting. Thus, CSMA/CA is slower than CSMA/CD. CSMA/CA is used on Apple networks.
3. Token Passing
Token passing is a media access method by which collisions are prevented. Collisions are eliminated under token passing because only a computer that possesses a free token (a small data frame) is allowed to transmit. The token passing method also allows different priorities to be assigned to different stations on the ring. Transmissions from stations with higher priority take precedence over stations with lower priority. Token passing works best in an environment where a relatively large number of shorter data frames are being transmitted.
Token passing networks move a small data frame, called a token, around the network. Possession of the token grants the right to transmit to that terminal on a network. If a node receiving the token has no information to send, it passes the token to the next station connected on a network. Each station can hold the token for a maximum period of time.
If a station possessing the token does have information to transmit, it seizes the token, alters 1 bit of the token (which turns the token into a start-of-frame sequence), appends the information that it wants to transmit, and sends this information to the next station on the ring. While the information frame is circling the ring, no token is on the network, (unless the ring supports early token release), which means that other stations wanting to transmit must wait. Therefore, collisions cannot occur in token ring networks. If early token release is supported, a new token can be released when frame transmission is complete.
The information frame circulates the ring until it reaches the destination node, which copies the information for processing and acknowledges the receipt of the frame, which can be seen by the sending terminal. The information frame continues to circle the ring and is finally removed when it reaches the sending node.
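The circulation rule can be illustrated with a short Python sketch. The station names and frames below are made up; the point is simply that a station transmits only while it holds the token and then passes it to the next station on the ring.

```python
from collections import deque

def circulate_token(stations, rounds=2):
    """Illustrative token ring: only the station holding the token may transmit.

    stations maps a station name to a queue (deque) of frames waiting to be sent.
    """
    names = list(stations)
    token_holder = 0                      # the free token starts at station 0
    for _ in range(rounds * len(names)):
        name = names[token_holder]
        queue = stations[name]
        if queue:
            frame = queue.popleft()       # seize the token and send one frame
            print(f"{name} transmits: {frame}")
        else:
            print(f"{name} has nothing to send; passes the token on")
        token_holder = (token_holder + 1) % len(names)   # pass token to next station

stations = {
    "A": deque(["frame-1"]),
    "B": deque(),
    "C": deque(["frame-2", "frame-3"]),
}
circulate_token(stations)
```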
Token ring networks employ several mechanisms for detecting and compensating for network faults. For example, one station in the token ring network is selected to be the active terminal. This station or node, which potentially can be any station on the network, acts as a centralized source of timing information for other ring stations and performs a variety of ring-maintenance functions. One of these functions is the removal of continuously circulating frames from the ring. When a sending device fails, its frame may continue to circle the ring. This can prevent other stations from transmitting their own frames and essentially can lock up the network. The active terminal can detect such frames, remove them from the ring, and generate a new token.
A token ring network also includes mechanisms for detecting and repairing network faults. Whenever a station or node on the network detects a serious problem (such as a cable break), it sends a Fault Frame, which defines a failure domain. This domain includes the station reporting the failure, its nearest active upstream neighbor, and everything in between. Detection of the failure initiates a process called auto-reconfiguration, in which nodes within the failure domain automatically perform diagnostics in an attempt to reconfigure the network around the failed areas.
There are two common error conditions that can occur on a token passing LAN:
a)      Constant Frame Error
A token cannot be acknowledged and continues to be passed around the ring.
b)      Lost Token Error
A token is accidentally “hung up” or removed from the ring.
Most token passing schemes can detect these errors and provide a mechanism for clearing the ring or initializing a new token.
Frame Format
Token rings support two basic frame types: tokens and data frames. Tokens are usually 3 bytes in length and consist of a start delimiter, an access control byte, and an end delimiter. Data/command frames vary in size, depending on the size of the information field.
Token Frame
Start Delimiter (1 Byte) | Access Control (1 Byte) | End Delimiter (1 Byte)

Data/Command Frame
Start Delimiter (1 Byte) | Access Control (1 Byte) | Frame Control (1 Byte) | Destination Address (6 Bytes) | Source Address (6 Bytes) | Data (variable length) | Frame Check Sequence (4 Bytes) | End Delimiter (1 Byte) | Frame Status (1 Byte)
Table A.1: Token Frame Fields
Start Delimiter: Alerts each station of the arrival of a token (or data/command frame). This field includes signals that distinguish the byte from the rest of the frame by violating the encoding scheme used elsewhere in the frame.
Access Control Byte: Contains the Priority field (the most significant 3 bits) and the Reservation field (the least significant 3 bits), as well as a token bit (used to differentiate a token from a data/command frame) and an active terminal bit (used by the active terminal to determine whether a frame is circling the ring endlessly).
End Delimiter: Signals the end of the token or data/command frame. This field also contains bits to indicate a damaged frame and to identify the last frame in a logical sequence.
Table A.2: Data/Command Frame Fields
Frame Control Byte: Indicates whether the frame contains data or control information. In control frames, this byte specifies the type of control information.
Destination and Source Addresses: Two 6-byte address fields that identify the destination and source terminal addresses.
Data: Carries the information being sent; the length of this field is limited by the ring token holding time, which defines the maximum time a terminal can hold the token.
Frame Check Sequence: This field is filled by the source station with a value calculated from the frame contents. The destination station recalculates the value to determine whether the frame was damaged in transit. If so, the frame is discarded.
Frame Status: A 1-byte field terminating a data frame. The Frame Status field includes the address-recognized indicator and the frame-copied indicator.
Start Delimiter, Access Control Byte and End Delimiter: These fields perform the same functions as in the token frame (see Table A.1).
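To make the field layout concrete, the following Python sketch packs a 3-byte token using the Access Control bit layout described in Table A.1. The delimiter byte values are placeholders: real start and end delimiters use signal-encoding violations that cannot be represented as ordinary data bytes.

```python
def build_access_control(priority, token_bit, monitor_bit, reservation):
    """Pack the Access Control byte: 3 priority bits, the token bit,
    the active-terminal (monitor) bit and 3 reservation bits."""
    assert 0 <= priority <= 7 and 0 <= reservation <= 7
    return (priority << 5) | (token_bit << 4) | (monitor_bit << 3) | reservation

# Placeholder delimiter values: the real delimiters use encoding violations
# that cannot occur in ordinary data, so no byte value truly represents them.
START_DELIMITER = 0xE0
END_DELIMITER = 0xE1

def build_token(priority=0, reservation=0):
    """A free token is just the three one-byte fields of Table A.1.

    The token bit is left clear here; a transmitting station would set it to
    turn the token into the start of a data/command frame.
    """
    ac = build_access_control(priority, token_bit=0, monitor_bit=0,
                              reservation=reservation)
    return bytes([START_DELIMITER, ac, END_DELIMITER])

print(build_token(priority=3).hex())
```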
4.       Demand Priority
Demand priority is a newer Ethernet media access method intended to replace the popular but older CSMA/CD method. In demand priority, an active hub is an essential requirement that controls access to the network. The terminals on a network are required to obtain permission from the hub before they start transmitting bytes over the network. With demand priority, the terminals involved in communication can send and receive at the same time. Transmissions can be prioritized based on requirements; for example, time-sensitive data such as video can be given priority.
Demand priority utilizes a “hub-centric approach” to media access. A “smart hub” controls access to the network. When a workstation needs to transmit, it sends a request to the hub. The hub grants permission to transmit based on network conditions and requester priority. As they are under the control of the hub, workstations or terminals do not compete for access to the network.
Unlike regular Ethernet in which a transmission is transmitted to all stations, demand priority utilizes a directed transmission. The hub directs the transmission from sender to intended recipient rather than sending it to all stations.
With demand priority, workstations can transmit and receive at the same time. This is because demand priority uses “quartet signaling” (Transmission of data on four pairs of wires).
Table A.3: Distinction between CSMA/CD and Token Passing

CSMA/CD:
·         Used primarily by Ethernet LANs.
·         Works best in larger networks with relatively fewer, longer data frames.
·         Does not allow different priorities to be assigned to stations.
·         Normally less expensive than token passing.

Token Passing:
·         Used primarily by Token Ring LANs.
·         Works best in small to medium-size networks with many short data frames.
·         Allows different priorities to be assigned to stations.
·         Normally more expensive than CSMA/CD.
A.3    Internet Protocols
Internet protocols were first developed in the mid-1970s, when the Defense Advanced Research Projects Agency (DARPA) became interested in establishing a network that would facilitate communication between dissimilar computer systems at research institutions. With the goal of heterogeneous connectivity in mind, DARPA funded research at Stanford University and Bolt, Beranek and Newman (BBN). The result of this development effort was the Internet protocol suite. The Internet protocols are the world's most popular open-system protocol suite because they can be used to communicate across any set of interconnected networks and are equally well suited for LAN and WAN communications. The Internet protocols consist of a suite of communication protocols; the two best-known protocols are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). The Internet protocol suite not only includes lower-layer protocols (such as TCP and IP), but it also specifies common applications such as electronic mail, terminal emulation, and file transfer.
To illustrate the scope of the Internet protocols, look at Table A.4 which maps many of the protocols of the Internet protocol suite and their corresponding OSI layers.
Table A.4: OSI layers and their corresponding Internet protocol suite
Application layer: FTP, Telnet
Transport layer: TCP, UDP
Network layer: IP, ICMP, Routing Protocols
Data link layer: not specified (media-dependent)

Internet Protocol (IP)
The Internet Protocol (IP) is a network-layer protocol that contains addressing information and some control information that enables packets to be routed. In combination with the Transmission Control Protocol (TCP), IP represents the control center of the Internet protocols. IP has two primary responsibilities of providing connectionless, best-effort delivery of data packets through an inter-network and providing fragmentation and reassembly of data packets to support data links.
IP Addressing
As with any other network-layer protocol, the IP addressing scheme is integral to the process of routing IP data packets through an inter-network. Each IP address has specific components and follows a specific format. These IP addresses can be subdivided and used to create addresses for sub-networks (network within a network).
 Each host terminal on a TCP/IP network is assigned a unique 32-bit logical address divided into two main parts: the network number and the host number. The network number identifies a network. The host number identifies a host on a network and is assigned by the local network administrator.
IP Address Format
The 32-bit IP address is grouped eight bits at a time (each group of eight bits is called an octet), separated by periods (.), and represented in decimal format; this is known as dotted-decimal notation. Each bit in an octet has a binary weight (128, 64, 32, 16, 8, 4, 2 and 1). The minimum value for an octet is 0, and the maximum value is 255.
For example: 174.16.123.205
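The relationship between the dotted-decimal form and the underlying 32-bit value can be shown with a small Python sketch (illustrative only):

```python
def dotted_to_int(address):
    """Convert a dotted-decimal IP address to its 32-bit integer value."""
    octets = [int(part) for part in address.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for octet in octets:
        value = (value << 8) | octet   # each octet contributes 8 bits
    return value

def int_to_dotted(value):
    """Convert a 32-bit integer back to dotted-decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(dotted_to_int("174.16.123.205"))   # 2920315853
print(int_to_dotted(2920315853))         # 174.16.123.205
```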
IP Address Classes
IP addressing supports five different address classes: A, B, C, D, and E. Only classes A, B, and C are available for commercial use. The left-most (high-order) bits indicate the network class. Table A.5 provides reference information about the IP address classes and their ranges.
Table A.5: IP Address Classes

Class A — Purpose: a few large organizations. High-order bit: 0. Address range: 1.0.0.0 to 126.0.0.0. Network/host bits: 7/24. Maximum hosts per network: 16,777,214.
Class B — Purpose: medium-size organizations. High-order bits: 1, 0. Address range: 128.1.0.0 to 191.254.0.0. Network/host bits: 14/16. Maximum hosts per network: 65,534.
Class C — Purpose: relatively small organizations. High-order bits: 1, 1, 0. Address range: 192.0.1.0 to 223.255.254.0. Network/host bits: 21/8. Maximum hosts per network: 254.
Class D — Purpose: multicast groups. High-order bits: 1, 1, 1, 0. Address range: 224.0.0.0 to 239.255.255.255.
Class E — Purpose: experimental use. High-order bits: 1, 1, 1, 1. Address range: 240.0.0.0 to 254.255.255.255.
The class of an address can be determined easily by examining the first octet of the address and mapping that value to a class range as shown in Table A.5. If an IP address is 192.168.1.1, for example, the first octet is 192. As 192 lies between 192 and 223, 192.168.1.1 is a Class C address.
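This first-octet test is easy to express in code. The following Python sketch is a simple illustration of the class ranges in Table A.5:

```python
def address_class(address):
    """Map the first octet of a dotted-decimal address to its class,
    following the ranges in Table A.5."""
    first_octet = int(address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "A"
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    if 224 <= first_octet <= 239:
        return "D"   # multicast
    if 240 <= first_octet <= 255:
        return "E"   # experimental
    return "reserved"  # e.g. 127.x.x.x loopback

print(address_class("192.168.1.1"))   # C
```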
IP Sub-Network Addressing
IP networks can be divided into smaller networks called sub-networks. Sub-networks (networks within a network) provide the network administrator with several benefits such as extra flexibility, more efficient use of network addresses for the terminals, expansion of networks and the capability to contain broadcast traffic on a network.
Sub-networks are under local administration. As such, the outside world sees an organization as a single network and has no detailed knowledge of the organization's internal network structure.
A given network address can be broken up into many sub-networks. For example, 192.168.1.0, 192.168.2.0, 192.168.3.0 and 192.168.4.0 could all be sub-networks within one network, while a single network address identifies the entire network to the outside world.
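As an illustration, Python's standard ipaddress module can enumerate the sub-networks obtained by borrowing host bits from a network address. The block and prefix lengths below are assumptions chosen for the example, not values from the text:

```python
import ipaddress

# Borrow two host bits from an assumed 192.168.0.0/22 block to create
# four smaller /24 sub-networks.
network = ipaddress.ip_network("192.168.0.0/22")
for subnet in network.subnets(prefixlen_diff=2):
    print(subnet)
# 192.168.0.0/24
# 192.168.1.0/24
# 192.168.2.0/24
# 192.168.3.0/24
```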
Address Resolution Protocol (ARP) Overview
For two machines on a given network to communicate, they must know each other's physical (or MAC) addresses. By broadcasting an Address Resolution Protocol (ARP) request, a host can dynamically discover the MAC-layer address corresponding to a particular IP network-layer address. After receiving a MAC-layer address, IP devices create an ARP cache to store the recently acquired IP-to-MAC address mapping, thus avoiding having to broadcast an ARP request each time they want to contact the device again. If the device does not respond within a specified time frame, the cache entry is flushed or overwritten. In addition, the Reverse Address Resolution Protocol (RARP) is used to map MAC-layer addresses to IP addresses. RARP, which is the logical inverse of ARP, might be used by diskless workstations or terminals on a network that do not know their IP addresses when they boot. RARP relies on the presence of a RARP server with table entries of MAC-layer-to-IP address mappings.
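A toy version of the ARP cache behaviour described above might look like the following Python sketch; the timeout value and addresses are made up for illustration:

```python
import time

class ArpCache:
    """Toy ARP cache: stores recently learned IP-to-MAC mappings and
    flushes entries that have not been refreshed within `timeout` seconds."""

    def __init__(self, timeout=300.0):
        self.timeout = timeout
        self.entries = {}                       # ip -> (mac, time learned)

    def learn(self, ip, mac):
        self.entries[ip] = (mac, time.time())   # overwrite any stale mapping

    def lookup(self, ip):
        entry = self.entries.get(ip)
        if entry is None:
            return None                         # would trigger an ARP broadcast
        mac, learned = entry
        if time.time() - learned > self.timeout:
            del self.entries[ip]                # entry aged out; flush it
            return None
        return mac

cache = ArpCache()
cache.learn("192.168.1.10", "00:1a:2b:3c:4d:5e")
print(cache.lookup("192.168.1.10"))
```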
IP Routing
IP routing protocols are dynamic. Dynamic routing means that routes are calculated automatically at regular intervals by software in the routing devices on a network. This is in contrast with static routing, where routes are established by the network administrator and do not change until the administrator changes them. An IP routing table consists of destination address/next hop pairs and is used to enable dynamic routing. An entry in this table would be interpreted as follows: to reach a given destination network, send the packet out Ethernet interface 0 (E0).
IP routing specifies that IP data packets travel through inter-networks one hop at a time. The entire route is not known at the outset of the journey, however. Instead, at each stop, the next destination is calculated by matching the destination address within the data packet with an entry in the current node's (or terminal's) routing table.
Each node’s involvement in the routing process is limited to forwarding packets based on internal information. The nodes do not monitor whether the packets get to their final destination, nor does IP provide for error reporting back to the source when routing anomalies occur. This task is left to another Internet protocol, the Internet Control-Message Protocol (ICMP).
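A next-hop lookup against a table of destination/next-hop pairs can be sketched as follows. The networks, next-hop addresses and interface names are invented for the example; the lookup picks the most specific matching entry:

```python
import ipaddress

# destination network -> (next hop, outgoing interface); addresses are made up
routing_table = {
    ipaddress.ip_network("192.168.1.0/24"): ("directly connected", "E0"),
    ipaddress.ip_network("10.0.0.0/8"):     ("192.168.1.254", "E0"),
    ipaddress.ip_network("0.0.0.0/0"):      ("192.168.1.1", "E1"),   # default route
}

def next_hop(destination):
    """Pick the most specific (longest-prefix) entry that contains the destination."""
    address = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if address in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))   # ('192.168.1.254', 'E0')
print(next_hop("8.8.8.8"))    # ('192.168.1.1', 'E1') via the default route
```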
A.4    Internet Control Message Protocol (ICMP)
The Internet Control Message Protocol (ICMP) is a network-layer Internet protocol that provides message packets to report errors and other control information regarding IP packet processing back to the source.
ICMP Messages
ICMP generates several kinds of useful messages, including Destination Unreachable, Echo Request and Reply, Redirect, Time Exceeded, Router Advertisement and Router Solicitation. If an ICMP message cannot be delivered, no second one is generated, which avoids an endless stream of ICMP messages.
When an ICMP destination-unreachable message is sent by a router, it means that the router is unable to send the data packet to its final destination and the router then discards the data packet.
Two reasons usually exist for a destination not reachable condition:
a.       Most commonly, the source host has specified a nonexistent address.
b.      Less frequently, the router does not have a route to the destination.
Destination not reachable messages include four basic types:
1.       Network not reachable.
2.       Host not reachable.
3.       Protocol not reachable.
4.       Port not reachable.
Network not reachable messages usually mean that a failure has occurred in the routing or addressing of a packet. Host not reachable messages usually indicate a delivery failure, such as a wrong subnet mask. Protocol not reachable messages generally mean that the destination does not support the upper-layer protocol specified in the packet. Port not reachable messages imply that the TCP socket or port is not available.
An ICMP echo-request message is generated by the ping command to test the node accessibility across an inter-network. The ICMP echo-reply message will indicate that the node can be successfully reached.
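For illustration, an ICMP echo request (the message generated by ping) can be assembled with Python's struct module. This sketch only builds the ICMP message in memory; actually sending it would require a raw socket and administrative privileges:

```python
import struct

def icmp_checksum(data):
    """Internet checksum: one's-complement of the one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                       # pad to a whole number of 16-bit words
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def build_echo_request(identifier, sequence, payload=b"ping"):
    """ICMP echo request: type 8, code 0, checksum, identifier, sequence number, data."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=0x1234, sequence=1)
print(packet.hex())
```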
An ICMP Redirect message is sent by the router to the source host to stimulate more efficient routing. The router still forwards the original packet to the destination. ICMP redirects allow host routing tables to remain small because it is necessary to know the address of only one router, even if that router does not provide the best path.
An ICMP Time-exceeded message is sent by the router if an IP packet's Time-to-Live value (expressed in hops or in seconds) reaches zero. The Time-to-Live field prevents packets from continuously circulating within the inter-network.
ICMP Router-Discovery Protocol (IRDP)
IRDP uses Router-Advertisement and Router-Solicitation messages to discover the addresses of routers on directly attached sub-networks. Each router periodically multicasts Router-Advertisement messages from each of its interfaces. Hosts then discover addresses of routers on directly attached sub-networks by listening for these messages. Hosts can use Router-Solicitation messages to request immediate advertisements rather than waiting for unsolicited messages. IRDP offers several advantages over other methods of discovering addresses of neighbouring routers. Router-Advertisement messages enable hosts to discover the existence of neighbouring routers, but not which router is best to reach a particular destination. If a host uses a poor first-hop router to reach a particular destination, it receives a Redirect message identifying a better choice.
A.5    Transmission Control Protocol (TCP)
The TCP provides reliable transmission of data packets. TCP corresponds to the transport layer of the OSI reference model and provides the following services to upper-layer applications:
1.       Stream data transfer.
2.       Reliability.
3.       Efficient flow control.
4.       Full-duplex operation.
5.       Multiplexing.
With stream data transfer, TCP delivers an unstructured stream of bytes identified by sequence numbers. This service benefits applications because they do not have to chop data into blocks before handing it off to TCP. Instead, TCP groups bytes into segments and passes them to IP for delivery.
TCP offers reliability by providing connection-oriented, end-to-end reliable data packet delivery through an inter-network. It does this by sequencing bytes with a forward acknowledgment number that indicates to the destination the next byte the source expects to receive. Bytes not acknowledged within a specified time period are retransmitted. The reliability mechanism of TCP allows devices to deal with lost, delayed, duplicate, or misread packets. A time-out mechanism allows devices to detect lost packets and request retransmission.
TCP offers efficient flow control, which means that when sending acknowledgments back to the source, the receiving TCP process indicates the highest sequence number it can receive without overflowing its internal buffers.
Full-duplex operation means that TCP processes can both send and receive at the same time. Finally, TCP’s multiplexing means that numerous simultaneous upper-layer conversations can be multiplexed over a single connection.
TCP Connection Establishment
To use reliable transport services, TCP hosts must establish a connection-oriented session with one another. Connection establishment is performed by using a “three-way handshake” mechanism. A three-way handshake synchronizes both ends of a connection by allowing both sides to agree upon initial sequence numbers. This mechanism also guarantees that both sides are ready to transmit data and know that the other side is ready to transmit as well. This is necessary so that packets are not transmitted or retransmitted during session establishment or after session termination.
Each host randomly chooses an initial sequence number used to track bytes within the stream it is sending and receiving. The three-way handshake proceeds in the following manner: the first host (Host A) initiates a connection by sending a packet with its initial sequence number (X) and the SYN bit set to indicate a connection request. The second host (Host B) receives the SYN, records the sequence number X, and replies by acknowledging the SYN (with ACK = X + 1). (An ACK of 20, for example, means the host has received bytes 0 through 19 and expects byte 20 next; this technique is called forward acknowledgment.) Host B also includes its own initial sequence number (SEQ = Y). Host A then acknowledges all the bytes Host B sent with a forward acknowledgment indicating the next byte Host A expects to receive (ACK = Y + 1), and data transfer can begin.
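The sequence-number arithmetic of the handshake can be traced with a short Python sketch (the initial sequence numbers are chosen at random, as the text describes):

```python
import random

def three_way_handshake():
    """Trace the SYN / SYN-ACK / ACK exchange using forward acknowledgments."""
    x = random.randrange(2**32)          # Host A's initial sequence number
    y = random.randrange(2**32)          # Host B's initial sequence number

    print(f"A -> B  SYN      seq={x}")
    # Host B acknowledges X by telling A the next byte it expects (X + 1),
    # and sends its own initial sequence number Y in the same segment.
    print(f"B -> A  SYN+ACK  seq={y}, ack={(x + 1) % 2**32}")
    # Host A acknowledges Y the same way; data transfer can now begin.
    print(f"A -> B  ACK      seq={(x + 1) % 2**32}, ack={(y + 1) % 2**32}")

three_way_handshake()
```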
Positive Acknowledgement and Retransmission (PAR)
A simple transport-layer protocol might implement a reliability-and-flow-control technique in which the source sends one packet, starts a timer, and waits for an acknowledgment before sending a new packet. If the acknowledgment is not received before the timer expires, the source retransmits the packet. Such a technique is called positive acknowledgment and retransmission (PAR).
This is controlled by assigning each packet a sequence number. Sequence numbers allow hosts to detect lost packets and to discard duplicate packets caused by network delays that result in premature retransmission. The sequence numbers are sent back in the acknowledgments so that the acknowledgments can be tracked.
The basic disadvantage of PAR is an inefficient use of bandwidth, as the host must wait for an acknowledgment before sending a new packet, and only one packet can be sent at a time.
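A stop-and-wait PAR sender can be sketched as follows. The transmit callable is a hypothetical stand-in for sending one packet and waiting for its acknowledgment:

```python
def par_send(packets, transmit, max_retries=5, timeout=1.0):
    """Positive acknowledgment and retransmission (stop-and-wait) sketch.

    transmit(seq, packet, timeout) stands in for sending one packet and
    waiting up to `timeout` seconds for its acknowledgment; it returns True
    if the matching ACK arrived in time.
    """
    for seq, packet in enumerate(packets):
        for attempt in range(max_retries):
            if transmit(seq, packet, timeout):   # ACK received before the timer expired
                break
            # Timer expired: fall through and retransmit the same sequence number.
        else:
            raise RuntimeError(f"packet {seq} was never acknowledged")

# A trivial stand-in channel that acknowledges everything on the first try.
par_send([b"hello", b"world"], transmit=lambda seq, pkt, t: True)
```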
Table A.6: TCP Packet Format
Source Port | Destination Port
Sequence Number
Acknowledgement Number
Data Offset | Reserved | Flags | Window
Checksum | Urgent Pointer
Options
Data
The following descriptions summarize the TCP packet fields:
1.       Source Port and Destination Port: Identify the points at which upper-layer source and destination processes receive TCP services.
2.       Sequence Number: Usually specifies the number assigned to the first byte of data in the current message. In the connection-establishment phase, this field can also be used to identify an initial sequence number to be used in an upcoming transmission.
3.       Acknowledgement Number: Contains the sequence number of the next byte of data the sender of the packet expects to receive.
4.       Data Offset: Indicates the number of 32-bit words in the TCP header.
5.       Reserved: This field remains reserved for future use.
6.       Flags: This field carries a variety of control information, including the SYN and ACK bits used for connection establishment and the FIN bit used for connection termination.
7.       Window: Specifies the size of the sender's receiving window (that is, the buffer space available for incoming data).
8.       Checksum: Indicates whether the header was damaged during transmission.
9.       Urgent Pointer: Points to the first urgent data byte in the packet.
10.   Options: Specifies various TCP options.
11.   Data: Contains upper-layer information.
TCP Sliding Window
A TCP sliding window provides more efficient use of network bandwidth than PAR because it enables hosts to send multiple bytes or data packets instead of waiting for an acknowledgment. In TCP, the receiver (terminal) specifies the current window size in every data packet. Because TCP provides a byte-stream connection, window sizes are expressed in bytes. This means that a window is the number of data bytes that the sender is allowed to send before waiting for an acknowledgment. Initial window sizes are indicated at connection setup, but might vary throughout the data transfer to provide flow control.
A window size of zero means “Not allowed to send data.” For example, the sender might have a sequence of bytes to send (numbered 1 to 10) to a receiver who has a window size of five. The sender then would place a window around the first five bytes and transmit them at one instance. It would then wait for an acknowledgment from the receiver.
The receiver would respond with an ACK = 6, indicating that it has received bytes 1 to 5 and is expecting byte 6 next. In the same data packet, the receiver would indicate that its window size is 5. The sender then would slide the window five bytes to the right and transmit bytes 6 to 10.
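The same example can be traced in Python. The receive_ack callable below is a stand-in for the receiver, which always acknowledges the full burst and keeps advertising a window of five bytes:

```python
def sliding_window_send(data, window_size, receive_ack):
    """Send `data` in windows of `window_size` bytes, as in the example above.

    receive_ack(expected) stands in for the receiver's acknowledgment; it
    returns the ACK number (the next byte the receiver expects) and the
    window size to use for the next burst.
    """
    next_byte = 1                                     # bytes are numbered from 1
    total = len(data)
    while next_byte <= total:
        window = data[next_byte - 1 : next_byte - 1 + window_size]
        print(f"sending bytes {next_byte}..{next_byte + len(window) - 1}")
        ack, window_size = receive_ack(next_byte + len(window))
        next_byte = ack                               # slide the window forward

# Receiver that accepts every burst and keeps advertising a window of 5 bytes.
sliding_window_send(list(range(1, 11)), 5, receive_ack=lambda expected: (expected, 5))
```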
A.6    User Datagram Protocol
The User Datagram Protocol (UDP) is a connectionless transport-layer protocol that belongs to the Internet protocol family. UDP is basically an interface between IP and upper-layer processes. UDP protocol ports distinguish multiple applications running on a single device from one another. Unlike TCP, UDP adds no reliability, flow-control or error-recovery functions to IP. Due to UDP's simplicity, UDP headers contain fewer bytes and consume less network overhead than TCP. UDP is useful in situations where the reliability mechanisms of TCP are not necessary, such as in cases where a higher-layer protocol might provide error and flow control. UDP is the transport protocol for several well-known application-layer protocols, including Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name System (DNS) and Trivial File Transfer Protocol (TFTP). The UDP packet format contains four fields: source port, destination port, length and checksum.
Source and destination ports contain the 16-bit UDP protocol port numbers used to demultiplex datagrams for receiving application-layer processes. A length field specifies the length of UDP header and data. Checksum provides an integrity check on the UDP Header and Data.
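Since the UDP header holds only these four 16-bit fields, it can be packed in one line with Python's struct module. This sketch builds the header only; the port numbers are arbitrary and the checksum is left at zero, which IPv4 interprets as "no checksum":

```python
import struct

def build_udp_header(source_port, destination_port, payload, checksum=0):
    """UDP header: source port, destination port, length, checksum (each 16 bits).

    The length field covers the 8-byte header plus the data; a checksum of
    zero means "no checksum" in IPv4.
    """
    length = 8 + len(payload)
    return struct.pack("!HHHH", source_port, destination_port, length, checksum)

header = build_udp_header(source_port=50000, destination_port=53, payload=b"query")
print(header.hex())   # 8 header bytes; the 5-byte payload would be appended after it
```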
A.7    Application Layer Protocols
The Internet protocol suite includes many application-layer protocols, including the following:
a)      File Transfer Protocol (FTP)-Moves files between devices.
b)      Simple Network Management Protocol (SNMP)-Primarily reports anomalous network conditions and sets network threshold values.
c)       Telnet-Serves as a terminal emulation protocol.
d)      Network File System (NFS), External Data Representation (XDR) and Remote Procedure Call (RPC)-Work together to enable transparent access to remote network resources.
e)      Simple Mail Transfer Protocol (SMTP)-Provides electronic mail services.
f)       Domain Name System (DNS)-Translates the names of network nodes into network addresses.
Table A.7: Application Protocols
File transfer: FTP
Terminal emulation: Telnet
Electronic mail: SMTP
Network management: SNMP
Distributed file services: NFS, XDR, RPC
A.8    Summary
A media access method refers to how data moves from one terminal to another and how a computer terminal on a network gains and controls the transfer of data packets over the network through the cables forming the communication link. Four methods are commonly used: CSMA/CD, CSMA/CA, Token Passing and Demand Priority.
The communication between nodes is governed by certain standards, or rules, for forming a successful communication channel; these rules are protocols. In this unit we have discussed the TCP/IP protocol suite, which is the most popular and important one. The suite is named after its two core protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP).