
Controlling the Flow


This is the second installment in a series of related articles. Read part one and part three.

In my last article, "Communications Protocols 101," I discussed communications protocols, focusing on TCP and IP. Here, I provide a TCP/IP network overview. Because the bulk of this series deals with performance and capacity planning, the primary topic of this installment is TCP's flow control.

TCP/IP Network Overview

How is data from an application in one system (the sending host) transmitted to another application in a remote system (the receiving host)? To answer this, I'll start with a brief overview of what happens during this process. (Note: This section and many of the topics that follow refer to the information in Figure 1.)

On the sending host, a local application performs a write request, which causes the data (either in the form of messages or streams) to be copied from the application's working segment to the socket send buffer (tcp_sendspace or udp_sendspace in the network options syntax--I'll cover this later). Depending upon the socket type (stream or datagram), the socket layer passes the data to TCP or UDP.
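To make that first step concrete, here is a minimal C sketch of a sender as seen from the application side: it opens a TCP (stream) socket, enlarges the per-socket send buffer with SO_SNDBUF (the per-connection analog of the tcp_sendspace default), and issues a write that copies data into that kernel buffer. The peer address, port and 256 KB size are illustrative values I've chosen for the example, not recommendations from the article.

    /* Sketch: a TCP sender that adjusts its socket send buffer.
     * The address, port and 256 KB buffer size are illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);   /* stream socket -> TCP */
        int sndbuf = 262144;                       /* ask for a 256 KB send buffer */
        struct sockaddr_in peer;
        char data[8192];

        setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(9000);
        peer.sin_addr.s_addr = inet_addr("192.0.2.10");

        if (connect(s, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("connect");
            return 1;
        }

        memset(data, 'x', sizeof(data));
        /* The write copies data from the application's working segment
         * into the kernel's socket send buffer; TCP drains it from there. */
        if (write(s, data, sizeof(data)) < 0)
            perror("write");

        close(s);
        return 0;
    }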

For remote networks, if the data is larger than the maximum segment size (MSS), TCP breaks the data into segments that comply with the MSS. The MSS is the largest chunk of data that TCP sends to a destination. The TCP protocol includes mechanisms on both ends of a connection that announce the MSS to be used during the connection. The size isn't actually "negotiated"; each end announces its value, and the smaller of the two is used for the connection.
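If you want to see what MSS a connection ended up with, most UNIX socket implementations (including AIX) expose it through the TCP_MAXSEG socket option. The sketch below assumes that option is available and that the caller passes in a socket that has already completed connect() or accept().

    /* Sketch: reading the MSS in effect on a connected TCP socket.
     * Assumes the platform exposes TCP_MAXSEG (netinet/tcp.h). */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Call with a socket that has already completed connect() or accept(). */
    void print_mss(int sock)
    {
        int mss = 0;
        socklen_t len = sizeof(mss);

        if (getsockopt(sock, IPPROTO_TCP, TCP_MAXSEG, &mss, &len) == 0)
            printf("MSS for this connection: %d bytes\n", mss);
        else
            perror("getsockopt(TCP_MAXSEG)");
    }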

For local networks, if the data is larger than the maximum transmission unit (MTU), TCP breaks it into appropriately sized segments. The MTU is the maximum packet size (including all headers) that can be transmitted on a network. If two hosts are communicating across a path of different networks, a transmitted packet becomes fragmented if its size is greater than the smallest MTU of any network in the path. Fragmentation can reduce network performance. UDP leaves fragmentation to the IP layer. The interface (IF) layer makes sure that no packet exceeds the MTU. The packets are then placed on the adapter transmit queue (or the SP switch sendpool) and transmitted to the receiving system.
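An application can also ask the system for an interface's MTU directly. The following sketch uses the SIOCGIFMTU ioctl; it assumes the OS defines SIOCGIFMTU and the ifr_mtu member of struct ifreq (true on Linux and most modern UNIX systems), and the interface name "en0" is simply an example.

    /* Sketch: querying an interface's MTU with the SIOCGIFMTU ioctl.
     * Assumes SIOCGIFMTU and ifr_mtu are available; "en0" is illustrative. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    int main(void)
    {
        struct ifreq ifr;
        int s = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works for the ioctl */

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "en0", IFNAMSIZ - 1);

        if (ioctl(s, SIOCGIFMTU, &ifr) == 0)
            printf("MTU of %s: %d bytes\n", ifr.ifr_name, ifr.ifr_mtu);
        else
            perror("ioctl(SIOCGIFMTU)");

        close(s);
        return 0;
    }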

The receiving host places the incoming packets on the adapter's receive queue. They're then passed up to the IP layer, which determines if any fragmentation has occurred due to the MTU. If so, it restores the fragments to their original form and passes the packets to TCP or UDP. TCP reassembles the original segments and puts them on the socket receive buffer (tcp_recvspace) in kernel memory, or UDP passes the data on to the socket receive buffer (udp_recvspace) in kernel memory. The application's read request causes the appropriate data to be copied from the socket receive buffer to the buffer in the application's working (process private) area.
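The receiving side of that picture, seen from the application, looks roughly like the C sketch below: the server enlarges its socket receive buffer with SO_RCVBUF (the per-socket analog of tcp_recvspace), accepts a connection, and issues reads that copy data from the socket receive buffer into its own working area. Again, the port and buffer sizes are illustrative values for the example only.

    /* Sketch: a TCP receiver that adjusts its socket receive buffer.
     * Port and buffer sizes are illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        int rcvbuf = 262144;                      /* ask for a 256 KB receive buffer */
        struct sockaddr_in addr;
        char buf[8192];
        int conn;
        ssize_t n;

        setsockopt(lsock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
        listen(lsock, 5);
        conn = accept(lsock, NULL, NULL);

        /* Each read copies data that TCP has already queued on the socket
         * receive buffer into the application's own buffer. */
        while ((n = read(conn, buf, sizeof(buf))) > 0)
            printf("read %zd bytes\n", n);

        close(conn);
        close(lsock);
        return 0;
    }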

This sending and receiving of packets requires memory from each computer. With AIX, special memory structures provide for network communication.

Tom Farwell is a technical editor for IBM Systems Magazine, Open Systems edition. He can be reached through www.tomfarwellconsulting.com.

