This draft describes extended performance statistics for TCP. They are designed to use TCP's ideal vantage point to diagnose performance problems in both the network and the application. If a network-based application is performing poorly, TCP can determine if the bottleneck is in the sender, the receiver, or the network itself. If the bottleneck is in the network, TCP can provide specific information about its nature.
Please get the most up-to-date TCP-ESTATS-MIB here:
[the already out of date IETF draft] [live draft with change bars since the IETF draft]
NOTE: We just completed a fairly substantial restructuring of this draft. Unfortunately a number of nits got through, but we are grooming this draft for last call, RealSoonNow. The rest of this page is currently out of date.
We are restructuring the prior ESTATS document to incorporate the good transport work that was done by the IPv6 team and later determined to be out of scope. We will have a new draft before the IETF ID cutoff (9 AM Monday March 3rd) that will address most of the open issues.
The best description of the process history is in the previous version of this status page.
If you want to be included in the design team, drop a note to Peter O'Neil <firstname.lastname@example.org> and we will add you to the list.
The web100 project has implemented TCP kernel instrumentation that approximates this MIB. The web100 instruments are exported via the Linux proc interface (not SNMP) and differ slightly from this draft. Glen Turner is building a real SNMP agent to access the web100 kernel instruments.
Update various boilerplate
The standard MIB boilerplate is undergoing revision. The references do not conform to the current RFC Editor specifications. The introductory text and abstract are too terse.
Is the TCP-ESTATS-MIB complete?
Did we overlook something that you would like to observe about TCP in the field?
Web100 assumed a real time-of-day (ToD) clock.
Web100 lists nine 64-bit counters without HC tags or 32-bit equivalents. For regular computer system implementations these are pretty much a no-brainer (because you already hold an SMP lock on the per-connection protocol structure, simple double-precision arithmetic on 32-bit words is good enough). How much political fire will this draw? (Interestingly enough, web100 only specified 32 bits for segment counts.)
Web100 assumes µs precision for various objects (duration, RTT, etc.); however, today nobody has cheap clocks with this granularity. I would like to have this debated (or better yet, play back the same debate from some other already-completed MIB). Can somebody cite a pointer?
octets v bytes
Back when there were many 36-bit computers in the world, the term byte ambiguously meant either 6 or 8 bits, ergo the use of octet in the TCP spec. Byte is no longer ambiguous, so most of us have stopped using octet. I will try to find a more official reference on the deprecation of octet. RFC1122 even defines octets as 8-bit bytes.
handling retransmitted data
The historical and revised MIBs [RFC2012 and RFC2012bis] contain a false optimization: excluding retransmitted data from some instruments such as tcpOutSegs. The problem is that there are many different code paths that cause retransmission (Tahoe, classic fast retransmit, NewReno, SACK, etc.) but the accounting is done much later (after the headers are built). Doing it correctly requires reconstructing why the segment is being sent (not too hard, but still not clean).

A far better approach is to take the view that there are two classes of statistics: measures of IP resources consumed (packets and bytes in and out, etc., for all reasons) and measures of application performance (total advance of the ACK field). Then DataBytesTransmitted - DataBytesAcked (- CurrentWindow) is an exact measure of data bytes retransmitted. I would keep the direct measures of retransmissions (tcpRetransSegs, etc.) but I would scratch all of the "(in/ex)cluding retransmitted segs/octets" instruments. This applies to both the summary and per-connection stats.
Although this is a change from past definitions, I believe that there are zero applications that care.
Is the partition into 8 separate tables appropriate?
Since the ESTATS-MIB is so large (~135 objects) we partitioned it into 8 smaller tables, each indexed by a "ConnectionIndex". This provides a mechanism to balance resource consumption against statistics detail, and a fast way for diagnostic tools to poll a specific connection. However, it has never been clear to me whether this partitioning is worth the added complexity and overhead. Is it? We may not know until we have a true stand-alone implementation.
Error semantics are not sufficiently specified.
What error should be reported if a given connection index is no longer valid?
Is it appropriate to capture the IP TTL field?
[TCP-ESTATS-MIB] Mathis, M., Heffner, J., Reddy, R., and J. Saperia, "TCP Extended Statistics MIB", work in progress.
[RFC2012] McCloghrie, K., "SNMPv2 Management Information Base for the Transmission Control Protocol using SMIv2", RFC 2012, November 1996.
[RFC2012bis] Fenner, B., et al., "Management Information Base for the Transmission Control Protocol (TCP)", Internet-Draft draft-ietf-ipv6-rfc2012-update-00.txt, expires December 2002.
Prior versions of this page:
Please send comments and suggestions to email@example.com.
This document is a product of the web100 project.