If I remember correctly, Cisco uses some kind of decaying-average algorithm
to report bits per second on its interfaces.
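To make concrete what I mean by a decaying average, here is a rough sketch of an
exponentially weighted (EWMA) rate estimator. The 5-minute time constant, the
sampling scheme, and the names are my assumptions for illustration, not Cisco's
actual implementation:

    # Hypothetical sketch of an exponentially decaying rate estimator.
    # The 5-minute time constant and per-sample update are assumptions;
    # this is NOT Cisco's actual algorithm.
    import math

    class DecayingRate:
        def __init__(self, time_constant_s=300.0):
            self.time_constant = time_constant_s  # e.g. a 5-minute load interval
            self.rate_bps = 0.0
            self.last_counter = None
            self.last_time = None

        def sample(self, byte_counter, now_s):
            """Feed the current interface byte counter; return smoothed bits/sec."""
            if self.last_counter is not None:
                dt = now_s - self.last_time
                inst_bps = (byte_counter - self.last_counter) * 8 / dt
                # Older samples decay exponentially; recent traffic dominates.
                alpha = 1.0 - math.exp(-dt / self.time_constant)
                self.rate_bps += alpha * (inst_bps - self.rate_bps)
            self.last_counter = byte_counter
            self.last_time = now_s
            return self.rate_bps

The point is that recent samples carry more weight than older ones, so the
reading reacts to bursts rather than averaging the whole interval evenly.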
MRTG and programs like it poll the interface byte counters at regular
intervals and derive the rate from the counter deltas. If the latency of the
network is constant and both the statistics machine and the network device
have sufficient available CPU, it seems to me that MRTG statistics should be
more accurate over a 5-minute window.
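For comparison, counter polling amounts to roughly the following. The 300-second
interval and the 32-bit counter wrap are my assumptions (ifInOctets on this box
may well be polled differently):

    # Rough sketch of MRTG-style polling: average rate over the whole
    # polling interval, from the delta of the interface byte counter.
    # The 300-second interval and 32-bit wrap handling are assumptions.
    COUNTER_MAX = 2**32  # ifInOctets is a 32-bit counter

    def poll_rate_bps(prev_counter, curr_counter, interval_s=300):
        delta = (curr_counter - prev_counter) % COUNTER_MAX  # handle one wrap
        return delta * 8 / interval_s                        # bits per second

    # Example: 3,150,000,000 octets in 5 minutes ~= 84 Mb/s
    print(poll_rate_bps(0, 3_150_000_000))  # -> 84000000.0

Every byte that crosses the interface shows up in the delta, so the result is
a true average over the interval rather than a weighted one.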
The reason I am asking is that I have a full-duplex FE interface reporting
94 Mb/s inbound on the 5-minute average, yet MRTG never shows more than
84.2 Mb/s on the same interface. [MRTG runs as a daemon, so cron timing
shouldn't be an issue.]
Information and suggestions are appreciated.
Thanks in advance,
Deepak Jain
AiNET