[j-nsp] strange packet loss without impact
Matthias Brumm
matthias at brumm.net
Mon Jul 4 11:25:38 EDT 2011
No, that is not the problem. I have looked at Juniper's definition, and
we have one discard route, which should account for this
counter.
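
For reference, one way to confirm that the discard route is what drives the
counter (192.0.2.0/24 below is only a placeholder prefix; for such a route
the extensive output should show "Next hop type: Discard"):

show route protocol static
show route 192.0.2.0/24 extensive
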
The complete output:
show pfe statistics traffic
Packet Forwarding Engine traffic statistics:
Input packets: 586522987655 19925 pps
Output packets: 585165208482 19866 pps
Packet Forwarding Engine local traffic statistics:
Local packets input : 1228194454
Local packets output : 713668140
Software input control plane drops : 0
Software input high drops : 0
Software input medium drops : 13059
Software input low drops : 0
Software output drops : 0
Hardware input drops : 0
Packet Forwarding Engine local protocol statistics:
HDLC keepalives : 0
ATM OAM : 0
Frame Relay LMI : 0
PPP LCP/NCP : 0
OSPF hello : 0
OSPF3 hello : 0
RSVP hello : 0
LDP hello : 0
BFD : 0
IS-IS IIH : 0
LACP : 0
ARP : 513852055
ETHER OAM : 0
Unknown : 0
Packet Forwarding Engine hardware discard statistics:
Timeout : 0
Truncated key : 0
Bits to test : 0
Data error : 0
Stack underflow : 0
Stack overflow : 0
Normal discard : 557514914
Extended discard : 0
Invalid interface : 0
Info cell drops : 0
Fabric drops : 0
Packet Forwarding Engine Input IPv4 Header Checksum Error and Output MTU Error:
Input Checksum : 132684
Output MTU : 34
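
Re-running the command and comparing the counter shows whether the Normal
discards are still incrementing; the standard CLI pipe keeps the output short:

show pfe statistics traffic | match discard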
2011/7/4 Matthias Brumm <matthias at brumm.net>
> Hello!
>
> show pfe statistics traffic is the first command that shows some errors:
>
> Packet Forwarding Engine hardware discard statistics:
> Timeout : 0
> Truncated key : 0
> Bits to test : 0
> Data error : 0
> Stack underflow : 0
> Stack overflow : 0
> Normal discard : 557491798
> Extended discard : 0
> Invalid interface : 0
> Info cell drops : 0
> Fabric drops : 0
>
> Is "Normal discard" an error or something "Normal", as the name would say.
>
>
> Matthias
>
> 2011/7/4 Christian <cdebalorre at neotelecoms.com>
>
>>
>> If in doubt, run show system processes summary to check for busy processes
>> during your peak time.
>> You can also get some interesting statistics with show pfe statistics
>> traffic.
>>
>> Christian
>>
>>
>> On 04/07/2011 15:33, Matthias Brumm wrote:
>>
>> Hello!
>>
>> At the moment I am monitoring it with top in the UNIX shell; do you have
>> another suggestion? In top, the process is idling.
>>
>> Regards,
>>
>> Matthias
>>
>> 2011/7/4 Christian <cdebalorre at neotelecoms.com>
>>
>> Hi,
>> Try to monitor the fwdd process. When it runs high, it causes packets to
>> drop on these PC-based boxes.
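>>
>> (A quick way to watch it from the CLI, assuming the process appears under
>> the name fwdd on your release:
>>
>> show system processes extensive | match fwdd
>>
>> or run top from a shell and watch the fwdd entry.)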
>>
>> Christian
>>
>>
>> On 04/07/2011 13:11, Adam Leff wrote:
>>
>> I realize this will sound silly, but have you checked for half-duplex
>> on your interfaces?
>>
>> Those onboard J6350 interfaces are actually 10/100/1000, so if you
>> don't have the speed and link-mode hardcoded, do a show interfaces
>> extensive ge-0/0/# and check the link partner section to ensure you're
>> running full-duplex.
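>>
>> (For example, with ge-0/0/0 as a placeholder port:
>>
>> show interfaces ge-0/0/0 extensive | match "Link mode"
>>
>> To hardcode it, the usual knobs are the speed and link-mode statements,
>> e.g. set interfaces ge-0/0/0 speed 1g and set interfaces ge-0/0/0
>> link-mode full-duplex; the exact options vary by platform and release.)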
>>
>> Adam
>>
>> On Jul 4, 2011, at 7:01, Matthias Brumm <matthias at brumm.net> wrote:
>>
>> Hello!
>>
>> For some weeks now, we have had strange packet loss at one of our edge
>> locations.
>>
>> A few days ago, an IX informed us about packet loss on our router. The
>> router in place is a J6350. We have one 1 Gig line to us and two 1 Gig
>> lines to some uplinks. All traffic goes through a single 1 Gig copper
>> link to a ProCurve 2810-24G, so the external links are connected to the
>> switch, and the switch is connected to the router via one cable.
>>
>> The packet loss is strange, because:
>>
>> 1. In smokeping, during the busy hours of the day, there are losses of
>> about 5%.
>> 2. From my workstation I see packet loss of about 10% up to 50%.
>> 3. There are no errors on the switch or router interfaces (except e.g.
>> VLAN errors).
>> 4. No customers have reported any problems, although many customers rely
>> on real-time communication (VoIP/RDP).
>> 5. The switch port facing the router shows at most 200 Mbit/s.
>> 6. The router shows 20% real-time threads (see the command sketch below).
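>>
>> (If it helps, the forwarding CPU on a J-series can be checked with:
>>
>> show chassis forwarding
>>
>> which, as far as I remember, reports the fwdd state and the real-time
>> thread CPU utilization; exact field names vary by release.)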
>>
>> According to the datasheet, the J-series should be able to deliver this
>> performance easily. Or are the onboard Gig interfaces the problem? Of
>> course I know that this physical configuration is a bad idea, and I will
>> change it very soon to ease the load on this particular port.
>>
>> Any other ideas?
>>
>> Regards,
>>
>> Matthias