[outages] Bay Area packet loss at Comcast (gblx.net, he.net)
Stephen Wilcox
steve.wilcox at ixreach.com
Sat Jan 19 13:37:19 EST 2013
That's not an outage, that's a congested link. You should probably
report it to HE / GBLX / Comcast.
On 19 January 2013 06:37, Constantine A. Murenin <mureninc at gmail.com> wrote:
> I think this has been happening practically every evening recently,
> for traffic between Comcast in NorCal and Linode at he.net in
> Fremont, transiting gblx.net.
>
> Today I have been monitoring through a crontab script since 16:00 PT,
> with 10-minute mtr runs in a loop. The packet loss started in the
> 20:50/21:00 timeframe and is still ongoing (22:20). There wasn't a
> single 10-minute interval between 20:50 and 22:20 without at least
> 4% packet loss (often more than that).
>
>
> 2013-01-18T20:50:09-0800
> HOST: li163-###                                      Loss%   Snt   Last   Avg  Best   Wrst  StDev
>   1. router1-fmt.linode.com                           0.0%   600   47.5  29.2   0.5  304.2   38.4
>   2. 10gigabitethernet2-3.core1.fmt1.he.net           0.0%   600    0.6  16.3   0.6  929.0   64.5
>   3. 10gigabitethernet1-2.core1.sjc2.he.net           0.5%   600    2.4   4.1   0.7   41.9    6.4
>   4. Port-channel100.ar3.SJC2.gblx.net                2.2%   600   75.7  31.8   0.8  1160.   76.7
>   5. po6-20G.ar4.SJC2.gblx.net                        1.3%   600   19.9  33.4   0.9  917.4   72.8
>   6. lag8.ar5.SJC2.gblx.net                           3.3%   600    1.1   4.4   0.9   22.3    5.4
>   7. 208.178.58.2                                     4.2%   600    4.2   4.2   1.5   39.7    2.1
>   8. pos-2-7-0-0-cr01.sanjose.ca.ibone.comcast.net    6.8%   600   10.6   7.2   4.7   43.8    2.9
>   9. pos-1-4-0-0-cr01.sacramento.ca.ibone.comcast.net 3.7%   600   20.7   9.5   7.0   26.7    1.9
>  10. te-7-3-ar02.saltlakecity.ut.utah.comcast.net     4.7%   600   92.1  12.4   7.4  129.2   19.1
>  11. 69.139.222.50                                    3.3%   600    7.9   8.1   7.6   59.3    2.6
>  12. te-1-0-0-ten05.sacramento.ca.ccal.comcast.net    3.2%   600   14.3  14.4  14.1   26.0    1.6
>  13. 74-93-180-##-Sacramento.hfc.comcastbusiness.net  4.2%   600   20.9  21.7  14.2  193.0   13.6
> 2013-01-18T21:00:10-0800
>
> 2013-01-18T21:30:04-0800
> HOST: li163-###                                      Loss%   Snt   Last   Avg  Best   Wrst  StDev
>   1. router1-fmt.linode.com                           0.0%   600    0.6  11.0   0.5  202.3   21.2
>   2. 10gigabitethernet2-3.core1.fmt1.he.net           0.0%   600    4.5  20.2   0.5  315.3   46.0
>   3. 10gigabitethernet1-2.core1.sjc2.he.net           0.0%   600   11.5   3.4   0.7   25.9    3.7
>   4. Port-channel100.ar3.SJC2.gblx.net                6.2%   600    1.0  28.5   0.8  845.3   65.5
>   5. po6-20G.ar4.SJC2.gblx.net                        9.8%   600    1.2  32.6   0.9  689.9   68.5
>   6. lag8.ar5.SJC2.gblx.net                          12.2%   600    1.0   4.8   0.9   26.9    5.4
>   7. 208.178.58.2                                    12.0%   600    5.2   4.4   1.6   39.0    2.2
>   8. pos-2-7-0-0-cr01.sanjose.ca.ibone.comcast.net   13.0%   600    7.6   7.3   4.7   45.0    2.8
>   9. pos-1-4-0-0-cr01.sacramento.ca.ibone.comcast.net 11.5%  600    7.6   9.6   7.2   24.1    2.1
>  10. te-7-3-ar02.saltlakecity.ut.utah.comcast.net    12.3%   600    7.6  12.6   7.4  162.7   19.7
>  11. 69.139.222.50                                   12.2%   600    8.0   8.5   7.6  157.5    6.9
>  12. te-1-0-0-ten05.sacramento.ca.ccal.comcast.net   11.0%   600   14.3  14.8  14.1   29.2    2.3
>  13. 74-93-180-##-Sacramento.hfc.comcastbusiness.net 12.5%   600   15.9  26.3  14.4  417.3   33.3
> 2013-01-18T21:40:04-0800
>
> 2013-01-18T22:00:01-0800
> HOST: li163-###                                      Loss%   Snt   Last   Avg  Best   Wrst  StDev
>   1. router1-fmt.linode.com                           0.0%   600   17.2  19.9   0.5  195.7   27.4
>   2. 10gigabitethernet2-3.core1.fmt1.he.net           0.0%   600    1.1  31.0   0.5  1666.  109.0
>   3. 10gigabitethernet1-2.core1.sjc2.he.net           0.0%   600    0.8   3.3   0.7   13.2    3.6
>   4. Port-channel100.ar3.SJC2.gblx.net                5.5%   600    2.0  27.7   0.9  290.3   56.5
>   5. po6-20G.ar4.SJC2.gblx.net                       13.3%   600    1.0  33.5   0.9  770.0   71.0
>   6. lag8.ar5.SJC2.gblx.net                          16.8%   600   15.0   5.7   0.8   30.1    6.0
>   7. 208.178.58.2                                    17.2%   600    5.9   4.5   1.5   30.2    2.4
>   8. pos-2-7-0-0-cr01.sanjose.ca.ibone.comcast.net   18.5%   600    7.2   7.3   4.9   34.8    2.4
>   9. pos-1-4-0-0-cr01.sacramento.ca.ibone.comcast.net 17.8%  600    9.8   9.6   7.1   24.0    2.1
>  10. te-7-3-ar02.saltlakecity.ut.utah.comcast.net    15.8%   600    7.7  12.8   7.4  177.3   19.8
>  11. 69.139.222.90                                   17.0%   600    7.9   8.9   7.6  164.0    8.3
>  12. te-1-0-0-ten05.sacramento.ca.ccal.comcast.net   13.3%   600   14.3  15.1  14.1   27.9    3.0
>  13. 74-93-180-##-Sacramento.hfc.comcastbusiness.net 17.5%   599   21.4  20.5  14.3  147.7    9.5
> 2013-01-18T22:10:07-0800
>
>
> Has anyone else noticed? ssh becomes completely unusable for a couple
> of seconds every now and again.
>
> It seems to have subsided back to 0% loss as of the 22:20/22:30
> window, but I bet it'll be back.
>
> C.
> _______________________________________________
> Outages mailing list
> Outages at outages.org
> https://puck.nether.net/mailman/listinfo/outages
>
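The crontab script mentioned above isn't included in the thread; a minimal Python sketch of equivalent monitoring is below. The target host, the 4% threshold (taken from the report above), and the helper names are assumptions, not the poster's actual script; it relies only on mtr's standard `--report` output format.

```python
import re
import subprocess

# Matches a hop line of `mtr --report` output, tolerating both the older
# "  6. host ..." and newer "  6.|-- host ..." layouts.
HOP_RE = re.compile(r"\s*(\d+)\.(?:\|--)?\s+(\S+)\s+([\d.]+)%")

def parse_hop(line):
    """Parse one hop line; return (host, loss_percent) or None for
    non-hop lines such as the HOST: header."""
    m = HOP_RE.match(line)
    return (m.group(2), float(m.group(3))) if m else None

def lossy_hops(report_text, threshold=4.0):
    """Return hops at or above `threshold` percent loss (4% is the
    figure the report above treats as notable)."""
    hops = (parse_hop(line) for line in report_text.splitlines())
    return [hp for hp in hops if hp and hp[1] >= threshold]

def monitor_once(target="some.remote.host", probes=600):
    """One cron-driven pass: run mtr in report mode (600 probes at the
    default 1s interval, i.e. a ~10-minute run) and flag lossy hops.
    The target hostname here is a placeholder."""
    report = subprocess.run(
        ["mtr", "--report", "--report-cycles", str(probes), target],
        capture_output=True, text=True, check=True,
    ).stdout
    return lossy_hops(report)
```

A crontab entry such as `*/10 * * * * /path/to/monitor.py` would then reproduce the 10-minute cadence described in the message, with each run's lossy hops logged or mailed out.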