[outages] AWS issues

Micah Croff micahcroff at github.com
Fri Apr 22 19:40:52 EDT 2016


It's also interesting to note that carriers are not correctly stripping out
private ASNs.  I've been opening cases with carriers whenever I find them
announcing private ASNs.

Just in case anyone is interested, this is the Junos policy I wrote to
reject these announcements on import.

Regards,
Micah

set policy-options policy-statement TRANSIT-IMPORT term REJECT-PRIVATE-BGP-ASNS from protocol bgp
set policy-options policy-statement TRANSIT-IMPORT term REJECT-PRIVATE-BGP-ASNS from as-path-group PRIVATE-ASNS
set policy-options policy-statement TRANSIT-IMPORT term REJECT-PRIVATE-BGP-ASNS then reject

set policy-options as-path-group PRIVATE-ASNS as-path PRIVATE-ASN-2BYTE ".* [64512-65535]+ .*"
set policy-options as-path-group PRIVATE-ASNS as-path PRIVATE-ASN-4BYTE ".* [4200000000-4294967294]+ .*"
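
In case it's useful, here is a rough sketch of the same check done outside
the router, in Python. It's only an illustration, but the ranges mirror the
two regexes above, and it's handy for flagging private ASNs in an AS path
pulled from a looking glass or route server before opening a case.

# Illustrative only: flag private ASNs in a space-separated AS path string.
# The ranges match the PRIVATE-ASN-2BYTE and PRIVATE-ASN-4BYTE regexes above.
PRIVATE_2BYTE = range(64512, 65536)            # 64512-65535
PRIVATE_4BYTE = range(4200000000, 4294967295)  # 4200000000-4294967294

def private_asns(as_path):
    return [asn for asn in map(int, as_path.split())
            if asn in PRIVATE_2BYTE or asn in PRIVATE_4BYTE]

# Hypothetical leaked path a carrier failed to strip:
print(private_asns("701 7018 65010 16509"))    # -> [65010]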


On Fri, Apr 22, 2016 at 4:33 PM, Nick Buraglio via Outages <
outages at outages.org> wrote:

> I suspect this was due to a large BGP event today.
> http://www.bgpmon.net/large-hijack-affects-reachability-of-high-traffic-destinations/
>
> On Friday, April 22, 2016, Charles Sprickman via Outages <
> outages at outages.org> wrote:
>
>> I can reach east-1 from HE in NYC and VZ in NNJ.
>>
>> I’m relatively new to FiOS; is this path normal?
>>
>> frankentosh:~ spork$ traceroute !$
>> traceroute monitor.xxxs.com
>> traceroute to monitor.xxxs.com (54.85.105.220), 64 hops max, 52 byte packets
>>  1  lo0-100.nwrknj-vfttp-312.verizon-gni.net (108.53.194.1)  1.406 ms  1.233 ms  1.904 ms
>>  2  t1-8-0-6.nwrknj-lcr-22.verizon-gni.net (100.41.220.233)  5.804 ms
>>     t1-6-0-0.nwrknj-lcr-22.verizon-gni.net (100.41.220.228)  3.732 ms
>>     t1-8-0-6.nwrknj-lcr-21.verizon-gni.net (100.41.220.235)  6.286 ms
>>  3  * * *
>>  4  0.ae6.br1.nyc1.alter.net (140.222.228.131)  4.642 ms
>>     0.ae5.br1.nyc1.alter.net (140.222.228.107)  4.304 ms
>>     0.ae6.br1.nyc1.alter.net (140.222.228.131)  4.582 ms
>>  5  pax-brdr-01.inet.qwest.net (63.235.40.53)  4.864 ms  2.956 ms  13.092 ms
>>  6  dca2-edge-01.inet.qwest.net (67.14.36.10)  9.539 ms  10.112 ms  11.018 ms
>>  7  72.165.86.74 (72.165.86.74)  9.879 ms
>>     65.120.78.82 (65.120.78.82)  9.580 ms
>>     67.133.224.206 (67.133.224.206)  8.967 ms
>>  8  * * *
>>  9  * * *
>> 10  54.239.110.245 (54.239.110.245)  32.092 ms
>>     54.239.110.235 (54.239.110.235)  11.426 ms
>>     54.239.110.233 (54.239.110.233)  40.425 ms
>>
>> I would have assumed AS701 would peer directly with AWS; AWS has to
>> represent at least a quarter of FiOS traffic.
>>
>> Charles
>>
>> --
>> Charles Sprickman
>> NetEng/SysAdmin
>> Bway.net - New York's Best Internet - www.bway.net
>> spork at bway.net - 212.982.9800
>>
>>
>>
>> On Apr 22, 2016, at 1:48 PM, Ben Burns via Outages <outages at outages.org>
>> wrote:
>>
>> Can confirm inbound connectivity still down for us-east-1.
>>
>> On 16-04-22 11:42 AM, Sebastian J. Orsini II via Outages wrote:
>>
>> That correlates with the timing of some really weird DNS issues for us.
>> (ISP=Hargray)
>>
>> I.e. some webpages would load, some wouldn't. We lost 1/3 of our VPNs and
>> about half our public IPs were not pingable from some different ISPs.
>>
>> Then it just magically cleared up. Felt like about 4 months ago when
>> Level 3 had DNS issues?
>>
>> I dunno. It's all okay now...
>>
>>
>>
>> On Fri, Apr 22, 2016 at 1:29 PM, John Kinsella via Outages <
>> outages at outages.org> wrote:
>>
>>> Back to normal, as others have said. It lasted 10-ish minutes? Normal trace
>>> below. Not sure if it was an HE issue or Amazon dropped an announcement, maybe...
>>>
>>>                                  Packets               Pings
>>>  Host                            Loss%   Snt   Last   Avg  Best  Wrst StDev
>>>  1. 10.0.1.1                      0.0%     1    1.1   1.1   1.1   1.1   0.0
>>>  2. 10.0.1.49                     0.0%     1    2.2   2.2   2.2   2.2   0.0
>>>  3. 173.247.205.38                0.0%     1    3.7   3.7   3.7   3.7   0.0
>>>  4. x.196.247.173.web-pass.com    0.0%     1    4.9   4.9   4.9   4.9   0.0
>>>  5. x.196.247.173.web-pass.com    0.0%     1    3.7   3.7   3.7   3.7   0.0
>>>  6. equinix01-sfo5.amazon.com     0.0%     1    5.6   5.6   5.6   5.6   0.0
>>>  7. 54.240.242.112                0.0%     1   37.3  37.3  37.3  37.3   0.0
>>>  8. 54.240.242.115                0.0%     1   28.6  28.6  28.6  28.6   0.0
>>>  9. 205.251.229.189               0.0%     1   42.8  42.8  42.8  42.8   0.0
>>> 10. 54.239.42.6                   0.0%     1   25.2  25.2  25.2  25.2   0.0
>>> 11. 205.251.232.130               0.0%     1   25.1  25.1  25.1  25.1   0.0
>>> 12. 205.251.232.143               0.0%     1   25.0  25.0  25.0  25.0   0.0
>>> 13. 54.239.48.179                 0.0%     1   26.8  26.8  26.8  26.8   0.0
>>> 14. ???
>>>
>>>
>>> On Apr 22, 2016, at 10:22 AM, Hal Ponton <hal at buzcom.net> wrote:
>>>
>>> I've also been seeing some issues reaching YouTube, Twitter, Amazon, and
>>> Reddit from the UK, but they just seem to have been kicked back into life.
>>> --
>>> Regards,
>>>
>>> Hal Ponton
>>> Senior Network Engineer
>>>
>>> Buzcom / FibreWiFi
>>>
>>>
>>>
>>>
>>> John Kinsella via Outages
>>> 22 April 2016 at 18:19
>>> Currently unable to connect to multiple services hosted by AWS (Slack,
>>> iTunes, and my own EC2 instances). http://status.aws.amazon.com/ claims
>>> situation normal so far, although I can’t get into my AWS dashboard.
>>>
>>> Might be wider than AWS, but I seem to have connectivity out to other
>>> major services...
>>>
>>> Packets Pings
>>> Host Loss% Snt Last Avg Best Wrst StDev
>>> 1. 10.0.1.1 0.0% 29 1.0 10.7 0.9 123.2 26.2
>>> 2. 10.0.1.49 0.0% 29 1.7 12.1 1.5 101.1 26.7
>>> 3. 173.247.205.38 0.0% 29 3.7 24.3 3.5 120.4 36.8
>>> 4. x.196.247.173.web-pass.com 0.0% 29 3.3 20.3 3.2 127.4 36.9
>>> 5. x.196.247.173.web-pass.com 0.0% 29 3.9 21.0 3.6 106.7 31.3
>>> 6. v505.core1.sfo1.he.net 0.0% 29 3.9 19.7 3.8 123.5 30.3
>>> 7. 10ge11-2.core1.sjc2.he.net 0.0% 29 14.8 27.7 4.9 165.2 39.3
>>> 8. 100ge1-2.core1.nyc4.he.net 6.9% 29 74.4 112.9 66.5 285.1 71.3
>>> 9. 100ge11-1.core1.par2.he.net 21.4% 29 296.4 171.1 137.0 296.4 57.3
>>> 10. 10ge3-2.core1.zrh1.he.net 0.0% 28 175.8 173.3 150.4 280.4 34.0
>>> 11. ???
>>> 12. ???
>>> 13. ???
>>> 14. ???
>>> 15. te2-2.er01.zrh01.ip-max.net 92.6% 28 391.1 318.5 246.0 391.1 102.6
>>> 16. xe0-0-3.cr02.gva253.ip-max.net 96.3% 28 331.7 331.7 331.7 331.7 0.0
>>> 17. xe0-0-1.cr02.gva252.ip-max.net 96.2% 27 321.6 321.6 321.6 321.6 0.0
>>> 18. ???
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
> --
> ---
> Nick Buraglio
> Energy Sciences Network; AS293
> Lawrence Berkeley National Laboratory
> buraglio at es.net
> +1 (510) 995-6068
>
> _______________________________________________
> Outages mailing list
> Outages at outages.org
> https://puck.nether.net/mailman/listinfo/outages
>
>