[c-nsp] cisco-nsp Digest, Vol 109, Issue 34

raymond stuve stuveray at gmail.com
Thu Dec 15 14:07:23 EST 2011


On Dec 15, 2011 11:04 AM, <cisco-nsp-request at puck.nether.net> wrote:

> Send cisco-nsp mailing list submissions to
>        cisco-nsp at puck.nether.net
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        https://puck.nether.net/mailman/listinfo/cisco-nsp
> or, via email, send a message with subject or body 'help' to
>        cisco-nsp-request at puck.nether.net
>
> You can reach the person managing the list at
>        cisco-nsp-owner at puck.nether.net
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of cisco-nsp digest..."
>
>
> Today's Topics:
>
>   1. Logging Connections (miroku)
>   2. Re: svi autostate issue (Peter Rathlev)
>   3. Re: svi autostate issue (A. Andreev)
>   4. 1G (SFP) single-mode aggregation (Peter Rathlev)
>   5. Re: 1G (SFP) single-mode aggregation (David Prall)
>   6. Re: 1G (SFP) single-mode aggregation (Justin M. Streiner)
>   7. Re: 1G (SFP) single-mode aggregation (Phil Mayers)
>   8. Re: 1G (SFP) single-mode aggregation (Phil Mayers)
>   9. Re: 1G (SFP) single-mode aggregation (Mark Tinka)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 15 Dec 2011 03:35:24 -0800 (PST)
> From: miroku <bundaberg440ml at gmail.com>
> To: cisco-nsp at puck.nether.net
> Subject: [c-nsp] Logging Connections
> Message-ID:
>        <b9cb7f93-6fc7-4af8-9cdf-4f6e8a8ac83a at a31g2000pre.googlegroups.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi all,
>
> We are experiencing a bit of he-said-she-said between a number of
> different clients/service providers.  The situation is that a remote
> site (let's say 40.40.40.40) is experiencing connectivity issues to a
> couple of hosts within our infrastructure (let's say 10.0.1.10 and
> 10.0.2.10).  I believe that an upstream firewall is blocking certain
> traffic from that host, which is the cause of the problem, but the
> firewall team claim otherwise.  I would like to set up logging on our
> infrastructure to see whether we are receiving the packets.  What's the
> best way to do this, and would it have any impact on other hosts
> within the SVI when the ACL is applied?
>
> Our SVI is set up something like this (active for HSRP; it's a 6500):
> interface Vlan10
>  ip address 10.0.3.254 255.255.255.128 secondary
>  ip address 10.0.2.126 255.255.255.224 secondary
>  ip address 10.0.1.254 255.255.255.128
>  no ip redirects
>  standby 14 ip 10.0.1.129
>  standby 14 ip 10.0.2.97 secondary
>  standby 14 ip 10.0.3.129 secondary
>  standby 14 priority 130
>  standby 14 preempt delay minimum 60 sync 60
>  standby 14 authentication <password>
> end
>
> I would like to implement an extended access list for logging. Would
> this work, and would it impact other hosts on the SVI when it is
> applied? There is currently no ACL on the SVI.
> #
>  ip access-list extended 100
>  permit ip host 40.40.40.40 host 10.0.1.10 log
>  permit ip host 40.40.40.40 host 10.0.2.10 log
>  permit ip any any
>  int vlan 10
>  ip access-group 100 out
>
> Your comments would be greatly appreciated.
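
A minimal sketch of the kind of logging ACL being asked about, using named
extended ACL syntax so it is easy to remove again afterwards. The name
V10-LOG and the addresses are just the placeholders from the question, not
a verified configuration:

  ip access-list extended V10-LOG
   permit ip host 40.40.40.40 host 10.0.1.10 log
   permit ip host 40.40.40.40 host 10.0.2.10 log
   permit ip any any
  !
  interface Vlan10
   ip access-group V10-LOG out

The final "permit ip any any" is what keeps the other hosts on the SVI
unaffected; the main expected impact is that packets matching "log"
entries on a 6500 are typically handled in software by the route
processor, so CPU load is worth watching. Applied "out" on Vlan10, this
shows whether the packets make it through the box toward the hosts; a
mirror-image ACL applied "in" on the upstream-facing interface would show
whether they arrive at all.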
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 15 Dec 2011 13:11:58 +0100
> From: Peter Rathlev <peter at rathlev.dk>
> To: A. Andreev <a.andreev at teztour.com>
> Cc: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] svi autostate issue
> Message-ID: <1323951118.17420.9.camel at abehat.dyn.net.rm.dk>
> Content-Type: text/plain; charset="UTF-8"
>
> On Thu, 2011-12-15 at 13:06 +0400, A. Andreev wrote:
> > How can I disable the SVI autostate feature to avoid SVI flaps?
> > There is no such command in native IOS.
>
> AFAIK you cannot do this in native IOS for the Catalyst switches. CatOS
> can do it and some routers can do it. Hopefully some day IOS will too.
>
> What is your goal? There might be another way of accomplishing that.
>
> If you need an interface that is always up, you can use a Loopback
> interface. If you need a route to be active (and exported via e.g. BGP)
> you can use a floating static route.
>
> --
> Peter
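
A minimal sketch of the two generic workarounds mentioned above, with
placeholder values (Loopback0, 192.0.2.1/32 and 10.0.1.0/25 are
assumptions for illustration, not taken from the thread):

  interface Loopback0
   ip address 192.0.2.1 255.255.255.255
  !
  ! Floating static: administrative distance 250 is higher than any
  ! dynamic protocol, so the route only takes effect when the dynamic
  ! route disappears, keeping the prefix available for e.g. BGP.
  ip route 10.0.1.0 255.255.255.128 Null0 250

The loopback gives routing protocols a source address / router-ID that is
always up; the floating static keeps a prefix in the routing table (and
thus exportable via BGP) independently of SVI state.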
>
>
>
>
> ------------------------------
>
> Message: 3
> Date: Thu, 15 Dec 2011 16:41:05 +0400
> From: A. Andreev <a.andreev at teztour.com>
> To: Peter Rathlev <peter at rathlev.dk>
> Cc: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] svi autostate issue
> Message-ID: <4EE9EAE1.20901 at teztour.com>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> Some SVIs have OSPF adjacencies and some have eBGP sessions to clients.
> During STP convergence the SVIs flap, and OSPF/BGP flap with them.
>
>
>
> 15.12.2011 16:11, Peter Rathlev wrote:
> > On Thu, 2011-12-15 at 13:06 +0400, A. Andreev wrote:
> >> How can I disable the SVI autostate feature to avoid SVI flaps?
> >> There is no such command in native IOS.
> > AFAIK you cannot do this in native IOS for the Catalyst switches. CatOS
> > can do it and some routers can do it. Hopefully some day IOS will too.
> >
> > What is your goal? There might be another way of accomplishing that.
> >
> > If you need an interface that is always up, you can use a Loopback
> > interface. If you need a route to be active (and exported via e.g. BGP)
> > you can use a floating static route.
> >
>
>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 15 Dec 2011 14:44:56 +0100
> From: Peter Rathlev <peter at rathlev.dk>
> To: cisco-nsp <cisco-nsp at puck.nether.net>
> Subject: [c-nsp] 1G (SFP) single-mode aggregation
> Message-ID: <1323956696.17420.88.camel at abehat.dyn.net.rm.dk>
> Content-Type: text/plain; charset="UTF-8"
>
> We've been asked to look at how one could best cram a fair amount of SFP
> links into not too much space. They are downlinks to FTTO switches. When
> going full scale we're talking about maybe 2500 FTTO switches.
>
> So the question is: How many (1G) SFP switchports can one hope to
> terminate in a standard rack? And what is the smartest/cheapest/easiest
> way to do it?
>
> We've been looking at the things described further down. Any better
> ideas than those? If anybody has good experience with non-Cisco
> equipment I'd also love to hear about it. And if the idea of aggregating
> as much as possible in a single rack is stupid, please tell me. :-)
>
> - One rack holds two 6513E (20RU), each with 12 WS-X6748-SFP cards and
> one supervisor, probably Sup2T. That's 1152 ports per standard 42RU
> rack. The supervisor uplink ports would be configured as a 2*10G
> port-channel, resulting in 20G uplink bandwidth per unit and thus about
> 120:1 total oversubscription, since each FTTO switch has 4 downlink
> ports. Power budget would seem to be around 8 kW per rack if the numbers
> from "show power" are used. (Anyone know what the real consumption is?)
>
> - One rack holds 8 6504E (5RU), each with 3 WS-X6748-SFP cards and one
> supervisor. That's also 1152 ports per rack. Using just one uplink would
> result in about 60:1 oversubscription. This is more expensive, both
> capex and power consumption, and somewhat more complicated. And we don't
> really think that 60:1 is needed. A similar power budget of 8 kW seems
> like a reasonable guess.
>
> - One rack holds 40 WS-C3750X-24S (1RU). That's 960 ports per rack. We
> would use them as 5 stacks of 8 members, each with one 2*10G MC
> etherchannel as uplink, resulting in a total 80:1 oversubscription. The
> power budget would also be around 8 kW, though for the 3750X Cisco
> quotes the expected consumption at 100% load as around 110 W per switch,
> giving an expected total of about 4.4 kW.
>
> - We've also tried looking at the 4500E series, which we don't know much
> about. I cannot seem to find any E-model 48-port SFP cards, only the
> WS-X4448-GB-SFP which must have just a 6 Gb/s fabric connection. There's
> the WS-X4640-CSFP-E, which combined with the BX SFPs could deliver 80
> "ports" per slot. One might shoehorn three 4710R+E (14RU each) into a
> 42RU rack, resulting in 1920 ports per rack. Total oversubscription
> would be around 128:1.
>
> Thank you.
>
> --
> Peter
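
For what it's worth, the 120:1 figure in the first option seems to work
out as follows, assuming the 4 user ports per FTTO switch are the offered
load (a reconstruction, not stated in the post):

  12 cards x 48 ports           = 576 SFP ports per chassis (1152 per rack)
  576 FTTO switches x 4 ports   = 2304 potential gigabit edge ports
  2304 Gbps / 20 Gbps uplink    = ~115:1 per chassis, i.e. roughly 120:1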
>
>
>
>
> ------------------------------
>
> Message: 5
> Date: Thu, 15 Dec 2011 09:05:53 -0500
> From: "David Prall" <dcp at dcptech.com>
> To: "'Peter Rathlev'" <peter at rathlev.dk>, "'cisco-nsp'"
>        <cisco-nsp at puck.nether.net>
> Subject: Re: [c-nsp] 1G (SFP) single-mode aggregation
> Message-ID: <027201ccbb32$ab32e4e0$0198aea0$@com>
> Content-Type: text/plain;       charset="us-ascii"
>
> Peter,
> The 6513E can't support fabric-enabled modules in the secondary
> supervisor slot, so you only get 11 6748/6848s.
>
> The 4640-CSFP-E is not supported in the 4510, so you would get 5 line
> cards per 4506/4507; with the CSFP optics that is 80 ports per slot.
>
> David
>
> --
> http://dcp.dcptech.com
>
> ------------------------------
>
> Message: 6
> Date: Thu, 15 Dec 2011 09:30:16 -0500 (EST)
> From: "Justin M. Streiner" <streiner at cluebyfour.org>
> Cc: cisco-nsp <cisco-nsp at puck.nether.net>
> Subject: Re: [c-nsp] 1G (SFP) single-mode aggregation
> Message-ID: <Pine.LNX.4.64.1112150920300.1202 at whammy.cluebyfour.org>
> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
>
> On Thu, 15 Dec 2011, Peter Rathlev wrote:
>
> > So the question is: How many (1G) SFP switchports can one hope to
> > terminate in a standard rack? And what is the smartest/cheapest/easiest
> > way to do it?
>
> How much space are you allocating for fiber termination bays and cable
> management?
>
> The one issue I've had with very high-density installations is that people
> often forget that someone needs to be able to make physical changes to the
> rack from time to time, and many people's fingers are not small enough to
> fumble around in an LC termination bay without potentially knocking
> something else offline (bump fade, so to speak).
>
> The same issue exists with many switch vendors.  For example, with Cisco
> Cat6500s, the metal latches that are used to seat the cards in the chassis
> can interfere with access to the latches to uncouple LC jumpers from some
> of the SFP ports on a WS-X6748-SFP.
>
> > ports. Power budget would seem to be around 8 kW per rack if the numbers
> > from "show power" are used. (Anyone know what the real consumption is?)
>
> The 6500s tend to pre-allocate power for each card when they are inserted
> or when the switch reboots, so the numbers you get from "show power" don't
> change much (if at all) as more SFPs are added to a linecard.
>
> jms
>
>
> ------------------------------
>
> Message: 7
> Date: Thu, 15 Dec 2011 14:55:30 +0000
> From: Phil Mayers <p.mayers at imperial.ac.uk>
> To: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] 1G (SFP) single-mode aggregation
> Message-ID: <4EEA0A62.9080209 at imperial.ac.uk>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 15/12/11 14:05, David Prall wrote:
> > Peter,
> > The 6513E can't support fabric-enabled modules in the secondary
> > supervisor slot, so you only get 11 6748/6848s.
>
> True with the Sup720.
>
> Not true with the Sup2T, which I would expect anyone to use in a new
> deployment, given the favourable pricing. That is, Sup2T in 6513E allows
> full bandwidth in all slots.
>
>
> ------------------------------
>
> Message: 8
> Date: Thu, 15 Dec 2011 14:56:06 +0000
> From: Phil Mayers <p.mayers at imperial.ac.uk>
> To: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] 1G (SFP) single-mode aggregation
> Message-ID: <4EEA0A86.4090106 at imperial.ac.uk>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 15/12/11 14:30, Justin M. Streiner wrote:
>
> > The one issue I've had with very high-density installations is that
> > people often forget that someone needs to be able to make physical
> > changes to the rack from time to time, and many peoples' fingers are not
> > small enough to fumble around in an LC termination bay without
> > potentially knocking something else offline (bump fade, so to speak).
>
> Agreed - go ODF, and do all patching away from the linecards.
>
>
> ------------------------------
>
> Message: 9
> Date: Fri, 16 Dec 2011 00:02:54 +0800
> From: Mark Tinka <mtinka at globaltransit.net>
> To: cisco-nsp at puck.nether.net
> Subject: Re: [c-nsp] 1G (SFP) single-mode aggregation
> Message-ID: <201112160002.57750.mtinka at globaltransit.net>
> Content-Type: text/plain; charset="us-ascii"
>
> On Thursday, December 15, 2011 09:44:56 PM Peter Rathlev
> wrote:
>
> > We've been asked to look at how one could best cram a
> > fair amount of SFP links into not too much space. They
> > are downlinks to FTTO switches. When going full scale
> > we're talking about maybe 2500 FTTO switches.
> >
> > So the question is: How many (1G) SFP switchports can one
> > hope to terminate in a standard rack? And what is the
> > smartest/cheapest/easiest way to do it?
> >
> > We've been looking at the things described further down.
> > Any better ideas than those? If anybody has good
> > experience with non-Cisco equipment I'd also love to
> > hear about it. And if the idea of aggregating as much as
> > possible in a single rack is stupid, please tell me. :-)
>
> You don't say whether the aggregating rack will also be the
> same one providing services, or whether services will come
> from the upstream.
>
> If you're looking at providing services at the aggregation
> rack, have you considered other options like the ASR9922?
> Compared to the 6500, it could be more costly, but it's got
> 20 slots, which would give you 800 ports per chassis (a
> little less if you're going to reserve 2x 10Gbps ports for
> the uplink). Not as much as the 6513 (the ASR9922's line
> cards are 40-port), but remember you're getting line rate on
> each slot in the ASR9922 (not that it really matters in your
> use-case; I'm guessing density trumps performance).
>
> You could also look at the Juniper MX960. You'd be able to
> squeeze 40x Gig-E ports into a single slot, but note that
> the MPC1 carrier card only gives 30Gbps of throughput per
> slot, while the MPC2 is double that.
>
> Juniper have something interesting going on here that I,
> unfortunately, cannot go into. If you have any leads into
> Juniper, I'd suggest calling them up.
>
> The Juniper EX8200 may sound like an idea, but based on the
> feedback from most, I'd likely say stay away, especially if
> you're trying to run services on the rack.
>
> This is an interesting problem.
>
> If you're simply aggregating without services, or providing
> simple services, I'd say seriously consider the 6513 with
> the SUP2T, especially if the price difference with the
> ASR9922 is significant.
>
> Mark.
>
> ------------------------------
>
> _______________________________________________
> cisco-nsp mailing list
> cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
>
> End of cisco-nsp Digest, Vol 109, Issue 34
> ******************************************
>

