[f-nsp] foundry-nsp Digest, Vol 44, Issue 5

Jason J. W. Williams jasonjwwilliams at gmail.com
Sat Sep 16 21:22:46 EDT 2006


Is this the new Terathon chipset? Does it do prefix-based routing
instead of flow-based?

Best Regards,
Jason

On 9/15/06, foundry-nsp-request at puck.nether.net
<foundry-nsp-request at puck.nether.net> wrote:
> Send foundry-nsp mailing list submissions to
>         foundry-nsp at puck.nether.net
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://puck.nether.net/mailman/listinfo/foundry-nsp
> or, via email, send a message with subject or body 'help' to
>         foundry-nsp-request at puck.nether.net
>
> You can reach the person managing the list at
>         foundry-nsp-owner at puck.nether.net
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of foundry-nsp digest..."
>
>
> Today's Topics:
>
>    1. Re: Looking for throughput infos (Gerald Krause)
>    2. Re: Looking for throughput infos (Stefan Neufeind)
>    3. Re: Looking for throughput infos (Kristian Larsson)
>    4. Re: Looking for throughput infos (Gerald Krause)
>    5. Re: Looking for throughput infos (Gerald Krause)
>    6. Re: Looking for throughput infos (Kristian Larsson)
>    7. Re: Looking for throughput infos (Gerald Krause)
>    8. Re: Looking for throughput infos (Niels Bakker)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 14 Sep 2006 19:06:01 +0200
> From: Gerald Krause <gk at ax.tc>
> Subject: Re: [f-nsp] Looking for throughput infos
> To: foundry-nsp at puck.nether.net
> Message-ID: <200609141906.05109.gk at ax.tc>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Thursday 14 September 2006 17:56, Jens Brey wrote:
> > Hi,
> >
> > does someone have information or a link about the throughput of a
> > BigIron 4000 with M4 modules, 512 MB RAM, a Gigabit uplink and 3 full
> > BGP tables?
> >
> > I heard it is something around 850 Mbit/s. Is this right?
> > The average packet size is around 500 bytes.
>
> IMHO the main limiting factor for the maximum throughput of a B4000 is the
> number of new flows per second, due to the small CAM size, and not the
> packet size itself. If you have 500-byte packets from/to only a few
> particular sources/destinations, I wouldn't be surprised if the B4000 could
> move 1 Gbit/s or more. The pain definitely begins when the number of flows
> grows, but that is something you can hardly control in an Internet
> environment.
>
> --
> Gerald    (ax/tc)
>
> ------------------------------
>
> Message: 2
> Date: Thu, 14 Sep 2006 19:21:07 +0200
> From: Stefan Neufeind <foundry-nsp at stefan-neufeind.de>
> Subject: Re: [f-nsp] Looking for throughput infos
> To: foundry-nsp at puck.nether.net
> Message-ID: <45098F83.5030908 at stefan-neufeind.de>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Gerald Krause wrote:
> > On Thursday 14 September 2006 17:56, Jens Brey wrote:
> >> Hi,
> >>
> >> does someone have information or a link about the throughput of a
> >> BigIron 4000 with M4 modules, 512 MB RAM, a Gigabit uplink and 3 full
> >> BGP tables?
> >>
> >> I heard it is something around 850 Mbit/s. Is this right?
> >> The average packet size is around 500 bytes.
> >
> > IMHO the main limiting factor for the maximum throughput of a B4000 is the
> > number of new flows per second, due to the small CAM size, and not the
> > packet size itself. If you have 500-byte packets from/to only a few
> > particular sources/destinations, I wouldn't be surprised if the B4000 could
> > move 1 Gbit/s or more. The pain definitely begins when the number of flows
> > grows, but that is something you can hardly control in an Internet
> > environment.
>
> Hi,
>
> but even then you can try to optimize a bit with net-aggregate and, if you
> haven't already done so, try to increase the CAM size, etc.
> To my understanding, everything that can be "switched" (routed) with
> information from the CAM is quite fast, and I *think* that 1 Gbit/s or more
> should be possible. But this is not based on practical experience with
> packets as small as yours.
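>
> As a rough sketch (from memory, so the exact command name is my
> assumption - please verify it against the docs for your IronWare
> release), the aggregation knob should look something like this in the
> router config:
>
>   ip net-aggregate
>
> The cam-size knob itself is platform dependent, so check your
> hardware's documentation for that one.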
>
>
> Regards,
>  Stefan
>
>
> ------------------------------
>
> Message: 3
> Date: Thu, 14 Sep 2006 19:28:43 +0200
> From: Kristian Larsson <kristian at spritelink.se>
> Subject: Re: [f-nsp] Looking for throughput infos
> To: Stefan Neufeind <foundry-nsp at stefan-neufeind.de>
> Cc: foundry-nsp at puck.nether.net
> Message-ID: <20060914172843.GS7328 at spritelink.se>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, Sep 14, 2006 at 07:21:07PM +0200, Stefan Neufeind wrote:
> > Gerald Krause wrote:
> > > On Thursday 14 September 2006 17:56, Jens Brey wrote:
> > >> Hi,
> > >>
> > >> does someone have information or a link about the throughput of a
> > >> BigIron 4000 with M4 modules, 512 MB RAM, a Gigabit uplink and 3 full
> > >> BGP tables?
> > >>
> > >> I heard it is something around 850 Mbit/s. Is this right?
> > >> The average packet size is around 500 bytes.
> > >
> > > IMHO the main limiting factor for the maximum throughput of a B4000 is the
> > > number of new flows per second, due to the small CAM size, and not the
> > > packet size itself. If you have 500-byte packets from/to only a few
> > > particular sources/destinations, I wouldn't be surprised if the B4000 could
> > > move 1 Gbit/s or more. The pain definitely begins when the number of flows
> > > grows, but that is something you can hardly control in an Internet
> > > environment.
> >
> > Hi,
> >
> > but even then you can try to optimize a bit with net-aggregate and, if you
> > haven't already done so, try to increase the CAM size, etc.
> > To my understanding, everything that can be "switched" (routed) with
> > information from the CAM is quite fast, and I *think* that 1 Gbit/s or more
> > should be possible. But this is not based on practical experience with
> > packets as small as yours.
> Just as mentioned, when the forwarding information
> is in the CAM the BI4k is wicked fast; 1 Gbps is no
> challenge for it. For pure packet forwarding it can
> do wire speed.
>
> But since the CAM is route-cache based you need to
> insert new entries from time to time; that is not
> too bad. What is bad is when you're trying to forward
> packets to more destinations than can fit in your
> CAM: then the CPU becomes busy with filling the
> CAM with new entries and getting rid of the oldest
> ones. Since all entries really need to be in the CAM,
> the CPU inserts and removes entries all the time;
> this is called CAM thrashing. Depending on how many
> flows it needs to insert/remove per second, your
> throughput will drop drastically.
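>
> A rough back-of-the-envelope, using the ~500 byte average packet size
> mentioned earlier:
>
>   1 Gbit/s / (500 bytes * 8 bits/byte) = ~250,000 packets/s
>
> so once a noticeable share of those packets miss the CAM and have to
> be punted to the management CPU to install new entries, the CPU
> rather than the switch fabric sets your throughput ceiling.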
>
> We have used and still use BI4ks for routing 'on
> the Internet' and it's working. From time to time,
> the CAM is programmed with a faulty entry. Some
> packets are misrouted and so forth.
> Foundrys are fast, but it's not all about speed,
> you need to move packets in the right direction as
> well.
>
> Tell us more of your environment and we can
> hopefully give better answers :)
>
> Regards,
>    Kristian.
>
> --
> Kristian Larsson                                   KLL-RIPE
> Network Engineer                      Net at Once [AS35706]
> +46 704 910401                       kristian at spritelink.se
>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 14 Sep 2006 20:00:54 +0200
> From: Gerald Krause <gk at ax.tc>
> Subject: Re: [f-nsp] Looking for throughput infos
> To: foundry-nsp at puck.nether.net
> Message-ID: <200609142001.03001.gk at ax.tc>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Thursday 14 September 2006 19:28, Kristian Larsson wrote:
> > Depending on how many flows it needs to insert/remove per second, your
> > throughput will drop drastically.
>
> Yes, and a large number of packets to drop (e.g. for unreachable destinations)
> will do the same, because dropping uses the CAM, as I understand it
> (as a special kind of flow).
>
> > We have used and still use BI4ks for routing 'on
> > the Internet' and it's working. From time to time,
> > the CAM is programmed with a faulty entry. Some
> > packets are misrouted and so forth.
>
> Oh yeah, this just happened a few days ago (NI400, v08.0.00): a route was
> learned via OSPF but no proper CAM entry was created - so we had to
> configure the route with "ip route ..." locally on the system. :[
>
> --
> Gerald    (ax/tc)
>
> ------------------------------
>
> Message: 5
> Date: Thu, 14 Sep 2006 20:48:02 +0200
> From: Gerald Krause <gk at ax.tc>
> Subject: Re: [f-nsp] Looking for throughput infos
> To: foundry-nsp at puck.nether.net
> Message-ID: <200609142048.06274.gk at ax.tc>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Thursday 14 September 2006 19:21, Stefan Neufeind wrote:
> > but even then you can try to optimize a bit with net-aggregate and, if you
> > haven't already done so, try to increase the CAM size, etc.
>
> Yes, there are a few little knobs for tuning the CAM, but so far they haven't
> convinced me. Especially these two statements from the original Foundry docs
> make me wonder:
>
>  "If most of the BGP4 routes actually go to the same set of next hops as the
> default route, enable the CAM network aggregation feature."
>
>  "CAM network aggregation requires a default route in the IP route table."
>
> So I must have a default route on my BGP router pointing towards my
> peer(s)/next-hop(s)? Unfortunately my default is null0 on all my border
> routers.
> Maybe someone can enlighten me in case I have misread the docs, but if
> this is true the net-agg feature is pretty useless for me (and for many
> others, I think).
>
> > To my understanding, everything that can be "switched" (routed) with
> > information from the CAM is quite fast, and I *think* that 1 Gbit/s or more
> > should be possible.
>
> Yes, of course, and that's what I was trying to say - it's a matter of CAM
> size (and of CAM thrashing, as Kristian mentioned in his post).
>
> --
> Gerald    (ax/tc)
>
> ------------------------------
>
> Message: 6
> Date: Thu, 14 Sep 2006 21:46:57 +0200
> From: Kristian Larsson <kristian at spritelink.se>
> Subject: Re: [f-nsp] Looking for throughput infos
> To: Gerald Krause <gk at ax.tc>
> Cc: foundry-nsp at puck.nether.net
> Message-ID: <20060914194656.GT7328 at spritelink.se>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, Sep 14, 2006 at 08:00:54PM +0200, Gerald Krause wrote:
> > On Thursday 14 September 2006 19:28, Kristian Larsson wrote:
> > > Depending on how many flows it needs to insert/remove per second, your
> > > throughput will drop drastically.
> >
> > Yes, and a large number of packets to drop (e.g. for unreachable destinations)
> > will do the same, because dropping uses the CAM, as I understand it
> > (as a special kind of flow).
> Kinda.
>
> Destinations with a next-hop of null0 are handled
> by the CPU, which is pretty strange behaviour if
> you ask me. Not only does the CPU have to send an
> ICMP unreachable, it also has to handle the actual
> packet.
> ip hw-drop-on-def-route
> fixes it, if I recall correctly. It does not,
> as the command name might suggest, apply only to the
> default route but to all routes that have null0 as
> the next hop.
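>
> Roughly, the combination being described looks like this in the config
> (exact syntax from memory, so double-check it against your IronWare
> release before relying on it):
>
>   ip route 0.0.0.0 0.0.0.0 null0       (the null0 default, as above)
>   ip hw-drop-on-def-route              (drop that traffic in hardware)
>
> Without the second line the null0 traffic is punted to the CPU; with
> it the drop is programmed into the CAM instead.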
>
> We are moving away from Foundry in favour of the Cisco
> 6500 with Sup32s, which really packs a punch, has a
> lot more features and, on top of that, is cheaper than
> our old BI4ks. I recommend everyone else do the
> same.
>
> > > We have used and still use BI4ks for routing 'on
> > > the Internet' and it's working. From time to time,
> > > the CAM is programmed with a faulty entry. Some
> > > packets are misrouted and so forth.
> >
> > Oh yeah, this just happened a few days ago (NI400, v08.0.00): a route was
> > learned via OSPF but no proper CAM entry was created - so we had to
> > configure the route with "ip route ..." locally on the system. :[
> I know exactly what you mean. It's a bitch.
>
>
> --
> Kristian Larsson                                   KLL-RIPE
> Network Engineer                      Net at Once [AS35706]
> +46 704 910401                       kristian at spritelink.se
>
>
> ------------------------------
>
> Message: 7
> Date: Thu, 14 Sep 2006 22:30:37 +0200
> From: Gerald Krause <gk at ax.tc>
> Subject: Re: [f-nsp] Looking for throughput infos
> To: foundry-nsp at puck.nether.net
> Message-ID: <200609142230.41084.gk at ax.tc>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Thursday 14 September 2006 21:46, you wrote:
> > Destinations with a next-hop of null0 are handled
> > by the CPU, which is pretty strange behaviour if
> > you ask me. Not only does the CPU have to send an
> > ICMP unreachable, it also has to handle the actual
> > packet.
> > ip hw-drop-on-def-route
> > fixes it, if I recall correctly. It does not,
> > as the command name might suggest, apply only to the
> > default route but to all routes that have null0 as
> > the next hop.
>
> Ah, OK. So only with "ip hw-drop-on-def-route" will the CAM be used to drop
> those packets.
>
> > We are moving away from Foundry in favour of the Cisco
> > 6500 with Sup32s, which really packs a punch, has a
> > lot more features and, on top of that, is cheaper than
> > our old BI4ks. I recommend everyone else do the
> > same.
>
> Ack. But what about the MLX? If Foundry has done its homework, the new
> systems should be much better than the legacy IronCore/JetCore stuff,
> because all the nasty and *obvious* problems have been reported so often
> that I can't believe Foundry hasn't taken note of them (unless they still
> position their gear only in the layer-2 and metro services scope rather
> than in the internetworking area).
>
> --
> Gerald    (ax/tc)
>
> ------------------------------
>
> Message: 8
> Date: Fri, 15 Sep 2006 12:40:40 +0200
> From: Niels Bakker <niels=foundry-nsp at bakker.net>
> Subject: Re: [f-nsp] Looking for throughput infos
> To: foundry-nsp at puck.nether.net
> Message-ID: <20060915104040.GQ16691 at burnout.tpb.net>
> Content-Type: text/plain; charset=us-ascii; format=flowed
>
> * kristian at spritelink.se (Kristian Larsson) [Thu 14 Sep 2006, 21:47 CEST]:
> >>We are moving away from Foundry in favour of the Cisco 6500 with Sup32s,
> >>which really packs a punch, has a lot more features and, on top of that,
> >>is cheaper than our old BI4ks. I recommend everyone else do the same.
>
> The hardware platforms you're comparing have almost a decade of engineering
> between them...
>
>
> * gk at ax.tc (Gerald Krause) [Thu 14 Sep 2006, 22:31 CEST]:
> >Ack. But what about the MLX? If Foundry has done its homework, the new
> >systems should be much better than the legacy IronCore/JetCore stuff,
> >because all the nasty and *obvious* problems have been reported so often
> >that I can't believe Foundry hasn't taken note of them (unless they still
> >position their gear only in the layer-2 and metro services scope rather
> >than in the internetworking area).
>
> Very different indeed.  It's a completely new architecture that does much,
> much better in just about all environments than the old JetCore does.
>
>
>         -- Niels.
>
> --
>
>
> ------------------------------
>
> _______________________________________________
> foundry-nsp mailing list
> foundry-nsp at puck.nether.net
> http://puck.nether.net/mailman/listinfo/foundry-nsp
>
>
> End of foundry-nsp Digest, Vol 44, Issue 5
> ******************************************
>


