Hi, as the others mentioned, CAM = FIB, and it holds 1M v4 routes per 'tower' on the line card. There are two towers on most of the 10G modules and a single tower on the 1G modules. Each tower serves 2 or 4 ports (on the 4- and 8-port modules, respectively). That's for the -X, or XMR, modules; the -M, or MLX, modules hold half that. Once you start dividing up the CAM for v6, the math changes quickly, since each v6 entry takes up four times as much space as a v4 entry.<div>
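As a back-of-the-envelope sketch of that CAM math (the 1M-per-tower and 4:1 v6/v4 figures are from above; the 80/20 partition split is just a made-up example, not an actual CAM profile):

```python
# Rough FIB capacity per CAM "tower": 1M v4 entries per tower on -X (XMR)
# modules, half that on -M (MLX), with each v6 entry costing 4 v4 slots.
# The v6_fraction split is a hypothetical illustration, not a real profile.

def fib_capacity(v4_slots=1_000_000, v6_fraction=0.2):
    """Return (v4_routes, v6_routes) for a tower split between v4 and v6."""
    v4_part = int(v4_slots * (1 - v6_fraction))
    v6_part = int(v4_slots * v6_fraction) // 4  # one v6 entry = 4 v4 slots
    return v4_part, v6_part

print(fib_capacity())                  # XMR tower, 80/20 split
print(fib_capacity(v4_slots=500_000))  # MLX tower, half the CAM
```

You can see why carving out even 20% of the CAM for v6 buys comparatively few v6 routes.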
<br></div><div>For the RIB, the MR is capable of 10M routes with its 2GB of SDRAM. The MR2 has twice as much memory and will support up to 15M routes. The RIB handles v4 and v6 routes differently than the CAM does, so it does not take the same capacity hit when you add v6 routes; both families are handled much the same, with similar numbers.</div>
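Purely as a sanity check on those RIB numbers (assuming the stated 2GB/4GB RAM sizes and ignoring OS and table overhead, which make real usable memory lower), the implied per-route memory budget works out to a plausible couple hundred bytes:

```python
# Per-route RIB memory budget implied by the figures above.
# Assumes 2 GB SDRAM on the MR (10M routes) and 4 GB on the MR2 (15M);
# real headroom is smaller once the OS and other tables take their share.

GIB = 1024 ** 3

def bytes_per_route(ram_gib, routes):
    return ram_gib * GIB / routes

print(round(bytes_per_route(2, 10_000_000)))  # MR
print(round(bytes_per_route(4, 15_000_000)))  # MR2
```

Either way, 3.5M RIB entries from 8 full-table sessions fits with plenty of room to spare.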
<div><br></div><div>Hope that helps.</div><div><br></div><div>Mike<br><br><div class="gmail_quote">On Mon, Feb 11, 2013 at 2:31 PM, Alex HM <span dir="ltr"><<a href="mailto:alex.hm.list@gmail.com" target="_blank">alex.hm.list@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><font face="trebuchet ms, sans-serif">Thanks Jay, Niels.</font></div><div><font face="trebuchet ms, sans-serif"><br>
</font></div><div><font face="trebuchet ms, sans-serif">I can’t find any figures on the capabilities of the MLXe with the BR-MLX-MR2-M management card regarding RIB and/or FIB, hence my concern. </font><span style="font-family:'trebuchet ms',sans-serif">Attached is a table extracted from a CER presentation. I have no clue how to read these figures, as they are raw numbers; the MLX datasheet mentions 10M BGP routes, but it also says "depending on the configuration", so I don’t take that one for granted.</span></div>
<div><font face="trebuchet ms, sans-serif"><br></font></div><div><font face="trebuchet ms, sans-serif">Although it's not my primary question, I'd be very keen on aggregating as many prefixes as possible, especially the polluting /24s, since most of the small networks that de-aggregate do so for the sole purpose of steering traffic between their commits for $ reasons. AFAIK it takes hundreds of hand-written statements and RIR record processing to achieve that. If somebody has a hint for that too, I am a buyer :-)<span class="HOEnZb"><font color="#888888"><br>
</font></span></font></div><span class="HOEnZb"><font color="#888888"><div><font face="trebuchet ms, sans-serif"><br></font></div><div><font face="trebuchet ms, sans-serif">--- Alex</font></div></font></span></div><div class="HOEnZb">
<div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">2013/2/11 Niels Bakker <span dir="ltr"><<a href="mailto:niels=foundry-nsp@bakker.net" target="_blank">niels=foundry-nsp@bakker.net</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">* <a href="mailto:alex.hm.list@gmail.com" target="_blank">alex.hm.list@gmail.com</a> (Alex HM) [Mon 11 Feb 2013, 22:51 CET]:<div>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I am considering running 8 eBGP sessions with full routes on each (approx. 3'500'000 prefixes expected). As far as I could figure out, the CAM limit depends on the CAM profile. The XMR seems to support 1M IPv4 prefixes in ipv4 profile mode, and 512K on the MLX. These figures are according to the "CAM partition profiles" section of the MLX Configuration Manual.<br>
</blockquote>
<br></div>
Prefixes from your 8 eBGP sessions will form your RIB. The best routes are then placed into the FIB. That's where you may hit CAM partitioning limits.<br>
<br>
I'm not quite sure what the limit on RIB size is, but 3.5M will fit just fine memory-wise in an XMR Management Module.<span><font color="#888888"><br>
<br>
<br>
-- Niels.<br>
<br>
-- <br></font></span><div><div>
_______________________________________________<br>
foundry-nsp mailing list<br>
<a href="mailto:foundry-nsp@puck.nether.net" target="_blank">foundry-nsp@puck.nether.net</a><br>
<a href="http://puck.nether.net/mailman/listinfo/foundry-nsp" target="_blank">http://puck.nether.net/mailman/listinfo/foundry-nsp</a><br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>