[c-nsp] Re: RE: Re: RE: Monitoring 6K performance (pps)

jstuxuhu0816 jstuxuhu0816 at gmail.com
Mon May 14 21:40:33 EDT 2012


Yes, you are right: through 'show mls statistics' we can check the L3 forwarding rate, which is what Aaron wants.
You can also combine it with 'show platform hardware capacity' to get much more information.

Best Regards,
Hu Xu
From: Mack McBride
Sent: 2012-05-15 00:45
To: jstuxuhu0816; Aaron Riemer; 'Kyle Duren'
Cc: cisco-nsp
Subject: RE: [c-nsp] Re: RE: Monitoring 6K performance (pps)
You would need to capture this for each DFC/PFC.
The equivalent command line is 'show mls statistics'
The command output includes a count of total L3 packets processed, but that does not include Layer 2 switched packets.
I would have to research OIDs for the equivalent of that command.
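Lacking a known OID, one fallback is to poll the CLI output itself and difference the counter between two samples. A minimal sketch; the exact counter line in 'show mls statistics' varies by IOS version, so the regex here is an assumption to adjust against your real output:

```python
# Sketch: estimate L3 pps from two samples of the total-packets counter in
# 'show mls statistics' output, taken a known interval apart.
# ASSUMPTION: the counter line looks like "Total packets switched : N";
# adapt the regex to your platform's actual output.
import re

def l3_packets(show_mls_output: str) -> int:
    """Extract the total L3 packets counter from the command output."""
    m = re.search(r"Total packets switched\s*:\s*(\d+)", show_mls_output)
    if m is None:
        raise ValueError("counter line not found")
    return int(m.group(1))

def pps(sample1: str, sample2: str, interval_s: float) -> float:
    """Average packets/sec between two polls taken interval_s seconds apart."""
    return (l3_packets(sample2) - l3_packets(sample1)) / interval_s

# Example with fabricated counter values, polled 10 seconds apart:
t0 = "Total packets switched : 1000000"
t1 = "Total packets switched : 1500000"
# pps(t0, t1, 10.0) -> 50000.0
```

As Mack notes, this would need to be captured per DFC/PFC and summed for a box-wide figure.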

Mack

-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of jstuxuhu0816
Sent: Monday, May 14, 2012 7:02 AM
To: Aaron Riemer; 'Kyle Duren'
Cc: cisco-nsp
Subject: [c-nsp] Re: RE: Monitoring 6K performance (pps)

I asked because your original email said you were "looking to obtain raw packets per second (pps) that are actually processed by the switch."
Can you actually get that result through this command?

Best Regards,
Hu Xu
From: Aaron Riemer
Sent: 2012-05-14 20:43
To: 'Xu, Hu'; 'Kyle Duren'
Cc: 'cisco-nsp'
Subject: RE: Re: [c-nsp] Monitoring 6K performance (pps)

What do you mean you don't see any useful result?
I am monitoring the data you see in your show command via SNMP and graphing this in Cacti.
Cheers,
Aaron.

From: jstuxuhu0816 [mailto:jstuxuhu0816 at gmail.com]
Sent: Monday, 14 May 2012 7:18 PM
To: Aaron Riemer; 'Kyle Duren'
Cc: cisco-nsp
Subject: Re: Re: [c-nsp] Monitoring 6K performance (pps)

Hi Aaron,
I just monitored the fabric utilization with the command "show fabric utilization detail", but I don't see any useful result for your case; see below:
Router#show fabric utilization detail
  Fabric utilization:     Ingress                    Egress
    Module  Chanl  Speed  rate  peak                 rate  peak               
    1       0        20G    1%    0%                   1%    0%               
    1       1        20G    1%    0%                   0%    0%               
    4       0         8G    0%    0%                   0%    0%               
    5       0        20G    1%    0%                   2%    0%               
    6       0        20G    0%    0%                   0%    0%               
    7       0         8G    0%    0%                   0%    0%               
    8       0        20G    0%    0%                   1%    0%   

I don't understand how you can get the result from this output; let me know if you make any progress on this issue.
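For what it's worth, the percentages in the table above can at best be turned into approximate throughput per fabric channel (utilization percent of channel speed), which is bits/sec, not pps. A trivial sketch of that arithmetic:

```python
# Sketch: approximate throughput for one fabric channel from the
# 'show fabric utilization detail' table (utilization % of channel speed).
# This yields Gbps, not packets/sec, which is why this output does not
# answer the original pps question.

def channel_gbps(speed_gbps: float, util_percent: float) -> float:
    """Approximate throughput for one fabric channel, in Gbps."""
    return speed_gbps * util_percent / 100.0

# e.g. module 1, channel 0 above: 1% ingress on a 20G channel
# channel_gbps(20, 1) -> 0.2
```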



Thanks and Regards,
Hu Xu

From: Aaron Riemer
Date: 2012-05-13 15:29
To: 'Kyle Duren'
CC: cisco-nsp
Subject: Re: [c-nsp] Monitoring 6K performance (pps)

Hi Kyle,



I have had a think about this a little more. It is probably more worthwhile monitoring the utilisation of the fabric on all blades rather than counting up packets per second per interface. I understand that I would lose visibility of any local switching going on (i.e. traffic not traversing the switch fabric).



Please see my other post. Any comments welcome :)



Cheers,



Aaron.



From: Kyle Duren [mailto:pixitha.kyle at gmail.com]
Sent: Sunday, 13 May 2012 3:12 PM
To: Aaron Riemer
Cc: cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] Monitoring 6K performance (pps)



You can use SNMP to collect packets/sec as well; Cacti can make nice graphs for both Mb/sec and packets/sec.
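Tools like Cacti do this by polling the 64-bit IF-MIB packet counters (e.g. ifHCInUcastPkts, .1.3.6.1.2.1.31.1.1.1.7) and differencing successive samples. How you fetch the counters (net-snmp, pysnmp, an NMS) is up to you; this sketch only shows the delta arithmetic, including Counter64 wrap:

```python
# Sketch: packets/sec from two samples of a 64-bit SNMP counter such as
# IF-MIB ifHCInUcastPkts, taken a known interval apart. Fetching the
# counter values is left to your poller of choice.

COUNTER64_MOD = 2 ** 64  # Counter64 wraps modulo 2^64

def counter_rate(prev: int, curr: int, interval_s: float) -> float:
    """Packets/sec between two Counter64 readings interval_s seconds apart."""
    delta = (curr - prev) % COUNTER64_MOD  # correct across a single wrap
    return delta / interval_s
```

A box-wide pps figure would then be the sum of these per-interface rates, with the caveat Aaron raises below about double-counting versus locally switched traffic.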



-Kyle

On Sat, May 12, 2012 at 11:19 PM, Aaron Riemer <ariemer at amnet.net.au> wrote:

Hey guys,



We are looking at upgrading our CAT6K SUP's and I am trying to figure out how I can monitor the current throughput.



We currently monitor the interface utilisation (bits / sec) with SNMP. That is all well and good but I am looking to obtain raw packets per second (pps) that are actually processed by the switch. Obviously bits / sec are not the same as packets / sec.



Is there any real way to go about this other than monitoring each interface and calculating a total for a given time period?



Ideas?



Cheers,



Aaron.

_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/




