Re: policy routing performance

From: Hank Nussbacher (hank@att.net.il)
Date: Tue Jan 11 2000 - 01:55:49 EST


At 21:39 10/01/00 -0600, Edward Henigin wrote:

The following benchmark and the extracted emails were put together by a
colleague here in Israel during the summer:

>I have some preliminary figures for policy-based routing and tunneling
>running at the same time on a 7505/RSP4 with VIP2-50 cards. The setup is
>such that everything received on a specific interface is policy-routed
>into a tunnel (see router setup below). With this in place, the router
>was able to process about 3500 large (1400-byte) packets per second in
>each direction:
>
>chicago-gp3#sh int tun1 | inc bits
> 5 minute input rate 41985000 bits/sec, 3643 packets/sec
> 5 minute output rate 38692000 bits/sec, 3369 packets/sec
>chicago-gp3#sh int ser4/0/0 | inc bits
> 5 minute input rate 40299000 bits/sec, 3530 packets/sec
> 5 minute output rate 37748000 bits/sec, 3527 packets/sec
>
>At the same time, CPU utilization was:
>
>chicago-gp3#sh proc cpu | inc util
>CPU utilization for five seconds: 24%/22%; one minute: 23%; five minutes: 22%
>
>and:
>
>chicago-gp3#sh controllers vip 4 tech-support | inc util
>CPU utilization for five seconds: 17%/17%; one minute: 15%; five minutes: 15%
>
>With tiny (40-byte) packets, utilization was:
>
>chicago-gp3#sh int tun1 | inc bits/sec
> 5 minute input rate 9476000 bits/sec, 6708 packets/sec
> 5 minute output rate 8580000 bits/sec, 6232 packets/sec
>chicago-gp3#sh int ser4/0/0 | inc bits/sec
> 5 minute input rate 8809000 bits/sec, 6691 packets/sec
> 5 minute output rate 9078000 bits/sec, 6691 packets/sec
>chicago-gp3#sh proc cpu | inc util
>CPU utilization for five seconds: 40%/37%; one minute: 38%; five minutes: 36%
>chicago-gp3#sh controllers vip 4 tech | inc util
>CPU utilization for five seconds: 22%/22%; one minute: 20%; five minutes: 19%
>
>
>FYI.
>
>Oded.
>
>-------------Router Setup-------------
>interface Tunnel1
> ip address 192.114.99.129 255.255.255.240
> no ip directed-broadcast
> ip route-cache policy
> ip route-cache flow
> tunnel source FastEthernet1/0/0
> tunnel destination 192.114.99.49
>
>interface FastEthernet1/0/0
> ip address 192.114.101.49 255.255.255.240
> no ip directed-broadcast
> ip route-cache policy
> ip route-cache flow
> ip route-cache distributed
> ip policy route-map to-tunnel1
>
>route-map to-tunnel1 permit 10
> set interface Tunnel1
>
>
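As a quick sanity check of the numbers above (this calculation is mine, not part of Oded's post), the reported bits/sec and packets/sec rates can be converted into an implied average on-wire packet size, sketched here in Python:

```python
# Rough cross-check, not from the original thread: relate the reported
# bits/sec and packets/sec figures to an implied average packet size.
# A GRE tunnel adds roughly 24 bytes per packet (20-byte outer IP
# header + 4-byte GRE header) on top of the inner packet.

def avg_packet_bytes(bits_per_sec: float, packets_per_sec: float) -> float:
    """Implied average packet size, in bytes."""
    return bits_per_sec / 8 / packets_per_sec

# Tunnel1 input figures from the large-packet test above:
print(round(avg_packet_bytes(41_985_000, 3643)))  # prints 1441
# i.e. roughly the 1400-byte test packets plus encapsulation overhead
```

The large-packet figures are self-consistent; the small-packet rates imply a larger average size than 40 bytes plus overhead, which may reflect the 5-minute exponential averaging of the interface counters.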

> Is anyone doing policy routing on backbone interfaces? That
>is, on an Internet traffic mix running tens to hundreds of
>megabits?
>
> I'm considering doing this for a specific application.
>I'm concerned about the potential performance hit (even with ip
>route-cache policy), and it would be nice to have flow stats
>available, but I guess I can live without them.
>
> Ed
>
>



This archive was generated by hypermail 2b29 : Sun Aug 04 2002 - 04:12:08 EDT