[cisco-bba] LNS with 7200 with NPE-G1

Tassos Chatzithomaoglou achatz at forthnet.gr
Wed Oct 19 06:12:11 EDT 2005


~3000 sessions, 70% CPU.

GigabitEthernet0/1 is up, line protocol is up
   30 second input rate 121057000 bits/sec, 21379 packets/sec
   30 second output rate 52865000 bits/sec, 19767 packets/sec
GigabitEthernet0/2 is up, line protocol is up
   30 second input rate 54558000 bits/sec, 19618 packets/sec
   30 second output rate 124335000 bits/sec, 21226 packets/sec
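
A rough aggregation of the 30-second rates above, as a back-of-the-envelope Python sketch (the numbers are copied by hand from the interface output; assuming traffic mostly enters one GigE and leaves the other, the input total roughly matches the output total and approximates the traffic actually carried):

# 30-second rates copied from the interface output above
rates = {
    "Gi0/1": {"in_bps": 121_057_000, "out_bps": 52_865_000, "in_pps": 21_379, "out_pps": 19_767},
    "Gi0/2": {"in_bps": 54_558_000, "out_bps": 124_335_000, "in_pps": 19_618, "out_pps": 21_226},
}
sessions = 3000

total_in = sum(r["in_bps"] for r in rates.values())     # ~175.6 Mbit/s entering the box
total_out = sum(r["out_bps"] for r in rates.values())   # ~177.2 Mbit/s leaving the box
total_pps = sum(r["in_pps"] + r["out_pps"] for r in rates.values())  # ~82k packets/sec

print(f"forwarded traffic: ~{total_in / 1e6:.0f} Mbit/s at 70% CPU")
print(f"average per session: ~{total_in / sessions / 1e3:.0f} kbit/s over {sessions} sessions")
print(f"aggregate packet rate: ~{total_pps / 1e3:.0f} kpps")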


Ash Garg wrote on 19/10/2005 3:02 AM:
> Tassos, what traffic levels are you doing? 
> 
> We have a number of 7200 G1s running SSG software in LNS mode with 3000 sessions and 105 Mbit/s of traffic. The CPU peaks at 88%. At times we have seen 95%+ CPU with 4400 sessions and 150 Mbit/s of traffic.
> 
> However, the majority of our CPU load is due to regular SNMP polling of the device, interface and VPDN MIBs.
> 
> 
> Ash
> 
> 
> 
> -----Original Message-----
> From: cisco-bba-bounces at puck.nether.net
> [mailto:cisco-bba-bounces at puck.nether.net]On Behalf Of Tassos
> Chatzithomaoglou
> Sent: Wednesday, 19 October 2005 6:57 AM
> To: Christian Schmit
> Cc: cisco-bba at puck.nether.net
> Subject: Re: [cisco-bba] LNS with 7200 with NPE-G1
> 
> 
> We are also using 7200s as LNS and they max out the CPU (95%) at around 3000 L2TP sessions when
> the tunnels come in through an ATM interface, and around 3500 L2TP sessions when they come in
> through a GE interface.
> 
> We are now trying a 10k, but we already see 25% CPU at 3000 sessions, so 12000 sessions (1/5 of
> what is advertised) will probably max out its capacity. The 10k also can't do a lot of things the
> 7200 does (because of PXF), which is another drawback.
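
The linear extrapolation behind that estimate, as a minimal sketch (it assumes CPU load scales linearly with session count, which is only a rough approximation once the box gets busy):

# 10k figures quoted above: 25% CPU at 3000 sessions, assuming linear scaling
sessions, cpu_pct = 3000, 25.0

cpu_per_1000 = cpu_pct / (sessions / 1000)   # ~8.3% CPU per 1000 sessions
ceiling = sessions * 100.0 / cpu_pct         # ~12000 sessions at 100% CPU

print(f"~{cpu_per_1000:.1f}% CPU per 1000 sessions")
print(f"projected ceiling: ~{ceiling:.0f} sessions")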
> 
> We are also going to take a look at Juniper's ERX series over the next few months and see how it compares.
> 
> Christian Schmit wrote on 18/10/2005 9:20 PM:
> 
> 
>>We are currently running a test setup using a 7200/G1 device as LNS. The telco operates
>>Juniper ERX devices as LACs.
>>
>>Everything is working as expected, but the CPU load on the G1 is quite high. With around
>>200 PPP sessions on the LNS the CPU load is already at 11%. In other words, around 2000
>>users would put the box at 100% CPU, which is very far from the advertised 16,000
>>broadband sessions for the G1.
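
For reference, the per-session CPU cost implied by those figures, compared with what the advertised capacity would require (a minimal Python sketch assuming linear scaling, so treat it as a sanity check only):

# Figures quoted above: 200 PPP sessions -> 11% CPU; advertised capacity 16000 sessions
sessions, cpu_pct, advertised = 200, 11.0, 16000

observed = cpu_pct / sessions        # ~0.055% CPU per session observed
required = 100.0 / advertised        # ~0.006% CPU per session needed to reach 16000
ceiling = 100.0 / observed           # ~1800 sessions at 100% CPU

print(f"observed cost: {observed:.3f}% CPU per session")
print(f"cost needed for {advertised} sessions: {required:.4f}% CPU per session")
print(f"projected ceiling: ~{ceiling:.0f} sessions")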
>>
>>Running IP-Plus 12.3(16).
>>
>>Do I have a CPU killer in my config?
>>
>>Christian
>>
>>
>>My config:
>>-----------
>>version 12.3
>>service timestamps debug datetime msec
>>service timestamps log datetime msec
>>service password-encryption
>>no service dhcp
>>!
>>hostname LNS
>>!
>>boot-start-marker
>>boot-end-marker
>>!
>>enable password xxxxxxxxxxxxxxxxxxxxxxxxxx
>>!
>>clock timezone GMT 1
>>clock summer-time MET recurring last Sun Mar 3:00 last Sun Oct 3:00
>>aaa new-model
>>!
>>!
>>aaa authentication login default enable
>>aaa authentication ppp default group radius
>>aaa authorization network default group radius
>>aaa accounting delay-start
>>aaa accounting update periodic 240
>>aaa accounting network default start-stop group radius
>>aaa session-id common
>>ip subnet-zero
>>no ip source-route
>>!
>>!
>>ip cef
>>no ip domain lookup
>>ip name-server xxxxxxxxxxxx
>>ip name-server xxxxxxxxxxxx
>>!
>>vpdn enable
>>vpdn ip udp ignore checksum
>>!
>>vpdn-group 1
>> accept-dialin
>>  protocol l2tp
>>  virtual-template 1
>> terminate-from hostname LAC
>> lcp renegotiation on-mismatch
>> l2tp tunnel password 7 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>!
>>interface Loopback0
>> ip address xxxxxxxxxxxxxxxxxxxxx
>>!
>>interface Loopback1
>> ip address xxxxxxxxxxxxxxxxxxxxx
>>!
>>interface GigabitEthernet0/1
>> description Connection to Vlan 13
>> ip address xxxxxxxxxxxxxxxxxxxx
>> ip ospf message-digest-key 10 md5 7 xxxxxxxxxxxxxxxxxx
>> duplex full
>> speed 1000
>> media-type rj45
>> no negotiation auto
>>!
>>interface GigabitEthernet0/2
>> no ip address
>> shutdown
>> duplex auto
>> speed auto
>> media-type rj45
>> negotiation auto
>>!
>>interface GigabitEthernet0/3
>> no ip address
>> shutdown
>> duplex auto
>> speed auto
>> media-type rj45
>> negotiation auto
>>!
>>interface Virtual-Template1
>> ip unnumbered Loopback1
>> ip tcp adjust-mss 1420
>> ip mroute-cache
>> peer default ip address pool VODSL
>> ppp mtu adaptive
>> ppp authentication pap chap
>>!
>>router ospf 101
>> log-adjacency-changes
>> area 0 authentication message-digest
>> summary-address xxxxxxxxxxxxxxxxxxxx
>> summary-address xxxxxxxxxxxxxxxxxxxxx
>> redistribute connected subnets
>> redistribute static subnets
>> passive-interface Virtual-Template1
>> network xxxxxxxxxxxxxxxxxxx area 0
>> network xxxxxxxxxxxxxxxxxxx area 0
>>!
>>ip local pool VODSL xxxxxxxxxxxxxxxxxxxx
>>ip local pool VODSL xxxxxxxxxxxxxxxxxxxx
>>ip classless
>>ip route 0.0.0.0 0.0.0.0 xxxxxxxxxxxxxxxxx
>>ip route xxxxxxxxxxxxxxxxxxxxxxxxx Loopback0 10
>>ip route xxxxxxxxxxxxxxxxxxxxxxxxx Loopback0 10
>>ip route xxxxxxxxxxxxxxxxxxxxxxxxx Loopback0 10
>>no ip http server
>>!
>>!
>>access-list 1 permit xxxxxxxxxxxxxxxxx
>>access-list 1 deny   any
>>access-list 50 permit xxxxxxxxxxxxxxxx
>>access-list 50 deny   any
>>no cdp run
>>!
>>snmp-server community xxxxxxxxxxxxxxxxx RW 1
>>!
>>radius-server attribute nas-port format d
>>radius-server host xxxxxxxxxx auth-port 1645 acct-port 1646 key 7 xxxxx
>>
>>radius-server domain-stripping
>>radius-server unique-ident 3
>>radius-server vsa send accounting
>>!
>>!
>>gatekeeper
>> shutdown
>>!
>>line con 0
>> stopbits 1
>>line aux 0
>> stopbits 1
>>line vty 0 4
>> access-class 50 in
>>!
>>ntp clock-period 17180061
>>ntp server xxxxxxxxxxxx
>>ntp server xxxxxxxxxxxx
>>!
>>end
