[cisco-bba] LNS with 7200 with NPE-G1

Christian Schmit cschmit at vo.lu
Tue Oct 18 15:12:31 EDT 2005


Current IP traffic is as follows:

5 minute input rate 29094000 bits/sec, 6718 packets/sec
5 minute output rate 29254000 bits/sec, 6702 packets/sec

I noticed that in and out traffic are nearly the same,
which is not what I expected for ADSL connections, where
the max upstream is 192 kbit/s and the max downstream is 3 Mbit/s.

Regarding the CPU, the "L2X Data Daemon" process uses the most. No other
process showed any significant CPU usage.

sh proc cpu:
------------
CPU utilization for five seconds: 13%/9%; one minute: 12%; five minutes: 12%
.
.
14  4.55%  3.93%  3.96%   0 L2X Data Daemon
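
One way to narrow down where those cycles go is to check whether the
traffic is actually CEF-switched or being punted to process switching.
These are standard IOS show commands (interface name taken from my
config below):

sh interfaces GigabitEthernet0/1 stats
  ("Processor" = process-switched packets, "Route cache" = fast/CEF-switched)
sh ip interface GigabitEthernet0/1 | include switching
  (confirms fast switching and CEF are enabled on the interface)
sh cef not-cef-switched
  (counts packets punted out of the CEF path, with punt reasons)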


Regarding the L2TP setup, I terminate the 4 LAC devices
from the telco in "vpdn-group 1". Would creating a
separate vpdn-group for each LAC be of any benefit?
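
What I had in mind is roughly the following (group names and hostnames
are placeholders; each LAC would have to present a distinct L2TP tunnel
hostname so that "terminate-from" can tell them apart):

vpdn-group LAC1
 accept-dialin
  protocol l2tp
  virtual-template 1
 terminate-from hostname LAC1
 lcp renegotiation on-mismatch
 l2tp tunnel password 7 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
!
vpdn-group LAC2
 accept-dialin
  protocol l2tp
  virtual-template 1
 terminate-from hostname LAC2
 lcp renegotiation on-mismatch
 l2tp tunnel password 7 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
!

As far as I can tell this would mainly give per-LAC settings and
counters rather than any CPU relief.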

Currently I have:

7206VXR#sh vpdn tunnel

L2TP Tunnel Information Total tunnels 4 sessions 207
LocID RemID Remote Name   State  Remote Address  Port  Sessions VPDN Group
2999  12    LAC           est    xxxxxxxxxxxxx   1701  21       1
55524 5516  LAC           est    xxxxxxxxxxxxx   1701  78       1
29502 14    LAC           est    xxxxxxxxxxxxx   1701  53       1
18896 82    LAC           est    xxxxxxxxxxxxx   1701  55       1
%No active L2F tunnels
%No active PPTP tunnels

 
Is there an IOS image supporting MPF that can be recommended in
a production environment for an LNS?

Christian

 
DP> Your config is very basic, I don't see anything that would cause
DP> process switching or something detrimental to the CPU. How much
DP> traffic, in aggregate, are these 200 users pushing (bps and pps)? 16k
DP> sessions is a control-plane limitation, but if you have broadband
DP> traffic, you'll hit the data-plane limit much faster (16k is really
DP> for narrowband). MPF can greatly help improve data-plane performance.

DP> Dennis

DP> Christian Schmit [cschmit at vo.lu] wrote:
>> 
>> We are currently running a test setup using a 7200/G1
>> device as LNS. The telco operates Juniper ERX devices
>> as LACs.
>> 
>> Everything is working as expected, but the CPU load
>> on the G1 is quite high. With around 200 PPP sessions
>> on the LNS, the CPU load is already at 11%. Extrapolated
>> linearly, that would mean around 2000 users put the box
>> at 100% CPU, which is very far from the advertised
>> 16 000 broadband sessions for the G1.
>> 
>> Running IP-Plus 12.3(16).
>> 
>> Do I have a CPU killer in my config?
>> 
>> Christian
>> 
>> 
>> My config:
>> -----------
>> version 12.3
>> service timestamps debug datetime msec
>> service timestamps log datetime msec
>> service password-encryption
>> no service dhcp
>> !
>> hostname LNS
>> !
>> boot-start-marker
>> boot-end-marker
>> !
>> enable password xxxxxxxxxxxxxxxxxxxxxxxxxx
>> !
>> clock timezone GMT 1
>> clock summer-time MET recurring last Sun Mar 3:00 last Sun Oct 3:00
>> aaa new-model
>> !
>> !
>> aaa authentication login default enable
>> aaa authentication ppp default group radius
>> aaa authorization network default group radius
>> aaa accounting delay-start
>> aaa accounting update periodic 240
>> aaa accounting network default start-stop group radius
>> aaa session-id common
>> ip subnet-zero
>> no ip source-route
>> !
>> !
>> ip cef
>> no ip domain lookup
>> ip name-server xxxxxxxxxxxx
>> ip name-server xxxxxxxxxxxx
>> !
>> vpdn enable
>> vpdn ip udp ignore checksum
>> !
>> vpdn-group 1
>>  accept-dialin
>>   protocol l2tp
>>   virtual-template 1
>>  terminate-from hostname LAC
>>  lcp renegotiation on-mismatch
>>  l2tp tunnel password 7 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>> !
>> interface Loopback0
>>  ip address xxxxxxxxxxxxxxxxxxxxx
>> !
>> interface Loopback1
>>  ip address xxxxxxxxxxxxxxxxxxxxx
>> !
>> interface GigabitEthernet0/1
>>  description Connection to Vlan 13
>>  ip address xxxxxxxxxxxxxxxxxxxx
>>  ip ospf message-digest-key 10 md5 7 xxxxxxxxxxxxxxxxxx
>>  duplex full
>>  speed 1000
>>  media-type rj45
>>  no negotiation auto
>> !
>> interface GigabitEthernet0/2
>>  no ip address
>>  shutdown
>>  duplex auto
>>  speed auto
>>  media-type rj45
>>  negotiation auto
>> !
>> interface GigabitEthernet0/3
>>  no ip address
>>  shutdown
>>  duplex auto
>>  speed auto
>>  media-type rj45
>>  negotiation auto
>> !
>> interface Virtual-Template1
>>  ip unnumbered Loopback1
>>  ip tcp adjust-mss 1420
>>  ip mroute-cache
>>  peer default ip address pool VODSL
>>  ppp mtu adaptive
>>  ppp authentication pap chap
>> !
>> router ospf 101
>>  log-adjacency-changes
>>  area 0 authentication message-digest
>>  summary-address xxxxxxxxxxxxxxxxxxxx
>>  summary-address xxxxxxxxxxxxxxxxxxxxx
>>  redistribute connected subnets
>>  redistribute static subnets
>>  passive-interface Virtual-Template1
>>  network xxxxxxxxxxxxxxxxxxx area 0
>>  network xxxxxxxxxxxxxxxxxxx area 0
>> !
>> ip local pool VODSL xxxxxxxxxxxxxxxxxxxx
>> ip local pool VODSL xxxxxxxxxxxxxxxxxxxx
>> ip classless
>> ip route 0.0.0.0 0.0.0.0 xxxxxxxxxxxxxxxxx
>> ip route xxxxxxxxxxxxxxxxxxxxxxxxx Loopback0 10
>> ip route xxxxxxxxxxxxxxxxxxxxxxxxx Loopback0 10
>> ip route xxxxxxxxxxxxxxxxxxxxxxxxx Loopback0 10
>> no ip http server
>> !
>> !
>> access-list 1 permit xxxxxxxxxxxxxxxxx
>> access-list 1 deny   any
>> access-list 50 permit xxxxxxxxxxxxxxxx
>> access-list 50 deny   any
>> no cdp run
>> !
>> snmp-server community xxxxxxxxxxxxxxxxx RW 1
>> !
>> radius-server attribute nas-port format d
>> radius-server host xxxxxxxxxx auth-port 1645 acct-port 1646 key 7 xxxxx
>> 
>> radius-server domain-stripping
>> radius-server unique-ident 3
>> radius-server vsa send accounting
>> !
>> !
>> gatekeeper
>>  shutdown
>> !
>> line con 0
>>  stopbits 1
>> line aux 0
>>  stopbits 1
>> line vty 0 4
>>  access-class 50 in
>> !
>> ntp clock-period 17180061
>> ntp server xxxxxxxxxxxx
>> ntp server xxxxxxxxxxxx
>> !
>> end
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> cisco-bba mailing list
>> cisco-bba at puck.nether.net
>> https://puck.nether.net/mailman/listinfo/cisco-bba




