[c-nsp] T1 down in all MLPPP Bundles

Eric Knudson ericknudson at gmail.com
Wed Feb 1 19:34:13 EST 2006


Eric,

If LCP is closed and the interfaces are down/down, there's something
other than PPP causing the problem - you wouldn't happen to have some
sort of carrier diversity scheme or something going on that would
drop the 2nd link in all the bundles? Can we get any shows from the
remotes (show run / show interfaces / show ppp multilink)?
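
Something like the following from one of the remote routers would be a
good start (the serial interface name here is just the local-side one
from your configs; substitute the matching member link on the remote):

show running-config
show interfaces Serial2/1/3:1
show controllers t3
show ppp multilink
debug ppp negotiation

If the links really are down/down, debug ppp negotiation probably
won't show anything at all, which would point back at layer 1 rather
than PPP.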

Eric


On 2/1/06, Eric Kagan <ekagan at axsne.com> wrote:
> I did a few searches and found nothing similar, and of course I am in a time
> crunch.
>
> I realized that we have a T1 down in all our MLPPP bundles.  They each have
> 2 T1s (clear-channel PPP), and the 2nd one is down/down on both sides.  The
> carrier can loop all the circuits, and I think it's too much of a coincidence
> for all of them to be circuit issues.  I am embarrassed to say I am not sure
> when this occurred.  We did an IOS upgrade last week from a 12.1 w/encrypt
> IOS to 12.2 Enterprise with MPLS support (but not yet configured for VPNs,
> MPLS interfaces, MTU changes, etc.)
>
> The strange part is that MLPPP is working over the single T1 (info and
> configs below).  Most of the articles I read on MLPPP problems involved
> auth issues and the like, but I am figuring this is different.  These all
> worked previously without issue, and no other config changes have been made.
> I have tried the obvious: reboots, shut / no shut, etc.  Any info / pointers
> are appreciated.  Some detail follows below.
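>
> (For reference, the shut / no shut was just the usual sequence on the
> dead member links, e.g.:)
>
> configure terminal
>  interface Serial2/1/3:1
>   shutdown
>   no shutdown
> end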
>
> Thanks
> Eric
>
>
> IOS (tm) 7200 Software (C7200-JS-M), Version 12.2(31), RELEASE SOFTWARE
> (fc2)
> System image file is "slot0:c7200-js-mz.122-31.bin"
> cisco 7206VXR (NPE300) processor (revision D) with 229376K/65536K bytes of
> memory.
> PCI bus mb0_mb1 (Slots 0, 1, 3 and 5) has a capacity of 600 bandwidth
> points.
> Current configuration on bus mb0_mb1 has a total of 380 bandwidth points.
> This configuration is within the PCI bus capacity and is supported.
>
> PCI bus mb2 (Slots 2, 4, 6) has a capacity of 600 bandwidth points.
> Current configuration on bus mb2 has a total of 470 bandwidth points
> This configuration is within the PCI bus capacity and is supported.
>
>
>
> Multilink40, bundle name is A
>   Bundle up for 02:43:26
>   0 lost fragments, 0 reordered, 0 unassigned
>   0 discarded, 0 lost received, 1/255 load
>   0x0 received sequence, 0x0 sent sequence
>   Member links: 1 active, 1 inactive (max not set, min not set)
>     Se3/0/5:1, since 02:43:26, no frags rcvd
>     Se2/1/1:1 (inactive)
>
> Multilink20, bundle name is B
>   Bundle up for 5d12h
>   0 lost fragments, 0 reordered, 0 unassigned
>   0 discarded, 0 lost received, 1/255 load
>   0x0 received sequence, 0x0 sent sequence
>   Member links: 1 active, 1 inactive (max not set, min not set)
>     Se3/0/12:1, since 5d12h, no frags rcvd
>     Se2/0/13:1 (inactive)
>
> Multilink10, bundle name is C
>   Bundle up for 2w4d
>   0 lost fragments, 0 reordered, 0 unassigned
>   0 discarded, 0 lost received, 1/255 load
>   0xB771ED received sequence, 0x97AD1D sent sequence
>   Member links: 1 active, 1 inactive (max not set, min not set)
>     Se2/1/19:1, since 2w4d, last rcvd seq B771EC
>     Se2/1/3:1 (inactive)
>
>
> interface Multilink10
>  description MLPPP to C
>  ip unnumbered FastEthernet4/0
>  ip access-group 161 in
>  ip route-cache flow
>  ip policy route-map private-net
>  no cdp enable
>  ppp multilink
>  multilink-group 10
> !
> interface Multilink20
>  description connected to B
>  ip address 10.10.89.5 255.255.255.252
>  ip route-cache flow
>  no cdp enable
>  ppp multilink
>  multilink-group 20
> !
> interface Multilink40
>  description connected to A
>  bandwidth 3000
>  ip unnumbered FastEthernet4/0
>  ip access-group 131 in
>  ip access-group 130 out
>  ip route-cache flow
>  no cdp enable
>  ppp multilink
>  multilink-group 40
> !
>
> interface Serial2/1/3:1
>  description connected to C#1
>  bandwidth 1544
>  no ip address
>  encapsulation ppp
>  ip route-cache flow
>  ip mroute-cache
>  no fair-queue
>  ppp multilink
>  multilink-group 10
>
> interface Serial2/1/19:1
>  description connected to C#2
>  bandwidth 1544
>  no ip address
>  encapsulation ppp
>  ip route-cache flow
>  ip mroute-cache
>  no fair-queue
>  ppp multilink
>  multilink-group 10
> end
>
> Serial2/1/3:1 is down, line protocol is down
>   Hardware is PA-MC-2T3+
>   Description: connected to C#1
>   MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
>      reliability 255/255, txload 1/255, rxload 1/255
>   Encapsulation PPP, crc 16, loopback not set
>   Keepalive set (10 sec)
>   LCP Closed, multilink Closed                 <---------- ???
>   Closed: CDPCP
>   Last input 4d18h, output 4d18h, output hang never
>   Last clearing of "show interface" counters 3d20h
>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
>   Queueing strategy: fifo
>   Output queue: 0/40 (size/max)
>   5 minute input rate 0 bits/sec, 0 packets/sec
>   5 minute output rate 0 bits/sec, 0 packets/sec
>      0 packets input, 0 bytes, 0 no buffer
>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>      0 packets output, 0 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 output buffer failures, 0 output buffers swapped out
>      0 carrier transitions alarm present
>   Timeslot(s) Used: 1-24, subrate: 1536Kb/s, transmit delay is 0 flags
> non-inverted data
>
>
> Serial2/1/19:1 is up, line protocol is up
>   Hardware is PA-MC-2T3+
>   Description: connected to C#2
>   MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
>      reliability 255/255, txload 36/255, rxload 25/255
>   Encapsulation PPP, crc 16, loopback not set
>   Keepalive set (10 sec)
>   LCP Open, multilink Open
>   Last input 00:00:00, output 00:00:00, output hang never
>   Last clearing of "show interface" counters 3d20h
>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
>   Queueing strategy: fifo
>   Output queue: 0/40 (size/max)
>   5 minute input rate 156000 bits/sec, 50 packets/sec
>   5 minute output rate 221000 bits/sec, 51 packets/sec
>      9653307 packets input, 2951415593 bytes, 0 no buffer
>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>      6 input errors, 2 CRC, 4 frame, 0 overrun, 0 ignored, 0 abort
>      8641757 packets output, 3867772573 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 output buffer failures, 0 output buffers swapped out
>      0 carrier transitions no alarm present
>   Timeslot(s) Used: 1-24, subrate: 1536Kb/s, transmit delay is 0 flags
> non-inverted data
>
> Multilink10 is up, line protocol is up
>   Hardware is multilink group interface
>   Description: MLPPP to C
>   Interface is unnumbered. Using address of FastEthernet4/0 (192.168.0.2)
>   MTU 1500 bytes, BW 1544 Kbit, DLY 100000 usec,
>      reliability 255/255, txload 36/255, rxload 23/255
>   Encapsulation PPP, loopback not set
>   Keepalive set (10 sec)
>   DTR is pulsed for 2 seconds on reset
>   LCP Open, multilink Open
>   Open: IPCP
>   Last input 00:08:16, output never, output hang never
>   Last clearing of "show interface" counters 3d20h
>   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 19613
>   Queueing strategy: fifo
>   Output queue: 0/40 (size/max)
>   5 minute input rate 145000 bits/sec, 50 packets/sec
>   5 minute output rate 221000 bits/sec, 51 packets/sec
>      9649179 packets input, 2951879356 bytes, 0 no buffer
>      Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
>      8637782 packets output, 3851943146 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 output buffer failures, 0 output buffers swapped out
>      0 carrier transitions
>
>
>
> _______________________________________________
> cisco-nsp mailing list  cisco-nsp at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
>


