[c-nsp] T1 Bonding with PA-MC-T3

Darryl Dunkin ddunkin at netos.net
Fri Mar 14 19:36:26 EDT 2008


Rather than burning up IPs on those links just for monitoring, you can
monitor the interface oper state via SNMP. I index the interfaces by
description rather than by ifIndex, since ifIndex values can shift after
a reload unless they are persisted.

Example Nagios configuration from my system:

First monitor the IP of the bundle:
define host{
        use                 generic-host
        host_name           customerA
        address             10.0.0.2
        check_command       check-host-alive
        contact_groups      pager
        }

Then I stack each individual T1 on top of that host (the PPP bundle) as a
service:

define service{
        use                             generic-service
        host_name                       customerA
        service_description             T1-1
        contact_groups                  pager
        check_command                   check_ifdescr!ds3_router!public!Serial4/1/0/1:0
        }
define service{
        use                             generic-service
        host_name                       customerA
        service_description             T1-2
        contact_groups                  pager
        check_command                   check_ifdescr!ds3_router!public!Serial4/1/0/2:0
        }

The command definition is:
define command{
        command_name    check_ifdescr
        command_line    $USER1$/check_snmp_if -H $ARG1$ -C $ARG2$ -i ifdescr -v $ARG3$
}
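
For reference, check_snmp_if is a local plugin here, but the lookup it does
is presumably just standard IF-MIB: walk ifDescr to find the index of the
named interface, then read that index's ifOperStatus. The equivalent by hand
with net-snmp (host and community as in the example) would be:

    snmpwalk -v2c -c public ds3_router IF-MIB::ifDescr
    snmpget -v2c -c public ds3_router IF-MIB::ifOperStatus.<ifIndex>

where <ifIndex> is whatever index the walk returns for Serial4/1/0/1:0.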

-----Original Message-----
From: cisco-nsp-bounces at puck.nether.net
[mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Troy Beisigl
Sent: Friday, March 14, 2008 16:11
To: 'Nick Voth'; cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] T1 Bonding with PA-MC-T3

The only reason that we have IP addresses assigned to the serial interfaces
is that we use them to ping with Nagios to determine if a link goes down.
You would do:

ip route 66.7.184.16 255.255.255.248 10.0.0.2

assuming that your multilink interface IP address on the VXR is 10.0.0.1/30.
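
(That is, on the VXR side the bundle interface itself carries the /30,
something like:

interface Multilink1
 ip address 10.0.0.1 255.255.255.252

with the customer block static-routed at 10.0.0.2 as above.)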

The multilink interface on the CPE would have 10.0.0.2/30, and the serial
interfaces on the CPE would not have any IP addresses, as they are not
reachable. All traffic would go over the multilink interface, and should a
circuit go down, the bundle just loses that bandwidth dynamically.
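
A minimal CPE-side sketch of that (interface names and the multilink group
number are placeholders; addressing as above):

interface Multilink1
 ip address 10.0.0.2 255.255.255.252
 ppp multilink
 multilink-group 1
!
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 multilink-group 1
!
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp multilink
 multilink-group 1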

One thing you want to be aware of when running MLPPP is that should a
circuit not fail outright but take errors, you will see high latency across
the links. It's better to down the link and have the carrier work on the
circuit than to have it cause performance issues for your customer.

Troy Beisigl


-----Original Message-----
From: Nick Voth [mailto:nvoth at estreet.com] 
Sent: Friday, March 14, 2008 3:59 PM
To: Troy Beisigl; cisco-nsp at puck.nether.net
Subject: Re: [c-nsp] T1 Bonding with PA-MC-T3

Troy,

That makes perfect sense! Thanks.

One other question. I see that you have IP addresses assigned to both
serial interfaces as well as the Multilink4 interface. What does the 7206
see as the "real" IP of that interface?

In other words, if we needed to route a block of 8 IPs over that circuit
to the customer's CPE, would the 7206 need this:

   ip route 66.7.184.16 255.255.255.248 Serial5/0/9:0

Or this:

   ip route 66.7.184.16 255.255.255.248 Multilink4

I'm guessing it would be the second case.

Thanks again,

-Nick Voth

> From: Troy Beisigl <troy at i2bnetworks.com>
> Date: Fri, 14 Mar 2008 15:41:32 -0700
> To: 'Nick Voth' <nvoth at estreet.com>, <cisco-nsp at puck.nether.net>
> Subject: RE: [c-nsp] T1 Bonding with PA-MC-T3
> 
> Hi Nick,
> 
> The PA-MC-T3 card works fine for MLPPP in the 7206. We are using them here
> with no problem.
> 
> 
> interface Multilink4
>  description Dual Circuit to TRI-CITY MC
>  ip address 172.20.1.69 255.255.255.252
>  ip nat inside
>  no cdp enable
>  ppp multilink
>  multilink max-links 2
>  multilink min-links 1
>  no ppp multilink fragmentation
>  multilink-group 4
> !
> interface Serial5/0/9:0
>  description Circuit SD/HCGS/080703 to TRI-CITY MC on S0
>  ip address 172.30.1.13 255.255.255.252
>  encapsulation ppp
>  down-when-looped
>  ppp multilink
>  multilink-group 4
> !
> interface Serial5/0/11:0
>  description Circuit SD/HCGS/080704 to TRI-CITY MC on S1
>  ip address 172.30.1.21 255.255.255.252
>  encapsulation ppp
>  down-when-looped
>  ppp multilink
>  multilink-group 4
> !
> 
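> Once both links are up, "show ppp multilink" is a quick sanity check: it
> lists the bundle and its active member links (exact output varies by IOS
> release).
> 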
> Troy Beisigl
> 
> -----Original Message-----
> From: cisco-nsp-bounces at puck.nether.net
> [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Nick Voth
> Sent: Friday, March 14, 2008 3:11 PM
> To: cisco-nsp at puck.nether.net
> Subject: [c-nsp] T1 Bonding with PA-MC-T3
> 
> Guys,
> 
> I have a 7206 VXR with a PA-MC-T3 card in it for doing T1's off of a
> channelized DS3. I know the PA-MC-T3 doesn't support MLPPP bonding of
> multiple T1's. The problem is, my NPE doesn't support the newer PA-MC-T3-EC
> enhanced card that works for T1 bonding.
> 
> I know I could feed the DS3 to some separate Mux and pull individual T1's
> off of that and bond them in a different Cisco card. Problem is, that's not
> really a very "clean" solution for us and definitely adds some other links
> in the chain that could fail.
> 
> So, is there any way to accomplish T1 bonding with that existing DS3 card
> or am I just stuck?
> 
> Thanks very much for any help.
> 
> -Nick Voth
> 

_______________________________________________
cisco-nsp mailing list  cisco-nsp at puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

