[c-nsp] VFI LDP transport signaled down (ME3600x)
Waris Sagheer (waris)
waris at cisco.com
Thu May 10 22:26:35 EDT 2012
Hi Ihsan,
The VC did come up in our case without the bgp next-hop-self command. I
am not sure why it should make a difference in this case.
I am not aware of any issue since it seems to be working in our setup
without any knobs.
Copying Ahmed for his inputs.
Regards,
Waris
-----Original Message-----
From: Ihsan Junaidi Ibrahim [mailto:ihsan.junaidi at gmail.com]
Sent: Thursday, May 10, 2012 7:12 PM
To: cisco-nsp at puck.nether.net; Waris Sagheer (waris)
Subject: Re: [c-nsp] VFI LDP transport signaled down (ME3600x)
Well, this took me by surprise, but I managed to bring the VC l2transport up.
It turns out the next-hop-self attribute is not implied for the l2vpn VPLS
NLRI in my configs, even though the doc Waris attached didn't specify it
either.
My understanding is that next-hop-self for l2vpn and inet-vpn NLRIs is
implied (it is with JUNOS), but I fear something is missing here.
Adding the next-hop-self attribute to both sides of the BGP neighbour
configs promptly brought up the VC.
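In config terms, what I added is roughly the following under the l2vpn vpls
address family (the neighbour address here is illustrative only, since the
actual iBGP peering isn't shown above; the mirror-image statements go on the
other PE):

router bgp 9930
 address-family l2vpn vpls
  neighbor 200.28.0.120 activate
  neighbor 200.28.0.120 send-community extended
  neighbor 200.28.0.120 next-hop-self
 exit-address-family

With that in place the next hop carried in the auto-discovery NLRI is the
remote PE loopback rather than a transit/interface address, which is the
address the targeted LDP session then gets built towards.
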
Waris,
Any known issue on this?
I've opened an SR for this, so I guess I'll take this up with TAC directly.
ihsan
On May 10, 2012, at 11:56 PM, Ihsan Junaidi Ibrahim wrote:
> Adam,
>
> Shutting and unshutting both sides of the VFIs results in the following:
>
> PE1
> ---
> May 10 23:52:29.485 MYT: %VFI-6-VFI_STATUS_CHANGED: Status of VFI ME002555 changed from DOWN to UP
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: Circuit attributes, Receive update:
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Interface handle: 0x3E83
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Status: UP (0x1)
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Circuit directive: Go Active
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Payload encap: Ethernet
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Circuit Encap: VFI
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Segment type: 0x19
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Switch handle: 61469
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . MTU: 9178
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . I/F Str: pw100001
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Circuit string: vfi
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: Process attrs
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: Received Go Active service directive
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Receive status update
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ...... NMS: VC oper state: DOWN
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ...... NMS: err codes: no-err
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ...... SYSLOG: VC is DOWN
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: .... Local ready
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: .... Local service is ready; send a label
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: .... Alloc local binding
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ..... Alloc label for dynamic
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ...... Populate local binding
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ........ Capability C000505, returned cap C000505, mask FFFEFFBF
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ....... Autosense enabled, no remote, Ethernet(5)
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ....... MTU set to 9178
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ....... FEC set to 129
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ....... Grouping off
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ....... Grouping ignored, set to 0
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ....... Control word on
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ...... PWID: already in use, reuse 14
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ...... Asking to reuse label 23
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ...... Requested label: any
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ...... Label request, label 0 pwid 0
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: .... Generate local event
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: .... No label
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: ... Check if can activate dataplane
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: .... Keep dataplane up
> May 10 23:52:29.485 MYT: AToM: 277 cumulative msgs handled. rc=0
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: Label response: label 23 pwid 14 reqid 27
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: Generate local event
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: Ready, label 23
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: Evt local ready, provisioned->local standby, AC-ready
> May 10 23:52:29.485 MYT: AToM[200.28.0.120, 116]: . Take no action
> May 10 23:52:29.485 MYT: AToM: 278 cumulative msgs handled. rc=0
>
> PE2
> ---
> May 10 23:51:20.404 MYT: %VFI-6-VFI_STATUS_CHANGED: Status of VFI ME002617 changed from ADMINDOWN to DOWN
> May 10 23:51:20.404 MYT: %VFI-6-VFI_STATUS_CHANGED: Status of VFI ME002617 changed from DOWN to UP
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: Circuit attributes, Receive update:
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Interface handle: 0x3E83
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Status: UP (0x1)
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Circuit directive: Go Active
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Payload encap: Ethernet
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Circuit Encap: VFI
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Segment type: 0x19
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Switch handle: 12297
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . MTU: 9178
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . I/F Str: pw100001
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Circuit string: vfi
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: Process attrs
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: Received Go Active service directive
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Receive status update
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ...... NMS: VC oper state: DOWN
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ...... NMS: err codes: no-err
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ...... SYSLOG: VC is DOWN
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: .... Local ready
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: .... Local service is ready; send a label
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: .... Alloc local binding
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ..... Alloc label for dynamic
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ...... Populate local binding
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ........ Capability C000505, returned cap C000505, mask FFFEFFBF
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ....... Autosense enabled, no remote, Ethernet(5)
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ....... MTU set to 9178
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ....... FEC set to 129
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ....... Grouping off
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ....... Grouping ignored, set to 0
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ....... Control word on
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ...... PWID: already in use, reuse 2
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ...... Asking to reuse label 637
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ...... Requested label: any
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ...... Label request, label 0 pwid 0
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: .... Generate local event
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: .... No label
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: ... Check if can activate dataplane
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: .... Keep dataplane up
> May 10 23:51:20.404 MYT: AToM: 3306 cumulative msgs handled. rc=0
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: Label response: label 637 pwid 2 reqid 3
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: Generate local event
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: Ready, label 637
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: Evt local ready, ldp ready->local ready
> May 10 23:51:20.404 MYT: AToM[200.28.0.15, 116]: . Advertise local vc label binding
> May 10 23:51:20.404 MYT: AToM: 3307 cumulative msgs handled. rc=0
> May 10 23:51:20.432 MYT: AToM[200.28.0.15, 116]: Start resend label timer
> May 10 23:51:20.432 MYT: AToM LDP[200.28.0.15, 116]: Receive label release
> May 10 23:51:20.432 MYT: AToM[200.28.0.15, 116]: Evt remote release, in local ready
> May 10 23:51:20.432 MYT: AToM[200.28.0.15, 116]: . Take no action
> May 10 23:51:20.432 MYT: AToM: 3308 cumulative msgs handled. rc=0
> May 10 23:51:26.756 MYT: AToM[200.28.0.15, 116]: Stop resend label timer
> May 10 23:51:26.756 MYT: AToM[200.28.0.15, 116]: Evt resend label timer expired, in local ready
> May 10 23:51:26.756 MYT: AToM[200.28.0.15, 116]: . Resend label timer expired
> May 10 23:51:27.100 MYT: AToM[200.28.0.15, 116]: Start resend label timer
> May 10 23:51:27.100 MYT: AToM LDP[200.28.0.15, 116]: Receive label release
> May 10 23:51:27.100 MYT: AToM[200.28.0.15, 116]: Evt remote release, in local ready
> May 10 23:51:27.100 MYT: AToM[200.28.0.15, 116]: . Take no action
> May 10 23:51:27.100 MYT: AToM: 3309 cumulative msgs handled. rc=0
>
> ihsan
> On May 10, 2012, at 11:19 PM, adam vitkovsky wrote:
>
>> I was just thinking that maybe with the manual config one side gets
>> selected as the session initiator, while with BGP it's the other way
>> around, resulting in the failure, if indeed only one side is configured
>> to accept the targeted sessions. (But now that I think about it, I
>> guess it works so that both ends initiate the session and then the
>> session with the lower ID gets torn down.)
>> And what does the debug say about the session not coming up, please?
>>
>> adam
>>
>> -----Original Message-----
>> From: Ihsan Junaidi Ibrahim [mailto:ihsan.junaidi at gmail.com]
>> Sent: Thursday, May 10, 2012 4:40 PM
>> To: adam vitkovsky
>> Cc: 'Pete Lumbis'; cisco-nsp at puck.nether.net
>> Subject: Re: [c-nsp] VFI LDP transport signaled down (ME3600x)
>>
>> Adam,
>>
>> That's what I initially thought, but an EoMPLS VC or a manual VPLS works
>> just fine. A sample of the EoMPLS VC:
>>
>> es-103-glsfb#sh mpls l2transport vc 5070 detail
>> Local interface: Gi0/19 up, line protocol up, Ethernet:1 up
>> Destination address: 200.28.0.120, VC ID: 5070, VC status: up
>> Output interface: Te0/2, imposed label stack {298 16}
>> Preferred path: not configured
>> Default path: active
>> Next hop: 200.28.2.242
>> Create time: 1d02h, last status change time: 00:03:17
>> Signaling protocol: LDP, peer 200.28.0.120:0 up
>> Targeted Hello: 200.28.0.15(LDP Id) -> 200.28.0.120, LDP is UP
>> Status TLV support (local/remote) : enabled/supported
>> LDP route watch : disabled
>> Label/status state machine : established, LruRru
>> Last local dataplane status rcvd: No fault
>> Last BFD dataplane status rcvd: Not sent
>> Last BFD peer monitor status rcvd: No fault
>> Last local AC circuit status rcvd: No fault
>> Last local AC circuit status sent: No fault
>> Last local LDP TLV status sent: No fault
>> Last remote LDP TLV status rcvd: No fault
>> Last remote LDP ADJ status rcvd: No fault
>> MPLS VC labels: local 17, remote 16
>> Group ID: local 0, remote 0
>> MTU: local 9178, remote 9178
>> Remote interface description:
>> Sequencing: receive disabled, send disabled
>> Control Word: On (configured: autosense)
>> Dataplane:
>> SSM segment/switch IDs: 45083/8194 (used), PWID: 2
>> VC statistics:
>> transit packet totals: receive 1374, send 1374
>> transit byte totals: receive 118164, send 87936
>> transit packet drops: receive 0, seq error 0, send 0
>>
>> Debugging did not turn up a whole lot of useful info that could be used
>> to narrow down the problem.
>>
>> After resetting the LDP neighbours to a clean state, the subsequent logs
>> only show the EoMPLS targeted LDP session for VC 5070 (the above), but
>> no information at all for VC 116, which is part of the VFI attachment
>> circuit.
>>
>> ihsan
>>
>> On May 10, 2012, at 9:42 PM, adam vitkovsky wrote:
>>
>>> It almost appears like one of the routers doesn't accept targeted
>>> sessions. Can you please check whether both ends are configured to
>>> accept LDP targeted sessions? Maybe the debug of targeted sessions
>>> would also shed some light on why the session won't come up. The BGP
>>> auto-discovery looks good.
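>>> (Concretely, the knob I have in mind is along these lines; this is a
>>> sketch from memory rather than a paste from these boxes:
>>>
>>> mpls ldp discovery targeted-hello accept
>>>
>>> and "show mpls ldp discovery detail" should then list the targeted
>>> hellos sent to and received from the peer.)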
>>>
>>> adam
>>>
>>> -----Original Message-----
>>> From: cisco-nsp-bounces at puck.nether.net
>>> [mailto:cisco-nsp-bounces at puck.nether.net] On Behalf Of Ihsan
>>> Junaidi Ibrahim
>>> Sent: Thursday, May 10, 2012 6:00 AM
>>> To: Pete Lumbis
>>> Cc: cisco-nsp at puck.nether.net
>>> Subject: Re: [c-nsp] VFI LDP transport signaled down (ME3600x)
>>>
>>> On PE1,
>>>
>>> es-103-glsfb#sh xconnect rib detail
>>>
>>> Local Router ID: 200.28.0.15
>>>
>>> VPLS-ID: 9930:116, Target ID: 200.28.0.120 iBGP Peer
>>> Next-Hop: 200.28.9.146
>>> Hello-Source: 200.28.0.15
>>> Route-Target: 9930:116
>>> Incoming RD: 9930:116
>>> Forwarder: VFI ME002555
>>> Provisioned: Yes
>>> NLRI handle: 69000001
>>>
>>> PE2,
>>>
>>> es-03-akhmw#sh xconnect rib detail
>>>
>>> Local Router ID: 200.28.0.120
>>>
>>> VPLS-ID: 9930:116, Target ID: 200.28.0.15 iBGP Peer
>>> Next-Hop: 200.28.2.242
>>> Hello-Source: 200.28.0.120
>>> Route-Target: 9930:116
>>> Incoming RD: 9930:116
>>> Forwarder: VFI ME002617
>>> Provisioned: Yes
>>> NLRI handle: 77000001
>>>
>>> On May 10, 2012, at 10:40 AM, Pete Lumbis wrote:
>>>
>>>> What do you see in "show xconn rib"?
>>>>
>>>> On Wed, May 9, 2012 at 10:36 AM, Ihsan Junaidi Ibrahim
>>>> <ihsan.junaidi at gmail.com> wrote:
>>>>> Hi all,
>>>>>
>>>>> My topology as follows:
>>>>>
>>>>> PE1--P1--P2--P3--P4--P5--PE2
>>>>>
>>>>> PE1 lo0 - 200.28.0.15 (15.2(2)S) loader 12.2(52r)EY1
>>>>> PE2 lo0 - 200.28.0.120 (15.2(2)S) loader 12.2(52r)EY2
>>>>>
>>>>> Are there specific nuances for an LDP-signaled transport for EoMPLS
>>>>> and VPLS on the Whales platform?
>>>>>
>>>>> An xconnect from PE1 to PE2 is signaled successfully; however, a VPLS
>>>>> instance based on BGP autodiscovery (manual VPLS works) is unable to
>>>>> bring up the LDP l2transport signalling, although the VFI is signaled up.
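>>>>>
>>>>> The VFI itself is configured along these lines (paraphrased rather
>>>>> than pasted; the VPLS-ID, RD and RT seen in the outputs below are the
>>>>> auto-derived AS:VPN-ID values):
>>>>>
>>>>> l2 vfi ME002555 autodiscovery
>>>>>  vpn id 116
>>>>>
>>>>> with the l2vpn vpls address family activated towards the iBGP peers.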
>>>>>
>>>>> EoMPLS
>>>>> ----
>>>>> es-103-glsfb#sh xconnect peer 200.28.0.120 vc 5070
>>>>> Legend:    XC ST=Xconnect State  S1=Segment1 State  S2=Segment2 State
>>>>>   UP=Up       DN=Down       AD=Admin Down      IA=Inactive
>>>>>   SB=Standby  HS=Hot Standby      RV=Recovering      NH=No Hardware
>>>>>
>>>>> XC ST  Segment 1                         S1 Segment 2                         S2
>>>>> ------+---------------------------------+--+---------------------------------+--
>>>>> UP pri   ac Gi0/19:1(Ethernet)          UP mpls 200.28.0.120:5070             UP
>>>>>
>>>>> es-103-glsfb#sh mpls l2transport vc 5070 detail
>>>>> Local interface: Gi0/19 up, line protocol up, Ethernet:1 up
>>>>> Destination address: 200.28.0.120, VC ID: 5070, VC status: up
>>>>> Output interface: Te0/2, imposed label stack {298 16}
>>>>> Preferred path: not configured
>>>>> Default path: active
>>>>> Next hop: 200.28.2.242
>>>>> Create time: 02:10:43, last status change time: 02:08:57
>>>>> Signaling protocol: LDP, peer 200.28.0.120:0 up
>>>>> Targeted Hello: 200.28.0.15(LDP Id) -> 200.28.0.120, LDP is UP
>>>>> Status TLV support (local/remote) : enabled/supported
>>>>> LDP route watch : disabled
>>>>> Label/status state machine : established, LruRru
>>>>> Last local dataplane status rcvd: No fault
>>>>> Last BFD dataplane status rcvd: Not sent
>>>>> Last BFD peer monitor status rcvd: No fault
>>>>> Last local AC circuit status rcvd: No fault
>>>>> Last local AC circuit status sent: No fault
>>>>> Last local LDP TLV status sent: No fault
>>>>> Last remote LDP TLV status rcvd: No fault
>>>>> Last remote LDP ADJ status rcvd: No fault
>>>>> MPLS VC labels: local 17, remote 16
>>>>> Group ID: local 0, remote 0
>>>>> MTU: local 9178, remote 9178
>>>>> Remote interface description:
>>>>> Sequencing: receive disabled, send disabled
>>>>> Control Word: On (configured: autosense)
>>>>> Dataplane:
>>>>> SSM segment/switch IDs: 45083/8194 (used), PWID: 2
>>>>> VC statistics:
>>>>> transit packet totals: receive 24, send 21
>>>>> transit byte totals: receive 2064, send 1344
>>>>> transit packet drops: receive 0, seq error 0, send 0
>>>>>
>>>>> VPLS
>>>>> ----
>>>>> es-103-glsfb#sh vfi
>>>>> Legend: RT=Route-target, S=Split-horizon, Y=Yes, N=No
>>>>>
>>>>> VFI name: ME002555, state: up, type: multipoint, signaling: LDP
>>>>> VPN ID: 116, VPLS-ID: 9930:116
>>>>> RD: 9930:116, RT: 9930:116
>>>>> Bridge-Domain 116 attachment circuits:
>>>>> Vlan116
>>>>> Neighbors connected via pseudowires:
>>>>> Peer Address      VC ID   Discovered Router ID   S
>>>>> 200.28.9.146      116     200.28.0.120           Y
>>>>>
>>>>> es-103-glsfb#sh mpls l2transport vc 116 detail
>>>>> Local interface: VFI ME002555 vfi up
>>>>> Interworking type is Ethernet
>>>>> Destination address: 200.28.0.120, VC ID: 116, VC status: down
>>>>> Last error: Local access circuit is not ready for label advertise
>>>>> Next hop PE address: 200.28.9.146
>>>>> Output interface: none, imposed label stack {}
>>>>> Preferred path: not configured
>>>>> Default path: no route
>>>>> No adjacency
>>>>> Create time: 02:07:55, last status change time: 02:07:55
>>>>> Signaling protocol: LDP, peer unknown
>>>>> Targeted Hello: 200.28.0.15(LDP Id) -> 200.28.9.146, LDP is DOWN, no binding
>>>>> Status TLV support (local/remote) : enabled/None (no remote binding)
>>>>> LDP route watch : disabled
>>>>> Label/status state machine : local standby, AC-ready, LnuRnd
>>>>> Last local dataplane status rcvd: No fault
>>>>> Last BFD dataplane status rcvd: Not sent
>>>>> Last BFD peer monitor status rcvd: No fault
>>>>> Last local AC circuit status rcvd: No fault
>>>>> Last local AC circuit status sent: Not sent
>>>>> Last local LDP TLV status sent: None
>>>>> Last remote LDP TLV status rcvd: None (no remote binding)
>>>>> Last remote LDP ADJ status rcvd: None (no remote binding)
>>>>> MPLS VC labels: local 23, remote unassigned
>>>>> AGI: type 1, len 8, 000A 26CA 0000 0074
>>>>> Local AII: type 1, len 4, DF1C 000F (200.28.0.15)
>>>>> Remote AII: type 1, len 4, DF1C 0078 (200.28.0.120)
>>>>> Group ID: local n/a, remote unknown
>>>>> MTU: local 9178, remote unknown
>>>>> Remote interface description:
>>>>> Sequencing: receive disabled, send disabled
>>>>> Control Word: On (configured: autosense)
>>>>> Dataplane:
>>>>> SSM segment/switch IDs: 0/0 (used), PWID: 14
>>>>> VC statistics:
>>>>> transit packet totals: receive 0, send 0
>>>>> transit byte totals: receive 0, send 0
>>>>> transit packet drops: receive 0, seq error 0, send 0
>>>>>
>>>>> I'm getting the account team into the loop, but if anyone has
>>>>> encountered this scenario before and managed to find the answer,
>>>>> that would be most helpful.
>>>>>
>>>>> ihsan
>>>
>>>
>>>
>>
>>
>