[c-nsp] GigE woes
Tim Durack
tdurack at gmail.com
Wed Jun 23 21:22:39 EDT 2010
On Wed, Jun 23, 2010 at 11:59 AM, Anton Kapela <tkapela at gmail.com> wrote:
> (Noting a fresh reply to this thread, I recalled I didn't answer this one from way back.)
>
> On May 17, 2010, at 1:10 PM, Tim Durack wrote:
>
>> What is PLCP?
>
> Short for "physical layer convergence procedure" -- basically, "yet more phy-specific headers" that are prepended, appended, or concatenated with 'user datagrams' on various networks. Depending on how these 'framers' operate on the provider's $mystery_transport gear, various sorts of 'broken' can emerge. For example, a badly written PLCP framer could mistake user datagram bits for its own, slicing frames in half or causing all sorts of funk.
Gotcha.
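For a sense of scale, here's a back-of-envelope sketch (Python) of how extra per-frame transport overhead eats into small-frame capacity. The overhead values are made-up placeholders, not anything I actually know about this gear:

# Back-of-envelope: how per-frame transport overhead shrinks the frame rate
# a 1 Gbps path can carry. The extra-overhead values are hypothetical
# placeholders, not measured numbers from the Atrica/Nortel path.

GIGE_RATE_BPS = 1_000_000_000   # GigE line rate
PREAMBLE_SFD = 8                # preamble + SFD, bytes per frame
IFG = 12                        # minimum inter-frame gap, bytes

def max_frame_rate(frame_size, extra_overhead=0):
    """Frames/sec a 1 Gbps path carries at a given frame size, with
    optional extra per-frame transport (PLCP-style) overhead."""
    bits_per_frame = (frame_size + PREAMBLE_SFD + IFG + extra_overhead) * 8
    return GIGE_RATE_BPS / bits_per_frame

for ovh in (0, 8, 16):
    print(f"64B frames, +{ovh}B overhead: {max_frame_rate(64, ovh):,.0f} fps")

# 0B  -> ~1,488,095 fps (the familiar GigE 64-byte limit)
# 16B -> ~1,250,000 fps, i.e. roughly 16% of offered 64-byte line-rate
#        traffic has nowhere to go if the transport adds that much per frame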
>
>> Having a hard time coming up with a convincing test, especially with
>> test sets targeted at line rate rather than low-bitrate tests. Haven't
>> tried varying patterns yet. Maybe that will turn something over.
>
> Seems like 64-byte frames at a high rate triggered/exposed the negative behavior in your follow-up post; this suggests something in the path was indeed 'frame aware' -- and that when they switched to 'transparent' mode, it became less so (given that it now works properly).
>
> Do we know that the previous 'less-than-transparent' mode wasn't always frame-aware and stat-muxed with other users' data into some sort of VC/VT over a SONET-like transport piece?
One side connects directly to an Atrica A-4100, which has eight GigE ports
and one 10GigE port. Not sure how this gear works in the Ethernet world,
especially the "clear-channel" part. Transport is a Nortel DWDM
system.
>
> Lastly, knowing something about the drop rate/frequency and the intervals between drops (during your high-rate 64-byte frame tests) could perhaps expose a drop/loss process that would point to a FIFO somewhere in the previous config.
>
In the frame-aware mode, RFC 2544 tests show:
Frame Size   Throughput (Mbps)   Frames Lost   Loss Rate (%)
        64           1,000.000    24,321,727          27.240
        64             750.000             5           0.000
        64             500.000             7           0.000
        64             250.000             0           0.000
        64             250.000             6           0.000
        64               0.010             5           0.561
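Taking the Mbps column as offered load, the 27.24% loss at 1,000 Mbps works out to roughly 730 Mbps of 64-byte traffic actually getting through -- close to the 750 Mbps step that ran nearly clean. Quick arithmetic, just to show the working:

# Rough working: what 27.24% loss of 64-byte frames at an offered 1,000 Mbps
# implies. Assumes the Mbps column is offered load at line rate.

offered_mbps = 1000.0
loss_rate = 0.2724
wire_bytes = 64 + 8 + 12                             # frame + preamble/SFD + IFG

offered_fps = offered_mbps * 1e6 / (wire_bytes * 8)  # ~1.49 Mfps
carried_fps = offered_fps * (1 - loss_rate)
carried_mbps = offered_mbps * (1 - loss_rate)

print(f"offered: {offered_fps:,.0f} fps")
print(f"carried: {carried_fps:,.0f} fps (~{carried_mbps:.0f} Mbps)")

# ~1,082,700 fps / ~728 Mbps -- a 64-byte forwarding ceiling in the low
# 700s, even though the 750 Mbps step only lost a handful of frames.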
Bizarre latency figures:
Frame Size   Throughput (Mbps)   Avg Delay (us)
        64           1,000.000         2,755.590
        64             750.000         2,755.100
        64             500.000         2,756.210
        64             250.000         2,762.550
        64               0.010       330,667.450
     1,518           1,000.000         2,801.560
     1,518             750.000         2,802.500
     1,518             500.000         2,810.770
     1,518             250.000         2,835.310
     1,518               0.010     1,233,277.830
     9,000           1,000.000         3,038.600
     9,000             750.000         3,060.130
     9,000             500.000         3,108.430
     9,000             250.000         3,252.730
     9,000               0.010     7,219,502.910
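The ~2.76 ms average is suspiciously flat across rates, which looks more like propagation plus a fixed store-and-forward component than queueing. For scale (assuming ~4.9 us/km in fiber, and that the test set reports one-way delay):

# Scale check on the ~2.76 ms "Avg Delay": how much fiber would that be if
# it were pure propagation? Assumes ~4.9 us/km in fiber and one-way delay
# reporting; halve the distance if the test set measures round-trip.

FIBER_US_PER_KM = 4.9   # ~ speed of light / 1.468 group index

for delay_us in (2755.59, 2801.56, 3038.60):
    print(f"{delay_us:>8.2f} us -> ~{delay_us / FIBER_US_PER_KM:,.0f} km of fiber")

# ~560+ km each -- so either this is a genuinely long path, or a big chunk
# of that delay is buffering in the transport rather than distance.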
Frame Size   Burst Size   Frames Lost
        64           24         6,994
        64          124       169,044
        64          224       328,449
        64          324       486,780
        64          424       637,727
        64          524       797,163
        64          624       946,447
        64          724     1,096,295
        64          824     1,247,705
        64          924     1,387,109
        64        1,024     1,544,136
     1,518           24             1
     1,518          124             1
     1,518          224             1
     1,518          324             1
     1,518          424             1
     1,518          524             1
     1,518          624             1
     1,518          724             1
     1,518          824             1
     1,518          924             1
     1,518        1,024             1
     9,000           24             1
     9,000          124             1
     9,000          224             1
     9,000          324             1
     9,000          424             1
     9,000          524             1
     9,000          624             1
     9,000          724             1
     9,000          824             1
     9,000          924             1
     9,000        1,024             1
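The 64-byte burst losses growing roughly linearly with burst size would fit your FIFO theory. Here's a toy single-queue model, purely illustrative -- the 730 Mbps drain rate and 16-frame depth are guesses, not anything known about this path:

# Toy model of the FIFO hypothesis: 64-byte frames arrive back-to-back at
# GigE rate into a small FIFO that drains at a slower effective rate.
# Drain rate and depth below are illustrative guesses only.

WIRE_BYTES = 64 + 8 + 12            # frame + preamble/SFD + IFG
T_IN = WIRE_BYTES * 8 / 1e9         # arrival spacing at 1 Gbps (~672 ns)
T_OUT = WIRE_BYTES * 8 / 730e6      # drain spacing at a guessed 730 Mbps
FIFO_DEPTH = 16                     # guessed FIFO depth, in frames

def drops_per_burst(burst_size):
    """Push one back-to-back burst through the FIFO, return frames dropped."""
    drops = 0
    departures = []                         # scheduled departure time per queued frame
    for i in range(burst_size):
        t = i * T_IN                        # arrival time of frame i
        while departures and departures[0] <= t:
            departures.pop(0)               # frames already drained by now
        if len(departures) >= FIFO_DEPTH:
            drops += 1                      # FIFO full, frame is lost
        else:
            last = departures[-1] if departures else t
            departures.append(max(t, last) + T_OUT)
    return drops

for burst in (24, 124, 224, 524, 1024):
    print(f"burst {burst:5d}: {drops_per_burst(burst):4d} dropped")

# Once a burst exceeds what the FIFO can absorb, drops grow roughly linearly
# with burst size -- the same shape as the 64-byte results above.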
--
Tim:>