[c-nsp] Gameserver-Traffic and Cisco-routers - anything I should consider?

Dennis Nugent dennis at wcix.net
Sun May 1 16:41:27 EDT 2005


We had a client with 17 racks of game servers, and heard this all the time.
And the majority of the time the packet loss was between the end-users' ISPs
and the backbone, not between the game servers and the backbone.  Where is
he pinging from?  We had one guy complaining about latency and packet loss
pinging from behind his firewall, on a cable connection, from Wisconsin,
to servers in San Jose.  He had 30 ms latency and 1% packet loss just
getting to Chicago, all on his ISP's network.

Complaining about 3 packets lost out of 5000 is absurd.  If it were 1%
between your 12000 and his server, then I would start looking for issues,
such as a minor problem with the NIC, the cabling, etc.  Have you done a
continuous ping from the 12000 to his server?
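
If you haven't, an extended ping from the 12000 with a large repeat count is
the quickest way to see whether any of the loss is on your side at all
(a sketch from memory -- adjust the target and counts; I'm assuming plain
IOS extended ping here):

router# ping
Protocol [ip]:
Target IP address: <customer-server-ip>
Repeat count [5]: 10000
Datagram size [100]: 1000
Timeout in seconds [2]: 1
Extended commands [n]: n

If that comes back at 100% over a few runs, the loss isn't between the
12000 and his box.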

I would tell him that it is certainly within the SLA parameters and to live
with it.
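
For perspective, the arithmetic on his complaint (nothing Cisco-specific,
just a quick sanity check):

```python
# How bad is a loss of 3 packets out of 5000, really?
lost, sent = 3, 5000
loss_pct = 100.0 * lost / sent
print(f"{loss_pct:.2f}% packet loss")  # 0.06% -- well inside any sane SLA
```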

Dennis




At 12:15 PM 5/1/2005, you wrote:
>Hello colleagues,
>
>I hope all of you had a nice weekend.
>Mine wasn't that good - although the weather is great - because one of our
>new Gameserver customers is complaining about packet loss.
>The customer complains about a loss of 3 packets out of 5000 when he's
>pinging. Because game servers (in this case Half-Life, Counter-Strike,
>Battlefield and so on) use UDP, which can't resend a lost packet the way
>TCP does, packet loss is a real pain for them.
>The interesting thing is that neither the customer nor I can see packet loss
>on any other of our machines.
>
>I've checked the port and the layer-2 counters but can't find any errors, so
>the problem must be somewhere on the router.
>Maybe the router doesn't like the Gameserver-traffic or something like that?
>Our other hosts are mostly web servers and we don't see the problem there.
>
>Our core router is a Cisco 12000, and the line card the traffic goes in and
>out on is a GE-GBIC-SC-B.
>
>The layer-2 counters look okay as far as I can tell.
>OK, there are a few drops, but they're minimal. Maybe flow control would
>help to prevent them?
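>
>One thing I'm considering for the input queue drops is raising the input
>hold queue from its default of 75 -- assuming that command is even honored
>on the 12000's line cards, which I'm not sure about:
>
>interface GigabitEthernet 2/0
>  hold-queue 150 in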
>
>show interfaces GigabitEthernet 2/0
>GigabitEthernet2/0 is up, line protocol is up
>   Hardware is GigMac GigabitEthernet, address is 0005.5ffd.1100 (bia 0005.5ffd.1100)
>   Internet address is xxxxx
>   MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec, rely 255/255, load 30/255
>   Encapsulation ARPA, loopback not set
>   Keepalive set (10 sec)
>   Full Duplex, 1000Mbps, link type is force-up, media type is SX
>   output flow-control is unsupported, input flow-control is unsupported
>   ARP type: ARPA, ARP Timeout 04:00:00
>   Last input 00:00:00, output 00:00:00, output hang never
>   Last clearing of "show interface" counters 03:01:25
>   Queueing strategy: fifo
>   Output queue 0/40, 0 drops; input queue 2/75, 217 drops
>   5 minute input rate 63422000 bits/sec, 44536 packets/sec
>   5 minute output rate 119220000 bits/sec, 45606 packets/sec
>      677219599 packets input, 130638497599 bytes, 0 no buffer
>      Received 255886 broadcasts, 0 runts, 6843702 giants, 0 throttles
>      0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
>      0 watchdog, 92725 multicast, 0 pause input
>      694757204 packets output, 245469781928 bytes, 0 underruns
>      0 output errors, 0 collisions, 0 interface resets
>      0 babbles, 0 late collision, 0 deferred
>      0 lost carrier, 0 no carrier, 0 pause output
>      0 output buffer failures, 0 output buffers swapped out
>
>Because I don't have any idea what could be wrong, I'd simply like to ask
>for any tips regarding gameserver traffic, please.
>
>My current interface-config for the customer is:
>interface GigabitEthernet2/0.202
>  description Gameserver-Customer
>  encapsulation dot1Q 202
>[...]
>  no ip redirects
>  no ip unreachables
>  no ip directed-broadcast
>  no ip proxy-arp
>  no cdp enable
>
>As you can see, we're using VLANs because there's a Summit switch connected
>to the Cisco that splits up the VLANs to customer ports.
>We're using VRRP in order to provide a redundant gateway to the customer if
>our router fails. The 2nd device is a Foundry router that stays in backup
>(I've checked the logs).
>
>Are there any other tweaks one should use in addition to the basic commands
>you can see above? Maybe something with the timers or keepalives or so?
>
>
>Oh... I've seen that the CPU load on the line card is at 41%, which might be
>a little high.
>Any idea why? I'm not using ACLs. show proc cpu doesn't show anything
>unusual...
>
>execute-on slot 2 show proc cpu
>========= Line Card (Slot 2) =========
>
>CPU utilization for five seconds: 42%/41%; one minute: 41%; five minutes: 41%
>  PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
>    1           0         1          0  0.00%  0.00%  0.00%   0 Chunk Manager
>    2         124    463072          0  0.00%  0.00%  0.00%   0 Load Meter
>    3     3753572   3716151       1010  0.39%  0.24%  0.23%   0 CEF process
>    4      795312  12411275         64  0.00%  0.00%  0.00%   0 CEF LC IPC Backg
>    5     3567864    237077      15049  0.00%  0.07%  0.10%   0 Check heaps
>    6           0         2          0  0.00%  0.00%  0.00%   0 Pool Manager
>    7           0         2          0  0.00%  0.00%  0.00%   0 Timers
>    8           8        59        135  0.00%  0.00%  0.00%   0 GoodieMgr Server
>    9           0         2          0  0.00%  0.00%  0.00%   0 LC ATM Common Pr
>   10           0         2          0  0.00%  0.00%  0.00%   0 LC ATM OAM Input
>   11           0         2          0  0.00%  0.00%  0.00%   0 LC ATM OAM TIMER
>   12         112     38588          2  0.00%  0.00%  0.00%   0 IPC Dynamic Cach
>   13           0         1          0  0.00%  0.00%  0.00%   0 MDX DPE IPC QUEU
>   14         412   2315039          0  0.00%  0.00%  0.00%   0 IPC Periodic Tim
>   15           4        18        222  0.00%  0.00%  0.00%   0 IPC Seat Manager
>   16         340   2315040          0  0.00%  0.00%  0.00%   0 IPC Deferred Por
>   17           0        74          0  0.00%  0.00%  0.00%   0 MBUS Flash
>   18        1252   2469964          0  0.00%  0.00%  0.00%   0 MBUS Background
>   19           0         2          0  0.00%  0.00%  0.00%   0 Serial Backgroun
>   20           0         1          0  0.00%  0.00%  0.00%   0 SERIAL A'detect
>   21           0         1          0  0.00%  0.00%  0.00%   0 Critical Bkgnd
>   22         808    231911          3  0.00%  0.00%  0.00%   0 Net Background
>   23           0         7          0  0.00%  0.00%  0.00%   0 Logger
>   24         576   2315031          0  0.00%  0.00%  0.00%   0 TTY Background
>   25     7946372   2537614       3131  0.31%  0.38%  0.38%   0 Per-Second Jobs
>   26         292   2315039          0  0.00%  0.00%  0.00%   0 ICC Slave Reques
>   27           0         1          0  0.00%  0.00%  0.00%   0 ICC Async mcast
>   28         820   2315037          0  0.00%  0.00%  0.00%   0 FIA Poll
>   29           4        12        333  0.00%  0.00%  0.00%   0 MDX DPE IPC QUEU
>   30        1948   1157613          1  0.00%  0.00%  0.00%   0 LC Throttle
>   31           0         1          0  0.00%  0.00%  0.00%   0 Net Input
>   32        2540    463028          5  0.00%  0.00%  0.00%   0 Compute load avg
>   33      317328     38592       8222  0.00%  0.00%  0.00%   0 Per-minute Jobs
>   34           4         7        571  0.00%  0.00%  0.00%   0 BFLC switchover
>   35           0         1          0  0.00%  0.00%  0.00%   0 Logger MBUS tx
>   36           0         7          0  0.00%  0.00%  0.00%   0 Logger IPC tx
>   37           0         4          0  0.00%  0.00%  0.00%   0 ACLHash
>   38           0         3          0  0.00%  0.00%  0.00%   0 LC interrupt, IP
>   39           0         1          0  0.00%  0.00%  0.00%   0 LC interrupt, J1
>   40           0         1          0  0.00%  0.00%  0.00%   0 GLC FLASH Progra
>   41       11380  23096961          0  0.00%  0.00%  0.00%   0 bma_req_process
>   42           0         1          0  0.00%  0.00%  0.00%   0 LC GE auto-negot
>   43         632   2315034          0  0.00%  0.00%  0.00%   0 LC COS
>   44        3132  23140021          0  0.00%  0.00%  0.00%   0 MDFS MFIB Proces
>   45           0         4          0  0.00%  0.00%  0.00%   0 TurboACL
>   46        1036    463026          2  0.00%  0.00%  0.00%   0 SSM connection m
>   47           0         1          0  0.00%  0.00%  0.00%   0 ACL-Free Process
>   48           0         1          0  0.00%  0.00%  0.00%   0 CEF MQC IPC Back
>   49           0         3          0  0.00%  0.00%  0.00%   0 MDFS LC Process
>   50           0         1          0  0.00%  0.00%  0.00%   0 Mcast TxQ Backgr
>   51           0         3          0  0.00%  0.00%  0.00%   0 MFIB LC Process
>   52         972    158332          6  0.00%  0.00%  0.00%   0 TFIB LC cleanup
>   53           0         9          0  0.00%  0.00%  0.00%   0 AToM SMgr Proces
>   54        1032     38585         26  0.00%  0.00%  0.00%   0 6PE Scanner
>   55         616   9261594          0  0.00%  0.11%  0.03%   0 Line Card Virtua
>   56          44        60        733  0.00%  0.00%  0.00%   1 Remote Exec
>   57       15508   5069339          3  0.00%  0.00%  0.00%   0 CEF LC Stats
>   58           0         6          0  0.00%  0.00%  0.00%   0 IPv6 CEF process
>   59       70828    163730        432  0.00%  0.00%  0.00%   0 CEF Scanner
>   60         120      1008        119  0.00%  0.00%  0.00%   0 BFLC IPC QUEUE H
>   61         856    540180          1  0.00%  0.00%  0.00%   0 OBFL Envmon
>   62           0         1          0  0.00%  0.00%  0.00%   0 IP Flow LC Backg
>   63           0         1          0  0.00%  0.00%  0.00%   0 IP Flow Backgrou
>
>Thanks for your help in advance,
>Gunther
>
>_______________________________________________
>cisco-nsp mailing list  cisco-nsp at puck.nether.net
>https://puck.nether.net/mailman/listinfo/cisco-nsp
>archive at http://puck.nether.net/pipermail/cisco-nsp/

Dennis Nugent
WCIX.Net, Inc.
350 S Center St Suite 500
Reno, NV  89501
dennis at wcix.net
(209) 743-6018
fax (877) 640-6608



