[cisco-voip] Getting IOS to translate from DSCP to 802.1p

Tim Smith tim.smith at enject.com.au
Fri Jun 24 04:37:35 EDT 2022


Hey James,

If you have a Flex subscription (even on-premises), then you'll be able to
activate Control Hub, the Serviceability Connector and Connected UC.
Perfect to help investigate and quantify the voice side of things.
You can keep using everything on-prem, Teams etc. - no problems - but
you'll get the management features in Control Hub, which are good and
getting better - definitely worth having (especially since you are on 14).
Your partner should be able to spin you up a trial (even if you're not on Flex).

That definitely requires a diagram / whiteboard (and a fair amount of
brainpower) :)

Maybe someone can chime in with some specifics on the QoS side of things.
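
In the meantime, the general shape of what I'd try on the 2921 is an MQC
policy on the dot1Q subinterface that matches DSCP and sets the CoS on the
way out. Rough, untested sketch - the class names and the exact DSCP-to-CoS
pairs are my own, so adjust them to your marking scheme:

class-map match-any VOICE-RTP
 match dscp ef
class-map match-any CALL-SIGNALING
 match dscp cs3 af31
!
policy-map DSCP-TO-COS
 class VOICE-RTP
  set cos 5
 class CALL-SIGNALING
  set cos 3
 class class-default
  set cos 0
!
interface GigabitEthernet0/0.100
 service-policy output DSCP-TO-COS

If it works, that would replace the default-policy you currently have
applying cos 2 to everything; "show policy-map interface Gi0/0.100" should
then show hits on the voice classes, and a capture on the VLAN 100 side
should show the priority bits set.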

My next thoughts would be:
- If you suspect capacity / QoS, maybe schedule some testing to try to
reproduce it during a busy period - and if you have it available to you,
I would lean on Cisco and Extreme support (even seeing if you can
generate some traffic yourself)
- Maybe something else can be done ingress of the X440s, on the Enterasys
or Extreme side - or on the X440-G2 itself (see the rough EXOS sketch
after this list)
- Maybe check out the latest SRND / CVDs from Cisco, and also recent Cisco
Live sessions (potentially from a few years back now) - there is often
gold in there
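
On the EXOS sketch: given the QP1 congestion counters you posted for 1:48,
the knobs I'd look at first are the shared packet buffer allocation on that
port and a guaranteed slice for QP5. Purely illustrative - I don't have an
X440-G2 in front of me, so double-check the syntax against the EXOS guide
for your release:

configure ports 1:48 shared-packet-buffer 100
configure qosprofile QP5 minbw 20 maxbw 100 ports 1:48
show ports 1:48 buffer

The first line lets that port dip further into the shared pool, and the
second gives QP5 a guaranteed minimum bandwidth on 1:48 - both come at the
expense of the data queues, so keep an eye on the iPad traffic afterwards.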

Cheers,

Tim

On Fri, 24 Jun 2022 at 18:13, James Andrewartha <jandrewartha at ccgs.wa.edu.au>
wrote:

> Hi Tim,
>
>
>
> CUCM 14.0.1.11900-132, upgraded from 12.5 in January; it was happening
> before the upgrade as well. No Webex Control Hub - we’re strictly on-prem
> (and use MS Teams for chat/VC etc.) - but I’ll ask my partner about it.
> It happens on both 7975 and 8841 phones (at first we thought it was the
> 8841 I put in, so I swapped it back, but it still happens).
>
>
>
> My money is on QoS, because the path is 2921 1Gb -> Enterasys B5 1Gb ->
> Extreme VSP7400 10Gb -> Extreme X440-G2 port 1:48 1Gb -> Extreme X440-G2
> 1Gb -> phones. The packet buffers on the X440-G2 are 1.5MB per 24 ports, so
> they are fairly easily overloaded going from 10Gb to 1Gb, and there are
> plenty of client devices (largely iPads) hanging off the second switch (via
> APs), which could easily be doing large downloads and causing microburst
> packet drops. Plus I definitely see congestion drops on that port:
>
>
>
> Slot-1 dblockr1x.5 # show ports 1:48 buffer
>
> Packet Buffer Allocation for ports in range 1:25-48,49,50
>
> Total Packet Buffer Size: 1572864 bytes, Not Overcommitted
>
> Total Shared Buffer Size: 957824
>
>   Port 1:48  Max Shared Buffer Usage: 239360 bytes (25%)
>
>    QP1: Reserved Buffer: 4096 bytes
>
>    QP5: Reserved Buffer: 4096 bytes
>
>    QP8: Reserved Buffer: 4096 bytes
>
>
>
> Slot-1 dblockr1x.14 # show ports 1:48 qosmonitor congestion port-number no-refresh
>
> Port Qos Monitor
>
> Port         QP1      QP2      QP3      QP4      QP5      QP6      QP7      QP8
>              Pkt      Pkt      Pkt      Pkt      Pkt      Pkt      Pkt      Pkt
>             Cong     Cong     Cong     Cong     Cong     Cong     Cong     Cong
> ================================================================================
> 1:48     1266064        0        0        0        0        0        0        0
>
>
>
> Slot-1 dblockr1x.13 # show ports 1:48 qosmonitor no-refresh port-number
>
> Port Qos Monitor
>
> Port            QP1         QP2         QP3         QP4         QP5         QP6         QP7         QP8
>                 Pkt         Pkt         Pkt         Pkt         Pkt         Pkt         Pkt         Pkt
>                Xmts        Xmts        Xmts        Xmts        Xmts        Xmts        Xmts        Xmts
> ========================================================================================================
> 1:48    11521231822           0           0           0    12196455           0           0   179452595
>
>
>
> 802.1p priority 5 is mapped to QP5, and the phones on VLAN100 do mark it
> correctly, which is why you can see some packets in that queue. Most other
> buildings on campus don’t have an extra switch between them and the core,
> so they are probably better buffered at the core (which I believe is a
> Trident 3 with 32MB of packet buffers) and don’t have packet drops (not
> that I can find the show command for those statistics). We are a pretty
> simple network, a single campus with a star topology (i.e. everything goes
> directly to the core), and this is the only location reporting this
> particular problem. While I agree there could be plenty of places to look
> for the problem, this one stands out to me as the obvious one to attack
> first.
>
>
>
> (We also have another problem, which has persisted for a decade, where we
> sometimes get one-way audio at the start of a call that fixes itself after
> a certain amount of time. It’s also hard to reproduce; double-pressing the
> ? (help) key shows no packets in one direction. In this case the 8841 is
> better than the 7965/75, so the solution is probably to replace all of
> them, but I can’t commit to that until I get this other issue sorted.)
>
>
>
> So that’s why I want to make the 2921 (or 4331) map DSCP to 802.1p when
> routing. I can see mls qos has the map I want, but apparently that’s a
> switch feature? I couldn’t work out how to apply it to an interface anyway.
>
>
>
> voip1#show mls qos maps dscp-cos
>
>
>
>    Dscp-cos map:
>
>        dscp:   0  8 10 16 18 24 26 32 34 40 46 48 56
>
>      -----------------------------------------------
>
>         cos:   0  1  1  2  2  3  3  4  4  5  5  6  7
>
>
>
> There’s plenty of documentation on matching traffic and then marking it,
> but I just want the DSCP value mapped to CoS using that table. I’ve tried
> some things in the past; currently I can see I’ve got a policy applying
> CoS 2 to all packets going out GigabitEthernet0/0.100, but the last time I
> checked with Wireshark I couldn’t see it on the wire:
>
>
>
> voip1#show policy-map interface gigabitEthernet 0/0.100
>
> GigabitEthernet0/0.100
>
>
>
>   Service-policy output: default-policy
>
>
>
>     Class-map: class-default (match-any)
>
>       173595101 packets, 21932583198 bytes
>
>       5 minute offered rate 26000 bps, drop rate 0000 bps
>
>       Match: any
>
>       QoS Set
>
>         cos 2
>
>           Packets marked 173595102
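>
> My guess is that default-policy needs replacing with something that keys
> off DSCP. The closest thing I’ve found in the MQC documentation is the
> “Enhanced Packet Marking” table-map feature, which looks like it would do
> what the mls qos map above does - something like the sketch below, though
> I haven’t confirmed yet that the 2921 accepts it on a dot1Q subinterface:
>
> table-map DSCP-TO-COS
>  map from 46 to 5
>  map from 26 to 3
>  map from 24 to 3
>  default 0
> !
> policy-map MARK-COS
>  class class-default
>   set cos dscp table DSCP-TO-COS
> !
> interface GigabitEthernet0/0.100
>  service-policy output MARK-COS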
>
>
>
> I’ll have to go back and test it again, it’s been a few months since I
> last had a good go at it (complaints have worsened this week), but any
> pointers as to where to start with the configuration would be greatly
> appreciated.
>
>
>
> Thanks,
>
>
>
> --
>
> James Andrewartha
>
> Network & Projects Engineer
>
> Christ Church Grammar School
>
> Claremont, Western Australia
>
> Ph. (08) 9442 1757
>
> Mob. 0424 160 877
>
>
>
> From: Tim Smith <tim.smith at enject.com.au>
> Sent: Friday, 24 June 2022 1:32 PM
> To: James Andrewartha <jandrewartha at ccgs.wa.edu.au>
> Cc: voyp list, cisco-voip (cisco-voip at puck.nether.net)
>     <cisco-voip at puck.nether.net>
> Subject: Re: [cisco-voip] Getting IOS to translate from DSCP to 802.1p
>
>
>
> Hi James,
>
>
>
> It might be worth tidying up, but I'd be surprised if that was the cause
> of this issue.
>
>
>
> What version of CUCM are you running?
>
> Do you have Webex Control Hub and Connected UC set up yet?
>
> (If you don't have Flex and Control Hub - a partner could spin up a trial
> to use temporarily).
>
>
>
> This integration is really useful - especially for intermittent issues.
>
> It's going to give you a read on all the calls that go through and their
> quality.
>
> So you can pinpoint issues, and then dive into them.
>
> You can also then get TAC to work these things directly (they can actually
> pull traces as well).
>
> But having the consolidated list of calls and quality is a game changer
> (especially spotting patterns etc).
>
> And of course it is really quick and easy to set up if you have some spare
> VM space.
>
>
>
> Otherwise, old school, I'd be looking at the media flows (is there
> anything else in the path - e.g. an MTP, DSP etc. - that could be
> introducing an issue on some calls), and where the problematic calls are
> traversing.
>
> When you have some more info on the paths, you can always fall back on
> good old Wireshark with rotating capture buffers.
>
> Unless you have a nice network with some built-in capture capability -
> like Meraki!
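>
> If it comes to that, something along these lines with dumpcap gives you a
> rotating ring buffer you can leave running until a bad call happens (the
> interface name and the RTP port range here are just examples):
>
> dumpcap -i eth0 -f "udp portrange 16384-32767" -b filesize:102400 -b files:20 -w /captures/voip-ring.pcapng
>
> That keeps the most recent 20 x 100MB files and overwrites the oldest, so
> you can grab the window around a reported bad call afterwards.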
>
>
>
>
>
> Cheers,
>
>
>
> Tim
>
>
>
>
>
>
>
> On Fri, 24 Jun 2022 at 14:29, James Andrewartha <
> jandrewartha at ccgs.wa.edu.au> wrote:
>
> Hi voipers,
>
>
>
> We have persistent reports of garbled voice quality (like when a mobile
> phone call glitches briefly) in one location on our campus - of course never
> when I’m there to observe it and get a capture. One thing I’ve noticed is
> that the 802.1p priority isn’t being set on the packets. The setup is CUCM
> on VMware connected to a port group on VLAN 101. There’s also a 4331 which
> handles SIP termination on VLAN 101. These are then routed by an
> HSRP-redundant pair of 2921s to VLAN 100, where the phones live. The 2921
> doesn’t map the DSCP set by CUCM or the 4331 to an 802.1p priority after
> routing, and for the life of me I can’t work out how to do it. I can’t work
> out how to do it on the 4331 either; I could move routing of the subnets to
> it, but I was waiting until the second one, which I ordered months ago,
> arrives.
>
>
>
> Googling is useless; I end up with results for different variants of IOS
> or for switches, none of which work with IOS 15 on the 2921. Surely it’s
> not that hard to just say “trust DSCP and map it to 802.1p”?
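>
> Everything I find seems to boil down to the Catalyst-style interface
> command, which as far as I can tell doesn’t exist on the 2921:
>
> interface GigabitEthernet0/1
>  mls qos trust dscp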
>
>
>
> Thanks,
>
>
>
> --
>
> James Andrewartha
>
> Network & Projects Engineer
>
> Christ Church Grammar School
>
> Claremont, Western Australia
>
> Ph. (08) 9442 1757
>
> Mob. 0424 160 877
>
>
>
> _______________________________________________
> cisco-voip mailing list
> cisco-voip at puck.nether.net
> https://puck.nether.net/mailman/listinfo/cisco-voip
>
>

