Slow Link with Output Drops

Interface errors are typically associated with Layer 1 problems. But what if the physical connection is actually fine?

Specifically, this is about numerous and steadily increasing output drops on a simple Layer 2 switch (e.g. a 2960). These are frequently caused by a misguided QoS configuration, e.g.

mls qos

This can be fixed quite easily via:

no mls qos
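To verify the change, the global QoS state can be checked before and after. The following is a sketch against a Catalyst 2960; the exact output wording varies by IOS release:

```
! Check the global QoS state (example output)
Switch#show mls qos
QoS is enabled

! Disable QoS globally, then verify
Switch#configure terminal
Switch(config)#no mls qos
Switch(config)#end
Switch#show mls qos
QoS is disabled
```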

Of course, this advice is taken somewhat out of context and primarily addresses deployments that lack an end-to-end QoS strategy.



Advice from Cisco.com on this:

If you haven’t noticed, ‘mls qos’ is disabled by default on Cisco switches. Unless you are familiar with QoS, you should probably leave it disabled as once you enable it, the default mappings and thresholds aren’t really ideal for passing traffic efficiently, particularly with bursty traffic and oversubscribed ports (ex. GigE ingress to FastE egress). In such a situation you will notice output drops increment on the FastEthernet port. While this is normal when buffers are filled faster than they can empty, it is exacerbated by not having an appropriately configured QoS policy. Most protocols are built to handle these drops, but simply enabling mls qos alone will cause a lot more drops than would otherwise occur.
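To see where those drops actually occur when `mls qos` is enabled, the per-queue statistics of the egress interface can be inspected; and if QoS must stay enabled, the default egress buffer allocation can be tuned. A sketch (interface name and buffer values are examples, the statistics output format differs by platform and IOS release):

```
! Per-interface QoS statistics, including drops per egress queue/threshold
Switch#show mls qos interface fastEthernet 0/1 statistics

! Example mitigation: allocate more buffer space to the egress queues
! than the defaults provide (the four values are percentages per queue)
Switch#configure terminal
Switch(config)#mls qos queue-set output 1 buffers 15 30 35 20
```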

A fitting quote on this:

"For me, this became evident on a 2960 with 10/100 ports to hosts, and a gigE link to a server. FTP transfers from the server to the host would work fine and almost utilize the full 100mb link, which is great. Yes, the output drops on the 100mb port would increase, but the transfer did not really suffer from it. The problem became abundantly clear when using windows file transfers from the server to the host. As expected the output drops increased, but the link would only saturate between 20-45% of the full 100mb capability. This is obviously not ideal, and it shows how different protocols will react to the amount of dropped packets. Here, FTP seems much more resilient. You can watch the output drops increment by repeatedly using the 'show int fa0/1 | i drop' command:"


Switch#show int fa0/1 | i drop 
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 7266 
Switch#show int fa0/1 | i drop 
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 7858 
Switch#show int fa0/1 | i drop 
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 8247

"After disabling mls qos with 'no mls qos', the amount of output drops during a sample large file transfer decreased significantly. In fact, when said file transfer was occurring and no other traffic was traversing the ports, I saw NO output drops for the duration of the file copy (~500MB). When transferring the same file to multiple hosts simultaneously, I only saw a few dozen drops rather than the previous hundreds. Further, the windows file transfer speeds increased to almost fully saturate the 100mb port. This is obviously much more ideal than the previous 20-45%."

"All that being said, on a much more busy network, you would surely run into issues where output drops start increasing and the need for mission critical traffic to pass and not drop becomes more important (ex. voice). This is where enabling mls qos and applying an appropriately configured QoS policy will pay off. I have included a standard QoS config here: 2960 QoS Sample. Also, to get started, you may want to look into AutoQoS. This can be applied to a switch interface and it will automatically put in the sample config I have attached as well as apply policies to mark the important traffic."
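A minimal AutoQoS example of the kind referred to above might look like this (a sketch for a 2960; the configuration AutoQoS generates differs by IOS version):

```
! Enable AutoQoS on an access port facing a Cisco IP phone
Switch(config)#interface fastEthernet 0/1
Switch(config-if)#auto qos voip cisco-phone

! Or, on an uplink toward an already-trusted device:
Switch(config-if)#auto qos voip trust
```

Note that AutoQoS implicitly enables `mls qos` globally and generates the queue-set, CoS/DSCP map, and trust settings automatically, which is why it is a reasonable starting point for QoS novices.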

"In closing, I am nowhere near being an expert on QoS. In fact, I would consider myself a novice beginner (I am aware of the redundancy). This is probably why it took me a while to notice that having mls qos turned on would cause these problems. My intention here is that other QoS novices in similar situations will take note of this article and avoid my mistake."

In my personal experience, QoS only really becomes worthwhile from the 3xxx Catalyst series upward. QoS trust, AutoQoS, etc. may well work on a 29xx, but as soon as marking comes into play, the fun is over.
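To illustrate the marking point: on the larger 3xxx Catalysts, classification and re-marking via the MQC works roughly as sketched below, while many 29xx models only support a subset of this. Names such as VOICE-TRAFFIC and MARK-VOICE, the ACL number, and the interface are placeholders:

```
! Classify traffic by ACL and re-mark its DSCP value (MQC style)
Switch(config)#class-map match-all VOICE-TRAFFIC
Switch(config-cmap)#match access-group 101

Switch(config)#policy-map MARK-VOICE
Switch(config-pmap)#class VOICE-TRAFFIC
Switch(config-pmap-c)#set dscp ef

! Apply the marking policy inbound on the access port
Switch(config)#interface fastEthernet 0/1
Switch(config-if)#service-policy input MARK-VOICE
```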

When I get the chance, I will write a separate article on that.

Samuel Heinrich
Senior Network Engineer at Alpiq Intec (Basel, Switzerland)
Works in the Basel area (Switzerland) as a Senior Network Engineer with over 10 years of experience in networking and telecommunications.
