priority. Packets that are not classified by priority fall into the normal queue. A priority list is a set of rules that describes how packets should be assigned to priority queues. A priority list might also describe a default priority or the queue size limits of the various priority queues. Packets can be classified by protocol or subprotocol type, incoming interface, packet size, fragments, and access list. Keepalives sourced by the network server are always assigned to the high-priority queue; all other management traffic (such as Enhanced Interior Gateway Routing Protocol [EIGRP] updates) must be classified explicitly.

PQ provides absolute preferential treatment to high-priority traffic, ensuring that mission-critical traffic traversing various WAN links receives priority treatment. In addition, PQ provides a faster response time than other queuing methods. Although you can enable priority output queuing on any interface, it is best suited to low-bandwidth, congested serial interfaces.

When you choose to use PQ, consider that, because lower-priority traffic is often denied bandwidth in favor of higher-priority traffic, using PQ could in the worst case result in lower-priority traffic never being transmitted (the lower-priority traffic class is "starved"). To avoid this problem, you can use traffic shaping to rate-limit the higher-priority traffic. PQ also introduces extra overhead that is acceptable on slow interfaces but may not be acceptable on higher-speed interfaces such as Ethernet. With PQ enabled, the system takes longer to switch packets because each packet is classified in the process-switching path. Furthermore, PQ uses a static configuration that does not adapt readily to changing network conditions.
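To make the starvation risk concrete, here is a minimal Python sketch of strict-priority dispatch, assuming the four queue levels that Cisco PQ uses (high, medium, normal, low). The class and method names are illustrative only, not an IOS interface.

```python
# Minimal strict-priority (PQ-style) dispatcher, for illustration only.
from collections import deque

QUEUE_LEVELS = ["high", "medium", "normal", "low"]

class PriorityQueuing:
    def __init__(self):
        self.queues = {level: deque() for level in QUEUE_LEVELS}

    def enqueue(self, packet, level="normal"):
        # Unclassified packets fall into the normal queue.
        self.queues[level].append(packet)

    def dequeue(self):
        # Always scan from the highest-priority queue downward: a lower
        # queue is served only when every higher queue is empty, so a
        # sustained flow of high-priority packets starves the rest.
        for level in QUEUE_LEVELS:
            if self.queues[level]:
                return self.queues[level].popleft()
        return None  # all queues empty
```

Because dequeue() restarts its scan at the high queue for every packet, lower-priority classes get no guaranteed share, which is exactly why the text recommends shaping the high-priority traffic.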

4.3.5 Round Robin

Round robin refers to an arrangement in which all elements in a group are chosen equally, in some rational order, usually from the top of a list to the bottom and then again from the top, and so on. A simple way to think of round robin is that it is about "taking turns." In round-robin queuing, one packet is taken from each queue, and then the process repeats; the figure illustrates this behavior. If all packets are the same size, all queues share the bandwidth equally. If the packets placed into one queue are larger, that queue receives a larger share of the bandwidth. No queue "starves" under round-robin queuing, because every queue gets an opportunity to dispatch a packet in every round. The limitation of round-robin queuing is its inability to prioritize traffic, as the sketch below shows.
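A minimal Python sketch of the "taking turns" behavior follows; the function name and the sample packet labels are illustrative. Note that every queue is visited each round, so none can starve, but none can be favored either.

```python
# Plain round-robin dispatcher: one packet per non-empty queue per round.
from collections import deque

def round_robin(queues):
    """Yield packets, taking one from each non-empty queue per round."""
    queues = [deque(q) for q in queues]
    while any(queues):
        for q in queues:
            if q:
                yield q.popleft()

# Three queues take turns; no queue is starved, none is prioritized.
packets = list(round_robin([["a1", "a2"], ["b1"], ["c1", "c2", "c3"]]))
# -> ['a1', 'b1', 'c1', 'a2', 'c2', 'c3']
```

Weighted Round Robin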
The weighted round robin (WRR) algorithm adds prioritization to round-robin queuing, as shown in the figure. In WRR, packets are still accessed round-robin style, but queues can be given priorities called "weights." For example, in a single round, four packets from a high-priority class might be dispatched, followed by two from a middle-priority class and then one from a low-priority class.

Some implementations of WRR provide prioritization by dispatching a configurable number of bytes in each round rather than a number of packets. The Cisco custom queuing (CQ) mechanism is an example of this implementation. The figure illustrates the worst-case scenario for this form of WRR, which is governed by two parameters on an interface: the byte count (weight) configured for each queue and the interface MTU. In the example, the router first sends two packets with a total size of 2999 bytes. Because this total is still within the 3000-byte limit, the router may also send the next packet, which is MTU-sized. As a result, the queue receives almost 50 percent more bandwidth in this round than it should. This example shows one drawback of WRR queuing: it does not allocate bandwidth accurately. Because the limit, or weight, of a queue is configured in bytes, the accuracy of WRR queuing depends on the ratio between the byte count and the MTU. If the ratio is too small, WRR queuing does not allocate bandwidth accurately; if the ratio is too large, WRR queuing causes long delays. The sketch that follows reproduces this worst case.
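The following Python sketch models this byte-count dispatch with the text's numbers, assuming a 1500-byte MTU (consistent with the roughly 50 percent overshoot). The function is an illustration of the mechanism, not CQ's actual implementation.

```python
# Byte-count WRR: a queue keeps sending while the bytes it has dispatched
# this round remain below its configured byte count (its weight).
from collections import deque

def wrr_round(queue, byte_count):
    """Dispatch packets from one queue for a single round; return bytes sent."""
    sent = 0
    while queue and sent < byte_count:
        # The limit is checked *before* each send, so the final packet can
        # overshoot the weight by up to one MTU.
        sent += queue.popleft()
    return sent

q = deque([1500, 1499, 1500])         # packet sizes in bytes
print(wrr_round(q, byte_count=3000))  # 4499: almost 50% over the weight
```

After 2999 bytes the queue is still under its 3000-byte weight, so the MTU-sized packet is also sent; that final overshoot is the inaccuracy the text describes.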
4.3.6 Router Queuing Components

Queuing on routers is necessary to accommodate bursts when the arrival rate of packets is greater than the departure rate, usually for one of two reasons: the input interface is faster than the output interface, or the output interface is receiving packets from multiple input interfaces. Initial implementations of queuing used a single FIFO strategy. Better queuing mechanisms were introduced when special requirements made it necessary for routers to differentiate among packets of different importance.

As shown in the figure, queuing on an interface has two components: a software queue, where a queuing mechanism such as PQ or CQ schedules packets, and a hardware queue, the FIFO that feeds the physical interface. The figure also illustrates the actions that must occur before a packet is transmitted.
The Software Queue

The software queuing implementation, as shown in the figure, is optimized for periods when the interface is not congested. The software queuing system is bypassed whenever there is no packet in the software queue and there is room in the hardware queue. The software queue is used only when data must wait to be placed into the hardware queue. The sketch below illustrates this bypass logic.
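Here is a minimal Python sketch of that enqueue decision for the two-stage path. The queue depth and names are hypothetical; a real driver works with a transmit ring and interrupts rather than Python deques.

```python
# Two-stage queuing path: software queue in front of a fixed-length
# hardware (transmit) queue. Sizes and names are illustrative.
from collections import deque

HW_QUEUE_LIMIT = 4        # hypothetical transmit-queue depth
hw_queue = deque()        # hardware FIFO feeding the physical interface
sw_queue = deque()        # software queue where PQ/CQ/WFQ would schedule

def enqueue(packet):
    # Bypass: nothing is waiting in software and hardware has room,
    # so the packet goes straight to the transmit queue.
    if not sw_queue and len(hw_queue) < HW_QUEUE_LIMIT:
        hw_queue.append(packet)
    else:
        # Congestion: hold the packet for the software scheduler.
        sw_queue.append(packet)

def on_transmit_complete():
    # The interface drained a frame; refill the hardware queue from the
    # software queue (FIFO here; a real scheduler picks per its policy).
    if sw_queue and len(hw_queue) < HW_QUEUE_LIMIT:
        hw_queue.append(sw_queue.popleft())
```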
The Hardware Queue

The double-queuing strategy (software and hardware queues) affects the overall queuing results. Software queues serve a valuable purpose: if the hardware queue is too long, it contains a large number of packets scheduled in FIFO fashion, and such a long FIFO hardware queue most likely defeats the purpose of a QoS design that calls for a complex software queuing system (for example, CQ). The hardware queue (transmit queue) is the final interface FIFO queue; it holds frames to be transmitted immediately by the physical interface. The transmit queue ensures that a frame is always available when the interface is ready to transmit traffic, so that link utilization is driven to 100 percent of capacity.

Why use the hardware queue at all? Why not just set its length to one? Doing so would force all packets to go through the software queue and be scheduled one by one onto the interface for transmission. This approach has these drawbacks: the interface can sit idle while it waits for the software scheduler to deliver the next packet, reducing link utilization, and the CPU must handle each packet individually, increasing overhead. The length of the hardware queue is therefore a tradeoff: long enough to keep the link fully utilized, but short enough that its FIFO behavior does not undermine the software queuing policy.