  • The class-mark policy map is applied to the input interface, Ethernet 0/0.
  • On the output interface, three class maps have been created: voice-out, videoconferencing-out, and interactive-out. The voice-out class map matches DSCP EF, the videoconferencing-out class map matches DSCP AF41, and the interactive-out class map matches DSCP AF31. As shown in the figure, the qos-policy policy map will then do the following:
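    The three output class maps described above can be sketched in Cisco IOS configuration as follows. The match criteria come straight from the text; the policy-map body is omitted because its specific actions appear only in the figure, which is not reproduced here.

```
class-map match-all voice-out
 match dscp ef
class-map match-all videoconferencing-out
 match dscp af41
class-map match-all interactive-out
 match dscp af31
!
! The qos-policy policy map would then reference these classes
! (class voice-out, class videoconferencing-out, class interactive-out)
! and apply the queuing actions shown in the figure.
```

    On older IOS releases the match syntax is `match ip dscp`; the `match dscp` form shown here matches both IPv4 and IPv6 traffic.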
    Content 4.3 Introducing Queuing Implementations
    4.3.1 Congestion and Queuing
    Congestion can occur anywhere within a network where speed mismatches (for example, a 1000-Mbps link feeding a 100-Mbps link), aggregation (for example, multiple 100-Mbps links feeding an upstream 100-Mbps link), or confluence (the joining of two or more traffic streams) occurs. The figure illustrates the concept. Congestion-management features control congestion when it occurs. One way that network elements handle an overflow of arriving traffic is to use a queuing algorithm to sort the traffic and then determine some method of prioritizing it onto an output link. Each queuing algorithm was designed to solve a specific network traffic problem and has a particular effect on network performance; many algorithms have been designed to serve different needs. A well-designed queuing algorithm provides bandwidth and delay guarantees to priority traffic.
    Example: Congestion Caused by Speed Mismatch
    Speed mismatches, as shown in the figure, are the most typical cause of congestion in a network. It is possible to have persistent congestion when traffic moves from a high-speed LAN environment (100 or 1000 Mbps) to lower-speed WAN links (1 or 2 Mbps). Speed mismatches are also common in LAN-to-LAN environments when, for example, a 1000-Mbps link feeds into a 100-Mbps link, but in those cases they are transient.
    Example: Congestion Caused by Aggregation
    The second most common source of congestion is points of aggregation in a network, as shown in the figure. Typical points of aggregation occur in WANs when multiple remote sites feed into a central site. In a LAN environment, congestion resulting from aggregation often occurs at the distribution layer, where access layer devices feed traffic to the distribution layer switches.
    Content 4.3 Introducing Queuing Implementations
    4.3.2 Congestion Management - Queuing Algorithms
    Queuing is designed to accommodate temporary congestion on an interface of a network device by storing excess packets in buffers until bandwidth becomes available or until the queue depth is exhausted and packets have to be dropped. Queuing is a congestion-management mechanism that allows you to control congestion by determining the order in which identified packets leave an interface, based on priorities assigned to those packets. Congestion management entails creating queues, assigning packets to those queues based on the classification of the packet, and scheduling the packets in a queue for transmission. Cisco IOS routers support several queuing methods to meet the varying bandwidth, jitter, and delay requirements of different applications. The default mechanism on most interfaces is the very simple first-in, first-out (FIFO) queue. Some traffic types, such as voice and video, have very demanding delay and jitter requirements, so more sophisticated queuing mechanisms must be configured on interfaces that carry voice and video traffic. The figure defines queuing.
    Congestion and Queuing
    Complex queuing generally happens on outbound interfaces only; a router queues packets it sends out an interface. During periods with low traffic loads, when no congestion occurs, packets leave the interface as soon as they arrive. During periods of transmit congestion at the outgoing interface, packets arrive faster than the interface can send them. When you use congestion-management features, packets accumulating at an interface are placed in software queues according to their assigned priority and the queuing mechanism configured for the interface, and are then scheduled for transmission when the hardware buffer of the interface is free to send them. The figure illustrates the process. The router determines the order of packet transmission by controlling which packets go into each queue and how the queues are serviced with respect to each other.
    Queuing Algorithm Introduction
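    The classify-enqueue-schedule cycle described above can be sketched as a toy model. The queue names and the strict-priority service order here are illustrative assumptions, not any specific Cisco mechanism; real queuing methods (WFQ, CBWFQ, LLQ) service queues in more nuanced ways.

```python
from collections import deque

# Software queues, listed in the order the scheduler services them.
QUEUES = {"high": deque(), "normal": deque(), "low": deque()}

def classify(packet):
    """Map a packet to a software queue based on its DSCP marking."""
    return {"EF": "high", "AF41": "normal"}.get(packet.get("dscp"), "low")

def enqueue(packet):
    """Place an arriving packet into its software queue."""
    QUEUES[classify(packet)].append(packet)

def schedule():
    """Hand the next packet to the hardware queue when it has room."""
    for name in ("high", "normal", "low"):   # service order
        if QUEUES[name]:
            return QUEUES[name].popleft()
    return None  # no congestion: nothing is waiting

enqueue({"dscp": "BE", "id": 1})   # best-effort packet arrives first
enqueue({"dscp": "EF", "id": 2})   # voice packet arrives second
first = schedule()  # the EF packet leaves first despite arriving later
```

    The point of the sketch is the separation of concerns the text describes: classification decides which queue a packet joins, while the scheduler decides how queues are serviced with respect to each other.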
    Figure lists some of the key queuing algorithms:

    Content 4.3 Introducing Queuing Implementations
    4.3.3 FIFO
    FIFO is the simplest queuing algorithm. FIFO provides basic store-and-forward capability, as shown in the figure, and is the default queuing algorithm in some instances, thus requiring no configuration. In its simplest form, FIFO queuing (also known as first-come, first-served queuing) involves storing packets when the network is congested and forwarding them in order of arrival when the network is no longer congested. FIFO embodies no concept of priority or classes of traffic and consequently makes no decision about packet priority. There is only one queue; all packets are treated equally and are transmitted in the order in which they arrive. Higher-priority packets are not transmitted faster than lower-priority packets. When FIFO is used, ill-behaved sources can consume all the bandwidth, and bursty sources can cause delays to time-sensitive or important traffic; important traffic can also be dropped because less important traffic has filled the queue. When no other queuing strategies are configured, all interfaces except serial interfaces at E1 (2.048 Mbps) and below use FIFO by default. FIFO, the fastest method of queuing, is effective for links that have little delay and minimal congestion; if your link has very little congestion, FIFO may be the only queuing you need. All individual queues are, in fact, FIFO queues; other queuing methods rely on FIFO as the underlying mechanism for the discrete queues within more complex queuing strategies that support advanced functions such as prioritization.
    Note
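    A minimal FIFO sketch, including the tail-drop behavior the text describes when less important traffic fills the queue. The queue depth of 3 is an arbitrary illustrative value.

```python
from collections import deque

class FifoQueue:
    """A single first-come, first-served queue with a fixed depth."""

    def __init__(self, depth):
        self.depth = depth
        self.packets = deque()

    def enqueue(self, packet):
        """Store the packet, or tail-drop it when the queue is full."""
        if len(self.packets) >= self.depth:
            return False          # dropped: FIFO has no notion of priority
        self.packets.append(packet)
        return True

    def dequeue(self):
        """Transmit strictly in order of arrival."""
        return self.packets.popleft() if self.packets else None

q = FifoQueue(depth=3)
accepted = [q.enqueue(p) for p in ["bulk1", "bulk2", "bulk3", "voice"]]
# The voice packet is dropped even though it is more important,
# because earlier bulk traffic already filled the queue.
```

    This illustrates both weaknesses named above: order of arrival alone decides transmission, and a burst of less important traffic can crowd out time-sensitive packets entirely.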
    Serial interfaces at E1 (2.048 Mbps) and below use weighted fair queuing (WFQ) by default.
    Content 4.3 Introducing Queuing Implementations
    4.3.4 Priority Queuing
    The figure shows PQ, which allows you to prioritize traffic in the network. You configure four traffic priorities. You can define a series of filters based on packet characteristics that cause the router to place traffic into these four queues; the queue with the highest priority is serviced first until it is empty, and then the lower queues are serviced in sequence. During transmission, PQ gives priority queues absolute preferential treatment over low-priority queues; important traffic, given the highest priority, always takes precedence over less important traffic. Packets are classified based on user-specified criteria and placed in one of the four output queues (high, medium, normal, and low) based on the assigned