through non-RSVP clouds.

Example
As an example, Figure illustrates the basic principles of how RSVP performs CAC and bandwidth reservation in a network. In this example, RSVP is enabled on every router interface, and an IntServ-enabled WAN connects three Cisco IP phones to each other and to Cisco Unified CallManager 5.0. Because bandwidth is limited on the WAN links, Cisco Unified CallManager 5.0 uses RSVP to perform CAC, letting RSVP determine whether the bandwidth required for a successful call is available.

An RSVP-enabled voice application wants to reserve 20 kbps of bandwidth for a data stream from IP-Phone 1 to IP-Phone 2. Recall that RSVP does not perform its own routing; instead, it relies on the underlying routing protocols to determine where to carry reservation requests. As routing changes paths to adapt to changes in topology, RSVP adapts the existing reservations to the new paths. RSVP attempts to establish an end-to-end reservation by checking for available bandwidth on all RSVP-enabled routers along the path from IP-Phone 1 to IP-Phone 2. As the RSVP messages progress through the network from Router R1 via R2 to R3, the available RSVP bandwidth on each router interface is decremented by 20 kbps. For voice calls, a reservation must be made in both directions. Because the available bandwidth on all interfaces is sufficient to accept the new data stream, the reservation succeeds and the application is notified.
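As a rough sketch of how such a reservation pool is provisioned, the following interface configuration enables RSVP on a WAN link. The interface name and bandwidth values are illustrative assumptions, not values taken from the figure:

    ! Hypothetical WAN interface on Router R1 (values are examples only)
    interface Serial0/0
     ip rsvp bandwidth 128 24
    ! Allows up to 128 kbps of total RSVP-reservable bandwidth on the link,
    ! with at most 24 kbps per single flow, so a 20-kbps voice reservation
    ! in each direction fits within the pool.

A reservation request that would exceed the configured pool is rejected, which is the CAC decision described above.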
Content 3.3 Selecting an Appropriate QoS Policy Model 3.3.6 The DiffServ Model

The differentiated services (DiffServ) architecture specifies a simple, scalable, and coarse-grained mechanism for classifying and managing network traffic and providing QoS guarantees on modern IP networks. For example, DiffServ can provide low-latency service to critical network traffic such as voice or video while providing simple best-effort service to noncritical services such as web traffic or file transfers. The DiffServ design overcomes the limitations of both the best-effort and IntServ models. The DiffServ model is described in Internet Engineering Task Force (IETF) RFC 2474 and RFC 2475. DiffServ can provide an “almost guaranteed” QoS while still being cost-effective and scalable.

The concept of soft QoS is the basis of the DiffServ model. Recall that IntServ (hard QoS) uses signaling in which the end hosts signal their QoS needs to the network. DiffServ does not use signaling; it works on the provisioned-QoS model, in which network elements are set up to service multiple classes of traffic, each with varying QoS requirements. By classifying flows into aggregates (classes) and providing appropriate QoS for those aggregates, DiffServ avoids the significant complexity, cost, and scalability issues of per-flow treatment. For example, DiffServ can group all TCP flows into a single class and allocate bandwidth for that class, rather than for the individual flows as hard QoS (IntServ) would do. In addition to classifying traffic, DiffServ minimizes the signaling and state-maintenance requirements on each network node.

The hard QoS model (IntServ) provides a rich end-to-end QoS solution, using end-to-end signaling, state maintenance (for each RSVP flow and reservation), and admission control at each network element. This approach consumes significant overhead, which restricts its scalability. DiffServ, on the other hand, is not an end-to-end QoS strategy because it cannot enforce end-to-end guarantees, but it is a more scalable approach to implementing QoS. DiffServ maps many applications into a small set of classes, assigns each class a similar set of QoS behaviors, and enforces and applies QoS mechanisms on a hop-by-hop basis, uniformly applying a global meaning to each traffic class to provide both flexibility and scalability.

Figure shows the key characteristics of the DiffServ model against a network with many nodes to reinforce the concept of per-hop behavior (PHB). DiffServ divides network traffic into classes based on business requirements, and each class can then be assigned a different level of service. As packets traverse the network, each network device identifies the packet class and services the packet according to that class. Many levels of service are possible with DiffServ. For example, voice traffic from IP phones is usually given preferential treatment over all other application traffic, e-mail is generally given best-effort service, and nonbusiness traffic can either be given very poor service or blocked entirely. DiffServ works like a package delivery service: you request (and pay for) a level of service when you send your package, and throughout the delivery network that level of service is recognized, so your package is given either preferential or normal handling, depending on what you requested. As shown in Figure, the DiffServ model has several benefits and some drawbacks.
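To make the class-and-PHB idea concrete, the following minimal sketch marks voice traffic with the Expedited Forwarding (EF) DSCP at the network edge so that every downstream hop can apply the corresponding per-hop behavior. It uses the MQC syntax described in the next section; the class, policy, and interface names are hypothetical:

    ! Hypothetical edge-marking policy (names and interface are assumptions)
    class-map match-any VOICE
     match protocol rtp audio      ! classify voice bearer traffic with NBAR
    !
    policy-map MARK-EDGE
     class VOICE
      set dscp ef                  ! Expedited Forwarding PHB for voice
     class class-default
      set dscp default             ! best effort for all other traffic
    !
    interface FastEthernet0/0
     service-policy input MARK-EDGE

Because the DSCP marking travels with each packet, downstream routers need only a per-class policy that honors the marking; they keep no per-flow state.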
Content 3.4 Using MQC for Implementing QoS 3.4.1 Methods for Implementing QoS Policy

Figure shows the methods that have been used for implementing QoS policies over the years. A few years ago, the only way to implement QoS in a network was to use the command-line interface (CLI) to configure individual QoS policies at each interface. This is a time-consuming and error-prone task that involves cutting and pasting configurations from one interface to another. Cisco introduced the Modular QoS CLI (MQC) to simplify QoS configuration by making configurations modular. MQC provides a building-block approach that uses a
single module repeatedly to apply a policy to multiple interfaces. Cisco AutoQoS is innovative technology that simplifies the challenges of network administration by reducing QoS complexity, deployment time, and cost in enterprise networks. Cisco AutoQoS incorporates value-added intelligence in Cisco IOS Software and Cisco Catalyst software to provision and assist in the management of large-scale QoS deployments.

The first phase, Cisco AutoQoS VoIP, offers straightforward capabilities to automate VoIP deployments for customers who want to deploy IP telephony but lack the expertise and staffing to plan and deploy IP QoS and IP services. The second phase, Cisco AutoQoS Enterprise, extends these capabilities but is supported only on router interfaces. Cisco AutoQoS Enterprise uses NBAR to discover the traffic; after this discovery phase, the AutoQoS process can configure the interface to support up to 10 traffic classes.

Customers can easily configure, manage, and successfully troubleshoot QoS deployments by using the Cisco Router and Security Device Manager (SDM) QoS wizard. The Cisco SDM QoS wizard provides centralized QoS design, administration, and traffic monitoring that scales to large QoS deployments. The table in Figure compares these methods.
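To illustrate the building-block approach, the following sketch defines one policy and attaches it to two WAN interfaces. The class names, interface names, and bandwidth figures are assumptions rather than values from the figure:

    ! Hypothetical MQC policy reused on multiple interfaces
    class-map match-any VOICE
     match ip dscp ef
    class-map match-any CRITICAL-DATA
     match ip dscp af31
    !
    policy-map WAN-EDGE
     class VOICE
      priority 128                 ! low-latency queue, up to 128 kbps
     class CRITICAL-DATA
      bandwidth 256                ! 256 kbps guaranteed during congestion
     class class-default
      fair-queue                   ! fair queuing for remaining traffic
    !
    interface Serial0/0
     service-policy output WAN-EDGE
    interface Serial0/1
     service-policy output WAN-EDGE   ! the same policy module reused

By contrast, Cisco AutoQoS VoIP generates a comparable set of class maps and policy maps from a single interface-level command (auto qos voip), and Cisco AutoQoS Enterprise first runs auto discovery qos to profile the traffic before generating the policy.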
Content 3.4 Using MQC for Implementing QoS 3.4.2 Configuring QoS at the CLI

Cisco does not recommend the legacy CLI method for initially implementing QoS policies because it is time-consuming and prone to errors. Nonetheless, QoS implementation at the CLI remains the choice of some administrators, especially for fine-tuning and adjusting QoS properties. The legacy CLI method of QoS implementation has the following limitations: To implement