[TCI]) that are inserted within an Ethernet frame following the source address field. The TPID field is currently fixed and assigned the value 0x8100. Figure shows the structure of the Ethernet frame. The TCI field is composed of three fields: the 3-bit user priority (CoS) field, the 1-bit Canonical Format Indicator (CFI), and the 12-bit VLAN identifier. The table below lists the standard definitions of the CoS values.

Standard Definitions of CoS Values
CoS 7 (111): Network
CoS 6 (110): Internet
CoS 5 (101): Critical
CoS 4 (100): Flash-override
CoS 3 (011): Flash
CoS 2 (010): Immediate
CoS 1 (001): Priority
CoS 0 (000): Routine

One disadvantage of using CoS markings is that frames lose their CoS markings when transiting a non-802.1Q or non-802.1p link. Trunking with 802.1Q must be enabled before the CoS field even exists. As soon as the packet encounters Layer 3 forwarding, whether by a router or a Layer 3 switch, the old LAN header is discarded and the CoS marking is lost. Therefore, a ubiquitous, permanent marking should be used for network transit. This is typically accomplished by translating the CoS marking into another marker or simply using a different marking mechanism.
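As an illustration of where the CoS bits sit, the following Python sketch (hypothetical helper names, not tied to any particular library) packs and unpacks the 4-byte 802.1Q tag: the fixed TPID of 0x8100 followed by the 16-bit TCI, whose top three bits carry the CoS value.

import struct

TPID_8021Q = 0x8100  # fixed TPID value for 802.1Q tags

def build_dot1q_tag(cos: int, dei: int, vlan_id: int) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID (16 bits) + TCI (16 bits).

    TCI layout: 3-bit user priority (CoS), 1-bit CFI/DEI, 12-bit VLAN ID.
    """
    tci = ((cos & 0x7) << 13) | ((dei & 0x1) << 12) | (vlan_id & 0xFFF)
    return struct.pack("!HH", TPID_8021Q, tci)

def parse_dot1q_tag(tag: bytes):
    """Return (cos, dei, vlan_id) from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag[:4])
    if tpid != TPID_8021Q:
        raise ValueError("not an 802.1Q tag")
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

# Example: tag a frame on VLAN 10 with CoS 5 (Critical, typically voice)
tag = build_dot1q_tag(cos=5, dei=0, vlan_id=10)
print(parse_dot1q_tag(tag))   # (5, 0, 10)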
Classification and Marking in the Enterprise

Before the Internet Engineering Task Force (IETF) defined QoS methods for the network layer, the ITU-T and the Frame Relay Forum (FRF) had already defined standards for link-layer QoS in Frame Relay networks. Frame Relay provides a simple set of QoS mechanisms to ensure a committed information rate (CIR): congestion notifications called forward explicit congestion notification (FECN) and backward explicit congestion notification (BECN), in addition to fragmentation of data frames when voice frames are present, as described in Frame Relay Forum standard FRF.12. Figure shows the structure of a Frame Relay frame.

One component of Frame Relay QoS is packet discard eligibility when congestion is experienced in the network. Frame Relay allows network traffic to be sent at a rate exceeding its CIR. The frames that exceed the committed rate can be marked as discard eligible (DE) at the ingress Frame Relay switch. If congestion occurs in the network, frames marked DE are discarded in preference to frames that are not marked.
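To make the DE marking decision concrete, here is a minimal Python sketch of a single-rate ingress policer under simplified assumptions (illustrative class and parameter names): frames within the committed rate pass unmarked, and excess frames are marked discard eligible rather than dropped.

import time

class IngressPolicer:
    """Simplified single-rate policer: frames within the CIR pass unmarked,
    excess frames are marked discard eligible (DE) instead of being dropped."""

    def __init__(self, cir_bps: float, bc_bytes: float):
        self.cir = cir_bps / 8.0      # committed rate in bytes per second
        self.bc = bc_bytes            # committed burst size in bytes
        self.tokens = bc_bytes
        self.last = time.monotonic()

    def mark(self, frame_len: int) -> bool:
        """Return True if the frame should have its DE bit set."""
        now = time.monotonic()
        # replenish tokens at the committed rate, capped at the burst size
        self.tokens = min(self.bc, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if self.tokens >= frame_len:
            self.tokens -= frame_len
            return False      # conforming traffic: DE bit stays 0
        return True           # exceeding traffic: set the DE bit

policer = IngressPolicer(cir_bps=64_000, bc_bytes=8_000)
de = policer.mark(frame_len=1500)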
Marking in MPLS

When a customer transmits IP packets from one site to another, the IP precedence field (the first three bits of the DSCP field in the header of an IP packet) specifies the CoS. Based on the IP precedence marking, the packet is given the desired treatment, such as guaranteed bandwidth or latency. The MPLS experimental bits comprise a 3-bit field that you can use to map IP precedence into an MPLS label. This allows MPLS-enabled routers to apply QoS indirectly, based on the original IP precedence of the IP packets encapsulated by MPLS, without having to spend resources looking into the IP packet header to examine the IP precedence field. If the service provider network is an MPLS network, the IP precedence bits are copied into the MPLS Experimental (EXP) field at the edge of the network. However, the service provider might want to set the QoS of an MPLS packet to a different value that is determined by the service offering. The MPLS EXP field allows the service provider to provide QoS without overwriting the value in the customer's IP precedence field. The IP header remains available for customer use, and the IP packet marking is not changed while the packet travels through the MPLS network. Figure shows the structure of the MPLS frame.
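The following Python sketch (hypothetical function names) illustrates the edge behavior described above, using the standard MPLS label-stack-entry layout: the IP precedence is read from the packet's ToS byte and copied into the 3-bit EXP field of a 32-bit label stack entry (20-bit label, 3-bit EXP, 1-bit bottom-of-stack flag, 8-bit TTL), while the IP header itself is left unchanged.

def ip_precedence(tos_byte: int) -> int:
    """IP precedence is the top 3 bits of the ToS/DSCP byte."""
    return (tos_byte >> 5) & 0x7

def build_label_entry(label: int, exp: int, bottom_of_stack: bool, ttl: int) -> int:
    """Assemble a 32-bit MPLS label stack entry."""
    return (((label & 0xFFFFF) << 12)
            | ((exp & 0x7) << 9)
            | ((1 if bottom_of_stack else 0) << 8)
            | (ttl & 0xFF))

# Edge router behavior: copy the IP precedence into the MPLS EXP field
tos = 0xB8                      # DSCP EF (46) => ToS byte 0xB8, precedence 5
entry = build_label_entry(label=100, exp=ip_precedence(tos),
                          bottom_of_stack=True, ttl=64)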
4.1.4 DiffServ Model

In contrast to Integrated Services (IntServ), which is a fine-grained, flow-based mechanism, Differentiated Services (DiffServ) is a coarse-grained, class-based mechanism for traffic management. The DiffServ architecture is based on a simple model in which data packets are placed into a limited number of traffic classes, rather than differentiating network traffic based on the requirements of an individual flow. Each router on the network is configured to differentiate traffic based on its class. Each traffic class can be managed differently, ensuring preferential treatment for higher-priority traffic on the network.

DiffServ-aware routers implement per-hop behaviors (PHBs), which define the packet-forwarding properties associated with a class of traffic. Different PHBs may be defined to offer, for example, low-loss, low-latency forwarding or best-effort forwarding. All the traffic flowing through a router that belongs to the same class is referred to as a behavior aggregate (BA). A more detailed discussion of PHBs and BAs occurs later in this lesson. Packets are marked with DSCP values to select a PHB. Within the core of the network, packets are forwarded according to the PHB that is associated with the DSCP. The PHB is an externally observable forwarding behavior applied at a DiffServ-compliant node to a collection of packets with the same DSCP value.

One of the primary principles of DiffServ is that you should mark packets as close to the edge of the network as possible. It is often a difficult and time-consuming task to determine which traffic class a data packet belongs to, so you want to classify the data as few times as possible. By marking the traffic at the network edge, core network devices and other devices along the forwarding path can quickly determine the proper CoS to apply to a given traffic flow.

A key benefit of DiffServ is ease of scalability in comparison to IntServ. DiffServ is used for mission-critical applications and for providing end-to-end QoS. Typically, DiffServ is appropriate for aggregate flows because it performs a relatively coarse level of traffic classification.

DiffServ describes services and allows many user-defined services to be used in a DiffServ-enabled network. Services are defined as QoS requirements and guarantees that are provided to a collection of packets with the same DSCP value. Services are provided to classes. A class can be identified as a single application or multiple applications with similar service needs, or it can be based on source or destination IP addresses, or on a flow.

Provisioning is used to allocate resources to defined traffic classes. Provisioning refers to the set of methods used to configure network devices so that they provide the correct capabilities for a particular traffic class. The idea is for the network to recognize a class without having to receive specific requests from applications. This allows the QoS mechanisms to be applied to other applications that do not have the ability to signal their QoS requirements to the network.
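As a concrete illustration of marking at the edge, the following Python sketch maps traffic classes to standard DSCP values so that core devices only need to look at the DSCP to select the PHB. The classification policy shown (RTP port range, SIP ports) is an assumption for the example; real deployments classify on much richer criteria.

# Standard DSCP values for common per-hop behaviors
DSCP = {
    "EF": 46,        # Expedited Forwarding, e.g., voice
    "AF41": 34,      # Assured Forwarding class 4, e.g., interactive video
    "AF31": 26,      # Assured Forwarding class 3, e.g., signaling / critical data
    "CS1": 8,        # Class Selector 1, scavenger traffic
    "DEFAULT": 0,    # best effort
}

def classify(dst_port: int, protocol: str) -> str:
    """Toy edge classifier with an assumed policy."""
    if protocol == "udp" and 16384 <= dst_port <= 32767:
        return "EF"          # RTP voice range
    if dst_port in (5060, 5061):
        return "AF31"        # SIP signaling
    return "DEFAULT"

def mark(dst_port: int, protocol: str) -> int:
    """Return the DSCP value to write into the packet at the network edge."""
    return DSCP[classify(dst_port, protocol)]

print(mark(20000, "udp"))   # 46 -> EF PHB applied in the core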