Signal Transmission
Signaling is the way that data is transmitted across a medium. This is done using electrical energy to communicate. The information to be transmitted can exist in two formats, analog or digital. Analog data is continuously changing and represents all values within a range. Digital data, on the other hand, consists of discrete states: on or off, a 1 or a 0.

Current-State Encoding
Data is encoded in current-state strategies by the presence or absence of a signal characteristic or state. For example, +5 volts might represent a binary 0, while –5 volts could represent a binary 1. The following encoding techniques use current-state encoding:
State-Transition Encoding
State-transition encoding methods differ from current-state encoding methods in that they use a transition in the signal to represent data, as opposed to encoding data by means of a particular voltage level or state. The following encoding techniques use state-transition encoding:
Encoding Schemes
Bit Synchronization
Asynchronous
Synchronous
Analog signaling:
Analog waves have three characteristics used to describe them:
Analog Signal Modulation
All three of these characteristics can be used to encode data in an analog signal. For example, a higher amplitude could represent a 1 and a lower amplitude could represent a 0. Using frequency, a 0 could be represented with a higher frequency while a 1 could be represented with a lower frequency. There are three main strategies for encoding data using analog signals. Amplitude shift keying and frequency shift keying are both considered current-state encoding schemes because a measurement is made to detect a particular state or key. Phase shift keying, on the other hand, is a state-transition encoding scheme because it relies on the presence or absence of a transition from one phase to another.
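The amplitude and frequency keying strategies described above can be sketched in a few lines of Python. This is an illustrative sample generator, not any real modem implementation; the function names, sample counts, and amplitude/frequency values are all arbitrary choices for demonstration. Following the text, the FSK sketch gives a 0 the higher frequency and a 1 the lower one.

```python
import math

def ask_samples(bits, samples_per_bit=8, high_amp=1.0, low_amp=0.25, cycles_per_bit=2):
    """Amplitude shift keying: the frequency stays fixed, the amplitude encodes the bit."""
    out = []
    for bit in bits:
        amp = high_amp if bit else low_amp
        for n in range(samples_per_bit):
            out.append(amp * math.sin(2 * math.pi * cycles_per_bit * n / samples_per_bit))
    return out

def fsk_samples(bits, samples_per_bit=16, zero_cycles=4, one_cycles=2):
    """Frequency shift keying: the amplitude stays fixed, the frequency encodes the bit
    (as in the text above, a 0 uses the higher frequency and a 1 the lower)."""
    out = []
    for bit in bits:
        cycles = one_cycles if bit else zero_cycles
        for n in range(samples_per_bit):
            out.append(math.sin(2 * math.pi * cycles * n / samples_per_bit))
    return out
```

Both functions are current-state encoders: a receiver measures the amplitude (or counts the cycles) within each bit time and maps the measurement back to a 0 or a 1.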
Broadband vs. Baseband Transmission
Baseband
Broadband
Physical Circuits
Basic Terminology
Analog

Analog Multiplexing
Each voice channel was assigned to a different 4 kHz band. When added together, 24 voice channels totaled 96 kHz, well within the capacity of twisted pair technology at the time. Due to restrictions in vacuum tube technology at the time, 100 kHz was actually the top end; 96 kHz was used as a conservative figure. This is known as frequency division multiplexing.

Line Noise
Because every wire acts like an antenna, any electrical signal will cause electromagnetic interference. Line noise exists uniformly across the circuit. As the analog signal traverses the line, it starts out strong near the sender and diminishes in quality farther from the sender. The voice signal can be amplified easily; however, the amplifier cannot separate the signal from the noise.

TDM
To bring the noise problem under control, the telephone industry developed digital transmission techniques. Digital techniques allowed most of the line noise to be filtered from a signal by using digital signal regenerators. With the advent of the transistor and digital logic, a new system was developed for multiplexing signals: time division multiplexing. A time division multiplexor, or channel bank, assigns short time intervals, or time slots, to each channel in rotation. With the aid of multiplexors, 24 voice channels take turns using the line. The analog telephone remains the standard; however, its signal must be converted to digital form. This function is done by a COder-DECoder, or CODEC. Input is an analog voice signal from 300-3,300 Hz and output is a 64,000 bits per second digital stream.

Why is analog bandwidth limited to 56K?
Loading coils and other filtering equipment limit the frequencies that can be transmitted across a voice grade analog channel. If we were to draw a graph of the typical telephone channel (line), we could make the vertical axis signal power (the more power, the louder the signal) and the horizontal axis the signal frequency.
A voice grade channel would permit signal power between frequencies 300 and 3,400 cycles per second. Below 300 and above 3,400, all signal power would be soaked up or absorbed by the telephone network. (The total frequency range used by the telephone company is 0 to 4,000 cycles per second. Guard bands limit the effective voice range from 300 to 3,400 cycles per second.) The line itself carries 64 Kbps, but the bandwidth from 56 Kbps to 64 Kbps has been allotted to the telephone companies, per the FCC, for signaling purposes. All of the digital encoding techniques listed below rob bits from the 64 Kbps signal for signaling, timing, and other information. You will see this concept of robbed bits come up several times in the rest of the class.

Key Words
Local Loops - A local loop is the wire that runs from your facility to the closest telephone central office. It is generally from 2 to 25 miles of 19 AWG unshielded twisted pair wire. Today local loops are terminated at one point in a facility. It is then the responsibility of the facility owner to route the local loop wires to their final destination. Some local loops are trunk lines that carry more than a single call. Other local loops provide high-speed digital voice access to the telephone company central office. They may be run over glass fiber links rather than the more common copper wire.
Loading Coils - Loading coils are devices placed in analog local loops to assure predictable electrical signaling over the range of frequencies that carries voice communications on the telephone network. Loading coils cannot be used on digital transmission links because they absorb the digital pulses, effectively killing the digital signaling.
Channel Banks - Channel banks are multiplexing/demultiplexing, analog-to-digital and digital-to-analog conversion devices.
A channel bank converts the analog signal from a phone into a DS-0 64 Kbps digital signal that is time division multiplexed (combined) with other phone signals and sent over the telephone network. Typically, digital encoding of a voice analog signal is done using Pulse Code Modulation (PCM).
Digital Switches - In the telephone company central office (the office nearest your facility) the telephone company has a digital switching system. The digital switching system is a very large unmanned branch exchange telephone switch. It takes the digital signals from the channel bank and routes them with other signals across the telephone company wide area backbone network. The digital switch belongs to your Local Exchange Carrier (LEC).
Trunk Circuits - Trunk circuits are typically high-speed (1.544 Mbps and up) digital links between telephone company central office switches. A phone call is switched from the Local Exchange Carrier (LEC) central office switch onto a high-speed digital trunk that carries the call to a long distance carrier (an Inter-eXchange Carrier -- IXC) Point of Presence (POP) location.

Troubleshooting Integrated Services Digital Network (ISDN) (DS0) (BRI)
Components
SPIDs

Switch Types
This refers to variations in the implementation of the signaling protocols by different switch vendors. Three switch types are commonly used in North America:
If the telephone company tells you the switch is 5ESS or DMS-100, ask for the software type. If they say the switch uses National software, then use NI-1 as the switch type.

Centrex
Centrex (central office exchange service) is a service from local telephone companies in the United States in which up-to-date phone facilities at the phone company's central (local) office are offered to business users so that they don't need to purchase their own facilities. The Centrex service effectively partitions part of its own centralized capabilities among its business customers. The customer is spared the expense of having to keep up with fast-moving technology changes (for example, having to continually update their private branch exchange infrastructure) and the phone company has a new set of services to sell. In many cases, Centrex has now replaced the private branch exchange. Effectively, the central office has become a huge branch exchange for all of its local customers. In most cases, Centrex (which is sold by different names in different localities) provides customers with as much if not more control over the services they have than PBX did. In some cases, the phone company places Centrex equipment on the customer premises. Typical Centrex service includes direct inward dialing (DID), sharing of the same system among multiple company locations, and self-managed line allocation and cost-accounting monitoring.
Call Connection Procedures:
Troubleshooting T1/Primary Rate Interface (PRI) (DS1)
The T1 system was designed to carry 24 digitized telephone calls; its capacity is thus divided into 48 channels, 24 in each direction. The 24 channels plus the 8 Kbps framing channel combine for a bandwidth of 1.544 Mbps. In a PRI, 23 of the channels are used as B (bearer) channels and one as the D (signaling) channel.

Physical
CSU/DSU (channel service unit/data service unit)
A specialized DSU device is used to connect to a T3 line, as a T3 uses coaxial cable for its transmission media.

Multiplexing
Although we often think of these channels as flowing across the line together, the bits are actually transmitted across the line one at a time. One byte from the first call is sent, followed by the next, and so forth. This is done using time division multiplexing. Each call is assigned a 1-byte time slot. The transmitting device sends one byte for a channel each time the channel's time slot comes around.

DS1 Frames
24 sixty-four Kbps channels plus one 8 Kbps framing channel. Can be used for voice calls or data or both. Uses multiplexors and inverse multiplexors to combine channels as needed. Multiplexors are typically part of the router or within the PBX. A DS1 frame consists of a framing bit followed by 24 bytes, one for each of the 24 channels. Thus, a frame consists of 193 bits. Eight thousand frames are sent per second, the sampling rate used by the telephone company, giving a total signal rate of 1,544,000 bits per second. This signal is called digital signal level 1 (DS1). T1 and DS1 are often used interchangeably; however, T1 is a physical implementation, while DS1 defines the format of the signal transmitted on a T1 line.

Packaging the DS1 signal – D4 and ESF
The original packaging that was defined for a DS1 signal was called a D4 superframe, and consisted of 12 consecutive frames. The 12 framing (F) bits in a D4 superframe contained the pattern: 1 0 0 0 1 1 0 1 1 1 0 0. Telecommunications equipment locked onto this pattern to locate D4 superframes and maintain alignment. An improved extended superframe (ESF) was adopted later. It is made up of 24 consecutive frames. Its framing bits are used for three purposes:
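The DS1 frame arithmetic described above is easy to verify. This is a back-of-the-envelope check, not production code; the constant names are illustrative.

```python
CHANNELS = 24
BITS_PER_SLOT = 8          # one PCM byte per channel per frame
FRAMING_BITS = 1           # the single F bit that starts each frame
FRAMES_PER_SECOND = 8000   # the telephone company's voice sampling rate

frame_bits = CHANNELS * BITS_PER_SLOT + FRAMING_BITS   # 193 bits per frame
line_rate = frame_bits * FRAMES_PER_SECOND             # 1,544,000 bps: the DS1 rate
channel_rate = BITS_PER_SLOT * FRAMES_PER_SECOND       # 64,000 bps: one DS-0 channel
framing_rate = FRAMING_BITS * FRAMES_PER_SECOND        # 8,000 bps: the framing channel
```

Note how the 64 Kbps DS-0 rate also falls out of the CODEC numbers on their own: 8,000 samples per second times 8 bits per sample.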
Troubleshooting CSU/DSU Alarms:
Telephone company testing
*Note: When a B8ZS code is injected into a test pattern that contains a long string of zeros, the pattern is no longer testing to the full consecutive zero requirement. Circuit elements, such as line repeaters, that are intended to operate with or without B8ZS should be tested without B8ZS.

T3/DS3
Consists of 672 sixty-four Kbps channels, or 28 DS1s. Verio typically uses these as a channelized T3 and sells each of the 28 lines as a separate T1.

Multiplexing Lower Level Signals into a DS3 Signal
Lower level signals can be multiplexed into the payload of a DS3 M-frame in a number of ways. For example, one way is to take the input of 28 complete DS1 signals and send these into a multiplexor. Each consists of 1.544 Mbps, and includes the T1 framing bits. These signals are byte interleaved into a DS3 signal at a multiplexor. This direct multiplexing scheme is called the synchronous DS3 M13 multiplex format.
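The M13 numbers above can be checked the same way as the DS1 numbers. The 28 tributary DS1s do not add up to the full 44.736 Mbps DS3 line rate; the difference is consumed by M-frame overhead and bit stuffing. The constant names here are illustrative.

```python
DS1_RATE = 1_544_000        # bps, including T1 framing bits
DS1_PER_DS3 = 28
DS3_RATE = 44_736_000       # bps, the standard DS3 line rate

payload = DS1_PER_DS3 * DS1_RATE   # 43,232,000 bps of tributary traffic
overhead = DS3_RATE - payload      # 1,504,000 bps of M13 framing and stuffing
channels = DS1_PER_DS3 * 24        # 672 DS-0 voice channels, as stated above
```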
Troubleshooting
See T1/PRI troubleshooting above.

SMDS
Switched Multimegabit Data Service (SMDS) is a connectionless, high-speed digital network service based on cell relay for end-to-end application usage. This allows for a logical progression to ATM if the need arises. Switched means it can be used to reach multiple destinations with a single physical connection. Originally rolled out in December 1991, SMDS allows transport of mixed data, voice, and video on the same network. SMDS provides higher speeds (56 Kbps - 34 Mbps) than Frame Relay or ISDN and is a cross between Frame Relay and ATM. It uses the same 53-byte cell transmission technology as ATM but differs from Frame Relay in that destinations are dynamic (not predefined). This allows data to travel over the least congested route. However, it does provide some of the same benefits as Frame Relay, including:
There are 6 implementations of SMDS that currently exist (that I know of):
1) 1.17 Mbps SIP (SMDS Interface Protocol) - a special (T1) SMDSU (not a CSU/DSU) must be used. Common SMDSUs are Kentrox and Digital Link.
2) 1.536 Mbps DXI (a regular T1 CSU/DSU is used)
3) 4, 10, 16, 25, 34 Mbps - a special (T3) SMDSU is used. Common SMDSUs are Kentrox and Digital Link.
4) T3 DXI (45 Mbps) - I don't know much about this because B.A. doesn't sell it (to my knowledge). I do know that it does not use SIP and a normal T3 CSU/DSU is used.
5) ATM to SMDS (I don't know anything about this. Bell Atlantic is not selling this to my knowledge, however it exists.)
6) 64 Kbps SMDS...not very common.
You can probably skip over the DQDB section; it's not really necessary information, as I've only had the issue come up once and that was several years ago. As you'll see from the document, SIP is Layers 1, 2, and a little bit of 3. There used to be an organization called the SMDS Interest Group (who I think wrote the SMDS standards), but the URL I had for their site is no longer valid and I cannot find a new one.
Simple router config (from ucsc1, but made simpler):

interface Hssi0/1/0
 description Bell Atlantic SMDS CID: 3QCDQ650002
 ip address 130.94.46.3 255.255.255.128
 no ip redirects
 no ip directed-broadcast
 no ip proxy-arp
 ! THIS IS VERY IMPORTANT. WITHOUT IT, THE ROUTER ACTS AS A
 ! PROXY FOR ARP RESPONSES FOR OTHER ROUTERS
 ! AND GIVES ITS OWN HARDWARE ADDR INSTEAD OF THEIRS
 encapsulation smds
 smds address c121.5215.1279
 ! Unique "single-cast" address assigned by telco
 smds multicast ARP e101.2150.2129 130.94.46.0 255.255.255.128
 smds multicast IP e101.2150.2129 130.94.46.0 255.255.255.128
 ! The multicast address e101... is assigned by telco
 ! and is used to "group" the circuits together.
 ! This is what makes ARP work. BOTH IP and ARP lines
 ! must be present in the config
 smds enable-arp
 ! Tells the router to use ARP
 crc 32
 ! CRC is set by telco on the switch; either 16 or 32.
Customer router has the same config (T1s would be on a serial interface though). For some reason in the past, I've had to use "no smds dxi-mode", and that was with a DXI T1 (strange as it looks); otherwise, the interface went up/down. I don't know if that was a bug or not...you might want to check with the guys currently configuring customer routers to see if they have run into that problem more recently. For more information about SMDS, check out Cisco's web site:
http://www.cisco.com/univercd/cc/td/doc/product/software/ios11/rbook/rsmds.htm
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/smds.htm

Troubleshooting

Layer 2 – Data Link Layer

Bridging and Switching
Bridges and switches are data communications devices that operate principally at Layer 2 of the OSI reference model. As such, they are widely referred to as data link layer devices. Bridging and switching occur at the link layer, which controls data flow, handles transmission errors, provides physical (as opposed to logical) addressing, and manages access to the physical medium. Bridges provide these functions by using various link-layer protocols that dictate specific flow control, error handling, addressing, and media-access algorithms. Examples of popular link-layer protocols include Ethernet, Token Ring, and FDDI. Bridges and switches are not complicated devices. They analyze incoming frames, make forwarding decisions based on information contained in the frames, and forward the frames toward the destination. In some cases, such as source-route bridging, the entire path to the destination is contained in each frame. In other cases, such as transparent bridging, frames are forwarded one hop at a time toward the destination. Upper-layer protocol transparency is a primary advantage of both bridging and switching. Because both device types operate at the link layer, they are not required to examine upper-layer information.
This means that they can rapidly forward traffic representing any network-layer protocol. Bridges are capable of filtering frames based on any Layer 2 fields. A bridge, for example, can be programmed to reject (not forward) all frames sourced from a particular network. Because link-layer information often includes a reference to an upper-layer protocol, bridges usually can filter on this parameter. Furthermore, filters can be helpful in dealing with unnecessary broadcast and multicast packets. By dividing large networks into self-contained units, bridges and switches provide several advantages. Because only a certain percentage of traffic is forwarded, a bridge or switch diminishes the traffic experienced by devices on all connected segments. The bridge or switch will act as a firewall for some potentially damaging network errors, and both accommodate communication between a larger number of devices than would be supported on any single LAN connected to the bridge. Bridges and switches extend the effective length of a LAN, permitting the attachment of distant stations that were not previously possible. Although bridges and switches share most relevant attributes, several distinctions differentiate these technologies. Switches are significantly faster because they switch in hardware, while bridges switch in software. Switches can interconnect LANs of unlike bandwidth; a 10-Mbps Ethernet LAN and a 100-Mbps Ethernet LAN, for example, can be connected using a switch. Switches also can support higher port densities than bridges. Some switches support cut-through switching, which reduces latency and delays in the network, while bridges support only store-and-forward traffic switching. Finally, switches reduce collisions on network segments because they provide dedicated bandwidth to each network segment.
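The transparent bridging behavior described above (learn from the source address, forward or flood based on the destination) can be sketched in a few lines. This is a teaching sketch, not a real bridge: the class and its return strings are invented for illustration, and real devices also age table entries and run spanning tree.

```python
class LearningBridge:
    """Minimal transparent bridge: learn source MACs, then forward, filter, or flood."""

    def __init__(self):
        self.table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port      # learn which port the sender lives on
        out = self.table.get(dst_mac)
        if out is None:
            return "flood"                 # unknown destination: send out every port but in_port
        if out == in_port:
            return "filter"                # destination is on the same segment; drop
        return f"forward:{out}"            # known destination on another port
```

The first frame toward an unknown station floods; once that station replies, its port is learned and later frames are forwarded on the one correct port only, which is what shrinks traffic on the other segments.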
Frame Relay
Frame relay is a packet-switching protocol based on X.25 and ISDN standards. Unlike X.25 however, which assumed low speed, error-prone lines and had to perform error correction, frame relay assumes error-free lines. By leaving the error correction and flow control functions to the end points (customer premise equipment), frame relay has lower overhead and can move variable-sized data packets at much higher rates. Each location gains access to the frame relay network through a Frame Relay Access Device (FRAD). A router with frame relay capability is one example. The FRAD is connected to the nearest carrier point-of-presence (POP) through an access link, usually a leased line. A port on the edge switch provides entry into the frame relay network. FRADs assemble the data to be sent between locations into variable-sized frame relay frames, like putting a letter in an envelope. Each frame contains the address of the target site, which is used to direct the frame through the network to its proper destination. Once the frame enters the shared network cloud or backbone, any number of networking technologies can be employed to carry it. The path defined between the source and the destination sites is known as a virtual circuit. While a virtual circuit defines a path between two sites, no backbone bandwidth is actually allocated to that path until the devices need it. Frame relay supports both permanent and switched virtual circuits. A Permanent Virtual Circuit (PVC) is a logical point-to-point circuit between sites through the public frame relay cloud. PVCs are permanent in that they are not set up and torn down with each session. They may exist for weeks, months or years, and have assigned end points which do not change. The PVC is available for transmitting and receiving all the time and, in that regard, is analogous to a leased line. In contrast, a Switched Virtual Circuit (SVC) is analogous to a dial-up connection.
It is a duplex circuit, established on demand, between two points. Existing only for the duration of the session, it is set up and torn down like a telephone call. FRADs which support SVCs perform the call establishment procedures. Currently, all public frame relay service providers offer PVCs, while only a very small number offer SVCs. By supporting several PVCs simultaneously, frame relay can directly connect multiple sites, through a single physical connection. (In contrast, a leased line network would require multiple physical connections, one for each site.) A Data Link Connection Identifier (DLCI), assigned by the service provider, identifies each PVC. A header in each frame contains the DLCI, indicating which virtual circuit the frame should use.
The real benefit of frame relay comes from its ability to dynamically allocate bandwidth and handle bursts of peak traffic. When a particular PVC is not using backbone bandwidth it is "up for grabs" by another. When purchasing PVCs, the bandwidth or Committed Information Rate (CIR) must be specified. The CIR is the average throughput the carrier guarantees to be always available for a particular PVC. A device can burst up to the Committed Burst Information Rate (CBIR) and still expect the data to get through. The duration of a burst transmission should be short, less than three or four seconds. If long bursts persist, then a higher CIR should be purchased.
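The CIR and burst behavior above can be sketched as a simple per-interval accounting rule. Note this is a simplification for illustration: the function name and return strings are invented, CBIR is this document's term (carriers more often express the same idea with committed and excess burst sizes, Bc and Be), and real switches meter traffic over a rolling committed time interval rather than a fixed one-second window.

```python
def classify_burst(bits_in_interval, cir, cbir, interval_seconds=1.0):
    """Rough frame relay policing sketch.

    Under CIR: deliver normally. Between CIR and CBIR: deliver, but the
    frames are marked Discard Eligible. Above CBIR: discard.
    """
    committed = cir * interval_seconds
    burst_limit = cbir * interval_seconds
    if bits_in_interval <= committed:
        return "deliver"
    if bits_in_interval <= burst_limit:
        return "deliver, DE bit set"
    return "discard"
```

So a site that bought a 64 Kbps CIR with a 128 Kbps burst rate can push 100 Kbps through for a short spell, at the cost of its excess frames being the first dropped if the cloud congests.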
The frame relay network does try to police itself and keep congestion and thus packet loss down. It can do this in two ways. It can try to control the flow of packets with Forward Explicit Congestion Notification (FECN), which is a bit set in a packet to notify a receiving interface device that it should initiate congestion avoidance procedures. Backward Explicit Congestion Notification (BECN) is a bit set to notify a sending device to stop sending frames because congestion avoidance procedures are being initiated.
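The DLCI and the FECN, BECN, and discard eligibility bits all live in the standard two-byte frame relay address field, so a decoder for them is short. This sketch follows the common Q.922 two-byte layout (DLCI split 6 bits and 4 bits across the octets); the function name and dictionary keys are illustrative.

```python
def parse_fr_header(b0, b1):
    """Parse the standard 2-byte frame relay address field.

    Octet 1: DLCI high 6 bits, C/R bit, EA=0.
    Octet 2: DLCI low 4 bits, FECN, BECN, DE, EA=1.
    """
    dlci = (((b0 >> 2) & 0x3F) << 4) | ((b1 >> 4) & 0x0F)
    return {
        "dlci": dlci,            # which virtual circuit this frame rides
        "fecn": (b1 >> 3) & 1,   # forward congestion notice, toward the receiver
        "becn": (b1 >> 2) & 1,   # backward congestion notice, toward the sender
        "de":   (b1 >> 1) & 1,   # discard eligible: drop me first under congestion
    }
```

For example, the two octets 0x18 0x4B decode to DLCI 100 with FECN and DE set, the picture of a frame that passed through a congested switch while bursting above its CIR.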
A second way to inform the end devices that there is congestion is through the Local Management Interface (LMI). This specification describes special management frames sent to access devices. A Discard Eligibility bit (DE bit) is set by the public frame relay network in packets the device is attempting to transmit above the CIR or the CBIR for any length of time. It will also be set if there is high network congestion. This means that if data must be discarded, packets with the DE bit set should be dropped before other packets. Notice that the network itself has no way to enforce congestion flow control. It is up to the end device to support and obey these codes. When all is said and done, the frame travels to its destination where it is disassembled by the receiving FRAD, and data is passed to the user.

X.25
There are major differences between frame relay and X.25 data networks. See the comparison below for a brief summary:

X.25: X.25 has link and packet protocol levels. The link level of X.25 consists of a data link protocol called LAPB.
Frame relay: Frame relay is a simple data link protocol. Frame relay provides basic data transfer that doesn't guarantee reliable delivery of data.

X.25: X.25 LAPB information frames are numbered and acknowledged.
Frame relay: Frame relay frames are not numbered or acknowledged.

X.25: Circuits are defined at the packet layer, which runs on top of the LAPB data link layer. Packets are numbered and acknowledged.
Frame relay: Circuits are identified by an address field in the frame header.

X.25: There are complex rules that govern the flow of data across an X.25 interface. These rules often interrupt and impede the flow of data.
Frame relay: Data is packaged into simple frame relay frames and transmitted toward its destination. Data can be sent across a frame relay network whenever there is bandwidth available to carry it.

ATM
ATM is a cell-switching and multiplexing technology that combines the benefits of circuit switching (guaranteed capacity and constant transmission delay) with those of packet switching (flexibility and efficiency for intermittent traffic). It provides scalable bandwidth from a few megabits per second (Mbps) to many gigabits per second (Gbps). ATM is a layered architecture allowing multiple services, like voice, data and video, to be mixed over the network. Three lower level layers have been defined to implement the features of ATM. The Adaptation layer assures the appropriate service characteristics and divides all types of data into the 48-byte payload that will make up the ATM cell. The ATM layer takes the data to be sent and adds the 5-byte header information that assures the cell is sent on the right connection. The Physical layer defines the electrical characteristics and network interfaces. This layer "puts the bits on the wire."
ATM is not tied to a specific type of physical transport. Three types of ATM services exist: permanent virtual circuits (PVC), switched virtual circuits (SVC), and connectionless service (which is similar to SMDS). A PVC allows direct connectivity between sites. In this way, a PVC is similar to a leased line. Among its advantages, a PVC guarantees availability of a connection and does not require call setup procedures between switches. Disadvantages of PVCs include static connectivity and manual setup. An SVC is created and released dynamically and remains in use only as long as data is being transferred. In this sense, it is similar to a telephone call. Dynamic call control requires a signaling protocol between the ATM endpoint and the ATM switch. The advantages of SVCs include connection flexibility and call setup that can be handled automatically by a networking device. Disadvantages include the extra time and overhead required to set up the connection. ATM networks are fundamentally connection oriented, which means that a virtual channel (VC) must be set up across the ATM network prior to any data transfer. (A virtual channel is roughly equivalent to a virtual circuit.) Two types of ATM connections exist: virtual paths, which are identified by virtual path identifiers (VPIs), and virtual channels, which are identified by the combination of a VPI and a virtual channel identifier (VCI). A virtual path is a bundle of virtual channels, all of which are switched transparently across the ATM network on the basis of the common VPI. All VCIs and VPIs, however, have only local significance across a particular link and are remapped, as appropriate, at each switch. Thus, ATM uses a "cloud" system almost exactly like that of frame relay. Unlike frame relay, however, ATM uses a 53-byte fixed cell length, DLCIs are replaced by VPIs and VCIs, and ATM offers a guaranteed service level.
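The fixed 48-byte payload and the VPI/VCI header fields described above can both be shown concretely. This is an illustrative sketch, not a real AAL implementation: the function names are invented, the header builder follows the standard UNI layout (4-bit GFC, 8-bit VPI, 16-bit VCI, 3-bit PT, 1-bit CLP) but omits the fifth header byte (the HEC checksum), and real adaptation layers add their own trailers before padding.

```python
def build_uni_header(vpi, vci, pt=0, clp=0, gfc=0):
    """Assemble the first 4 bytes of an ATM UNI cell header (HEC byte omitted)."""
    b0 = (gfc << 4) | (vpi >> 4)
    b1 = ((vpi & 0x0F) << 4) | (vci >> 12)
    b2 = (vci >> 4) & 0xFF
    b3 = ((vci & 0x0F) << 4) | (pt << 1) | clp
    return bytes([b0, b1, b2, b3])

def segment(data, payload_size=48, pad=b"\x00"):
    """Adaptation-layer-style segmentation: chop data into 48-byte payloads,
    zero-padding the final partial payload."""
    cells = []
    for i in range(0, len(data), payload_size):
        chunk = data[i:i + payload_size]
        cells.append(chunk + pad * (payload_size - len(chunk)))
    return cells
```

A 100-byte message thus becomes three 48-byte payloads, each prefixed with a 5-byte header to form a 53-byte cell; because VPI/VCI values have only local significance, each switch along the path rewrites these header fields as it forwards the cell.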
HDLC
One of the oldest data communications protocols in use today is IBM's Synchronous Data Link Control (SDLC). SDLC defined rules for transmitting data across a digital line and was used for long distance communications between terminals and computers. IBM submitted SDLC to the standards organizations, and they revised and generalized it into the High-Level Data Link Control (HDLC) protocol. HDLC is the basis of a family of related protocols.
LAPD Protocol - Belongs to the High-level Data Link Control (HDLC) family of protocols Three types of HDLC/LAPD frames:
PPP – Point-to-Point Protocol
Background
The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for the assignment and management of IP addresses, asynchronous (start/stop) and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for such capabilities as network-layer address negotiation and data-compression negotiation. PPP supports these functions by providing an extensible Link Control Protocol (LCP) and a family of Network Control Protocols (NCPs) to negotiate optional configuration parameters and facilities. In addition to IP, PPP supports other protocols, including Novell's Internetwork Packet Exchange (IPX) and DECnet. This section provides a summary of PPP's basic protocol elements and operations. PPP is capable of operating across any DTE/DCE interface. The only absolute requirement imposed by PPP is the provision of a duplex circuit, either dedicated or switched, that can operate in either an asynchronous or synchronous bit-serial mode, transparent to PPP link-layer frames. PPP does not impose any restrictions regarding transmission rate other than those imposed by the particular DTE/DCE interface in use.
PPP Components
PPP provides a method for transmitting datagrams over serial point-to-point links. PPP contains three main components:
General Operation
To establish communications over a point-to-point link, the originating PPP first sends LCP frames to configure and (optionally) test the data link. After the link has been established and optional facilities have been negotiated as needed by the LCP, the originating PPP sends NCP frames to choose and configure one or more network-layer protocols. When each of the chosen network-layer protocols has been configured, packets from each network-layer protocol can be sent over the link. The link will remain configured for communications until explicit LCP or NCP frames close the link, or until some external event occurs (for example, an inactivity timer expires or a user intervenes).
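The frame that carries those LCP and NCP exchanges has a fixed HDLC-style layout: flag, address, control, a 2-byte protocol field, the payload, and a frame check sequence. This sketch pulls those fields apart; the function name is invented, and it deliberately ignores two real-world details: PPP byte-stuffs any 0x7E inside the payload, and address/control field compression may omit the 0xFF 0x03 bytes.

```python
def parse_ppp_frame(frame):
    """Split a PPP frame into (protocol, payload, fcs).

    Assumes no address/control compression, no byte stuffing,
    and a 16-bit FCS.
    """
    assert frame[0] == 0x7E and frame[-1] == 0x7E, "missing flag bytes"
    assert frame[1] == 0xFF and frame[2] == 0x03, "standard address/control"
    protocol = (frame[3] << 8) | frame[4]   # e.g. 0xC021 = LCP, 0x0021 = IP
    payload = frame[5:-3]
    fcs = (frame[-3] << 8) | frame[-2]
    return protocol, payload, fcs
```

The 2-byte protocol field is what lets PPP multiplex network protocols over one link: an LCP negotiation frame, an IPCP frame, and an IP datagram all travel in the same frame format, distinguished only by that field.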
The following descriptions summarize the PPP frame fields illustrated in Figure 13-1:
PPP Link-Control Protocol
The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection. LCP goes through four distinct phases:
Three classes of LCP frames exist. Link-establishment frames are used to establish and configure a link. Link-termination frames are used to terminate a link, while link-maintenance frames are used to manage and debug a link. These frames are used to accomplish the work of each of the LCP phases.

PPP Multilink
Last modified: August 08 2004.