Content Overview

Troubleshooting networks is more important than ever. As time goes on, services continue to be added to networks, and each added service brings more variables. This adds to the complexity of both the network and the task of troubleshooting it. Organizations increasingly depend on network administrators and network engineers having strong troubleshooting skills.

Troubleshooting begins with a methodology that breaks the process down into manageable pieces. This permits a systematic approach, minimizes confusion, and cuts down on time otherwise wasted on trial-and-error troubleshooting. Network engineers, administrators, and support personnel realize that troubleshooting is a process that takes the greatest percentage of their time. One of the primary goals of this module is to present efficient troubleshooting techniques that shorten overall troubleshooting time when working in a production environment.

Two extreme approaches to troubleshooting almost always result in disappointment, delay, or failure. On one extreme is the theorist, or rocket scientist, approach. On the other is the practical, or caveman, approach. Since both of these approaches are extremes, the better approach is somewhere in the middle, using elements of both.

The rocket scientist analyzes and re-analyzes the situation until the exact root cause of the problem has been identified and corrected with surgical precision. This sometimes requires taking a high-end protocol analyzer and collecting a huge sample, possibly megabytes, of the network traffic while the problem is present. The sample is then inspected in minute detail. While this process is fairly reliable, few companies can afford to have their networks down for the hours, or days, it can take for this exhaustive analysis.

The caveman’s first instinct is to start swapping cards, cables, hardware, and software until, miraculously, the network begins operating again. This does not mean that the network is working properly, just that it is operating. Unfortunately, the troubleshooting section in some manuals actually recommends caveman-style procedures as a way to avoid providing more technical information. While this approach may achieve a change in symptoms faster, it is not very reliable, and the root cause of the problem may still be present. In fact, the parts used for swapping may include marginal or failed parts swapped out during prior troubleshooting episodes.

Analyze the network as a whole rather than in a piecemeal fashion. One technician following a logical sequence will almost always be more successful than a gang of technicians, each with their own theories and methods for troubleshooting.
Content 2.1 Using a Layered Architectural Model to Describe Data Flow
2.1.1 Encapsulating data

Logical networking models separate network functionality into modular layers. These modular layers are applied to the physical network to isolate network problems and even to create divisions of labor. For example, if the symptoms of a communications problem suggest a physical connection problem, the telephone company service person can focus on troubleshooting the T1 circuit that operates at the physical layer. The repair person does not have to know anything about TCP/IP, which operates at the network layer, or attempt to make changes to devices operating outside the realm of the suspected logical layer. The repair person can concentrate on the physical circuit. If it functions properly, then either the repair person or a different specialist looks at areas in another layer that could be causing the problem.

The Open Systems Interconnection (OSI) model provides a common language for network engineers. After examining systematic approaches, documentation, and network architectures, it becomes clear that the OSI model is pervasive in network troubleshooting. The model allows troubleshooting to be described in a structured fashion, and problems are typically described in terms of a given OSI model layer. At this stage, an intimate familiarity with the model is assumed, but a quick look at it helps clarify its role in troubleshooting methodology.

The OSI reference model describes how information from a software application in one computer moves through a network medium to a software application in another computer. The OSI reference model is a conceptual model composed of seven layers, each specifying particular network functions. The model was developed by the International Organization for Standardization (ISO) in 1984, and it is now considered the primary architectural model for intercomputer communications. The OSI model divides the tasks involved with moving information between networked computers into seven smaller, more manageable task groups. A task, or group of tasks, is then assigned to each of the seven OSI layers. Each layer is reasonably self-contained, so that the tasks assigned to it can be implemented independently. This enables the solutions offered by one layer to be updated without adversely affecting the other layers. The figure details the seven layers of the Open Systems Interconnection reference model.

The OSI model provides a logical framework and a common language used by network engineers to articulate network scenarios. The Layer 1 through Layer 7 terminology is so common that most engineers do not think twice about it anymore. The upper layers (5-7) of the OSI model deal with application issues and generally are implemented only in software. The application layer is closest to the end user; both users and application layer processes interact with software applications that contain a communications component. The lower layers (1-4) of the OSI model handle data-transport issues. The physical layer and data link layer are implemented in hardware and software; the other lower layers generally are implemented only in software. The physical layer is closest to the physical network medium, such as the network cabling, and is responsible for actually placing information on the medium.
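As a quick reference, the following Python sketch (purely illustrative, not part of any networking software) tabulates the seven layers and the typical implementation noted above:

    # Illustrative only: the seven OSI layers, numbered the way engineers
    # cite them ("a Layer 3 problem"), with the typical implementation
    # described above (upper layers in software, the physical and data
    # link layers in hardware and software).
    OSI_LAYERS = {
        7: ("application", "software"),
        6: ("presentation", "software"),
        5: ("session", "software"),
        4: ("transport", "software"),
        3: ("network", "software"),
        2: ("data link", "hardware/software"),
        1: ("physical", "hardware/software"),
    }

    for number in sorted(OSI_LAYERS, reverse=True):
        name, implementation = OSI_LAYERS[number]
        print(f"Layer {number}: {name} ({implementation})")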
When sending data from an application in one host to an application in a second host, the network software on the source host takes data from the application and converts it as needed for transmission over a physical network. The process involves a series of encapsulation steps:
- The application data is prepared and handed to the transport layer.
- The transport layer packages the data into segments for end-to-end transport.
- The network layer places each segment in a packet whose header carries the logical source and destination addresses.
- The data link layer wraps each packet in a frame with a header and trailer for delivery across the local medium.
- The frame is converted into a pattern of bits for transmission.
The data is now ready to travel over the physical medium as bits. The encapsulation process as a whole represents the initial stage in transferring data between two end systems.
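A minimal Python sketch of these steps, using made-up header labels rather than real protocol formats, might look like this:

    # Toy illustration of encapsulation: each layer wraps the data from
    # the layer above with its own header (and, at the data link layer,
    # a trailer). The header labels are invented for readability and do
    # not reflect real TCP/IP or Ethernet formats.
    def encapsulate(app_data: bytes) -> bytes:
        segment = b"TCP-HDR|" + app_data           # transport layer: segment
        packet = b"IP-HDR|" + segment              # network layer: packet
        frame = b"ETH-HDR|" + packet + b"|FCS"     # data link layer: frame
        return frame                               # handed down for transmission as bits

    print(encapsulate(b"GET /index.html"))
    # b'ETH-HDR|IP-HDR|TCP-HDR|GET /index.html|FCS'

Each layer treats everything it receives from the layer above as opaque payload, which is why one layer can be updated without affecting the others.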
Content 2.1 Using a Layered Architectural Model to Describe Data Flow
2.1.2 Bits on the physical medium

The Ethernet receiver derives the clock rate from the incoming data stream. Using a direct signal encoding of 0 volts for a logic 0 value and 5 volts for a logic 1 value could lead to timing problems. Specifically, a long string of 1s or 0s could cause the receiver to lose synchronization with the data. Further, the recipient would be unable to tell the difference between an idle sender (0 voltage) and a string of 0s (again, 0 voltage). The solution to this dilemma is found in the Ethernet encoding scheme. Rather than transmitting the logic level directly, Manchester encoding is used. With this technique, one transition is guaranteed for each bit cycle: a binary 1 is represented by a change of amplitude from low to high during the middle of a bit time, while a binary 0 is represented by a change of amplitude from high to low during the middle of a bit time. However, the trade-off for this synchronization is bandwidth: because the signal can change state twice during each bit time, Manchester encoding requires a signaling rate twice the data rate.
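The guaranteed mid-bit transition is easy to see in a short sketch. The following Python function (illustrative, not part of any real driver) maps each bit to a pair of half-bit signal levels using the convention described above:

    # Minimal sketch of the Manchester encoding convention described
    # above: each bit time is split into two halves, and the level
    # always changes in the middle of the bit time, so the receiver
    # can recover the clock from the data stream itself.
    def manchester_encode(bits):
        """Map each bit to two half-bit levels (0 = low, 1 = high).

        A binary 1 is a low-to-high transition at mid-bit; a binary 0
        is a high-to-low transition at mid-bit.
        """
        halves = []
        for b in bits:
            halves.extend((0, 1) if b == 1 else (1, 0))
        return halves

    # Even a long run of identical bits produces a level change every
    # bit time, so the receiver never loses synchronization:
    print(manchester_encode([0, 0, 0, 0]))  # [1, 0, 1, 0, 1, 0, 1, 0]
    print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]

The output also makes the bandwidth trade-off visible: the encoded signal carries two level intervals for every data bit, which is why the signaling rate is twice the data rate.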