the switch. Latency is directly related to the
configured switching process and volume of traffic. Latency is
measured in fractions of a second. With networking devices
operating at incredibly high speeds, every additional
nanosecond of latency adversely affects network performance.
Content 4.2 Introduction to LAN Switching
4.2.7 Layer 2 and Layer 3 switching
Switching is the process of receiving an incoming frame on one
interface and delivering that frame out another interface.
Routers use Layer 3 switching to route a packet. Switches use
Layer 2 switching to forward frames. The difference between
Layer 2 and Layer 3
switching is the type of information inside the frame that is
used to determine the correct output interface. Layer 2
switching is based on MAC address information. Layer 3
switching is based on network layer addresses or IP addresses.
Layer 2 switching looks at a destination MAC address in the
frame header and forwards the frame to the appropriate
interface or port based on the MAC address in the switching
table. The switching table is contained in Content Addressable
Memory (CAM). If the Layer 2 switch does not know which port
leads to the destination, it floods the frame out all ports
except the one on which it arrived. When a reply is returned,
the switch records the new address in the CAM. Layer 3
switching is a function of the
network layer. The Layer 3 header information is examined and
the packet is forwarded based on the IP address. Traffic flow
in a switched or flat network is inherently different from the
traffic flow in a routed or hierarchical network. Hierarchical
networks offer more flexible traffic flow than flat networks.
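The learn-and-forward behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation; the port numbers and MAC strings are made up for the example:

```python
# Minimal sketch of Layer 2 forwarding with a MAC address table.
# Port numbers and MAC address strings are illustrative only.

class Layer2Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port (the CAM table)

    def receive(self, src_mac, dst_mac, in_port):
        # Learn: record the source MAC against the incoming port.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes out one port; an unknown
        # destination is flooded out every port except the one the
        # frame arrived on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = Layer2Switch(4)
print(sw.receive("AA", "BB", 0))  # BB unknown: flood ports [1, 2, 3]
print(sw.receive("BB", "AA", 2))  # AA was learned on port 0: [0]
```

Once the reply from "BB" arrives, both stations are in the table and all further traffic between them uses exactly one output port, which is the confinement of traffic that makes a switched LAN efficient.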
4.2.8 Symmetric and asymmetric switching
LAN switching may
be classified as symmetric or asymmetric based on the way in
which bandwidth is allocated to the switch ports. A symmetric
switch provides switched connections between ports with the
same bandwidth. An asymmetric LAN switch provides switched
connections between ports of unlike bandwidth, such as a
combination of 10 Mbps and 100 Mbps ports. Asymmetric switching
enables more bandwidth to be dedicated to the server switch
port in order to prevent a bottleneck. This allows smoother
traffic flows where multiple clients are communicating with a
server at the same time. Memory buffering is required on an
asymmetric switch. Buffers hold frames so that they can pass
intact between ports that run at different data rates.
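A little arithmetic shows why buffering is unavoidable on an asymmetric switch. The rates and burst length below are illustrative assumptions, chosen to match the 10/100 Mbps example above:

```python
# Rough arithmetic for asymmetric switching: a server on a
# 100 Mbps port bursts toward a client on a 10 Mbps port.
# All figures are illustrative assumptions.

in_rate_bps = 100_000_000   # server port
out_rate_bps = 10_000_000   # client port
burst_seconds = 0.01        # a 10 ms burst from the server

bits_in = in_rate_bps * burst_seconds    # arrives at 100 Mbps
bits_out = out_rate_bps * burst_seconds  # drains at only 10 Mbps
backlog_bytes = (bits_in - bits_out) / 8
print(f"buffer backlog after the burst: {backlog_bytes:.0f} bytes")
```

During the burst the switch takes in ten times more data than the slower port can send, so the difference must sit in the memory buffer until the output port catches up.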
4.2.9 Memory buffering
An Ethernet switch may use a buffering technique to
store and forward frames. Buffering may also be used when the
destination port is busy. The area of memory where the switch
stores the data is called the memory buffer. This memory
buffer can use two methods for forwarding frames, port-based
memory buffering and shared memory buffering. In port-based
memory buffering frames are stored in queues that are linked to
specific incoming ports. A frame is transmitted to the outgoing
port only when all the frames ahead of it in the queue have
been successfully transmitted. It is possible for a single
frame to delay the transmission of all the frames in memory
because of a busy destination port. This delay occurs even if
the other frames could be transmitted to open destination
ports. Shared memory buffering deposits all frames into a
common memory buffer which all the ports on the switch share.
The amount of buffer memory required by a port is dynamically
allocated. The frames in the buffer are linked dynamically to
the transmit port. This allows a frame to be received on one
port and then transmitted on another port without moving it to
a different queue. The switch keeps a map of frame-to-port
links showing where each frame needs to be transmitted. The map
link is cleared after the frame has been successfully
transmitted. The memory buffer is shared. The number of frames
stored in the buffer is restricted by the size of the entire
memory buffer, and not limited to a single port buffer. This
permits larger frames to be transmitted with fewer dropped
frames. This is important in asymmetric switching, where
frames are exchanged between ports of different data rates.
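The contrast between the two buffering methods can be sketched as a toy simulation. This is a simplification under assumed conditions (one busy port, three waiting frames), not a model of real switch hardware:

```python
from collections import deque

# Sketch of head-of-line blocking with port-based buffering.
# Each frame is (destination_port, name); port 1 is busy.
busy_ports = {1}
frames = [(1, "f1"), (2, "f2"), (3, "f3")]

# Port-based buffering: frames leave the queue strictly in order,
# so f2 and f3 wait behind f1 even though ports 2 and 3 are free.
queue = deque(frames)
sent_port_based = []
while queue and queue[0][0] not in busy_ports:
    sent_port_based.append(queue.popleft()[1])
print(sent_port_based)  # [] -- nothing moves while f1 is blocked

# Shared memory buffering: each frame is linked dynamically to its
# transmit port, so f2 and f3 go out around the blocked frame.
sent_shared = [name for port, name in frames if port not in busy_ports]
print(sent_shared)  # ['f2', 'f3']
```

The first result is the delay described above for port-based buffering: one frame waiting on a busy port holds back every frame behind it. The shared buffer avoids that because the frame-to-port map, not queue position, decides what transmits next.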
4.2.10 Two switching methods
The following two switching
modes are available to forward frames:
- Store-and-forward – The entire frame is received
before any forwarding takes place. The destination and source
addresses are read and filters are applied before the frame is
forwarded. Latency occurs while the frame is being received.
Latency is greater with larger frames because the entire frame
must be received before the switching process begins. The
switch is able to check the entire frame for errors, which
allows more error detection.
- Cut-through – The
frame is forwarded through the switch before the entire frame
is received. At a minimum the frame destination address must be
read before the frame can be forwarded. This mode decreases the
latency of the transmission, but also reduces error detection.
The following are two forms of cut-through switching:
- Fast-forward – Fast-forward
switching offers the lowest level of latency. Fast-forward
switching immediately forwards a packet after reading the
destination address. Because fast-forward switching starts
forwarding before the entire packet is received, packets may
occasionally be relayed with errors. This occurs infrequently,
and the destination network adapter discards the faulty packet
upon receipt. In fast-forward mode,
latency is measured from the first bit received to the first
bit transmitted.
- Fragment-free – Fragment-free
switching filters out collision fragments before forwarding
begins. Collision fragments are the majority of packet errors.
In a properly functioning network, collision fragments must be
smaller than 64 bytes. Anything greater than 64 bytes is a
valid packet and is usually received without error.
Fragment-free switching waits until the packet is determined
not to be a collision fragment before forwarding. In
fragment-free mode, latency is also measured from the first bit
received to the first bit transmitted.
The latency
of each switching mode depends on how the switch forwards the
frames. To accomplish faster frame forwarding, the switch
reduces the time for error checking. However, reducing the
error checking time can lead to a higher number of
retransmissions.
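The latency differences between the three modes can be made concrete with a small calculation. This sketch assumes a 100 Mbps link and ignores the preamble and internal processing time; the frame size and the 6-byte destination-address simplification for fast-forward are stated assumptions, not measured values:

```python
# Compare when each switching mode can begin forwarding, counted
# in bit times on an assumed 100 Mbps link (1 bit time = 10 ns).
# Preamble and internal processing delays are ignored.

link_bps = 100_000_000
bit_time = 1 / link_bps

def start_delay_bits(mode, frame_bits):
    if mode == "store-and-forward":
        return frame_bits      # the whole frame must arrive first
    if mode == "fragment-free":
        return 64 * 8          # wait for the first 64 bytes
    if mode == "fast-forward":
        return 6 * 8           # destination MAC only (6 bytes)

frame_bits = 1518 * 8          # maximum-size Ethernet frame
for mode in ("store-and-forward", "fragment-free", "fast-forward"):
    us = start_delay_bits(mode, frame_bits) * bit_time * 1e6
    print(f"{mode}: forwarding can begin after {us:.2f} microseconds")
```

For a maximum-size frame, store-and-forward waits for all 12,144 bits, fragment-free for 512 bits, and fast-forward for only 48 bits, which is exactly the trade-off described above: the less of the frame the switch waits for, the lower the latency and the weaker the error checking.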
Content 4.3 Switch Operation
4.3.1 Functions of Ethernet switches
A
switch is a network device that selects a path or circuit for
sending a frame to its destination. Both switches and bridges
operate at Layer 2 of the OSI model. Switches are sometimes
called multiport bridges or switching hubs. Switches make
decisions based on MAC addresses and are therefore Layer 2
devices. In contrast, hubs regenerate the Layer 1 signals out
of all ports without making any decisions. Since a switch has
the capacity to make path selection decisions, the LAN becomes
much more efficient. Usually, in an Ethernet network the
workstations are connected directly to the switch. Switches
learn which hosts are connected to a port by reading the source
MAC address in frames. The switch opens a virtual circuit
between the source and destination nodes only. This confines
communication to those two ports without affecting traffic on
other ports. In contrast, a hub forwards data out all of its
ports, so each host sees the data and must process it even if
the data is not intended for that host. High-performance LANs
are usually fully switched.
- A switch concentrates
connectivity, making data transmission more efficient. Frames
are switched from incoming ports to outgoing ports. Each port
or interface can provide the full bandwidth of the connection
to the host.
- On a typical Ethernet hub, all ports
connect to a common backplane or physical connection within the
hub, and all devices attached to the hub share the bandwidth of