Cisco Switched Internetworks

Chris Lewis


Chapter 1

Switching Technology Primer

Objectives

This chapter introduces the network devices (repeaters, bridges, routers, and switches) and the concepts of layer 2, 3, and 4 switching. The focus is on understanding how each device alters the traffic, packets, performance, and cost of a network. The concepts presented in this chapter form the foundation for the remainder of the book. The basics of ATM are also introduced, along with an explanation of the most common ATM terms.

Network Devices

A network exists to get packets of information from a sender to the receiver or receivers in the quickest possible way, at the least cost. To achieve these goals, various addressing mechanisms are employed, and protocols are implemented to deal with different network media, conditions, and fault scenarios. At its simplest, devices that operate at the lower end of the OSI seven layer model are quickest and cheapest, while devices operating at the upper layers are more expensive and need more CPU to achieve the same throughput. Put simply, a repeater, which is a physical layer (OSI layer 1) device, only acts on the electrical signals that form the 1s and 0s constituting frames of data, and is therefore relatively cheap. By contrast, an application layer firewall (which typically operates at several of the higher layers of the OSI stack) examines in detail the meaning of those 1s and 0s, and costs a lot more than a repeater.

What adds to the cost and slows a device down is the complexity of what it does to the information within a packet as it is received. Let's review what each of the major network devices does to the packets it receives, and how that affects the speed of operation and cost of the device.

Repeater Operation

A repeater, as its name suggests, merely repeats whatever it receives. A repeater is used simply to extend the distance a signal can be carried over a given medium, by regenerating the electrical signal. As such, the repeater is not an interesting network device. It is an OSI layer 1 device and understands nothing beyond voltage levels; it does not know that the signals it repeats represent the 1s and 0s of a network message.

Bridge Operation

The venerable bridge was the mainstay of many corporate multi-segment LANs in the mid-1980s; however, its time has come and gone. The switch is a very similar device that has gained significant popularity in today's LANs, and we will discuss it a little later on. I do recommend that when you consider switch implementations, you also consider the shortfalls of the bridge, as many still apply. I'm not suggesting that a switch is no better than a bridge, just that poorly chosen switches badly implemented will give you the same problems that bridges used to.

So, let's get back to bridges. The most common form of bridge was the transparent bridge, so named because it could be placed in a network without altering any of the MAC addresses in the packets that flow through it. This may at first seem a violation of what MAC addresses are there to do, namely identify the source and destination of the devices sending and receiving frames. That is the conundrum of a networking device that operates purely at OSI layer 2 (in many ways a contradiction in terms). Layer 2 devices only understand physical addresses, and in an ethernet LAN environment those are the 6-byte MAC addresses we are all familiar with. The source MAC address is supposed to identify the machine sending the packet out on to the LAN. However, the transparent bridge does not alter the MAC addresses of any packet that passes through it (unlike a router). The result is that a packet will pass through a bridge from one segment to another, and when the destination device receives the packet, it can still see the MAC address of the station that originated it.

The basic job of a bridge is to receive packets, store them, and retransmit them on the LANs attached to the bridge. A bridge was useful for extending simple LANs, by restricting traffic to only the cable segments necessary. Bridges "learn" which workstation MAC addresses are on which LAN cable and either forward or block packets according to a list, kept in the bridge, of MAC addresses associated with interfaces. Let's look at how a bridge would handle a very simple multi-LAN environment as depicted in figure 1-1.

First it must be noted that, as far as any layer 3 protocol such as IP or IPX is concerned, LAN 1 and LAN 2 are the same network number. The process operated by the transparent bridge is as follows:

· Listen to every packet on every interface

· For each packet heard, keep track of the packet's source MAC address and the interface from which it originated. This is referred to as the station cache.

· Look at the destination field in the MAC header. If this address is not found in the station cache, forward the packet to all interfaces other than the one on which the packet was received. If the destination MAC address is in the cache, forward the packet to only the interface the destination address is associated with. If the destination address is on the same bridge interface as the source address, drop the packet, otherwise duplicate delivery of packets will result.

· Keep track of the age of each entry in the station cache. An entry is deleted after a period of time if no packets are received with that address as the source address. This ensures that if a workstation is moved from one segment to another, its old location is eventually deleted from the station cache.
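The learn-and-forward steps above (apart from cache aging) can be sketched in a few lines of Python. The class and method names here are hypothetical, purely for illustration; a real bridge also ages cache entries with timers.

```python
# Minimal sketch of transparent-bridge forwarding (hypothetical names).
class TransparentBridge:
    def __init__(self, interfaces):
        self.interfaces = interfaces
        self.station_cache = {}  # source MAC -> interface it was heard on

    def receive(self, src_mac, dst_mac, in_interface):
        # Learn: associate the source MAC with the receiving interface.
        self.station_cache[src_mac] = in_interface

        known = self.station_cache.get(dst_mac)
        if known is None:
            # Unknown destination: flood to every other interface.
            return [i for i in self.interfaces if i != in_interface]
        if known == in_interface:
            # Source and destination on the same segment: drop the frame.
            return []
        # Known destination on another segment: forward there only.
        return [known]

bridge = TransparentBridge([1, 2])
print(bridge.receive("A", "C", 1))  # C unknown: flood -> [2]
print(bridge.receive("C", "A", 2))  # A cached on 1 -> [1]
print(bridge.receive("B", "A", 1))  # A cached on same interface -> drop []
```

Note that the station cache is built purely by listening; the bridge never transmits anything of its own to learn the topology.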

Using this logic, and assuming that workstations A, B, C, and D in Fig. 1-1 all communicate with each other, the bridge will build a station cache that associates workstations A and B with interface 1, and C and D with interface 2. This potentially relieves congestion on a network: traffic that originates at and is destined for LAN 1 will not be seen on LAN 2, and vice versa. As well as reducing the amount of traffic on each segment, we have created two collision domains by implementing the two segments. By this I mean that a collision on one segment will not affect workstations on the other.

This form of bridging works well for any LAN topology that does not include multiple paths between two LAN segments. We know that multiple paths between network segments are desirable to maintain connectivity if one path fails for any reason. Let's look at what a simple transparent bridge would do if implemented in a LAN environment such as that shown in Fig. 1-2.

Let's say the network is starting up and the station caches of both bridge A and bridge B are empty. Suppose workstation X on LAN 1 wants to send a packet. Bridges A and B will hear the packet, note that workstation X is on LAN 1, and queue the packet for transmission on to LAN 2. Either bridge A or bridge B will be first to transmit the packet on to LAN 2; for argument's sake, say bridge A is first. This causes bridge B to hear the packet on LAN 2 with workstation X as the originator. We have already run into problems. Bridge B will note that workstation X is on LAN 2 and forward the packet to LAN 1. Bridge A will then forward the packet on to LAN 2 as before, and a very vicious circle is established.

This all occurs because bridges do not alter MAC addresses and have their LAN interfaces set to what is known as promiscuous mode, so that they take in all packets transmitted on the LAN connected to each interface.

To enable bridges to work in the above scenario, bridge suppliers implemented the spanning tree protocol. Essentially spanning tree identifies a loop free path and temporarily disables bridge interfaces to keep that loop free topology in effect. If there is a link failure, spanning tree will recalculate a new loop free path and change the interfaces that are temporarily disabled. The following is an overview of spanning tree operation.

Spanning Tree

Bridges operating spanning tree dynamically select a subset of the LAN interfaces available on each bridge; these selected interfaces form a loop-free path from any LAN to any other LAN. This avoids the nasty packet duplication problems we discussed in relation to Fig. 1-2. A spanning tree enabled bridge will both send out bridge protocol data units (BPDUs) and listen to the BPDUs of other bridges. The configuration BPDU contains enough information for all bridges to perform the following.

· Select a single bridge that will act as the "root" of the spanning tree.

· Calculate the distance of the shortest path from itself to the root bridge.

· Designate, for each LAN segment, one of the bridges as the closest to the root. That bridge will handle all communication from that LAN to the root bridge and is known as the designated bridge.

· Let each bridge choose one of its interfaces as its root interface, which gives the best path to the root bridge.

· Allow each bridge to mark the root interface, and any other interfaces on which it has been elected the designated bridge for the attached LAN, as being included in the spanning tree.

· The result is a tree structure, originating from the root bridge, that spans connectivity to all LAN segments.

Packets are then forwarded to and from interfaces included in the spanning tree. Packets received from interfaces not in the spanning tree are dropped and packets should never be forwarded onto interfaces that are not part of the spanning tree.
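Two of the elections described above can be sketched very simply. The bridge IDs and path costs below are illustrative values, not drawn from any real network; a real implementation compares full BPDU fields, including tie-breakers.

```python
# Sketch of two spanning tree decisions (illustrative values only).

def elect_root(bridge_ids):
    # Every bridge compares the bridge IDs seen in BPDUs; lowest ID wins.
    return min(bridge_ids)

def choose_root_interface(path_costs):
    # path_costs maps interface -> cost of reaching the root via that
    # interface; the lowest-cost interface becomes the root interface.
    return min(path_costs, key=path_costs.get)

print(elect_root([32769, 28673, 32770]))      # the bridge with ID 28673 wins
print(choose_root_interface({1: 19, 2: 38}))  # interface 1 is the root interface
```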

This improved the operation of a LAN built with multiple bridges, in that the topology could automatically recover from link failures, but all those interfaces blocked by spanning tree waste a lot of potential bandwidth that could be used to carry traffic on the network. Making use of this closed-off bandwidth in a network with switches is discussed later. However, as a plug and play device that did not require any configuration, the bridge had its uses.

Conceptually, spanning tree enabled the layer 2 bridge to perform functions that were really meant for layer 3 devices, like directing traffic between network segments. As such, spanning tree and learning bridges were useful for smallish LANs (up to a couple of hundred nodes), in that congestion on segments was reduced and the network could recover from link outages. Where the layer 2 bridge fell down was in its handling of broadcasts. A layer 2 bridge always forwards a broadcast to all interfaces in the spanning tree and therefore gives you no opportunity to control broadcasts within a network. Additionally, spanning tree is not as configurable as RIP, IGRP, or other routing protocols that allow you to reduce the size of routing updates. This makes spanning tree inappropriate for scaling to larger networks. Layer 3 networking allows the concept of address hierarchy to reduce the size of routing tables (equivalent to a bridge station cache) and route updates (equivalent to BPDUs). Layer 2 switches also rely on spanning tree, but implement virtual LANs to manage broadcast domains. Implementing VLANs also provides the opportunity to assign interface priority on switches, so that different instances of spanning tree (one per VLAN) will select different interfaces to block. The advantage here is that all links on the physical network can be utilized. However, before we look at that technology more closely, let's see what came next chronologically: the router.

Router Operation

The Cisco TCP/IP Routing Professional Reference covered this in some detail; however, for the sake of completeness, we'll cover the pertinent details here also. A router does far more examination and modification of packet contents than a bridge, which is what makes it more demanding of software, memory, and processing power. In essence, routers in a network do the following:

· Keep track of layer 3 (by this I mean IP or IPX type protocol) network numbers and work out the best way to route packets through the network from source network to destination network. Within a router this information is known as the routing table and is updated either by automatic processes (like RIP, IGRP, and OSPF) or by hand using static routes.

· When it comes time to deliver a packet to its destination host, use the ARP table to obtain the layer 2 address that the ARP table associates with the destination IP (or other layer 3 protocol) address.

· Rewrite the layer 2 information each time a packet comes into the router and is forwarded on to another network segment. This is illustrated in figure 1-3.
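The per-hop behavior just described can be sketched as follows. The dictionary field names, table contents, and addresses are hypothetical, for illustration only; note that the layer 3 addresses survive the hop unchanged while the layer 2 header is rewritten and the TTL is decremented.

```python
# Toy sketch of a router's per-hop packet rewrite (hypothetical names).
def forward(packet, routing_table, arp_table, router_mac):
    packet["ttl"] -= 1
    if packet["ttl"] <= 0:
        return None                                 # TTL expired: drop
    next_hop_ip = routing_table[packet["dst_net"]]  # routing table lookup
    packet["src_mac"] = router_mac                  # router's interface MAC
    packet["dst_mac"] = arp_table[next_hop_ip]      # ARP table lookup
    return packet

routing_table = {"192.168.2.0": "10.0.0.2"}
arp_table = {"10.0.0.2": "00:00:0c:22:22:22"}
packet = {"dst_net": "192.168.2.0", "ttl": 64,
          "src_mac": "00:00:0c:aa:aa:aa", "dst_mac": "00:00:0c:11:11:11"}

out = forward(packet, routing_table, arp_table, "00:00:0c:11:11:11")
print(out["ttl"], out["src_mac"], out["dst_mac"])
```

Contrast this with the transparent bridge, which forwards the frame untouched: here both MAC addresses change at every hop.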

A router will look at the layer 2 (MAC) address, the layer 3 (IP, IPX, or other network protocol) address, and optionally the layer 4 (application port) address. Layer 4 ports can be thought of as addresses of the different applications running within the host. For example, if a workstation needs to establish a telnet session with a host, it will set the layer 4 destination port to 23, as that is the well-known port on which a host receives calls for telnet sessions. FTP, rlogin, and other applications accept calls on different port numbers.

Figure 1-4 shows how each layer of network software encapsulates the information it receives from the layer above it. The encapsulation adds the addressing information used by the layer itself: at layer 2 the MAC address is added, at layer 3 the protocol address (such as an IP address) is added, and at layer 4 port numbers are added.
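This nesting can be illustrated with a toy example, each function standing in for one layer of software. The function names, addresses, and payload are made up for illustration; real encapsulation of course produces binary headers, not dictionaries.

```python
# Toy illustration of layered encapsulation (hypothetical names/values).
def l4_encapsulate(data, src_port, dst_port):
    return {"src_port": src_port, "dst_port": dst_port, "payload": data}

def l3_encapsulate(segment, src_ip, dst_ip):
    return {"src_ip": src_ip, "dst_ip": dst_ip, "payload": segment}

def l2_encapsulate(packet, src_mac, dst_mac):
    return {"src_mac": src_mac, "dst_mac": dst_mac, "payload": packet}

# Each layer wraps what it received from the layer above.
frame = l2_encapsulate(
    l3_encapsulate(
        l4_encapsulate(b"login request", 1024, 23),  # 23 = telnet
        "10.0.0.1", "10.0.0.2"),
    "00:00:0c:aa:aa:aa", "00:00:0c:bb:bb:bb")
```

Peeling the frame apart layer by layer (`frame["payload"]["payload"]`) recovers the original application data, which is exactly what the receiving host's network stack does in reverse.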

All of a traditional router's functions are implemented as software programs on general purpose hardware. This places a considerable load on the hardware as the traffic to be handled by the router increases, which has led some manufacturers to implement router functions in hardware. The terms "implemented in hardware" and "implemented in software" can be somewhat confusing, as software of some description needs to be there for any decisions to be made, and hardware needs to be there for any device to exist. What is meant by hardware implementation is that the software functions have been built into special purpose hardware that is optimized to perform them. Currently, such purpose-built devices are known as ASICs, which is short for Application Specific Integrated Circuits.

What causes the software router to require more processing power than a bridge to perform adequately under the same load is that the router performs table lookups for both the IP address (the routing table) and the MAC address (the ARP table) before a packet can be forwarded through an interface. Of course, in most live networks a routing protocol of some kind is deployed, such as RIP or OSPF, which adds to the processing burden. Even though a routing protocol is in many ways the router's equivalent of a bridge's spanning tree, it takes more processing power than spanning tree.

Switches

The various switching technologies available to us today will constitute the majority of the text in this book. When looking at a simple LAN switch that has no VLANs defined, it really is very difficult to explain why it is not a bridge in a new box. A simple LAN switch creates a station cache, runs the spanning tree protocol, does not use layer 3 network protocol information, and does not change the MAC addresses of packets passing through it. That is identical to the operation of a transparent bridge. When pushed, some manufacturers simply state that their switches are faster than a bridge. This is just a function of the software being implemented on special purpose hardware rather than on general purpose hardware. I suspect the real reason in these cases is marketing; it is unlikely that a manufacturer could sell a bridge these days. A definition I am happier with is that a LAN switch provides the ability to implement VLANs, something a bridge never could. If you choose to implement switches without VLANs, you really are implementing a bridged network and need to remember the problems that existed with bridges.

So, for the purposes of introduction, a switch is a network device that will forward packets between network segments based on logic built into specialized hardware chips, with a minimum of recourse to table lookup. Beyond that rather general statement, we have to consider what type of switching we are referring to, which is the topic of the next section.

Switching Concepts

Switches were originally introduced as layer 2 devices operating essentially in the same manner as the bridge. However, once the advantages of purpose-built hardware were observed, manufacturers started to make layer 3 and even layer 4 switches available. In many ways these distinctions are becoming artificial; the lines between the layers a device operates at are becoming more and more blurred. I have already said that a layer 2 switch is separated from a bridge by the fact that it supports VLANs, but we will see that supporting VLANs requires some type of routing function, which has led many switches to offer a simple internal routing device, giving them some layer 3 functionality. On the flip side, router manufacturers have seen the benefits of switching and are enhancing their routers with switch-type features; silicon switching and tag switching are Cisco features we will overview later in this section and cover in more detail in later chapters.

The key to managing these devices in the future is a solid understanding of how a device forwards packets within a network, not necessarily its label as a switch, router, or whatever else. Having stated that, it must be conceded that categorization makes it easier to explain how devices function, so for the next few sub-sections we will consider the layer 2, 3, and 4 functions in isolation.

Layer 2 Switching

Layer 2 switches first came to the marketplace in the guise of layer 2 bridges implemented in hardware. These early devices provided the same benefits and drawbacks as the layer 2 bridge, in that they could introduce multiple collision domains but were limited to a single broadcast domain. Collision and broadcast domains are illustrated in figure 1-5.

A hub forwards all packets out all ports, whether the attached workstation wants the packet or not. A switch sends packets only to the workstation requiring the data; however, it still sends broadcasts out all ports. With VLANs, broadcasts are sent out only the ports that belong to the VLAN the broadcast originated from.

Simple layer 2 switches differed from layer 2 bridges by generally operating in one of three packet forwarding modes. Layer 2 bridges operated only in store and forward mode, which meant they had to receive the complete packet, read the source and destination MAC addresses, perform the cyclic redundancy check (CRC), and apply filters before forwarding it on to any other segment. This of course introduced some latency into the network. Switches introduced two more modes of packet forwarding, called cut-through and fragment-free (often shortened to frag-free). In cut-through mode, the switch checks the destination MAC address and starts to forward the packet immediately, which significantly reduces latency. Fragment-free is a kind of middle ground: the switch takes in more than just the destination MAC address before forwarding the packet, but does not take the whole frame and perform the full CRC. The aim is to identify the fragmented frames that appear on the network as the result of collisions, and enable the switch to drop these packets rather than forward them unnecessarily on to another segment.
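The trade-off between the three modes comes down to how much of the frame is examined before forwarding begins. The sketch below summarizes that idea; the 64-byte figure reflects ethernet's minimum legal frame size, since anything shorter is assumed to be a collision fragment, while the 6-byte figure is the length of the destination MAC field.

```python
# Sketch of how much of a frame each forwarding mode examines before
# it begins to forward (a simplification for illustration).
def bytes_examined(mode, frame_length):
    if mode == "cut-through":
        return 6               # destination MAC only, then start forwarding
    if mode == "fragment-free":
        return 64              # enough to rule out a collision fragment
    if mode == "store-and-forward":
        return frame_length    # the whole frame, including the CRC
    raise ValueError(f"unknown mode: {mode}")

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, bytes_examined(mode, 1518))  # 1518 = maximum ethernet frame
```

The latency ordering follows directly: cut-through is fastest, store-and-forward slowest, and fragment-free sits between the two.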

The other difference we have mentioned between a bridge and a switch is the virtual LAN, or VLAN. Virtual LANs really came into existence to help with moves, adds, and changes on a network. In a typical LAN, 30 to 40 percent of the users will move within a year, which gives rise to a lot of cabling changes. With VLANs there is no need to move users' cables ever again; users can be moved from one VLAN to another purely in the configuration of the switch. Prior to this, users would need their network cables moved from one LAN to another, effectively moving them from one router interface to another.

Cisco's VLAN implementation associates an interface with a VLAN. Some other manufacturers work on the basis of assigning workstations to VLANs via MAC address. My opinion is that the interface assignment Cisco uses will lead to the best results. Figure 1-6 illustrates how this looks in physical terms. In figure 1-6, note that interfaces are assigned to VLANs that are classified according to color. This is historical and has become the standard notation for identifying different VLANs on a network.

Figure 1-7 shows how VLANs could be assigned on some other manufacturers' hardware. In this figure we see differently colored VLANs on the same switch interface. This works well for packets originating from these PCs, but the drawback is that any broadcast (or multicast) originated elsewhere in the network on any of the three VLANs (red, blue, or green) will come onto the interface 1 segment and interrupt all the PCs on that segment, irrespective of the VLAN they are assigned to. This somewhat negates the benefit of VLANs in the first place. The only time it really makes sense for ports to be multi-colored is in the case of trunking, which will be covered later.

In summary, simple layer 2 switches give you two main benefits. The first is that each port has the full ethernet bandwidth available, dedicated on a per-interface basis. If you connect just one end station to a switch interface, that station receives the full ethernet bandwidth, in stark contrast to the bandwidth supplied by shared hubs, which effectively divide the available bandwidth between all end stations. The second is that collisions are local to the switch port only and are not carried forward to other ports.

Layer 3 Switching

The definition of what a layer 3 switch does and why it is different from a router is troublesome. The fact that most manufacturers identify a layer 3 switch as operating much faster than a traditional router does not help. I find it difficult to draw a line, in terms of packets per second of throughput, that differentiates a device as a switch rather than a router. In terms of forwarding packets from one interface to another, the decision processes followed by a layer 3 switch and a router are, for all intents and purposes, the same; the only difference is the physical implementation. Layer 3 switches use the popular ASIC, whereas routers use general purpose microprocessors. So both a layer 3 switch and a router will forward packets based on IP destination, manipulate MAC addresses, decrement the Time To Live (TTL) field, and perform a frame check sequence.

As well as forwarding packets, routers are responsible for creating and dynamically maintaining routing tables, usually via some routing protocol such as IGRP or OSPF. The same must be true for layer 3 switches: by one method or another, a layer 3 switch needs to operate from a current routing table. In practice this is achieved either by the layer 3 switch participating in the routing protocol process, or by the switch receiving its routing table from a traditional router.

What is more interesting as a difference between the layer 3 switch and a router is the concept of "route once, switch many". We shall visit this concept in more detail in Chapter 3 when we discuss tag switching, but essentially Cisco has implemented a scheme that allows a layer 3 switch to discover the correct route from the routing table the first time a remote destination needs to be contacted, then use a "short-cut" switching process for subsequent packets to the same destination. This short-cut usually takes the form of an identifier appended to the packet that marks it as part of a particular flow. The benefit is that by switching packets according to the simple identifier, the device does not need to examine each and every packet in its entirety. This mode of operation provides a clean differentiator between a layer 3 switch and a router, in that the switch is doing something identifiably different from a traditional router. In the Catalyst 5000 range that we will be exploring later, layer 3 switching is provided via a combination of the Route Switch Module, which handles routing protocols, and the NetFlow feature card, which does the high speed layer 3 packet switching.

Layer 4 Switching

This is a relatively new term that refers to a layer 3 switch whose ASIC hardware has the capability of interpreting layer 4 (TCP or UDP) information and applying different levels of service to different applications. Interpreting this information allows the device to assign individual priority to different applications (identified via port number). When implemented in Cisco hardware, the NetFlow feature card caches flows based on source and destination port as well as source and destination IP address. This has little, if any, impact on the performance of the switch, as all the processing takes place in ASIC hardware, in contrast to the implementation of layer 4 control functions in traditional routers. Traditional routers can take significant performance hits from the complex access lists and custom queuing needed to provide layer 4 switching functionality.

In practice, layer 4 switching is most beneficial when controlled by a central policy server that will manage priorities for applications across the entire network.

Integrating Switches and Routers

The first switches implemented in networks operated basically as fast layer 2 bridges. This introduced multiple collision domains and enabled network performance to remain constant, or even improve, while reducing the number of router interfaces used. This allowed subnet masks that put many more usable IP addresses in each subnet than was previously the case. It was possible because each switch interface delivered the full ethernet bandwidth to the connected workstation or workstations. Figure 1-8 shows a before and after depiction of a network, first using multiple router interfaces to achieve segmentation, then using switches to reduce the number of router interfaces necessary. The main advantage of this procedure is that the overall cost of the network is reduced, as router interfaces are very costly. Additionally, one could argue that switch interfaces operate faster than router interfaces, so improved network performance will result.

This first method of introducing switches into a network did not really take the greatest possible advantage of switch capabilities. Switches really start to make sense when we add VLAN capability, as we are then introducing multiple broadcast domains as well as multiple collision domains. Adding VLAN capability, however, means that we need some type of routing function to communicate between VLANs, as each VLAN needs to be its own subnet. Fortunately, we do not need to grow the number of router interfaces on a network to make use of VLANs. The first way multiple subnets were introduced into a VLAN environment was via sub-interfaces on what is commonly termed a one-armed router (OAR). An OAR is illustrated in figure 1-9.

In this setup, router sub-interface 0.1 is associated with the Red VLAN, which in the figure has switch 1 interfaces 1, 2, and 3 assigned to it. In practice this means that router sub-interface 0.1 will have an IP address in the same subnet as the devices connected to switch 1 interfaces 1, 2, and 3, and that the VLAN has the IP address of sub-interface 0.1 defined as its default gateway. Any packet that needs to travel from one VLAN to another is routed between sub-interfaces in the router. Some switches, like the Catalyst 5000, can have a Route Switch Module inserted that performs the function of the OAR.
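A one-armed-router configuration might look something like the following sketch. The interface numbers, VLAN numbers, and IP addresses here are purely illustrative, and the trunk encapsulation (ISL in this example) must match whatever the attached switch is using; treat this as a shape, not a configuration to copy.

```
! Hypothetical OAR configuration sketch: one trunked physical interface,
! one sub-interface per VLAN, each acting as that subnet's default gateway.
interface FastEthernet0/0
 no shutdown
!
interface FastEthernet0/0.1
 encapsulation isl 1                   ! Red VLAN
 ip address 10.1.1.1 255.255.255.0
!
interface FastEthernet0/0.2
 encapsulation isl 2                   ! Blue VLAN
 ip address 10.1.2.1 255.255.255.0
```

A host in the Red VLAN would then use 10.1.1.1 as its default gateway, and traffic between the Red and Blue VLANs would travel up the trunk, between the two sub-interfaces, and back down the same physical link.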

With the introduction of simple switches, we saw IP subnet masks allocating more hosts per subnet. Now that VLANs are more common, we are seeing subnets re-assigned to allow fewer hosts per subnet.

One last topic for this brief introduction to switches is the Cisco protocol CGMP (Cisco Group Management Protocol), which is designed to enable Cisco routers and switches to communicate configuration information. This is particularly useful for enabling switches to deal more effectively with multicasts.

Multicasts are sent to special IP addresses (224.0.0.1 and above). Hosts that wish to receive a particular multicast will take in packets destined for the multicast address of interest, while hosts on the same subnet that are not interested in the multicast will not take the packets in. Cisco switches do not automatically learn broadcast or multicast addresses; they treat a multicast the same as a packet destined for an unknown MAC address, flooding it out of all interfaces within the VLAN it originated from. This may not sound bad, but if several video streams are present on the network, having all of those packets forwarded out of every interface will severely limit the performance of the network. It is possible to program the Catalyst manually with the multicast addresses in use, effectively associating a static set of interfaces with a multicast group.
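For reference, the IP multicast (class D) range runs from 224.0.0.0 through 239.255.255.255, i.e. any address whose first octet begins with the bits 1110. A quick sketch of that range check (the helper name is made up for illustration):

```python
# Sketch of the IP multicast (class D) range check.
def is_multicast(ip):
    first_octet = int(ip.split(".")[0])
    return 224 <= first_octet <= 239  # class D: first octet 1110xxxx

print(is_multicast("224.0.0.1"))  # True: the "all hosts" multicast group
print(is_multicast("10.1.1.1"))   # False: an ordinary unicast address
```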

This is not a particularly satisfactory solution. Most multicast applications use IGMP at the host, sending signals to a multicast router and dynamically joining and leaving multicast groups. Clearly, static tables are not only cumbersome to administer but also ineffective in some situations. The answer is CGMP, which allows the multicast router to signal to the switch, in a dynamic fashion, which interfaces should be part of which multicast group.

Router Switching Modes

It should be becoming clear that the distinction between layer 2 switches and layer 3 devices is less and less sharp. It should also be clear that routing is a form of switching; all we are really talking about are different methods for a device to decide which interface to forward a packet out of. In fact, traditional software routers offered several types of switching, which is what we will take a brief look at now.

The default mode of switching in Cisco routers is fast switching, which relies upon a cache in main memory created by previous packets. Basically, the router builds a table of where packets with given destinations get routed to, and then uses this table, when possible, to switch future packets to the same destinations. The benefit is that the full routing and ARP table lookups do not need to be performed every time a packet moves through the router.
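The route-cache idea can be sketched in a few lines. The function names and the cached result values are hypothetical; the point is simply that the expensive lookup runs once per destination, and later packets to the same destination hit the cache.

```python
# Sketch of the fast-switching route cache (hypothetical names/values).
route_cache = {}
lookups = {"count": 0}  # counts how often the slow path runs

def full_lookup(dst_ip):
    # Stand-in for the full routing table + ARP table work.
    lookups["count"] += 1
    return {"out_interface": "Ethernet0",
            "next_hop_mac": "00:00:0c:12:34:56"}

def fast_switch(dst_ip):
    if dst_ip not in route_cache:            # first packet: slow path
        route_cache[dst_ip] = full_lookup(dst_ip)
    return route_cache[dst_ip]               # later packets: cache hit

for _ in range(3):
    fast_switch("192.168.1.5")
print(lookups["count"])  # the full lookup ran only once
```

This is also, in miniature, the "route once, switch many" idea discussed earlier in the layer 3 switching section.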

The mode often invoked on a router that connects fast LANs to much slower WANs (for example, a router connecting a 10 Mbit/sec ethernet to a 64K leased line) is process switching. This is done by inserting the "no ip route-cache" command into the configuration of the interfaces in use. The benefit is that it slows down packets coming from the fast medium so that they do not overwhelm the slower interface. Autonomous switching is available in the 7000 series routers and uses a cache similar in concept to the fast switching cache, but resident on the interface processor. This has the added benefit of not generating any interrupts to the system processor when switching packets. The fastest mode is SSE switching, sometimes referred to as silicon switching. SSE switching again uses a cache to perform lookups, but the cache is held in specialized hardware in the silicon switching engine of the silicon switch processor module. Silicon switching is only available on Cisco 7000 routers equipped with the silicon switch processor. One point to note is that if header compression is enabled (as is often the case on dialup asynchronous links to save bandwidth), fast switching is not available.

The Briefest of Introductions to ATM

Asynchronous Transfer Mode as we know it is a small but very important part of the broadband ISDN scheme. Broadband ISDN refers to ISDN at rates over 2 Mbit/sec, as opposed to the narrow-band ISDN that supplies two 64Kbit/sec channels and one 16Kbit/sec channel. ATM was really designed as a WAN technology for carriers to transport all types of traffic on the one physical network, rather than have separate voice, data and video systems. As such, it operates most efficiently over the WAN, and only really runs into difficulty when it has to emulate a LAN, as ATM is essentially a point to point technology.

ATM is a connection oriented protocol, which means it will contact the destination node and initiate a call sequence before it sends any data from source to destination. If the intended destination is not on-line, or the network cannot guarantee the required level of service requested in the call setup, the connection is not made and the source will not send any packets on to the network.

The A in ATM Stands For Asynchronous

The idea of asynchronous that most communications people have is related to a PC communications port, or the dialup asynchronous modem. This type of asynchronous means that timing signals are sent with every character, as opposed to synchronous systems, which have a separate clock signal and apply timing to groups of characters. In this mode of operation, asynchronous means "I’ll only send a timing signal when I need to send data". The asynchronous in ATM refers to a different concept. The current generation of synchronous networks (typified by the well known T-1 and E-1 circuits) consists of fixed channels; for example, an E-1 consists of 32 individual 64Kbit/sec channels. Even if there are no packets to be transmitted, a channel will carry a keep alive or idle poll in every timeslot to maintain synchronization. By contrast, an ATM network will only send data that is associated with a connection when there is live data to send.

A synchronous network identifies which traffic belongs to whom by its position in the data stream, typically a time division timeslot (illustrated in figure 1-10).

ATM is not based on position in the data stream; a header identifies whose traffic each cell carries, which also defines where the traffic goes. All ATM traffic is sent on demand and no bandwidth is wasted on idle channels. There are essentially two types of ATM device (we’ll cover all the LANE servers necessary in chapter 3), a client and a switch. To have an ATM network you need at least one device that acts as a switch, much the same as a frame relay network. Indeed, if you are just trying to understand ATM for the first time, thinking of ATM PVCs as the same as frame relay DLCIs is a good start.

Within the ATM network, there are two interfaces of prime importance, the UNI (User to Network Interface) and NNI (the Network to Network Interface). These interfaces are depicted in figure 1-11.

With such a network it is possible to accommodate all traffic types on the one media and finally move away from the dedicated voice, data and video networks of the present. However, getting all traffic on the one physical path is one thing; to effectively replace all the different types of networks prevalent today, ATM has to accommodate the different types of service delivery each network currently provides. For example, the telephone network is connection oriented and sensitive to delay variations, whereas data networks are either connection oriented or connectionless (meaning they send out data with no knowledge of whether the recipient is available) and relatively insensitive to variations in delay. In addition to differences in tolerance of delay variation, some networks have to support constant bit rate traffic (like cable TV) and others have to support variable bit rate traffic (like Internet access). The challenge for ATM is to support all these different demands equally well, and the key to the way ATM does this is the ATM cell.

The ATM Cell

The Cell header is defined in figure 1-12.

Before we look at the function of each field, the most glaring difference between the header format of ATM and other encapsulations (like ethernet, IP or token ring) is that there is no source or destination address. So how does traffic get moved from one location to another? Within an ATM network, all communication is associated with individual connections. These connections are termed virtual circuits, and in practice are almost exclusively of the Switched Virtual Circuit (SVC) type.

When an ATM device wants to contact another ATM device across an ATM cloud, it asks the ATM switch it is connected to for a connection identifier it can use to reach the specified ATM address. The ATM switch will set up the call and assign a call identifier to the virtual circuit associated with the end destination. From that point on, the end station uses the connection identifier, not the ATM destination address, to address packets. The connection identifier is valid for the duration of the call and is then re-usable for other calls once the original call is finished. The ATM address is therefore only used at call setup time and is not transmitted over the ATM cloud; during data transfer, the ATM addresses are not seen in the traffic stream between source and destination. We’ll examine the call setup procedures in more detail in chapter 3. The following is a description of the ATM cell header fields.

GFC; The Generic Flow Control field has local significance only, meaning it is used by the device originating the cell and is not understood by receiving devices. At the moment, no specific definition exists for the GFC field.

VCI; the Virtual Channel Identifier identifies the virtual circuit that has been established to carry traffic to the required destination. The VCI is actually half of the identifier, the other half is the VPI, which is discussed next. The value has local significance only on the UNI. Each ATM switch will map an incoming VCI to an outgoing VCI through its switching process.

VPI; The Virtual Path Identifier is the second half of the definition of a connection in the ATM world. The VPI can be thought of as a superset of VCIs, in that several VCIs can be grouped together and addressed via one VPI. The advantage that this provides is that a switching node in an ATM cloud can forward all the VCIs associated with a VPI with one switching decision (this is known as VP switching).
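The per-hop rewriting of VPI/VCI values can be sketched as a simple lookup table. The port numbers and identifier values below are invented for illustration; a real switch programs equivalent tables into hardware. VP switching is the same lookup keyed on the VPI alone:

```python
# Hypothetical ATM switch tables. Per-VC switching maps
# (ingress port, VPI, VCI) -> (egress port, VPI, VCI); identifiers are
# rewritten at every hop, which is why they only have local significance.
vc_table = {
    (1, 0, 33): (2, 5, 42),
    (1, 0, 34): (3, 5, 42),
}

# VP switching: one entry forwards every VCI carried inside a VPI.
vp_table = {
    (1, 7): (2, 9),   # (ingress port, VPI) -> (egress port, VPI)
}

def switch_cell(port, vpi, vci):
    if (port, vpi) in vp_table:                 # one decision for the whole VP
        out_port, out_vpi = vp_table[(port, vpi)]
        return out_port, out_vpi, vci           # VCI passes through unchanged
    return vc_table[(port, vpi, vci)]           # per-VC switching
```

Note how the VP entry forwards any VCI arriving inside VPI 7 with a single decision, which is the economy VP switching provides.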

PT; the Payload Type field is of limited value and is used to identify what type of data is being transported.

CLP; the Cell Loss Priority can only have the values 0 or 1 and indicates the likelihood that a cell will be dropped. This value can be set either by the ATM access device (the client) or an ATM switch. The ATM access device will set the CLP based on the current congestion on the network, and the ATM switch will set the CLP based on the access device’s adherence (or lack of it) to the traffic contract established at call setup time.

HEC; the Header Error Check is a value calculated over the first four bytes of the header. Single bit errors in these bytes can be corrected and multiple bit errors can be detected by this value.
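To make the header fields concrete, the following sketch packs a 5 byte UNI cell header and computes the HEC. The HEC is a CRC-8 (generator polynomial x^8 + x^2 + x + 1) over the first four header bytes, XORed with the coset value 0x55 per ITU-T I.432; the field values in the example are our own.

```python
def crc8_atm(data):
    """CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), as used for the HEC."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def build_uni_header(gfc, vpi, vci, pt, clp):
    """Pack the UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8)."""
    first_four = bytes([
        (gfc << 4) | (vpi >> 4),
        ((vpi & 0x0F) << 4) | (vci >> 12),
        (vci >> 4) & 0xFF,
        ((vci & 0x0F) << 4) | (pt << 1) | clp,
    ])
    hec = crc8_atm(first_four) ^ 0x55        # I.432 adds the coset 0x55
    return first_four + bytes([hec])

# Header for a cell on the signaling virtual circuit (VPI 0, VCI 5).
header = build_uni_header(gfc=0, vpi=0, vci=5, pt=0, clp=0)
```

A receiver recomputes the same CRC over the first four bytes of each arriving cell; a mismatch of one bit position can be corrected, while larger mismatches cause the cell to be discarded.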

The key differentiator between a cell and an IP packet is that a cell is always fixed length. This may not seem such a big deal, but it is; in fact it is a complete departure from the way things were done with pre-ATM protocols.

In TCP/IP networking, IP is responsible for fragmentation (at routers) and re-assembly (at hosts) of data, taking place on a hop by hop basis. As packets travel through a network, they will be fragmented by routers needing to send them over segments incapable of handling such large packets (for example, a packet originated on a token ring can be over 4Kbytes in size, whereas ethernet only handles just over 1.5Kbytes). This is seen as efficient in that no space in the packet is wasted: if you only need to send a small amount of data, a small packet is sent; if you need to send lots of data, large packets are sent. This has the added benefit that with larger packets, the fixed size of the packet header constitutes a smaller protocol overhead in percentage terms.

However, this mode of operation presents significant difficulties to devices that must manage constant bit rate and constant delay services. If a device never knows the size of an incoming packet, it is very difficult to deliver a specific level of service to other traffic also using the device. Essentially, a network device does not know how much resource any given incoming packet will present to either bandwidth or processing requirements.

Having a fixed cell length gets around these problems. Given that a device only receives fixed format, fixed length cells, its switching logic can be programmed into fast ASIC hardware that allows the device to forward packets as fast as the connected bandwidth allows. In this mode of operation, the device knows its capabilities in terms of attached bandwidth and is able to commit to specific levels of service for new connections made through it.

Once we acknowledge that fixed length cells, rather than variable length packets, provide the opportunity to deliver constant bit rate and constant delay services using a packet switching device (a cell is only a specific form of packet), the question arises of how big that packet (cell) should be. In ATM the answer is 53 bytes, which breaks down into a 5 byte header and a 48 byte payload: roughly a 10% header overhead (5/53, or about 9.4%), but one that is worth it.

There are rumors that 48 bytes was a compromise between one camp in the standards process wanting a 32 byte payload and another wanting 64 bytes. The arguments are obvious: the smaller the cell, the more bandwidth is wasted in header overhead; the larger the cell, the more payload is potentially wasted each time a partially loaded cell is transmitted. Whatever the history, the figure of 48 bytes provides workable results for carrying voice, data and video traffic over the one physical network.

ATM Multiplexing

There are two important concepts that we have discussed already that contribute towards ATM’s ability to provide fixed services to both voice and data systems. These are the fixed length cell, and the idea of ATM devices transmitting data in an asynchronous fashion, on an as-needed basis, rather than having specific streams of data dedicated to specific, fixed capacity channels. What pulls this together and enables ATM to work its magic is the way ATM multiplexes data from several logical connections on to one physical network. Before looking at the ATM multiplex model, let’s review traditional Time Division Multiplexing, the kind that’s still in use to supply many services, like T-1s and E-1s.

Figure 1-13 shows how a traditional TDM device works on the basis of a fixed length timeslot allocated to each input channel. What it achieves is multiplexing several physical connections on to one wire, effectively a parallel to serial conversion. If B is the only channel with data to transmit, all 4 timeslots are still used, with only timeslot 2 carrying data. The ownership of the data is defined by its position in the data stream. If channel B has more data to transmit, even if no other channels have data to transmit, it must wait for its turn in the data stream to come around again.

In contrast, ATM multiplexing is shown in figure 1-14. In this scheme, timeslots are assigned to whichever channel needs to transmit. As the timeslot in the data stream no longer identifies which channel the data belongs to, there needs to be some other identification in the timeslot to associate it with a channel. This association is made by the VPI/VCI number in the cell header. Effectively fixed length timeslots have been replaced with fixed length cells, with each cell having the information in it that associates it with a channel.
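As a toy illustration of the two schemes (channel names and payloads invented here): a TDM frame always emits four slots, idle or not, with ownership implied by position, while the label-based scheme only emits slots that carry data, each tagged with its owner.

```python
# TDM: ownership by position. Four channels in fixed rotation; a channel
# with nothing to send still consumes its slot as "idle".
def tdm_frame(pending):                  # pending: channel -> queued payloads
    return [pending[ch].pop(0) if pending[ch] else "idle"
            for ch in ("A", "B", "C", "D")]

# ATM-style: ownership by label. Only channels with data occupy slots, and
# each slot carries an identifier (a channel name standing in for VPI/VCI).
def atm_slots(pending, n):
    out = []
    for ch, queue in pending.items():
        while queue and len(out) < n:
            out.append((ch, queue.pop(0)))
    return out
```

With only channel B active, the TDM frame wastes three of its four slots on idle fill, whereas the labeled scheme lets B use as many slots as it needs, back to back.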

This mode of operation allows ATM to support both circuit mode and packet mode. Circuit mode is there to support guaranteed bit rate services, enabling an ATM connection to emulate a T-1 type of service. Figure 1-15 shows how an ATM connection supports both fixed and variable bit rate services on the one physical link. First we must look at how ATM cells are transported over SONET networks, as that is the most common medium for ATM.

SONET allocates 8000 timeslots a second, which equates to 125 microseconds to transmit one timeslot (1/8000). For an OC-3 (sometimes referred to as STS-3) circuit, with a potential bandwidth of 155.52 Mbits/sec, each timeslot (confusingly referred to as a frame in SONET parlance) fits about 44 ATM cells. Effectively this gives 135.168Mbits/sec of throughput, calculated as 44 cells per frame, times 8000 frames per second, times 48 bytes of data per cell, times 8 bits per byte. This is interesting, but the point is that ATM over SONET still involves some use of fixed timeslot technology. This is what allows ATM to offer circuit mode operation. To do this, an ATM device will dedicate a specified number of SONET timeslots to the user requesting circuit mode service. Let’s look at some simple math that will illustrate how this works.

Let’s say that a user wants a T-1 circuit emulated service over ATM running on SONET. We can get 1.536Mbit/sec by dedicating one cell in every other frame: 48 bytes of ATM cell payload times 8 bits equals 384 bits per cell, and multiplying this by 4000 cells transmitted per second (half of SONET's 8000 frames per second) yields 1.536Mbit/sec. So, by committing one cell in every other frame, the user gets an emulated T-1. Note that as we have around 44 cells per frame, this consumes very little of the available bandwidth. The rest of the cells are available for packet mode connections, which contend for the remaining (and very significant) bandwidth.
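The arithmetic in the last two paragraphs is easy to check, using the constants as given in the text:

```python
CELL_PAYLOAD_BYTES = 48
FRAMES_PER_SECOND = 8000        # SONET sends 8000 frames (125 us each) per second
CELLS_PER_OC3_FRAME = 44        # approximate fit quoted in the text

# Usable OC-3 throughput: 44 cells/frame * 8000 frames/s * 48 bytes * 8 bits.
oc3_payload_bps = CELLS_PER_OC3_FRAME * FRAMES_PER_SECOND * CELL_PAYLOAD_BYTES * 8
print(oc3_payload_bps)          # 135168000, i.e. 135.168 Mbit/s

# Emulated T-1: one cell in every other frame = 4000 cells per second.
t1_bps = CELL_PAYLOAD_BYTES * 8 * (FRAMES_PER_SECOND // 2)
print(t1_bps)                   # 1536000, i.e. 1.536 Mbit/s
```

The T-1 emulation thus consumes the equivalent of half a cell per frame out of roughly 44, leaving the overwhelming majority of the link for packet mode traffic.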

The ATM Communications Reference Model

In chapter 6, we will be looking at Classic IP Over ATM (CIOA), which is one way of utilizing ATM that effectively views ATM as just another data link layer protocol. This use places ATM and the adaptation layers as shown in figure 1-16.

In this implementation, cell services are only available (via the adaptation layers) to the network layer, and the IP layer still performs all the routing and path selection functions. CIOA does not take full advantage of ATM’s features; implementations that do must adhere to a different model. The familiar seven layer OSI model does not really accommodate the ATM modes of communication, so ATM has its own communications reference model, which is actually defined in three dimensions as a cube. This cube is based around the concept of three planes: the user, control and management planes.

The user plane defines flow control, error detection and correction, and transfer of data. The control plane defines signaling functions, for example call setup and termination. The management plane is split into two sub-planes, plane management and layer management. Plane management arbitrates between two or more entities trying to use the same channel at the same time. Layer management defines how to manage resources and negotiate parameters.

All this talk of a reference model operating in three planes is confusing, at least initially. All it represents is a way of viewing the standards and procedures that go in to defining the operation of ATM. The three dimensional cube is shown for reference in figure 1-17.

If this model does not make sense, there is no need to worry about it. This book will proceed in a step by step fashion to build simple CIOA and other ATM networks, as well as the overly complex LANE networks, without referring once to this "magic" standards cube.

The most important thing to recognize at this stage is that the ATM documentation specifies how ATM switches data, signals for connections, transmits bits of data, manages faults and traffic flow, and how it provides services (such as circuit or packet mode). All of these features are made accessible to legacy communication methods via the adaptation layers that define how legacy voice and data systems take advantage of cell services.

The ATM Adaptation Layers

The first of these adaptation layers is ATM Adaptation Layer 1 (AAL 1), which details how ATM works with traditional time division multiplexed services, like T-1 services; AAL 3 and AAL 4 were defined for SMDS. With different AALs for different services, things were getting difficult to manage, so AAL 5 was defined as the Simple Efficient Adaptation Layer (referred to as SEAL) that now covers most interfaces. AAL 5 offers lower overhead and therefore better bandwidth utilization at the expense of error recovery (much the same as ethernet in this respect, as both leave error recovery to a higher layer, like TCP, or the application layer).

The goal of these adaptation layers is to prepare data coming from upper layers into 48 byte Segmentation And Reassembly (SAR) Protocol Data Units (PDUs). As most applications tend to use AAL5 and selecting the AAL is not normally the responsibility of a network engineer, we’ll just briefly review AAL5 operation, rather than look at all AAL types.

The operation of AAL with respect to converting upper layer data to fixed length cells is illustrated in figure 1-18.

Within the convergence sub-layer of the adaptation layer, upper layer data frames are appended with an 8 byte trailer, which includes the payload length and a CRC, plus between 0 and 47 bytes of padding to ensure that the SAR sub-layer can cut the unit into 48 byte chunks. These 48 byte chunks are passed down to the ATM layer, prepended with the 5 bytes of ATM header and delivered to the physical layer for transmission on to the network cable. The upper layer data discussed will typically be an IP datagram, containing IP source and destination addresses, UDP or TCP port numbers and all the other header information, as well as the actual application data we wish to transmit.
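The padding rule just described can be sketched as follows. This is a simplified model (the real AAL5 trailer also carries specific length and CRC-32 fields); it just computes how much padding a frame needs and how many cells it will occupy:

```python
TRAILER_BYTES = 8    # simplified AAL5 CPCS trailer (carries length and CRC)
SAR_PDU_BYTES = 48   # payload carried by each ATM cell

def aal5_pad_and_cells(frame_len):
    """Return (pad bytes, cell count) so frame + trailer fills whole cells."""
    total = frame_len + TRAILER_BYTES
    pad = (-total) % SAR_PDU_BYTES        # 0..47 bytes, as the text describes
    return pad, (total + pad) // SAR_PDU_BYTES
```

For example, a 1500 byte IP datagram becomes 1508 bytes with the trailer, needs 28 bytes of padding, and occupies 32 cells; a 40 byte frame plus trailer fits exactly into one cell with no padding at all.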

Cisco Supported ATM Interfaces

Earlier we mentioned the two main interfaces UNI and NNI that are present within an ATM network. To finish this chapter, we’ll look at these interfaces in a little more detail, and how they are supported on Cisco hardware.

There are two flavors of UNI, one for connecting to private networks and one for connecting to public networks. The UNI is a specification for how an end-station, such as a bridge or router can connect to an ATM switch. The latest version of UNI is version 4.0. The end-station to switch interface always has two permanent virtual circuits active, we’ll look at how to view these connections on a Cisco end-station in chapter 6.

The first connection is 0/5 (VPI 0, VCI 5) which is used for signaling, i.e. requesting a VPI/VCI number to use when connecting to a remote ATM address. This connection uses Q.2931 signaling, which is very similar to the Q.931 signaling used in narrow band ISDN connections.

The second PVC is 0/16, which is used for the ILMI (the Integrated Local Management Interface), which performs a similar function to the LMI used in Frame Relay. The ILMI enables an end-station to obtain configuration and status information from an ATM switch. ILMI is really SNMP under another name (the ATM specifications excel at renaming all kinds of standards). The difference between SNMP and ILMI is that ILMI uses AAL5 as the transfer mechanism, as opposed to UDP and IP as in SNMP. When an end-station connects to an ATM switch, the end-station acts as the SNMP client and the switch as the SNMP server. ILMI MIBs exist in both the end-station and the ATM switch and use an Interface Management Entity (IME) to communicate MIB information between devices. This facility allows for a base level of plug and play for ATM end-stations, in that they can be plugged in with no configuration and obtain most of what they need from the ATM switch. These concepts are illustrated in figure 1-19.

Support for UNI as an end-station is available in the Lightstream 1010 (when connecting to a public network), the Catalyst 5000 switch, and both the ATM Interface Processor for 7500 series routers and the ATM Network Processor Module (NPM) for the 4x00 series routers. The Lightstream 1010 also supports the UNI function on the network side that end-stations connect to.

The NNI also comes in two flavors, plain NNI that specifies the interface between switches in a public network and Private NNI (PNNI) which describes the interface between switches in a private network. Prior to version 1 of the PNNI being released, this interface was known as the Interim Inter-switch Signaling Protocol (IISP). IISP is really a way of statically configuring ATM routes within an ATM network, whereas PNNI does it dynamically. The Lightstream 1010 currently supports PNNI version 1.

On the physical side, Cisco supports ATM at various transmission rates over various media, as specified in figure 1-20. It must be noted that this is only a guide at the time of going to press and is constantly subject to change by the standards bodies. The most recent information can be obtained from www.atmforum.com.

Although it is likely that in the future all WAN based ATM traffic will be carried over SONET, the Digital Signaling system in use today on T-1 and E-1 circuits still exists. In the same way that DS-1 has the same transmission rate as T-1 (DS is the generic term; T refers to transmission over copper), OC (Optical Carrier) and STS (Synchronous Transport Signal) rates are equivalent: OC-3 and STS-3 each have a transmission rate of 155.52Mbit/sec. For both OC and STS, the number that follows, for example 3, is the multiple of 51.84 Mbit/sec that comprises the bandwidth of the link. The key lesson from this figure is that you must be sure the interface framing and the transmission media match between the Cisco equipment and the Telco, as well as the transmission rate, before equipment is ordered.

Summary

This chapter provided a brief overview of the operation of traditional repeaters, bridges and routers and discussed how these devices alter packets that travel through them. Layer 2, 3 and 4 switches were discussed and the similarities between switching and routing were identified. The concepts of broadcast and collision domains were introduced, along with the concept of different layers encapsulating data with their own headers.

Spanning Tree and VLANs were explained and the blurring of the differences between current day switches and routers was discussed. It is now difficult to identify single devices as layer 2 or layer 3 devices, as switches have routing functions and routers perform switching.

ATM was introduced as a connection oriented protocol that uses fixed length cells to deliver both constant and variable bit rate and delay services. ATM is asynchronous in that it only transmits data when necessary, uses connection identifiers to route packets in a network, and has an ILMI function, similar to Frame Relay's LMI, to auto-configure end stations.

Copyright © 1999 The McGraw-Hill Companies, Inc. All rights reserved.

