Gigabit Networking: High-Speed Routing and Switching

Sadikin Djumin

The focus of this paper is the recent development of gigabit networking. The basic concepts of gigabit networking, issues on high-speed switching and routing, and current gigabit technologies and products are discussed.

Table of Contents
1. Introduction
2. Basic Concepts of Gigabit Networking
        2.1. Fiber Optics
        2.2. Cell Networking
3. Routing and Switching Issues in Developing Gigabit Networking
        3.1. Basic Routing Functions
        3.2. Shortcomings of Routers
        3.3. Significance of Routers
        3.4. Approaches to Solve Routing Issues
                   3.4.1. Minimizing the Need for Routing
                   3.4.2. Switching/Routing Integration
                   3.4.3. Improving Routing Performance
                   3.4.4. Improving Router Performance
                   3.4.5. Improving Protocol Efficiency
4. Technologies Supporting Gigabit Networking
        4.1. Switching Technology
        4.2. IBM's HPR (High Performance Routing)
        4.3. Gigabit Routers
        4.4. Routing Switch
                   4.4.1. Benefits of Routing Switch
                   4.4.2. Products of Routing Switch
        4.5. I/O Switching
                   4.5.1. Benefits of I/O Switching
                   4.5.2. Products of I/O Switching
5. Current Gigabit Technologies Available for High-speed LAN
        5.1. Asynchronous Transfer Mode (ATM)
        5.2. Fiber Channel
        5.3. Gigabit Ethernet
        5.4. Serial HIPPI (High Performance Parallel Interface)
6. Conclusion
7. List of Acronyms
8. References

1. Introduction

Technology advancement in the fields of fiber optics, computing systems, computer applications, data communications, and internetworking has been closely linked to the development of networks capable of operating at gigabit speeds. The ability of today's fiber optic signaling equipment to transmit several gigabits per second over long distances through optical fiber with very low error rates has convinced researchers that gigabit networks are technologically feasible.

Further, technology has realized a tremendous increase in the power and bandwidth of many parts of today's computing systems at an affordable price. This is demonstrated by the existence of fast CPUs (for acronyms, please refer to the list of acronyms), fast memory, and high-speed buses in desktop computers, workstations, and servers. According to Moore's Law, processor speeds double every 18 months, while it is commonly agreed that network capacity is increasing even faster, by a factor of 1.78 per year [Prof. Jain's Networking Trends and Their Impact Lecture]. It is also predicted that computing power in 1997 is 2 GIPS (Giga Instructions Per Second). High-bandwidth storage systems have also improved in performance. It is now possible to have gigabit-bandwidth file systems with a technology known as RAID [Partridge's Gigabit Networking].

As computing power and storage systems become increasingly powerful, it is now easier to support new and existing network and computer applications with high-bandwidth data, high-resolution graphics, and other complex, rich multimedia data. Real-time video conferencing, 3D animation modeling, Internet telephony, medical imaging, CAD/CAM applications, and MBone transmissions, just to name a few, were unthinkable a few years ago but are used extensively today. Table 1, taken from the Gigabit Ethernet White Paper, summarizes the new and existing applications that drive network growth.

Table 1. Summary of Applications Driving Network Growth [Gigabit Ethernet White Paper]

Scientific Modeling
  Data types/size: Data files; 100s of Megabytes to Gigabytes
  Traffic implication: Large files increase bandwidth required
  Network need: Higher bandwidth for desktops, servers, and backbone

Medical Data Transfer
  Data types/size: Data files; 100s of Megabytes to Gigabytes
  Traffic implication: Large files increase bandwidth required
  Network need: Higher bandwidth for desktops, servers, and backbone

Internet/Intranet
  Data types/size: Data files now; audio now; video will emerge; high transaction rate; large files, 1 MB to 100 MB
  Traffic implication: Large files increase bandwidth required; low transmission latency; class of service reservation; high volume of data streams
  Network need: Higher bandwidth for servers and backbone; low latency

Data Warehouse
  Data types/size: Data files; Gigabytes to Terabytes
  Traffic implication: Large files increase bandwidth required; search and access require low latency
  Network need: Higher bandwidth for servers and backbone; low latency

Network Backup
  Data types/size: Data files; Gigabytes to Terabytes
  Traffic implication: Large number of large files; transmitted during a fixed time period
  Network need: Higher bandwidth for servers and backbone; low latency

Video Conferencing, Interactive Whiteboard
  Data types/size: Constant data stream; 1.5 to 3.5 Mbps at the desktop
  Traffic implication: Class of service reservation; high volume of data streams
  Network need: Higher bandwidth for servers and backbone; low latency; predictable latency

Most existing networks today are slower than most current computers and servers. Many current computers and servers using the industry-standard PCI (Peripheral Component Interconnect) bus architecture are capable of raw I/O throughput of 132 MBps, or about 1.05 Gbps. When these computers or servers are connected to the network through FDDI (Fiber Distributed Data Interface) or Fast Ethernet, the most widely implemented networks today, the maximum transfer rate is just 12.5 MBps. As a result, a bottleneck occurs between the computers or servers and the network, or a relatively high number of CPU interrupts per transfer occurs as the computer adapts itself to the slower network.
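The mismatch above can be worked through directly. The following sketch uses the figures quoted in the text (a 32-bit, 33 MHz PCI bus versus a 100 Mbps FDDI or Fast Ethernet link) to show how the roughly ten-to-one gap arises:

```python
# Illustrative calculation of the bus-vs-network mismatch described above.
# Figures are those quoted in the text: a 32-bit, 33 MHz PCI bus and a
# 100 Mbps FDDI / Fast Ethernet link.

pci_bytes_per_sec = 32 / 8 * 33_000_000      # 4 bytes x 33 MHz = 132 MBps
pci_gbps = pci_bytes_per_sec * 8 / 1e9       # about 1.05 Gbps

network_bits_per_sec = 100_000_000           # FDDI / Fast Ethernet
network_mbytes_per_sec = network_bits_per_sec / 8 / 1e6   # 12.5 MBps

ratio = pci_bytes_per_sec / (network_bits_per_sec / 8)
print(f"PCI bus:  {pci_bytes_per_sec/1e6:.0f} MBps ({pci_gbps:.2f} Gbps)")
print(f"Network:  {network_mbytes_per_sec:.1f} MBps")
print(f"The bus can feed data about {ratio:.1f}x faster than the network carries it")
```

The factor of about ten is exactly why the text describes the network, not the host, as the bottleneck.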

Further, the explosive growth of the Internet, the WWW (World Wide Web), and enterprise intranets is radically changing the pattern of network traffic by introducing more and more traffic between different subnets. Network users constantly access servers across many subnets and geographies rather than local servers serving internal organizational needs. As a result, the traditional "80/20" rule is no longer true. In the past, 80% of network traffic stayed local within the subnet and 20% left the subnet, running over the corporate backbone and across the WAN (Wide Area Network). The reverse trend is happening today: the network must be able to handle anywhere-to-anywhere traffic with 80% of the traffic crossing subnet boundaries.

With today's data-intensive applications, increasing number of network users, enterprise intranets, LANs (Local Area Networks), and new methods of information delivery, pressure for higher bandwidth is growing rapidly at desktops, servers, hubs, and switches. The concern is how to achieve a high-performance network with a bandwidth that matches the capabilities of its processing power and memory capacity. Therefore, the primary goal of data communications today is not only to facilitate data exchange between computing systems, but to do it fast as well. This drives a widespread interest in the technologies for gigabit networking.

Further, achieving true gigabit networks is not only a matter of raw bandwidth increases. Other aspects of networking must be considered: the "legacy" infrastructure of existing switches, the software and network interface cards (NICs), the ability of the protocol stacks to move data in and out of the computer, and fast routing and switching. Other issues are increasing traffic demands, unpredictable traffic flows, and the priority of critical applications. All of these aspects of the networking system should be taken into account in order to achieve true high-bandwidth networking.

This paper discusses the basic concepts of gigabit networking, and the issues on switching and routing. It also presents the recent development of gigabit technologies and products. Finally, current gigabit technologies available for high-speed LAN are discussed.


2. Basic Concepts of Gigabit Networking

What is the speed of a true gigabit network? In the ATM (Asynchronous Transfer Mode) world, it could be 622,000,000 bps (OC-12), 1,244,000,000 bps (OC-24), or 2,488,000,000 bps (OC-48). With 100 MBps Fiber Channel, it would be 800,000,000 bps. In Ethernet, it is 1,000,000,000 bps. It could also be 1,073,741,824 bps (which is equal to 2^30 bps, where 2^10 equals 1,024, or 1 k). Standardized by IEEE 802.3z, a true gigabit network will provide connections between two nodes at a rate of at least 1,000 Mbps. By comparison, this is approximately ten times the speed of both FDDI and Fast Ethernet.
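A quick way to compare these "gigabit" definitions is to normalize them all to bits per second. The sketch below does so, using the rounded figures quoted above and computing the binary gigabit exactly as 2^30:

```python
# The "gigabit" figures quoted above, normalized to bits per second.
# SONET OC-n rates are rounded here, as in the text.
rates_bps = {
    "ATM OC-12":                  622_000_000,
    "ATM OC-24":                1_244_000_000,
    "ATM OC-48":                2_488_000_000,
    "Fiber Channel (100 MBps)": 100_000_000 * 8,   # 800 Mbps
    "Gigabit Ethernet":         1_000_000_000,
    "Binary gigabit (2**30)":   2**30,             # 1,073,741,824 bps
}
fast_ethernet = 100_000_000                        # 100 Mbps baseline
for name, bps in rates_bps.items():
    print(f"{name:26s} {bps/1e9:5.2f} Gbps = {bps/fast_ethernet:4.1f}x Fast Ethernet")
```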

The networks with at least 1 Gbps are feasible today basically due to the technology advancement in fiber optics, and cell networking (cell switching, or cell relay).


2.1. Fiber Optics

Light has the properties of reflection and refraction. When light passes from one medium to another, part of it is reflected and the rest is refracted (Figure 1). Fiber optics uses these properties of light to send signals over long distances across a thin strand of glass (the core), which is surrounded by a thicker outer layer (the cladding). The structure of a fiber is shown in Figure 2. In fiber optics, bits are sent by transmitting pulses of light through the core of the fiber.

Figure 1: Reflection and Refraction of Light [Partridge's Gigabit Networking].


Figure 2: Fiber structure [Partridge's Gigabit Networking].

Since the transmission speed in fiber is 0.69 times the speed of light in vacuum, or about 2.1x10^8 m/s, it is not significantly different from the transmission speed in copper. This means a transmission through fiber is not faster than one through copper. The difference between fiber and copper, then, is information density (bandwidth): fiber can carry more bits per unit of cable than copper. According to Partridge, fiber has a bandwidth of 25 terahertz (THz) using spectrum bands of 200 nanometers centered on the wavelengths of 0.85, 1.3, and 1.5 microns. With standard signaling equipment capable of transmitting between 1 and 1.4 bits per Hz, a single fiber has a bandwidth between 50 and 75 terabits per second [Partridge's Gigabit Networking].
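The latency consequence of this fixed propagation speed can be illustrated with a short calculation. The 4,000 km link length below is a hypothetical example, not a figure from the text:

```python
# Propagation speed in fiber (~0.69 c) and what it implies for a long link:
# latency is fixed by distance, while capacity is set by signaling.
c = 3.0e8                          # speed of light in vacuum, m/s
v_fiber = 0.69 * c                 # ~2.1e8 m/s, the figure quoted from Partridge

distance_m = 4_000_000             # hypothetical 4,000 km cross-country link
latency_s = distance_m / v_fiber   # one-way propagation delay
print(f"Propagation speed in fiber: {v_fiber:.2e} m/s")
print(f"One-way delay over 4,000 km: {latency_s*1000:.1f} ms")

# At 1 Gbps, the number of bits "in flight" on that link at any instant:
bits_in_flight = 1e9 * latency_s
print(f"Bits in flight at 1 Gbps: {bits_in_flight/1e6:.1f} Mbit")
```

The point matches the text: moving to gigabit speeds increases the number of bits in flight enormously, but it does not reduce the propagation delay at all.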

There are two types of fiber: single-mode fiber and multimode fiber. Single-mode fiber is superior in its transmission quality and properties, while multimode fiber is more error tolerant in fitting to transmitter or receiver. For further information on fiber optics, please refer to [Partridge's Gigabit Networking].


2.2. Cell Networking

Another important concept in gigabit networking is cell networking. The basic idea of cell networking is to transmit all data in small, fixed-size packets (cells). Figure 3 shows the concept of cells and packets. By choosing small, fixed-size cells, it is possible to reduce waste and to minimize delays. Whatever the cell size, on average half of the last cell of a transmission is unused, so small cells keep this waste small.

Figure 3: The concept of cells and packets.
Secondly, if packets vary in size, the delay will also vary. As a result, it is very difficult to guarantee the delays required for interactive voice and video traffic. Other advantages of cells are as follows:
  1. Reducing the number of transmission networks
  2. Providing easier support for multicasting
  3. Offering a better multiplexing scheme than ISDN (Integrated Services Digital Network) for high speeds
For more explanation, please refer to [Partridge's Gigabit Networking].
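The padding-waste argument above can be made concrete with a small sketch. The 48-byte payload matches an ATM cell; the 4,096-byte cell and the 10,000-byte message are hypothetical values chosen for illustration:

```python
# Padding waste for fixed-size cells: whatever the cell size, on average half
# of the last cell of a message is unused, so small cells keep waste low.
import math

def cells_needed(message_bytes: int, cell_payload: int) -> int:
    # Number of fixed-size cells required to carry the message.
    return math.ceil(message_bytes / cell_payload)

def wasted_bytes(message_bytes: int, cell_payload: int) -> int:
    # Unused bytes in the final, partially filled cell.
    return cells_needed(message_bytes, cell_payload) * cell_payload - message_bytes

msg = 10_000                        # hypothetical 10,000-byte message
for payload in (48, 4096):          # 48 B = ATM cell payload; 4096 B = large cell
    print(f"{msg} B message, {payload} B cells: "
          f"{cells_needed(msg, payload)} cells, "
          f"{wasted_bytes(msg, payload)} B wasted")
```

With 48-byte cells the waste is a few tens of bytes; with 4,096-byte cells it is over two kilobytes, which is the reduction in waste the text refers to.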

The basic concepts of cell networking thus provide faster transmission and lower delays, both of which are requirements for gigabit-speed networks.

3. Routing and Switching Issues in Developing Gigabit Networking

Today's data communications and networks would not exist without routers. Routers are essential for linking LANs and remote sites. However, since routing is considered one of the major bottlenecks in networks, over the past few years routers have become less central to building networks and are being replaced by switches. The current trend is "switch when you can, route when you must," or "switch many, route once." In the LAN environment, multiport bridges (segment switches) are used instead of routers to link LANs. In WANs, frame switches have replaced the need for routers [Netreference White Paper].

Switching and routing issues are crucial in designing gigabit networking. Increasing bandwidth to gigabit magnitudes will not be very useful if the gain in bandwidth is simply offset by the slowness of routers. On the other hand, routing is required more than ever, especially with current and future network traffic moving away from the traditional 80/20 rule to the new 20/80 rule.

In this section, the basic routing functions, and the significance and shortcomings of routers are discussed. Several approaches to improve the performance of routing and routers are also observed.


3.1. Basic Routing Functions

Routing has two basic functions: determination of the routing path (route calculation) and frame forwarding (switching). Routing protocols provide and exchange information about the topology of the overall network among all participating routing devices. Path determination may be based on a variety of metrics (values from algorithmic calculations on a particular variable such as network delay, link cost, etc.) or metric combinations. The routing algorithm then uses the route metrics and information, stored and maintained in the routing table, to find an optimal path to a destination. Route calculation also applies filters such as restrictions, priority criteria, firewalls, etc. to each packet in a particular traffic flow between two points [Netreference White Paper].
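As an illustration of route calculation, the sketch below runs Dijkstra's shortest-path algorithm over a small hypothetical topology whose link metrics could represent delay, link cost, or any combination; this is a generic example, not the algorithm of any particular router discussed here:

```python
# Minimal sketch of path determination: Dijkstra's algorithm over link metrics,
# as a routing protocol's route calculation might fill in routing-table entries.
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: [(neighbor, metric), ...]}; returns (cost, path)."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, metric in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + metric, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical topology; the numbers are arbitrary link metrics.
topology = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(shortest_path(topology, "A", "D"))   # (3, ['A', 'B', 'C', 'D'])
```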

The function of frame forwarding is to forward incoming packets to the appropriate output link. Before forwarding an incoming packet, the router has to examine its source and destination addresses, which are evaluated at layer 3.


3.2. Shortcomings of Routers

Several shortcomings of typical routers are described as follows [Netreference White Paper]:

High and variable latency
Typical routers exhibit relatively high and variable latency (delay); thus, they are not capable of supporting many time-sensitive multimedia applications. They may also be incapable of operating at gigabit speeds.

Cost and Performance Issues
In terms of cost and performance, routers cannot compete with layer 2 switches. Layer 2 switching by Media Access Control (MAC) address is generally faster and more cost-effective than a router's layer 3 switching by network address. Layer 3 switching has to examine each layer 3 header to apply the appropriate routing, while layer 2 switching can simply ignore it. Hence, the layer 3 switching of routers is slower.
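The cost difference between the two lookups can be seen in miniature below: a layer 2 MAC table is a single exact-match lookup, while a layer 3 forwarding decision requires a longest-prefix match over the routing table. All addresses and prefixes are hypothetical:

```python
# Layer 2 forwarding is an exact-match lookup; layer 3 forwarding needs a
# longest-prefix match. Addresses and prefixes below are hypothetical.
import ipaddress

mac_table = {"00:a0:c9:14:c8:29": "port 3"}       # layer 2: one dict lookup
print(mac_table["00:a0:c9:14:c8:29"])

routes = {                                        # layer 3: prefix table
    ipaddress.ip_network("10.0.0.0/8"): "if1",
    ipaddress.ip_network("10.1.0.0/16"): "if2",
    ipaddress.ip_network("0.0.0.0/0"): "default",
}

def longest_prefix_match(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # most specific wins
    return routes[best]

print(longest_prefix_match("10.1.2.3"))   # if2 -- the /16 beats the /8
```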

Administration Issues
It is easier to set up and maintain switches than routers. For routers, a relatively large set of configuration parameters must be specified for each router in the network: subnet addresses and masks, routing information, and input and output ports. These are often different for each protocol stack, but must be kept consistent for a router to work properly with the other routers in the network. Hence, routers are administratively demanding and costly.


3.3. Significance of Routers

Despite the shortcomings of routers, there are certain networking functions that routers perform better than switches [Netreference White Paper]:

Traffic Filtering for WAN
Routers are capable of keeping unnecessary traffic off the WAN because they do a much better job than layer 2 switches of selecting which traffic to forward over often-limited WAN links. As a result, routers can reduce networking costs and increase performance due to reduced contention for WAN links.

Broadcast Support
Broadcast messages in large networks that use layer 2 hubs or bridge switches can grow out of control, resulting in inefficient use of network bandwidth. Routers can provide the necessary firewalls to control broadcast messages because they look at the IP source and destination addresses in the packet headers.

Security Firewalls
Since routers can filter messages, they provide security firewalls between corporate networking sites and the public Internet. The decision to forward a message is made only after the router has reviewed the IP source and destination addresses in the headers of all received packets and consulted access control lists.

Multiple Administrative Domains
Many large organizations break their large networks into separate subnets for reasons of network scalability, coordination, and administration. However, it is desirable that this approach still reduce the need and costs for multiple network administrators. Routers can be used in this case to create multiple administrative network domains.


3.4. Approaches to Solve Routing Issues

Since routers will still be required in future networks, several techniques have been developed to overcome these routing drawbacks and improve network performance.


3.4.1. Minimizing the Need for Routing

The evidence of this effort is the scheme "switch when you can, route when you must," or "route once, switch many." The idea is to avoid routing altogether if possible, or at least to minimize the need for it. A common approach in this scheme is to use switches in place of routers wherever possible, although standard layer-2 switches have limitations of their own, such as their inability to forward traffic between subnets or to contain broadcasts [Rapid City's (Nortel) Routing Switch].

Newer switching technologies like ATM (Asynchronous Transfer Mode) and VLANs can also reduce the number of routes in the network [Netreference White Paper].

3.4.2. Switching/Routing Integration

If properly done, integrating the functions of switching and routing in one product can improve network performance and simplify administration. One approach is to simply add routing code to a switch. However, there are two drawbacks to this approach. First, it does not make routers easier to manage, because it simply adds more routers to administer. Secondly, this approach does not fully take advantage of the potential synergy between switching and routing. Making a switch "routing aware" and a router "switching aware" is not enough, since true integration can apply optimal techniques to issues like broadcast, best-path determination, and matching network services to application requirements [Netreference White Paper].
Another approach is to use router servers. A router server acts as a center holding the network topology and state information from all network switching/routing nodes, and calculates all paths for traffic. Thus, there is no distinction between routing and switching functions in a network using router servers, because the same path determination is applied for both functions. In other words, paths from other subnets and paths inside the subnet are calculated in the same way [Netreference White Paper]. A router server performs routing once, at the initial call setup. During this session, the router server selects the optimal route and passes the information to each of the switches along the chosen path. The route is only recomputed if new routing information becomes available. After this initial setup, data are switched and forwarded along the chosen path with no further involvement of the router server. Router servers thus implement the "route once, switch many" scheme.

3.4.3. Improving Routing Performance

Another technique to improve routing performance is to separate control information traffic from regular data traffic, and then to optimize the paths of these two types of traffic. With this approach, data traffic can bypass routers. Several techniques have been developed from this basic idea, such as Ipsilon IP Switching, the Toshiba Cell Switch Router, 3Com FastIP, IBM Switched Virtual Networking/Multiprotocol Switched Services, CiscoFusion and NetFlow Switching, Multiprotocol over ATM (MPOA), and Cabletron SecureFast Virtual Networking.

3.4.4. Improving Router Performance

Due to the complexity and difficulty of establishing good solutions for the "route once, switch many" scheme, the industry has taken a different approach: improving the performance of routers themselves. Rapid City Communications has come up with the concept of the Routing Switch. IBM has announced its High Performance Routing (HPR). NetStar Inc. has introduced its Gigabit Router, and Multigigabit Routers are being developed. Technologies from this approach are described fully in Section 4.

3.4.5. Improving Protocol Efficiency

Network protocols play an important role in network performance. The performance of network protocols is directly linked to the performance of protocol processing and protocol control algorithms. Recently, several techniques have been developed for higher-performance network protocols. According to Feldmeier, the average-case performance of a protocol can be improved by reducing the number of protocol operations, improving the performance of required operations, and improving the efficiency of bandwidth utilization [Feldmeier's Survey].

Reducing the Number of Protocol Operations
This can be achieved by choosing an appropriate configuration for the protocol to best suit the operating environment and eliminate unnecessary functionality. There are two kinds of protocol configuration: dynamic protocol configuration and quasi-static protocol configuration. Dynamic protocol configuration enables the protocol to adjust its behavior dynamically to changes in the environment, reducing the amount of required processing. Quasi-static protocol configuration selects which protocols are to be used in a protocol stack. The selection is based on information received at connection setup time: the services required by the application and the services provided by the network layer. Quasi-static protocol configuration works best if each necessary communication function is performed exactly once in the protocol stack.
Secondly, separating control information processing from data processing can further reduce the number of protocol operations. This is because processing data packet headers is much simpler than processing control information. A good protocol design should process control messages less often than data packets and also eliminate the processing of redundant control messages.

The number of protocol operations can be further reduced by performing fewer acknowledgments. In a reliable data transfer, at each hop, the receiver has to send acknowledgments to tell the sender what data it has received. This results in many acknowledgment messages on the network, and acknowledgment processing at both the sender and the receiver. The problem can be reduced if fewer acknowledgments are sent; for instance, multiple packets can be acknowledged with a single acknowledgment message.
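A minimal sketch of this idea follows, acknowledging a run of packets with a single cumulative acknowledgment rather than one acknowledgment per packet; the packet counts and the ACK interval are illustrative assumptions:

```python
# Sketch of cumulative acknowledgment: instead of one ACK per packet, the
# receiver sends a single ACK covering a run of in-order packets.

def per_packet_acks(received_seqs):
    return len(received_seqs)                 # one ACK message per packet

def cumulative_acks(received_seqs, ack_every=4):
    # Send an ACK only every `ack_every` packets, plus a final ACK for the tail.
    acks = [seq for i, seq in enumerate(received_seqs, 1) if i % ack_every == 0]
    if not acks or acks[-1] != received_seqs[-1]:
        acks.append(received_seqs[-1])        # final ACK covers any leftover tail
    return len(acks)

seqs = list(range(1, 101))                    # 100 in-order packets
print(per_packet_acks(seqs))                  # 100 ACK messages
print(cumulative_acks(seqs))                  # 25 ACK messages
```

The same data transfer generates a quarter as many acknowledgment messages, and correspondingly less acknowledgment processing at both ends.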

Improving Performance of Required Operations
According to Feldmeier, the required operations of searching and timers can be improved. Searching has to be done quickly and efficiently because it is performed by many protocol operations, such as connection demultiplexing, routing-table lookups, and timer management. Some implementations of searching are direct addressing, hashing, and comparison search of ordered keys. Since the search operation is mostly comparison, several attempts have been made to optimize the speed of comparison, such as using parallel comparison and using the probability of various events to predict search order. A detailed description of each approach is given in [Feldmeier's Survey].

Timers are essential for some protocol operations. Setting, resetting, and canceling timers are often expensive due to the search operation on the timer list to find the desired entry. Thus, reducing the number of timer manipulations can improve the performance of protocols. Ordering the timer list by expiration time to reduce searching, and minimizing timer resetting, can reduce timer manipulations.
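One common way to keep the timer list ordered by expiration time is a min-heap, sketched below; the `TimerList` class and its timer values are hypothetical, not taken from any protocol implementation discussed here:

```python
# Sketch of a timer list ordered by expiration time: a min-heap makes "find
# the next timer to expire" O(1) and insertion O(log n), avoiding a linear
# search of the timer list on every manipulation.
import heapq
import itertools

class TimerList:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # tie-breaker for equal expirations

    def set_timer(self, expires_at, callback):
        heapq.heappush(self._heap, (expires_at, next(self._counter), callback))

    def expire(self, now):
        """Fire every timer whose expiration time has passed."""
        fired = []
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            fired.append(callback())
        return fired

timers = TimerList()
timers.set_timer(30, lambda: "retransmit")
timers.set_timer(10, lambda: "keepalive")
print(timers.expire(now=15))   # ['keepalive'] -- earliest timer fires first
```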

Increasing Efficient Bandwidth Utilization
The reduction of data movement is essential to improving the efficient use of bandwidth. This is especially true for computers with RISC (Reduced Instruction Set Computer) processors, since the bandwidth of the bus and memory is often the major bottleneck. One way to reduce data movement is to reduce the I/O rate of a processor by using a cache. Another approach is software pipelining, which leaves data in the processor between processing operations. In a reliable network, selective retransmission can also reduce data movement: when a packet is lost, the protocol asks the sender to retransmit only the lost packet.

In summary, routers are still required in shaping gigabit networking. Switches have improved network performance significantly, but they cannot perform some of the valuable functions of routers. Thus, the industry has recently directed its efforts toward improving the performance of both routing and routers.


4. Technologies Supporting Gigabit Networking

Some technologies and products have been introduced recently to support the development of gigabit networking. In this paper, the Routing Switch, IBM's HPR, Gigabit Routers, Multigigabit Routers, and I/O Switching are presented. The technologies described here may soon become obsolete in favor of newer techniques, and other technologies may have been introduced while this paper was being written.


4.1. Switching Technology

Switching has become the key element in most networks for segmenting traffic, reducing latency, and improving performance. It is simple and cost-effective to implement, and requires only a minimal amount of hardware. Switching allows specific computers, workgroups, or servers to have their own dedicated access to the full bandwidth available on the network. As a result, switching provides more bandwidth for users than a shared network (Figure 4).

Figure 4: Shared vs. Switched on 10 Mbps Ethernet [Digital White Paper].

One switching technology that produces quicker network throughput is crossbar switching. It uses a non-blocking switching matrix to allow multiple simultaneous connections with very low latency and fast throughput. This technology is implemented today in the design of high-speed routers like the NetStar GigaRouter and the Multigigabit Router.


4.2. IBM's HPR (High Performance Routing)

IBM's High Performance Routing (HPR) is an advanced Systems Network Architecture (SNA) routing technology, based on the latest standards from the ATM Forum and the APPN (Advanced Peer-to-Peer Networking) Implementers' Workshop. The key features of HPR are high performance, dynamic rerouting, priority and class of service, congestion avoidance, scalability, and economy.


4.3. Gigabit Routers

Gigabit routers, such as the Multigigabit Router, are on their way to the market. Several companies have recently introduced gigabit routers, among them Cisco (Cisco 12000 series), NetStar (GigaRouter), and FORE (Marconi). Basically, all high-speed router designs adopt the same functional components, as shown in Figure 5 [Newman et al's Paper]. The functions of each component of a general high-speed router are shown in Table 2.

Figure 5: General structure of a high-speed router [Newman et al's Paper].
Table 2: The functions of each component of a general high-speed router.

Line Card
  Contains the physical layer components that interface the external data link to the switch fabric.

Switch Fabric
  Interconnects the various components of the gigabit router; offers higher aggregate capacity than a more conventional backplane bus.

Forwarding Engine
  Inspects packet headers; determines the outgoing line card for each packet; rewrites the header.

Network Processor
  Runs the routing protocol; computes the routing tables that are copied into each of the forwarding engines; handles network management; processes unusual packets that need special handling.

There are two approaches to designing the switch fabric: the crossbar switch and the ATM switch. The NetStar GigaRouter uses a 16-port crossbar switch with each port operating at 1 Gbps. The Cisco 12000 series uses a multigigabit crossbar switch fabric. The Multigigabit Router will use a crossbar switch with 15 ports, each operating at 3.3 Gbps. On the other hand, IP/ATM, the Cell Switch Router, and IP Switching use an ATM switch.

The forwarding engine may be designed as a physically separate component or integrated with either the line card or the network processor. The packet-forwarding rate of a separate forwarding engine can be changed independently of the aggregate capacity by varying the ratio of forwarding engines to line cards. However, this approach creates additional overhead across the switch fabric. The Multigigabit Router implements separate forwarding engines in its architecture. The NetStar GigaRouter integrates its forwarding engine with each line card. The IP Switch architecture integrates its forwarding engine with the network processor, although combining it with the line card, or implementing it separately, is not precluded.

Since the average packet size is now about 2,000 bits, the packet-forwarding rate required is about 500 kilopackets per second (kpps) for each 1 Gbps of traffic. To achieve rates of this magnitude, two approaches have been proposed: a silicon forwarding engine, and a high-speed general-purpose processor with a destination address cache in internal (on-chip) memory [Newman et al's Paper]. The features of both approaches are shown in Table 3.
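The forwarding-rate figure above follows directly from the packet size; the sketch below reproduces the arithmetic, and also shows that 5 Mpps (the silicon design's rate in Table 3) corresponds to 10 Gbps of 2,000-bit packets:

```python
# The forwarding-rate arithmetic quoted above: with ~2,000-bit average packets,
# each 1 Gbps of traffic needs about 500 kpps of forwarding capacity.
avg_packet_bits = 2000

def required_kpps(link_gbps: float) -> float:
    # Packets per second, expressed in kilopackets per second.
    return link_gbps * 1e9 / avg_packet_bits / 1e3

print(required_kpps(1.0))     # 500.0 kpps per 1 Gbps
print(required_kpps(10.0))    # 5000.0 kpps, i.e. 5 Mpps for 10 Gbps of traffic
```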

Table 3: Silicon approach vs. general-purpose processor with caching approach

Silicon Design
  Design: Silicon hardware
  Memory: 4 MB
  Forwarding capability: 5 Mpps (about 10 Gbps of average traffic)
  Advantage: Maintains its maximum forwarding rate regardless of the past history of destination addresses
  Disadvantage: Fixed solution

Processor with Caching Design
  Design: A 415 MHz general-purpose processor with internal cache
  Memory: An additional 8 MB (for a complete routing table of several hundred thousand routes)
  Forwarding capability: 11 Mpps if all requested destinations are in the cache
  Advantage: Maintains its full forwarding rate if there is at least a 60% chance that the required destination address has been seen in the past and is still in the cache
  Disadvantage: Sensitive to the traffic profile and traffic changes

Further, there is an ongoing discussion about the best way to include additional functions, such as multicasting, Quality of Service, firewall filtering, and complex policy-based routing in Gigabit Routers. To offer such functionality, more fields in the packet header, besides the destination address, should be used.


4.4. Routing Switch

The Routing Switch is designed to improve the performance of routers to match that of switches. The concept is to apply switching techniques to those protocols that require optimized routing performance, and to fully integrate high-performance routing into the switch fabric.


4.4.1. Benefits of Routing Switch

With routing switches, the performance penalty of layer 3 traffic is eliminated, which yields five benefits in network design [Rapid City's Routing Switch].

Simpler network design
It is common practice to avoid router hops when designing networks in order to reduce routing latency. One way to do this is to add local servers on the same subnet as the primary clients. However, this setup requires an extra switch and links when the server is located in the data center rather than in the workgroup. With routing switches, avoiding router boundaries becomes unnecessary.

Supporting priorities and low latency for advanced applications
Some applications require not only raw bandwidth but also consistent latencies (e.g., multimedia streams) and priorities. A well-designed routing switch provides priority queuing to offer consistent latency across entire intranets, with switch-like rather than software-router latencies.

Ease of migration
Because no new protocol is required, routing switches integrate seamlessly with existing network infrastructures. Out of the box, a routing switch is a high-performance layer 2 switch. Once configured as a router, it increases IP performance and reduces the load on the existing routers in the network center.

Consistent performance with advanced features enabled
The performance characteristics of most current switches and routers vary widely depending on which features are turned on. For example, the performance of some switches can drop by over 50% when enforcing VLAN policy, and routers suffer large performance drops when supporting priorities. This is because switches and routers depend on a CPU, a shared resource, for advanced features: the more processes placed on the CPU, the slower it executes them. A routing switch, in contrast, can run all features, such as VLANs and priorities, at the same high performance levels.

Powerful Controls by Integrated Management
Another benefit of routing switches is integrated management. Configuring VLANs, IP Multicast, and routing between VLANs can all be done from one console. A MAC address for the router on each VLAN is configured entirely within the switch.

4.4.2. Products of Routing Switch

Rapid City Communications has recently introduced its robust and cost-effective routing switches, the Fully Integrated Routing Switch Technology (FIRST) family. The FIRST family is modular and scalable, supporting 96 10/100 autosensing ports, up to 12 Gigabit Ethernet ports, or combinations of the two. As one member of the FIRST family, the Rapid City f1200 routing switch is a high-performance, feature-rich switch that allows complete flexibility in switching or IP routing (Figure 6) [Rapid City's Routing Switch].

Figure 6: Six networking configurations at switching speeds made possible by a routing switch [Rapid City's Routing Switch].


4.5. I/O Switching

The objective of I/O Switching is to solve the I/O and backbone bottleneck problems in networks. These bottlenecks arise as many companies centralize critical network resources, such as file servers and disk and tape storage devices, in the data center. I/O Switching allows direct connectivity between a server's or storage system's I/O bus and the switch fabric, providing significantly higher throughput and reduced latency by eliminating unnecessary translation to network protocols at the server interface [GigaLabs White Paper]. As a new approach to improving overall network performance, I/O Switching is compatible with critical I/O resources on the LAN, such as PCI and Sbus servers and SCSI (Small Computer System Interface) storage devices.

Traditionally, to connect a server to a network one needs a network interface card (NIC), which provides a translation layer between the server bus and a LAN media protocol and converts server data into frames or cells. The corresponding media card on the switch then transmits those frames and cells. The NICs in a server typically have a bandwidth of 100 or 155 Mbps, well below the capacity of the server's I/O bus, which is generally capable of 1.05 Gbps (Figure 7). Further, the NIC requires specialized drivers to optimize I/O transfer by offloading much of the I/O processing overhead from the server CPU and operating system [GigaLabs White Paper], yet it still introduces the additional overhead of protocol processing at the server.
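The bandwidth mismatch above can be made concrete with some quick arithmetic; the link rates are the figures quoted in the text, and the data size is an arbitrary example:

```python
# Time to move 100 MB of server data, comparing the conventional NIC
# path against the raw server I/O bus.
DATA_BITS = 100 * 8 * 10**6     # 100 MB expressed in bits (decimal MB)

nic_rate = 100 * 10**6          # conventional NIC: 100 Mbps
bus_rate = 1.05 * 10**9         # server I/O bus: 1.05 Gbps

print(DATA_BITS / nic_rate)     # 8.0 seconds through the NIC
print(DATA_BITS / bus_rate)     # ~0.76 seconds at full bus speed
```

The NIC, not the bus, is the bottleneck by roughly a factor of ten, which is the gap I/O Switching is meant to close.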

Figure 7: The Comparison between Existing Network and I/O Switching [GigaLabs White Paper].

With the approach of I/O Switching, an I/O NIC in the server is connected directly to the switch through either copper or fiber, bypassing protocol translations (Figure 8) [GigaLabs White Paper]. The only media translation occurs at the outbound switch port; once the I/O request has been processed, the server can move data quickly and directly from its bus to the backbone switch. Storage devices can be connected directly to the switching fabric through the SCSI bus, since I/O Switching supports native SCSI II and III links. In this design, scalability and reliability can be achieved by grouping processing, data, and retrieval resources on the network into a virtual computer system.

Figure 8: The Switch-to-Storage Connectivity of Existing Network and I/O Switching [GigaLabs White Paper].


4.5.1. Benefits of I/O Switching
Several benefits gained from using I/O Switching are as follows [GigaLabs White Paper]:

Scalability to match server bus capacity
I/O Switching scales well to meet the continuously growing demand for network bandwidth. For instance, the PCI bus architecture has moved from 32 bits to 64 bits, increasing its throughput accordingly, and a standard for a 4 Gbps PCI bus is scheduled for the near future, replacing the 1 Gbps and 2 Gbps buses. In the same way, I/O Switching offers a higher-performance approach than network protocols for matching switch-to-server throughput to server bus capacity.
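The 1, 2, and 4 Gbps bus figures follow directly from bus width times clock rate; the clock values below are the standard PCI rates, assumed here for illustration:

```python
def pci_throughput_gbps(width_bits, clock_mhz):
    """Peak PCI throughput = bus width x clock rate."""
    return width_bits * clock_mhz * 10**6 / 10**9

print(pci_throughput_gbps(32, 33))  # ~1.06 Gbps: the "1 Gbps" bus
print(pci_throughput_gbps(64, 33))  # ~2.11 Gbps: the "2 Gbps" bus
print(pci_throughput_gbps(64, 66))  # ~4.22 Gbps: the "4 Gbps" bus
```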

Risk-Free Migration to Maximum I/O Performance
Since I/O Switching is compatible with virtually any existing network protocol (Ethernet, Fast Ethernet, FDDI, ATM, and Gigabit Ethernet), it can be transparently integrated into a legacy LAN without risky or costly changes to the network infrastructure.

Supporting Existing and New Applications
I/O Switching is very effective in handling high-bandwidth client/server applications, such as groupware, intranets, large data transfers, high-speed imaging, video conferencing, and streaming multimedia. It is also an enabling technology for emerging system architectures such as network computing (NC) by functioning as a high-speed server connector [GigaLabs White Paper].


4.5.2. Products of I/O Switching

I/O Switching is available across the entire switching product line of GigaLabs, which supplies I/O switches, I/O NICs, and storage bus interfaces. The GigaLabs switching product family includes a stackable, modular workgroup switch and the industry's highest-performance multi-gigabit backbone switch. For instance, the GigaStar 3000 is a high-performance backbone switch with a multigigabit, non-blocking switch fabric, delivering up to eight full-duplex, full-bandwidth Gigabit Ethernet and/or I/O Switching ports through an 18 Gbps switching fabric.

5. Current Gigabit Technologies Available for High-speed LAN

Four technologies, in production or in development today, are competing with each other to provide gigabit networks: ATM, Fiber Channel, Gigabit Ethernet, and Serial HIPPI.


5.1. Asynchronous Transfer Mode (ATM)

Originally, the goal of ATM's design was to simplify and standardize international telecommunications, and today it has become a standard for WANs. ATM provides high-speed transmission for all types of communications, from voice to video to data, over one network using small, fixed-size cells, and it offers unparalleled scalability and Quality of Service. Currently, ATM technology is used in network backbones and in specific workgroup applications with heavy traffic loads that mix voice, video, and data on a single network. To achieve gigabit speeds, ATM is being developed to operate at 622 Mbps (OC-12) and 1.244 Gbps (OC-24).
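ATM's fixed-size cells can be illustrated with a minimal segmentation sketch; padding the last cell with zeros is a simplification of the real AAL framing, used here only to show the fixed 48-byte payload:

```python
CELL_PAYLOAD = 48  # payload bytes per 53-byte ATM cell (5-byte header)

def segment(data: bytes):
    """Split a message into fixed-size cell payloads, zero-padding the
    final cell so every cell carries exactly 48 bytes."""
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD]
        cells.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))
    return cells

cells = segment(b"x" * 100)
print(len(cells))   # 100 bytes -> 3 cells (48 + 48 + 4 padded to 48)
```

The fixed cell size is what makes hardware switching of mixed voice, video, and data traffic predictable, at the cost of padding overhead on short messages.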

The future of ATM is still unknown; it depends heavily on ATM's ability to integrate with existing LAN and WAN technologies. Most observers feel that ATM will not become a major force in future networks, since other networking technologies can readily achieve ATM's advantages. Other observers believe that ATM will still meet the future needs of WANs and of a few highly specialized LAN environments.


5.2. Fiber Channel

The standards and architectures of Fiber Channel are still under development, although some vendors have settled on a standard known as Arbitrated Loop, which is basically a ring topology. It is very sensitive to the addition of new users, which can cause increased congestion and reduced bandwidth for each user. At present, Fiber Channel is used to attach storage devices to computers.

Arbitrated Loop Fiber Channel runs at a gigabit per second and supports the SCSI protocol, which makes it a good choice for peripheral-attachment operations. However, many experts agree that Fiber Channel is not a good choice for replacing IP technology or for providing future gigabit networks.

5.3. Gigabit Ethernet

Many network experts agree that Gigabit Ethernet will become the gigabit technology for LAN environments. It is a good choice for providing a higher-capacity enterprise backbone throughout an organization and for giving high-performance workstations a cost-effective gigabit networking connection. Nevertheless, Gigabit Ethernet is not a good solution for rapidly moving applications with huge amounts of data, because of issues with its Ethernet-based CSMA/CD support, host-bus connection issues, and its relatively small packet size.
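The small-packet issue can be quantified: the shorter the frames, the more frames per second a device must process to keep a gigabit link full. A quick sketch, ignoring preamble and inter-frame gap for simplicity:

```python
def packets_per_second(link_bps, frame_bytes):
    """Frames per second needed to saturate a link of the given rate
    with frames of the given size (header overhead ignored)."""
    return link_bps / (frame_bytes * 8)

# Minimum (64-byte) vs. maximum (1518-byte) Ethernet frames at 1 Gbps:
print(f"{packets_per_second(10**9, 64):,.0f}")    # ~1,953,125 pps
print(f"{packets_per_second(10**9, 1518):,.0f}")  # ~82,345 pps
```

A stream of minimum-size frames demands roughly 24 times the per-packet processing of maximum-size frames, which is why small packet sizes strain gigabit forwarding engines.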


5.4. Serial HIPPI (High Performance Parallel Interface)

Serial HIPPI is the fiber-optic version of HIPPI, which was originally developed in the late 1980s to serve the connectivity and high-bandwidth needs of supercomputers and high-end workstations. It provides a simple, fast, point-to-point unidirectional connection. Recently, this technology has established itself as the gigabit technology for big-data applications, clustering, and a broad range of server-connectivity environments, providing a speed of 1.2 Gbps over distances of up to 10 kilometers. Serial HIPPI implements non-blocking switching technology and packets of up to 64 KB in size. It also provides reliable ANSI (ANSI X3T9.3)- and ISO-standardized gigabit connectivity with a packet loss rate approaching zero.

Serial HIPPI operates within the physical and data-link layers of the ISO seven-layer model. At higher layers, Serial HIPPI supports IPI-3 for storage connection and TCP/IP for networking, which makes it compatible with Ethernet, Token Ring, FDDI, and the wide-area protocols used on the Internet. It also supports ARP (Address Resolution Protocol) to specify automatically how to find IP addresses on its network. At the physical layer, Serial HIPPI provides flow control to eliminate errors and data loss due to congestion, guaranteed end-to-end in-order packet delivery, and error reporting. Other protocols have to rely on TCP/IP to detect lost data, which is less efficient.
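The physical-layer flow control described above can be sketched as receiver-driven, credit-based transmission; the credit unit and buffer size here are illustrative assumptions, not the HIPPI specification:

```python
class Sender:
    """Sender that transmits only when the receiver has advertised
    buffer space (credits), so congestion can never cause data loss."""

    def __init__(self, credits):
        self.credits = credits   # bursts the receiver can currently buffer
        self.sent = []

    def send(self, burst):
        if self.credits == 0:
            return False         # must wait for the receiver's go-ahead
        self.credits -= 1
        self.sent.append(burst)
        return True

    def ready(self):
        self.credits += 1        # receiver has freed one buffer

s = Sender(credits=2)
print(s.send("b1"), s.send("b2"), s.send("b3"))  # True True False
s.ready()                                        # receiver drains a buffer
print(s.send("b3"))                              # True
```

Because the sender can never outrun the receiver's buffers, delivery stays lossless and in order without relying on TCP/IP retransmission.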

At present, Serial HIPPI seems to be the only available technology that offers gigabit performance with 100% reliability. However, this does not rule out the possibility that other technologies, such as ATM and Gigabit Ethernet, will become significant factors in the implementation of gigabit networking.


6. Conclusion

Today's advanced technology in fiber optics, computing systems, and networking has made the development of gigabit networks possible. With bandwidths of more than 1 Gbps, gigabit networks can support the demands of increasing network traffic and of many sophisticated computer applications. To achieve true gigabit networks, other aspects of networking, such as routing, switching, and protocols, must also be considered.

Although routers are considered the major bottleneck and are being replaced by cost-effective switches, they are still a key component in building future high-speed networks. With 80% of today's network traffic crossing subnet boundaries, routers are needed more than ever, because they can provide network security and firewalls. Thus, several approaches have been developed recently to improve the performance of routing and of routers. The approaches described in this paper are the Routing Switch, High Performance Routing, Gigabit Routers, and I/O Switching. It is important to note, however, that these technologies might become obsolete in favor of new upcoming techniques and technologies.

Finally, there are at least four gigabit technologies available for high-speed LANs today: ATM, Fiber Channel, Gigabit Ethernet, and Serial HIPPI. This list may soon change as new technologies emerge. At present, Serial HIPPI seems to be the only available technology that offers gigabit performance with 100% reliability, but this does not rule out other technologies, such as ATM and Gigabit Ethernet, becoming significant factors in the implementation of gigabit networking.


7. List of Acronyms

ANR - Automatic Network Routing
ANSI - American National Standards Institute
APPN - Advanced Peer-to-Peer Networking
ARB - Adaptive Rate Based
ARP - Address Resolution Protocol
ATM - Asynchronous Transfer Mode
bps - bits per second
CAD/CAM - Computer-Aided Design/Computer-Aided Manufacturing
CPU - Central Processing Unit
CSMA/CD - Carrier Sense Multiple Access/Collision Detection
FDDI - Fiber Distributed Data Interface
Gbps - Gigabits per second
GIPS - Giga Instructions Per Second
HIPPI - High Performance Parallel Interface
HPR - High Performance Routing
I/O - Input/Output
IP - Internet Protocol
ISDN - Integrated Services Digital Network
ISO - International Standards Organization
kpps - kilopackets per second
LAN - Local Area Network
MAC - Medium Access Control
MB - Megabytes
Mbps - Megabits per second
MBps - Megabytes per second
MPOA - Multiprotocol over ATM
Mpps - Million packets per second
NIC - Network Interface Card
OC - Optical Carrier
PCI - Peripheral Component Interconnect
RAID - Redundant Arrays of Inexpensive Disks
RISC - Reduced Instruction Set Computer
RTP - Rapid Transport Protocol
SCSI - Small Computer System Interface
SNA - Systems Network Architecture
TCP/IP - Transmission Control Protocol/Internet Protocol
VLAN - Virtual Local Area Network
WAN - Wide Area Network
WWW - World Wide Web


8. References

  1. Rapid City Communications Paper, "The Routing Switch," 1997.
     A good paper on the concept of the routing switch.
  2. Peter Newman, Greg Minshall, Tom Lyon, and Larry Huston, "IP Switching and Gigabit Routers," 1996.
     A good paper on the designs of IP Switching and Gigabit Routers.
  3. Simon Fok and Kon Leong, "I/O Switching," GigaLabs White Paper.
     A good discussion on I/O Switching.
  4. Netreference White Paper, "The Evolution of Routing," September 1996.
     A good paper on the current status of routing in today's network trend.
  5. D. C. Feldmeier, "A Survey of High Performance Protocol Implementation Techniques," High Performance Networks: Technology and Protocols, Kluwer Academic, Boston, 1994.
     A good discussion on improving high-performance protocol implementation.
  6. Digital's Network Product Business White Paper, "High Speed Networking Technologies: A Business Guide."
     A business guide to high-speed networking technologies (Fast Ethernet, FDDI, and ATM).
  7. C. Partridge, "Gigabit Networking," Addison-Wesley, Reading, MA, 1993.
     A comprehensive book that discusses the current status and technology of gigabit networking.
  8. R. Jain, "Networking Trends and Their Impact."
     A good lecture on today's networking trends and their impact.
  9. Gigabit Ethernet White Paper, August 1996.
     A good and detailed paper on Gigabit Ethernet.
Last Modified: August 12, 1997