Networks are increasingly important in the business use of computers, as are the applications and data that networks can deliver. If a single computer with standard desktop software, such as word processing, spreadsheets, and databases, can make anyone more productive, then connecting multiple computers on a network brings individuals and data together to improve communications, bolster productivity, and open up opportunities for collaboration and the exchange of information. The most elementary of all networks consists of two computers, each connected to the other using some kind of wire or cable to permit information exchange.
Regardless of how many computers may be interlinked, or what kinds of connections may be used, all networking derives from this basic premise. The primary motivation for networking arises from a business’s need to share data within its organization quickly and efficiently. PCs alone are a valuable business tool, but without a network, PCs are isolated and can neither share data with other computers nor access network-attached devices such as printers, scanners, and fax machines. Because data sharing permits messages, documents, and other files to circulate among users, it can also improve human communication.
Although no company installs a network simply to support electronic mail (e-mail), e-mail remains the most popular networked application in most organizations because it makes communication between individuals easy and efficient.

Invention of Ethernet

A gentleman by the name of Bob Metcalfe realized that he could improve on a system called the Aloha System, which arbitrated access to a shared communications channel. He developed a new system that included a mechanism to detect when a collision occurs (collision detect).
The system also includes “listen before talk,” in which stations listen for activity (carrier sense) before transmitting, and supports access to a shared channel by multiple stations (multiple access). Put all these components together, and you can see why the Ethernet channel access protocol is called Carrier Sense Multiple Access with Collision Detect (CSMA/CD). Metcalfe also developed a much more sophisticated backoff algorithm, which, in combination with the CSMA/CD protocol, allows the Ethernet system to function all the way up to 100 percent load.
In late 1972, Metcalfe and his Xerox PARC colleagues developed the first experimental Ethernet system to interconnect the Xerox Alto. The Alto was a personal workstation with a graphical user interface, and experimental Ethernet was used to link Altos to one another, and to servers and laser printers. The signal clock for the experimental Ethernet interfaces was derived from the Alto’s system clock, which resulted in a data transmission rate on the experimental Ethernet of 2.94 Mbps. Metcalfe’s first experimental net was called the “Alto Aloha Network.”
In 1973, Metcalfe changed the name to “Ethernet,” to make it clear that the system could support any computer, and not just Altos, and to point out that his new network mechanisms had evolved well beyond the Aloha system. He chose to base the name on the word “ether” as a way of describing an essential feature of the system: the physical medium (cable) carries bits to all stations, much the same way that the old “luminiferous ether” was once thought to propagate electromagnetic waves through space.
Physicists Michelson and Morley disproved the existence of the ether in 1887, but Metcalfe decided that it was a good name for his new network system that carried signals to all computers. Thus, Ethernet was born.

The Ethernet System

The Ethernet system consists of three basic elements:
1. the physical medium used to carry Ethernet signals between computers,
2. a set of medium access control rules embedded in each Ethernet interface that allow multiple computers to fairly arbitrate access to the shared Ethernet channel, and
3. the Ethernet frame, which consists of a standardized set of bits used to carry data over the system.
The analysis of this system describes the configuration rules for the first element, the physical media segments. The second and third elements, the set of medium access control rules in Ethernet and the Ethernet frame, are analyzed as well.

Operation of Ethernet

Each Ethernet-equipped computer, also known as a station, operates independently of all other stations on the network: there is no central controller.
All stations attached to an Ethernet are connected to a shared signaling system, also called the medium. Ethernet signals are transmitted serially, one bit at a time, over the shared signal channel to every attached station. To send data, a station first listens to the channel, and when the channel is idle, the station transmits its data in the form of an Ethernet frame, or packet. After each frame transmission, all stations on the network must contend equally for the next frame transmission opportunity. This ensures that access to the network channel is fair, and that no single station can lock out the other stations.
Access to the shared channel is determined by the medium access control (MAC) mechanism embedded in the Ethernet interface located in each station. The medium access control mechanism is based on a system called Carrier Sense Multiple Access with Collision Detection (CSMA/CD).

The CSMA/CD Protocol

The CSMA/CD protocol functions somewhat like a dinner party in a dark room. Everyone around the table must listen for a period of quiet before speaking (Carrier Sense). Once a space occurs, everyone has an equal chance to say something (Multiple Access).
If two people start talking at the same instant, they detect that fact and quit speaking (Collision Detection). To translate this into Ethernet terms, each interface must wait until there is no signal on the channel; then it can begin transmitting. If some other interface is transmitting, there will be a signal on the channel, which is called carrier. All other interfaces must wait until carrier ceases before trying to transmit, and this process is called Carrier Sense. All Ethernet interfaces are equal in their ability to send frames onto the network.
No one gets a higher priority than anyone else, and democracy reigns. This is what is meant by Multiple Access. Since signals take a finite time to travel from one end of an Ethernet system to the other, the first bits of a transmitted frame do not reach all parts of the network simultaneously. Therefore, it’s possible for two interfaces to sense that the network is idle and to start transmitting their frames simultaneously. When this happens, the Ethernet system has a way to sense the “collision” of signals and to stop the transmission and resend the frames.
This is called Collision Detect. The CSMA/CD protocol is designed to provide fair access to the shared channel so that all stations get a chance to use the network. After every packet transmission, all stations use the CSMA/CD protocol to determine which station gets to use the Ethernet channel next.

Collisions

If more than one station happens to transmit on the Ethernet channel at the same moment, then the signals are said to collide. The stations are notified of this event, and instantly reschedule their transmission using a specially designed backoff algorithm.
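The three arbitration steps just described can be illustrated with a small Python sketch (the function name and return shape are hypothetical; real arbitration happens in interface hardware):

```python
def csma_cd_round(ready_stations, carrier_present):
    """One illustrative round of CSMA/CD arbitration.

    ready_stations:  names of stations with a frame queued to send.
    carrier_present: True if a signal (carrier) is sensed on the channel.
    """
    if carrier_present:
        # Carrier Sense: all stations defer while another is transmitting.
        return ("deferred", [])
    # Multiple Access: every ready station has an equal right to transmit.
    senders = list(ready_stations)
    if len(senders) == 1:
        return ("sent", senders)
    if len(senders) > 1:
        # Collision Detect: overlapping transmissions are sensed; the
        # senders stop and reschedule using the backoff algorithm.
        return ("collision", senders)
    return ("idle", [])

print(csma_cd_round(["A"], carrier_present=False))       # ('sent', ['A'])
print(csma_cd_round(["A", "B"], carrier_present=False))  # ('collision', ['A', 'B'])
print(csma_cd_round(["A"], carrier_present=True))        # ('deferred', [])
```

Note that a collision can only arise when two stations both find the channel idle and start at nearly the same moment, which is exactly the window the protocol is designed to handle.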
As part of this algorithm, the stations involved each choose a random time interval to schedule the retransmission of the frame, which keeps the stations from making transmission attempts in lock step. It’s unfortunate that the original Ethernet design used the word “collision” for this aspect of the Ethernet medium access control mechanism. If it had been called something else, such as “stochastic arbitration event (SAE),” then no one would worry about the occurrence of SAEs on an Ethernet. However, “collision” sounds like something bad has happened, leading many people to think that collisions are an indication of network failure.
The truth of the matter is that collisions are absolutely normal and expected events on an Ethernet, and simply indicate that the CSMA/CD protocol is functioning as designed. As more computers are added to a given Ethernet, and as the traffic level increases, more collisions will occur as part of the normal operation of an Ethernet. The design of the system ensures that the majority of collisions on an Ethernet that is not overloaded will be resolved in microseconds, or millionths of a second. A normal collision does not result in lost data.
In the event of a collision, the Ethernet interface backs off (waits) for some number of microseconds, and then automatically retransmits the data. On a network with heavy traffic loads it may happen that there are multiple collisions for a given frame transmission attempt. This is also normal behavior. If repeated collisions occur for a given transmission attempt, then the stations involved begin expanding the set of potential backoff times from which they choose their random retransmission time.
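This expanding set of backoff times can be sketched in a few lines (a simplified model: the 16-attempt limit and the cap of 10 on the exponent follow the standard Ethernet MAC, but real delays are measured in slot times by the interface hardware):

```python
import random

MAX_ATTEMPTS = 16   # after 16 consecutive collisions the frame is discarded
BACKOFF_LIMIT = 10  # the exponent stops growing after the 10th collision

def backoff_slots(collision_count):
    """Pick a random backoff delay, in slot times, after the Nth collision.

    Truncated binary exponential backoff: the range of possible delays
    doubles after each collision (truncated at 2**10 slots), which quickly
    spreads the retransmission attempts of colliding stations apart.
    Returns None once the frame should be discarded.
    """
    if collision_count > MAX_ATTEMPTS:
        return None
    exponent = min(collision_count, BACKOFF_LIMIT)
    return random.randint(0, 2 ** exponent - 1)

# After the 1st collision a station waits 0 or 1 slots; after the 3rd,
# anywhere from 0 to 7 slots; and so on.
print(backoff_slots(1) in (0, 1))  # True
```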
Repeated collisions for a given packet transmission attempt indicate a busy network. The expanding backoff process, formally known as “truncated binary exponential backoff,” is a clever feature of the Ethernet MAC that provides an automatic method for stations to adjust to traffic conditions on the network. Only after 16 consecutive collisions for a given transmission attempt will the interface finally discard the Ethernet packet. This can happen only if the Ethernet channel is overloaded for a fairly long period of time, or is broken in some way.

Best Effort Data Delivery
This brings up an interesting point, which is that the Ethernet system, in common with other LAN technologies, operates as a “best effort” data delivery system. To keep the complexity and cost of a LAN to a reasonable level, no guarantee of reliable data delivery is made. While the bit error rate of a LAN channel is carefully engineered to produce a system that normally delivers data extremely well, errors can still occur. A burst of electrical noise may occur somewhere in a cabling system, for example, corrupting the data in a frame and causing it to be dropped.
Or a LAN channel may become overloaded for some period of time, which in the case of Ethernet can cause 16 collisions to occur on a transmission attempt, leading to a dropped frame. No matter what technology is used, no LAN system is perfect, which is why higher protocol layers of network software are designed to recover from errors. It is up to the high-level protocol that is sending data over the network to make sure that the data is correctly received at the destination computer.
High-level network protocols can do this by establishing a reliable data transport service using sequence numbers and acknowledgment mechanisms in the packets that they send over the LAN.

Ethernet Frame and Ethernet Addresses

The heart of the Ethernet system is the Ethernet frame, which is used to deliver data between computers. The frame consists of a set of bits organized into several fields. These fields include address fields, a variable-size data field that carries from 46 to 1,500 bytes of data, and an error-checking field that checks the integrity of the bits in the frame to make sure that the frame has arrived intact.
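A simplified model of these fields can be written down directly (a sketch only: the preamble and type-field details are omitted or simplified, and `zlib.crc32` stands in for the exact frame check sequence computation):

```python
import struct
import zlib

def build_frame(dest, src, ethertype, payload):
    """Assemble a simplified Ethernet frame as bytes.

    dest, src: 6-byte addresses. The payload is padded up to the
    46-byte minimum and may carry at most 1,500 bytes.
    """
    assert len(dest) == 6 and len(src) == 6
    if len(payload) > 1500:
        raise ValueError("payload exceeds the 1,500-byte maximum")
    payload = payload.ljust(46, b"\x00")           # pad to minimum size
    body = dest + src + struct.pack("!H", ethertype) + payload
    fcs = struct.pack("!I", zlib.crc32(body))      # error-checking field
    return body + fcs

frame = build_frame(b"\x00\x11\x22\x33\x44\x55",
                    b"\x00\xaa\xbb\xcc\xdd\xee",
                    0x0800,        # IP data carried in the data field
                    b"hello")
print(len(frame))  # 64: the minimum frame size, without the preamble
```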
The first two fields in the frame carry 48-bit addresses, called the destination and source addresses. The IEEE controls the assignment of these addresses by administering a portion of the address field. The IEEE does this by providing 24-bit identifiers called “Organizationally Unique Identifiers” (OUIs): a unique 24-bit identifier is assigned to each organization that wishes to build Ethernet interfaces. The organization, in turn, creates 48-bit addresses using the assigned OUI as the first 24 bits of the address.
This 48-bit address is also known as the physical address, hardware address, or MAC address. A unique 48-bit address is commonly pre-assigned to each Ethernet interface when it is manufactured, which vastly simplifies the setup and operation of the network. For one thing, pre-assigned addresses keep you from getting involved in administering the addresses for different groups using the network. In addition, if you’ve ever tried to get different work groups at a large site to cooperate and voluntarily obey the same set of rules, you can appreciate what an advantage this can be.
As each Ethernet frame is sent onto the shared signal channel, all Ethernet interfaces look at the first 48-bit field of the frame, which contains the destination address. The interfaces compare the destination address of the frame with their own address. The Ethernet interface with the same address as the destination address in the frame will read in the entire frame and deliver it to the networking software running on that computer. All other network interfaces will stop reading the frame when they discover that the destination address does not match their own address.
Multicast and Broadcast Addresses

A multicast address allows a single Ethernet frame to be received by a group of stations. Network software can set a station’s Ethernet interface to listen for specific multicast addresses. This makes it possible for a set of stations to be assigned to a multicast group, which has been given a specific multicast address. A single packet sent to the multicast address assigned to that group will then be received by all stations in that group.
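The interface-side address check can be sketched as follows (the all-ones broadcast address and the convention that the low-order bit of the first address byte marks a multicast address are standard Ethernet facts; the function name and sample addresses are hypothetical):

```python
BROADCAST = b"\xff" * 6  # the all-ones broadcast address

def interface_accepts(dest, my_address, multicast_groups=()):
    """Decide whether an interface should read in a frame.

    Accept frames addressed to this station, to the broadcast address,
    or to any multicast group address the station is listening for.
    """
    if dest == my_address or dest == BROADCAST:
        return True
    is_multicast = bool(dest[0] & 0x01)  # low-order bit of first byte
    return is_multicast and dest in multicast_groups

me = b"\x00\x11\x22\x33\x44\x55"
group = b"\x01\x00\x5e\x00\x00\x01"      # a hypothetical multicast address
print(interface_accepts(me, me))                           # True
print(interface_accepts(BROADCAST, me))                    # True
print(interface_accepts(group, me, {group}))               # True
print(interface_accepts(b"\x00\xde\xad\xbe\xef\x00", me))  # False
```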
There is also the special case of the multicast address known as the broadcast address, which is the 48-bit address of all ones. All Ethernet interfaces that see a frame with this destination address will read the frame in and deliver it to the networking software on the computer.

High-Level Protocols and Ethernet Addresses

Computers attached to an Ethernet can send application data to one another using high-level protocol software, such as the TCP/IP protocol suite used on the worldwide Internet. The high-level protocol packets are carried between computers in the data field of Ethernet frames.
The system of high-level protocols carrying application data and the Ethernet system are independent entities that cooperate to deliver data between computers. High-level protocols have their own system of addresses, such as the 32-bit address used in the current version of IP. The high-level IP-based networking software in a given station is aware of its own 32-bit IP address and can read the 48-bit Ethernet address of its network interface, but it doesn’t know what the Ethernet addresses of other stations on the network may be.
To make things work, there needs to be some way to discover the Ethernet addresses of other IP-based stations on the network. For several high-level protocols, including TCP/IP, this is done using yet another high-level protocol called the Address Resolution Protocol (ARP). As an example of how Ethernet and one family of high-level protocols interact, let’s take a quick look at how the ARP protocol functions.
Operation of the ARP Protocol

The operation of ARP is straightforward. Let’s say an IP-based station (station “A”) with IP address 192.0.2.1 wishes to send data over the Ethernet channel to another IP-based station (station “B”) with IP address 192.0.2.2. Station “A” sends a packet to the broadcast address containing an ARP request. The ARP request basically says “Will the station on this Ethernet channel that has the IP address of 192.0.2.2 please tell me what the address of its Ethernet interface is?” Since the ARP request is sent in a broadcast frame, every Ethernet interface on the network reads it in and hands the ARP request to the networking software running on the station.
Only station “B,” with IP address 192.0.2.2, will respond, by sending a packet containing the Ethernet address of station “B” back to the requesting station. Now station “A” has an Ethernet address to which it can send data destined for station “B,” and the high-level protocol communication can proceed. A given Ethernet system can carry several different kinds of high-level protocol data. For example, a single Ethernet can carry data between computers in the form of TCP/IP protocols as well as Novell or AppleTalk protocols.
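The exchange can be modeled in a few lines (names and addresses here are illustrative; a real implementation also caches replies in an ARP table to avoid broadcasting for every packet):

```python
def arp_resolve(lan_stations, target_ip):
    """Model of an ARP exchange on a shared Ethernet channel.

    lan_stations maps each station's Ethernet address to its IP address.
    The request goes out in a broadcast frame, so every station examines
    it, but only the one that owns target_ip answers with its address.
    """
    for mac, ip in lan_stations.items():
        if ip == target_ip:
            return mac   # station B's reply
    return None          # no owner on this channel; the request times out

lan = {
    "00:aa:00:00:00:01": "192.0.2.1",   # station "A"
    "00:aa:00:00:00:02": "192.0.2.2",   # station "B"
}
print(arp_resolve(lan, "192.0.2.2"))  # 00:aa:00:00:00:02
```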
The Ethernet is simply a trucking system that carries packages of data between computers; it doesn’t care what is inside the packages.

Signal Topology and Media System Timing

When it comes to how signals flow over the set of media segments that make up an Ethernet system, it helps to understand the topology of the system. The signal topology of the Ethernet is also known as the logical topology, to distinguish it from the actual physical layout of the media cables. The logical topology of an Ethernet provides a single channel (or bus) that carries Ethernet signals to all stations.
Multiple Ethernet segments can be linked together to form a larger Ethernet LAN using a signal amplifying and retiming device called a repeater. Through the use of repeaters, a given Ethernet system of multiple segments can grow as a “non-rooted branching tree.” This means that each media segment is an individual branch of the complete signal system. Even though the media segments may be physically connected in a star pattern, with multiple segments attached to a repeater, the logical topology is still that of a single Ethernet channel that carries signals to all stations.
The notion of “tree” is just a formal name for systems like this, and a typical network design actually ends up looking more like a complex concatenation of network segments. On media segments that support multiple connections, such as coaxial Ethernet, you may install a repeater and a link to another segment at any point on the segment. Other types of segments, known as link segments, can only have one connection at each end. “Non-rooted” means that the resulting system of linked segments may grow in any direction, and does not have a specific root segment. Most importantly, segments must never be connected in a loop.
Every segment in the system must have two ends, since the Ethernet system will not operate correctly in the presence of loop paths. In a typical system, several media segments are linked with repeaters and connect to stations. A signal sent from any station travels over that station’s segment and is repeated onto all other segments. This way it is heard by all other stations over the single Ethernet channel. The physical topology may include bus cables or a star cable layout. Three segments connected to a single repeater are laid out in the star physical topology, for example.
The point is that no matter how the media segments are physically connected together, there is one signal channel delivering frames over those segments to all stations on a given Ethernet LAN.

Round Trip Timing

In order for the media access control system to work properly, all Ethernet interfaces must be capable of responding to one another’s signals within a specified amount of time. The signal timing is based on the amount of time it takes for a signal to get from one end of the complete media system and back, which is known as the “round trip time.”
The maximum round trip time of signals on the shared Ethernet channel is strictly limited to ensure that every interface can hear all network signals within the specified amount of time provided in the Ethernet medium access control system. The longer a given network segment is, the more time it takes for a signal to travel over it. The intent of the configuration guidelines is to make sure that the round trip timing limits are met, no matter what combination of media segments are used in the system.
The configuration guidelines provide rules for combining segments with repeaters so that the correct signal timing is maintained for the entire LAN. If the specifications for individual media segment lengths and the configuration rules for combining segments are not followed, then computers may not hear one another’s signals within the required time limit, and could end up interfering with one another. The correct operation of an Ethernet LAN depends upon media segments that are built according to the rules published for each media type.
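A rough feel for the arithmetic behind these limits (an illustrative calculation only: the velocity factor is an assumed typical value, and the real configuration rules also budget for repeater and interface delays):

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second
SLOT_BIT_TIMES = 512          # worst-case round trip budget, in bit times

def round_trip_bit_times(path_meters, bit_rate=10_000_000, velocity=0.65):
    """Estimate the cable round trip time, expressed in bit times.

    velocity is the signal speed as a fraction of the speed of light
    (cables vary; 0.65 is an assumption for illustration). Repeater and
    interface delays are ignored, so this understates the real figure.
    """
    one_way_seconds = path_meters / (SPEED_OF_LIGHT * velocity)
    return 2 * one_way_seconds * bit_rate

# A 500 m path uses only a small part of the budget; a 10 km path would
# blow well past it even before repeater delays are counted.
print(round_trip_bit_times(500) < SLOT_BIT_TIMES)     # True
print(round_trip_bit_times(10_000) < SLOT_BIT_TIMES)  # False
```

This is why segment lengths must be limited: the longer the total path, the more of the fixed timing budget the cable alone consumes.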
More complex LANs built with multiple media types must be designed according to the multi-segment configuration guidelines provided in the Ethernet standard. These rules include limits on the total number of segments and repeaters that may be in a given system, to ensure that the correct round trip timing is maintained.

Extending Ethernets with Hubs

Ethernet was designed to be easily expandable to meet the networking needs of a given site. To help extend Ethernet systems, networking vendors sell devices that provide multiple Ethernet ports. These devices are known as hubs, since they provide the central portion, or hub, of a media system.
There are two major kinds of hub: repeater hubs and switching hubs. As we’ve seen, each port of a repeater hub links individual Ethernet media segments together to create a larger network that operates as a single Ethernet LAN. The total set of segments and repeaters in the Ethernet LAN must meet the round trip timing specifications. The second kind of hub provides packet switching, typically based on bridging ports as described in Chapter 15. The important thing to know at this point is that each port of a packet switching hub provides a connection to an Ethernet media system that operates as a separate Ethernet LAN.
Unlike a repeater hub whose individual ports combine segments together to create a single large LAN, a switching hub makes it possible to divide a set of Ethernet media systems into multiple LANs that are linked together by way of the packet switching electronics in the hub. The round trip timing rules for each LAN stop at the switching hub port. This allows you to link a large number of individual Ethernet LANs together. A given Ethernet LAN can consist of merely a single cable segment linking some number of computers, or it may consist of a repeater hub linking several such media segments together.
Whole Ethernet LANs can themselves be linked together to form extended network systems using packet switching hubs. While an individual Ethernet LAN may typically support anywhere from a few up to several dozen computers, the total system of Ethernet LANs linked with packet switches at a given site may support many hundreds or thousands of machines.

Conclusion

By analyzing a business’s information-processing and communications needs, you may conclude that a network is not necessary to meet those needs. Especially for small businesses, the added cost and complexity of a network can sometimes overshadow its benefits.
When it comes to obtaining funding, a key ingredient for any network installation, the only way to justify a network is to prove that its benefits will outweigh its costs. The best way to do this is to demonstrate that the return on investment (ROI) will be greater than the initial and ongoing costs of the network. Many businesses use fundamental methods to measure ROI. Before this issue can be addressed, investigate how the business calculates the return on its investments; determining ROI fundamentally reduces to two activities:
1. Establishing a budget for the planned network that includes all potential sources of cost. In addition to the costs of cabling, equipment, and installation, assess costs for employee time (including the time that goes into design, installation, configuration, and management for IS staff, as well as the time to train employees in the new way of doing things), consultants, and periods of lost productivity, which will often occur when systems are changing over from an old approach to a new one.
2. Assigning dollar values to the benefits of the network, once it is in place.
This often requires estimating increases in productivity and then using that value as a multiplier on current employee productivity to estimate increases in business revenue or employee output. One of the best techniques to help justify a network within your organization, and to help quantify its potential return on investment, is to document its potential uses and then to try to assign them a dollar value. At this point, a business will be able to determine if a new network is right for it.
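As a sketch of the underlying arithmetic (all figures below are hypothetical placeholders, not benchmarks):

```python
def simple_roi(total_cost, total_benefit):
    """ROI as a fraction of cost: (benefit - cost) / cost."""
    return (total_benefit - total_cost) / total_cost

# Hypothetical first-year figures for a small office network.
cost = 40_000 + 10_000   # equipment/installation + staff time and training
benefit = 65_000         # estimated dollar value of productivity gains
print(f"ROI: {simple_roi(cost, benefit):.0%}")  # ROI: 30%
```

If the computed ROI is negative, or positive but smaller than the return the business expects from other uses of the same money, the network is hard to justify on financial grounds alone.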