Sharing the Network: From Circuit Switching to Packet Switching (The Hitchhiker's Guide to Computer Networks)


Prelude ... From Circuit Switching to Packet Switching

The simplest way for two people to communicate over a long distance is to connect them with a dedicated physical wire. But how would we scale that model to a million people who each want to be able to communicate with everyone else? One approach is to run a wire between every pair of users, which would require hundreds of billions of wires.
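To put a number on that claim: connecting every pair of n users directly requires n(n-1)/2 wires, since each of the n users needs a wire to each of the other n-1, and each wire is shared by two users. A quick back-of-the-envelope check (an illustrative sketch; the function name is my own):

```python
def full_mesh_wires(n):
    """Wires needed to connect every pair of n users directly: n choose 2."""
    return n * (n - 1) // 2

# A million users: roughly half a trillion wires.
print(full_mesh_wires(1_000_000))  # 499999500000
```

That is the "hundreds of billions of wires" figure, and it grows quadratically: doubling the number of users roughly quadruples the number of wires.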


This is clearly not a good way to design a network. So what's wrong with it? First, it is unlikely that every person in the network will want to talk to everyone else at the same time, which means we can get away with far fewer wires. Second, not all connections are created equal: long wires cost much more than short ones, so our design should use short wires whenever possible. It would also be nice if the cost of a long wire could be shared among multiple users at the same time.

Let's revisit our connection strategy. Rather than running wires between every pair of people, we can take their geographical distance into account. We can connect all users in a single geographical area (say a town or a neighborhood) to a communication "hub" through short wires. The hub can then connect these users to each other through tiny wires inside the hub itself. The number of tiny wires at the hub can be small, because the hub can reconnect them on demand to link whichever people need to communicate at the moment. A pair of users connected to the same hub communicate over a dedicated set of wires: their own wires to the hub plus a tiny wire within it. This configured connection is called a circuit, and the configuration of these circuits changes depending on which pairs of users want to communicate.

This is what a hub with tiny wires looked like back in the day. Operators connected actual wires between the caller and the callee on the switchboard in front of them.

Hubs in a single geographical area (say a metro area or a large city) can be connected through a larger hub using the same idea. We can then create huge hubs to connect different cities in the same state or country, and mega hubs to connect states or even continents. If we want to guarantee that everyone can communicate with everyone else over dedicated wires, then the number of wires in a link between two hubs must equal the number of users on one end multiplied by the number of users on the other end. If a million people on the east coast want to communicate with a million people on the west coast, we would still need a trillion wires. But again we can leverage the fact that not all users will communicate at the same time: the number of wires connecting two hubs only needs to equal the maximum number of pairs of users communicating across that link at once.
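To see where the trillion comes from, and how much provisioning only for concurrent conversations saves, here is a tiny illustrative sketch (the function names and the 1% concurrency figure are assumptions of mine, not from the post):

```python
def dedicated_cross_wires(east_users, west_users):
    """One wire per (east, west) pair if every pair needs a dedicated path."""
    return east_users * west_users

def trunk_wires(peak_concurrent_pairs):
    """Each simultaneous conversation occupies one trunk wire between hubs."""
    return peak_concurrent_pairs

print(dedicated_cross_wires(1_000_000, 1_000_000))  # 1000000000000 -- a trillion

# If at most 1% of a million users on each coast talk cross-country at once,
# the hubs only need enough trunk wires for those 10,000 conversations.
print(trunk_wires(10_000))  # 10000
```

The saving comes entirely from statistical sharing: the trunk is sized for the peak number of simultaneous conversations, not for every possible pair.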

The upside of circuit switching is that every pair of users gets the full bandwidth of the dedicated circuit connecting them; they never have to compete for bandwidth with anyone. This dedication of resources simplifies the network technology significantly. However, circuit switching has several drawbacks. First, the cost of dedicated wires is large, especially for long ones. Second, configuring circuits on demand adds significant latency. The GIF above shows how operators used to configure circuits manually, and even today automated circuit switching still adds noticeable latency.

Seems like this dedicated wire business is not working ... 

What if multiple people can share the same wire? This is where packetization comes in. 

Packet switching relies on breaking any data a user sends into many small packets. Each packet specifies its source and destination, and the network handles each packet individually, delivering it to its destination. If two users want to use the same wire, they can take turns, each sending a single packet at a time. If the available bandwidth allows the network to send both packets at "the same time", the performance perceived by the users will be no different from circuit switching. However, when there is not enough bandwidth for packets from both users to be sent at the same time, the network has to hold the packets, put them in a queue, order them based on the importance of their users, and send them one at a time. If users send many more packets than the network can handle, packets can face very long delays. This leads to one of the biggest challenges that results from resource sharing in computer networks: congestion.
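The queueing behavior described above can be sketched in a few lines. This toy simulation (my own sketch, not from the post) models a link that can forward one packet per time step with a FIFO queue in front of it; when packets arrive faster than the link can send them, the queue grows and so does each packet's delay:

```python
from collections import deque

def simulate(arrivals_per_tick, ticks, rate=1):
    """Return the queuing delay of each packet sent over `ticks` time steps."""
    queue = deque()   # FIFO queue of packet arrival times
    delays = []
    for t in range(ticks):
        # New packets arrive and join the back of the queue.
        for _ in range(arrivals_per_tick):
            queue.append(t)
        # The link sends at most `rate` packets per tick, oldest first.
        for _ in range(rate):
            if queue:
                delays.append(t - queue.popleft())  # time spent waiting
    return delays

print(simulate(1, 10))  # arrivals match capacity: every delay is 0
print(simulate(2, 10))  # twice the capacity: delays keep growing -- congestion
```

With arrivals at exactly the link rate, every packet is forwarded in the tick it arrives. At twice the rate, the queue lengthens every tick and the delay of each successive packet climbs without bound, which is the congestion problem in miniature.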


In follow-up posts, I will present how modern networks deal with packet queues.
