When the Ghost in the Machine Fails: The Costs of Customization

A Review of “Understanding BGP Misconfiguration” by Ratul Mahajan, David Wetherall, and Tom Anderson

Among routing protocols, the Border Gateway Protocol (BGP) stands out because it is defined by local operational configurations rather than by any global optimization criterion. Instead of homogeneous nodes honestly announcing full route information to their neighbors and the rest of the network, we have arbitrary, “human” policies for selecting which routes to advertise and which routes to accept. Unfortunately, whether introduced intentionally or by accident, these configuration lapses tend to produce systemic instability in the Internet, since wrong routes, once advertised, are often systematically re-propagated.
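The local, policy-driven selection described above can be sketched as a toy model (the route dictionaries and policy hook below are hypothetical illustrations, not real router code):

```python
# Toy sketch of BGP's policy-driven route selection. Each route carries
# an AS path plus a locally assigned preference; operator policy, not
# any global optimality criterion, decides the winner.

def best_route(routes):
    # BGP-style decision: highest local preference wins; shortest
    # AS path breaks ties. Both are purely local choices.
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

def exportable(route, neighbor, export_policy):
    # A route is advertised to a neighbor only if local policy allows it,
    # so neighbors may never even hear of routes this router uses.
    return export_policy(route, neighbor)

routes = [
    {"prefix": "203.0.113.0/24", "as_path": [65002, 65010], "local_pref": 200},
    {"prefix": "203.0.113.0/24", "as_path": [65003], "local_pref": 100},
]
chosen = best_route(routes)
print(chosen["as_path"])  # the longer path wins: local_pref trumps path length
```

Note how the shorter AS path loses: a purely local preference setting overrides what a global shortest-path view would choose, which is exactly what makes the protocol's behavior so hard to predict from outside.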

Mahajan, Wetherall, and Anderson’s pioneering work on the causes and effects of BGP misconfiguration continues to remind us of the dangers posed by policy-based routing, and of how safeguards such as automated verification of configurations and transactional semantics for configuration commands can contain them. One can also turn to ergonomics: the user interface can be redesigned to reduce the possibility of slips. In essence, they are championing a move toward a “human-proof” Internet, a logical response to all-too-human individual flaws that can have costly effects on the rest of the network. Continue reading “When the Ghost in the Machine Fails: The Costs of Customization”
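As a rough illustration of the automated verification idea, here is a minimal sketch (the registry and helper names are my own hypothetical constructions, not the authors’ tooling) that flags origin misconfigurations, i.e. announcements of prefixes an AS is not authorized to originate:

```python
# Hypothetical pre-deployment check: before a configuration change takes
# effect, verify that every prefix the router would originate falls
# inside the address space the AS is authorized to announce.

import ipaddress

# Stand-in for a routing registry mapping AS numbers to authorized space.
AUTHORIZED = {65001: [ipaddress.ip_network("198.51.100.0/24")]}

def origin_errors(asn, announced_prefixes):
    """Return the announcements not covered by the AS's authorized space."""
    allowed = AUTHORIZED.get(asn, [])
    errors = []
    for p in map(ipaddress.ip_network, announced_prefixes):
        if not any(p.subnet_of(a) for a in allowed):
            errors.append(str(p))
    return errors

# A typo'd prefix, a classic origin misconfiguration, is caught before
# it can leak to the rest of the Internet.
print(origin_errors(65001, ["198.51.100.0/24", "198.51.10.0/24"]))
```

Transactional semantics would add the complementary guarantee: if any line of a multi-command change fails such a check, the whole change rolls back rather than leaving the router in a half-configured state.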

BGP: Paying the Price of Anarchy

A Review of “Lecture 3: Interdomain Internet Routing” by Hari Balakrishnan

The Internet is a rowdy place. Of course, we know that it was designed to be a distributed, decentralized system that can scale, but its growth in size, connections, and complexity has probably exceeded the earliest projections of its creators. For instance, even with the help of Network Address Translation (NAT), which greatly reduced the pressure on IPv4 addresses, and hierarchical addressing via IP prefixes, the IPv4 routing table continues to grow (from reaching the 50k mark in the late 90s to exceeding 400k around 2012), in fact exceeding the 512k limit of many older routers last year (dubbed “512KDay”). It is thus amazing to discover that this immense convolution of zettabytes of packet traffic, provided and consumed by profit-seeking computer networks constrained by government regulations, is actually mediated by select routers that operate on a fairly simple rule: the Border Gateway Protocol (BGP).

For someone like me who is just learning about computer networks and who had only recently been exposed to the protocol suite, BGP came as a shock, a rude discovery of how complicated the real Internet is. And so, it took me a while before I had a proper grasp of the concept. Yet BGP may actually be simpler than most routing protocols in terms of what it does. The complication lies in the combinatorial explosion of possible configurations it can take because of the shifting capitalistic interests of autonomous Internet Service Providers (ISPs) competing and cooperating with each other. The complexity lies in the anarchy.

In a sense, if the Internet is an economy, then BGP is the common currency everyone uses to pay the price of this anarchy. Interaction between routers running different BGP configurations is almost like currency exchange, subject to the existing commercial agreements and disagreements of the corporate giants operating those routers. The Autonomous Systems (ASes) that organize hosts into groups serve as countries demarcating the markets that content providers tirelessly compete to reach, and all foreign trade has to go through routers running BGP.

Balakrishnan effectively discusses how BGP works, including the complications introduced by peering and transit. For my part, it has helped to know how BGP came to be. I tried to trace the evolution of routing protocols from ARPANET to NSFNET to the present-day Internet, in order to see how shifting political and economic considerations affected the network structure, and thus the design and mission of routers. The rest of the essay explains the context that demanded the emergence of something like BGP, and why it fits so snugly with the private-sector-driven model of Internet development. Finally, we offer some perspective using concepts from distributed algorithmic mechanism design (DAMD) to provide a plausible explanation of why BGP magically works so well among autonomous ISPs with competing commercial interests. Continue reading “BGP: Paying the Price of Anarchy”

RFC 1958 and the Internet as an Evolutionary System

A Review of “Architectural Principles of the Internet” or RFC 1958

RFC 1958 begins by describing the Internet, and its architecture, as emerging in an “evolutionary fashion… rather than a Grand Plan”. But really, how much of the Internet is the mere accumulation of accidents? And how much of it is simply a logical consequence of its minimalist design?

When we speak of evolution, we naturally associate it with the Darwinian process of natural selection. Given an initial population of relatively homogeneous organisms, we expose them to mutation and other replication errors, and see whether the resulting deviations improve the survivability of the deviants. Survivability, of course, is measured by how well the organisms adapt to their surroundings and to other organisms. If they do adapt, they slowly increase in population. This mechanism is supposed to explain the staggering diversity of life forms and the relative stability of existing ecosystems. In some sense, just like the Internet, heterogeneity and extremely large scaling are inevitable and supported by design[1].
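The selection mechanism just described can be made concrete with a toy simulation (this is my own illustration, not anything from the RFC; the bitstring “genome” and fitness function are arbitrary stand-ins):

```python
# Toy natural selection: a population replicates with occasional
# mutation, and variants better matched to the "environment" take over.

import random
random.seed(1)

TARGET = 0b11111111  # stand-in for the environment's demands

def fitness(genome):
    # More bits matching the environment means better adapted.
    return bin(~(genome ^ TARGET) & 0xFF).count("1")

def mutate(genome):
    # Replication error: occasionally flip one random bit.
    if random.random() < 0.3:
        genome ^= 1 << random.randrange(8)
    return genome

population = [0] * 20  # relatively homogeneous starting population
for generation in range(200):
    # The fitter half survives and replicates, with errors.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    population = [mutate(g) for g in survivors + survivors]

print(max(fitness(g) for g in population))  # adaptation accumulates
```

No individual organism “plans” anything here; the population drifts toward fitness purely through replication, error, and differential survival, which is the sense of “no Grand Plan” worth probing in the Internet’s case.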

Is this the sense we refer to when we say that the Internet emerged in an evolutionary fashion? What characteristics would natural selection share with Internet development[2]? And what does this imply? Continue reading “RFC 1958 and the Internet as an Evolutionary System”

Smart vs. Democratic? Public vs. Private? – Political Economy of the “End-to-end” Internet

A Review of “Rethinking the design of the Internet: The end to end arguments vs. the brave new world” by David D. Clark

Of all possible adjectives, “dumb” may be the least likely one to be ascribed to the Internet. Yet a number of computer scientists believe that it is the apt description of an Internet defined by the “end-to-end” concept: a dumb network with smart terminals (vis-à-vis the more intuitive notion of a smart network serving less smart terminals). Some of those scientists therefore feel that we have to abandon the “end-to-end” concept, providing more and more services at lower and lower levels, toward a more trustworthy and regulated Internet.

David Clark (another of whose articles we discussed in an earlier blog post) is among those pushing for the end of the end-to-end model. He believes that changing user requirements, including what he sees as a need for more security and regulation, necessitate strengthening the core of the network. He notes that the shrinking of government’s “enabler” role and the increasing commercial use of the Internet demand this change, together with a transition toward a paradigm where government acts as regulator. He believes this is consistent with similar developments in other industries such as conventional telecommunications.

On the one hand, with increasing computational capacity and improving hardware performance, it is difficult to keep viewing gateways and other network-core components as simple transmission and routing technologies. More and more reliability and quality-of-service functions can be pushed “down”, so to speak, to create a better Internet. On the other hand, we also have to go beyond the technical question to the political one, as we argue that the end-to-end model, wherein we push functions “up” as much as we can toward the nodes and network edges, is necessary for a democratic and politically free Internet.

So how do we resolve this? Continue reading “Smart vs. Democratic? Public vs. Private? – Political Economy of the “End-to-end” Internet”

Internet: Cold War’s Brain Child

Review of “The Design Philosophy of the DARPA Internet Protocols” by David D. Clark

The human brain is a very robust and flexible machine. Studies have pointed out the brain’s capacity to retain its cognitive faculties even under severe stress or physical assault. There is also evidence of some parts of the brain functionally compensating when other parts deteriorate or are permanently damaged. Neuroplasticity ensures an adult brain’s capacity to learn new streams of information from disparate and even simultaneous sources. The brain has also been shown capable of interpreting directly introduced electronic signals, allowing persons with disabilities to control electromechanical appendages.

Since the advent of the modern neurosciences, much has been learned about the human brain. We have, in fact, used abstract models based on its biological features to design artificial intelligence techniques such as the neural networks that underpin much of today’s deep learning methods. But it seems that we may have inadvertently mimicked the human brain long before that: the architecture and design of the Internet itself, as originally conceptualized by the Defense Advanced Research Projects Agency (DARPA), seem to replicate the robustness, plasticity, and efficiency of the human brain.

The 1988 paper by MIT computer scientist David Clark, coming on the heels of the further development and widespread adoption of TCP/IP (first presented some fifteen years earlier) and the subsequent rise of inter-networking, attempts to condense the features of, and the driving logic behind, the network that started it all, the Advanced Research Projects Agency Network (ARPANET), as well as the introduction of packet switching and, later, TCP/IP. He traces the goals and motivations of DARPA in designing ARPANET, and how they set the course of the Internet’s evolution. Continue reading “Internet: Cold War’s Brain Child”

One Protocol to Connect them All

A Review of “A Protocol for Packet Network Intercommunication” by Vinton G. Cerf and Robert E. Kahn

“You can resist an invading army; you cannot resist an idea whose time has come.”

– attributed to Voltaire[1]

Four decades seem too short for any one group of people to change the course of history. But that is exactly what Robert Kahn of DARPA and his recruit, Vinton Cerf of Stanford University, did. Their technical article “A Protocol for Packet Network Intercommunication”, originally published in 1974 and read by a small subset of engineers and computer scientists, laid the foundations of the worldwide Internet revolution and in the process triggered similar revolutions in almost all aspects of modern life: entertainment, education, economics, etcetera. Cerf and Kahn’s brainchild, what would later be known as the Transmission Control Protocol (TCP) and the Internet Protocol (IP), or simply TCP/IP, effectively interconnected computer networks previously bounded by geography. In effect, Cerf and Kahn delivered humanity’s final death blow to physical distance[2].

Cerf and Kahn did this by proposing a system for interconnecting the multiple networks, then operating via packet switching (courtesy of Paul Baran’s ideas in the early 1960s), that had been sprouting like mushrooms in universities and communication companies ever since ARPANET began merely half a decade earlier[3]. Their motivation was similar to that of those who created the packet-switching networks they were trying to interconnect: the sharing of computer resources. Going beyond a system that only allows computers from one or two schools or buildings to communicate, Cerf and Kahn designed a common language that enables computers, connected to networks with disparate physical, media, and link layers, to pass and receive data among each other.

The problems facing Cerf and Kahn with regard to differing implementations of packet switching were very clear. Networks often have distinct addressing schemes. Different networks also accept data of different maximum sizes, which may force the adoption of the smallest maximum size as a common denominator. Time delays, an important element in the transmission of data, also differ. There is no common restoration algorithm in the case of errors. Routing and fault detection vary. How did Cerf and Kahn attack all these? Continue reading “One Protocol to Connect them All”
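Take the differing-maximum-size problem as one example. The broad idea in Cerf and Kahn’s paper is fragmentation: split a message into pieces small enough for the next network and tag each piece so the destination can put it back together. A minimal sketch (the function names and offset-tagging scheme here are my own simplification, not the paper’s exact header format):

```python
# Simplified fragmentation and reassembly. A gateway splits a message
# into fragments no larger than the next network's maximum size, tagging
# each with its byte offset; the destination reorders by offset.

def fragment(data: bytes, mtu: int):
    """Split data into (offset, chunk) fragments no larger than mtu."""
    return [(i, data[i:i + mtu]) for i in range(0, len(data), mtu)]

def reassemble(fragments):
    """Rebuild the message even if fragments arrived out of order."""
    return b"".join(chunk for _, chunk in sorted(fragments))

message = b"a protocol for packet network intercommunication"
frags = fragment(message, 8)
frags.reverse()  # simulate out-of-order arrival across networks
assert reassemble(frags) == message
```

Because each fragment carries enough information to be placed independently, no intermediate network needs to know the original message boundaries, which is precisely what lets networks with different maximum sizes interoperate.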