Congress started hearings on net neutrality yesterday, and seems generally sympathetic to the neutrality doctrine.
From ZDNet News: Politicos divided on need for ‘net neutrality’ mandate
Sen. Ron Wyden, an Oregon Democrat, said at the hearing that he plans to introduce a bill that “will make sure all information (transmitted over broadband networks) is made available on the same terms so that no bit is better than another one.” The provisions would bar broadband providers from favoring one company’s site over another (for example, he said, J. Crew over L.L. Bean), from giving their own content preferential treatment and from creating “private networks that are superior to the Internet access they offer consumers generally.”
Also visibly troubled by the prospect of a so-called two-tiered Internet were two other Democrats, Sen. Barbara Boxer of California and Sen. Byron Dorgan of North Dakota.
Referring to a recent Washington Post report in which a Verizon executive said Google and others shouldn’t expect to enjoy a “free lunch” on its pipes, Dorgan said such reasoning was flawed. “It is not a free lunch…(broadband subscribers have) already paid the monthly toll…Those lines and that access is being paid for by the consumer.”
One of the more interesting aspects of these hearings was Vint Cerf’s statement (pdf) on net neutrality, in which he lays out not only the meaning and importance of neutrality in general, but also gives a rather good overview of the structure of the Internet itself.
I was fortunate to be involved in the earliest days of the “network of networks.” From that experience, I can attest to how the actual design of the Internet – the way its digital hardware and software protocols, including the TCP/IP suite, were put together – led to its remarkable economic and social success.
First, the layered nature of the Internet describes the “what,” or its overall structural architecture. The use of layering means that functional tasks are divided up and assigned to different software-based protocol layers. For example, the “physical” layers of the network govern how electrical signals are carried over a physical medium, such as copper wire or radio waves. The “transport” layers help route the user’s data packets to their correct destinations, while the application layers control how those packets are used by a consumer’s email program, web browser, or other computer application. This simple and flexible system creates a network of modular “building blocks,” where applications or protocols at higher layers can be developed or modified with no impact on lower layers, while lower layers can adopt new transmission and switching technologies without requiring changes to upper layers. Reliance on a layered system greatly facilitates the unimpeded delivery of packets from one point to another.
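(To make the layering idea concrete, here is a toy Python sketch of the encapsulation Cerf describes. The header formats and function names are invented for illustration, not real protocol code; the point is only that each layer treats the one above it as an opaque payload, so layers can be swapped independently.)

```python
# Toy illustration of protocol layering: each layer adds only its own
# header and treats everything above it as an opaque payload.

def application_layer(message: str) -> bytes:
    # What a browser or email program produces.
    return message.encode("utf-8")

def transport_layer(payload: bytes, port: int = 80) -> bytes:
    # Invented header; real TCP headers are far more elaborate.
    return f"TRANSPORT port={port}|".encode() + payload

def physical_layer(frame: bytes) -> bytes:
    # Stands in for copper, fiber, or radio -- the layers above
    # never know which one is in use.
    return b"PHYS|" + frame

# Sending: each layer wraps the output of the layer above it.
wire_data = physical_layer(transport_layer(application_layer("GET /index.html")))
print(wire_data)

# Swapping the physical layer (say, copper for radio) would require no
# change at all to the transport or application functions above it.
```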
Second, the end-to-end design principle describes the “where,” or the place for network functions to reside in the layered protocol stack. With the Internet, decisions were made to allow the control and intelligence functions to reside largely with users at the “edges” of the network, rather than in the core of the network itself. For example, it is the user’s choice what security to use for his or her communications, what VOIP system to use in assembling digital bits into voice communications, or what web browser to adopt. This is precisely the opposite of the traditional telephony and cable networks, where control over permitted applications is handled in the core (in headends and central offices), away from the users at the edge. As a result, the power and functionality of the Internet is left in the hands of the end users.
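(Again as an illustrative aside: a small Python sketch of the end-to-end principle, with invented names. The “core” below just forwards bytes it cannot interpret; all the intelligence, here the user’s choice of a deliberately trivial cipher, lives at the endpoints.)

```python
# End-to-end sketch: the core forwards opaque bytes; the endpoints
# decide what those bytes mean (encryption, VoIP codec, etc.).

def core_router(packet: bytes) -> bytes:
    # The core neither inspects nor modifies the payload.
    return packet

def xor_cipher(data: bytes, key: int = 42) -> bytes:
    # A user-chosen (and deliberately toy) cipher, applied at the edge.
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ key for b in data)

# Sender (edge): picks its own security without asking the network.
ciphertext = xor_cipher(b"hello from the edge")

# Network (core): a dumb pipe.
delivered = core_router(ciphertext)

# Receiver (edge): the only party that needs to understand the payload.
print(xor_cipher(delivered))  # b'hello from the edge'
```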
Third, the design of the Internet Protocol, or the “how,” allows for the separation of the networks from the services that ride on top of them. IP was designed to be an open standard, so that anyone could use it to create new applications and new networks (by nature, IP is completely indifferent to both the underlying physical networks and to the countless applications and devices using those networks). As it turns out, IP quickly became the ubiquitous bearer protocol at the center of the Internet. Thus, using IP, individuals are free to create new and innovative applications that they know will work on the network in predictable ways.
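(This separation is visible in everyday code. Because IP and the transports above it are stable, open standards, an application can be written against the ordinary socket API with no knowledge of whether its packets will cross Ethernet, Wi-Fi, or fiber. A minimal example using Python’s standard library, assuming network access; example.com is just a convenient test host:)

```python
import socket

# The application speaks TCP/IP through the standard socket API.
# The underlying physical networks are completely invisible here.
with socket.create_connection(("example.com", 80), timeout=10) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'
```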
Finally, from these different yet related design components, one can see the overarching rationale – the “why” – that no central gatekeeper should exert control over the Internet. This governing principle allows for vibrant user activity and creativity to occur at the network edges. In such an environment, entrepreneurs need not worry about getting permission for their inventions to reach the end users. In essence, the Internet has become a platform for innovation. One could think of it like the electric grid, where the ready availability of an open, standardized, and stable source of electricity allows anyone to build and use a myriad of different electric devices. This is a direct contrast to closed networks like the cable video system, where network owners control what the consumer can see and do.
In addition to this architectural design, the Internet has thrived because of an underlying regulatory framework that supported openness. Wisely, government has largely avoided regulating the Internet directly. Google firmly supports this deregulatory approach, which is borne out by the openness and consumer choices available in this new medium. At the same time, the underlying network through which consumers access the Internet has rested on telecommunications regulations that ensured openness – including a century-old tradition in American law that telephone companies are not allowed to tell consumers who they can call or what they can say. In the zone of governmental noninterference surrounding the Internet, one crucial exception has been the nondiscrimination requirements for the so-called last mile. Developed by the FCC over a decade before the commercial advent of the Internet, these “Computer Inquiry” safeguards required that the underlying providers of last-mile network facilities – the incumbent local telephone companies – allow end users to choose any ISP, and utilize any device, they desired. In turn, ISPs were allowed to purchase retail telecommunications services from the local carriers on nondiscriminatory rates, terms, and conditions.
The end result was, paradoxically, a regulatory safeguard applied to last-mile facilities that allowed the Internet itself to remain open and “unregulated” as originally designed. Indeed, it is hard to imagine the innovation and creativity of the commercial Internet in the 1990s ever occurring without those minimal but necessary safeguards already in place. By removing any possibility of ILEC barriers to entry, the FCC paved the way for an explosion in what some have called “innovation without permission.” A generation of innovators – like Tim Berners-Lee with the World Wide Web, Yair Goldfinger with Instant Messaging, David Filo and Jerry Yang with Yahoo!, Jeff Bezos with Amazon, and Larry Page and Sergey Brin with Google – were able to offer new applications and services to the world, without needing permission from network operators or paying exorbitant carrier rents to ensure that their services were seen online. And we all have benefited enormously from their inventions.
The Senate still sounds confused as to what exactly is entailed by ‘neutrality’; Cerf’s efforts to educate them both on how the Internet works and on how it differs from existing networks are very much appreciated.