Jonathan Zittrain, The Future of the Internet — And How to Stop It
– The book can be viewed in an experimental html format courtesy of Yale University Press and the futureofthebook.org people. (The format is experimental; html itself is probably safely thought of as in full production at this point.) Each paragraph can be annotated: Visit html site.
– Amazon has enabled search-inside-the-book: Visit Amazon version.
– Still working on Google Books version.
This extraordinary book explains the engine that has catapulted the Internet from backwater to ubiquity—and reveals that it is sputtering precisely because of its runaway success. With the unwitting help of its users, the generative Internet is on a path to a lockdown, ending its cycle of innovation—and facilitating unsettling new kinds of control.
iPods, iPhones, Xboxes, and TiVos represent the first wave of Internet-centered products that can’t be easily modified by anyone except their vendors or selected partners. These “tethered appliances” have already been used in remarkable but little-known ways: car GPS systems have been reconfigured at the demand of law enforcement to eavesdrop on the occupants at all times, and digital video recorders have been ordered to self-destruct thanks to a lawsuit against the manufacturer thousands of miles away. New Web 2.0 platforms like Google mash-ups and Facebook are rightly touted—but their applications can be similarly monitored and eliminated from a central source. As tethered appliances and applications eclipse the PC, the very nature of the Internet—its “generativity,” or innovative character—is at risk.
The Internet’s current trajectory is one of lost opportunity. Its salvation, Zittrain argues, lies in the hands of its millions of users. Drawing on generative technologies like Wikipedia that have so far survived their own successes, this book shows how to develop new technologies and social structures that allow users to work creatively and collaboratively, participate in solutions, and become true “netizens.”
On January 9, 2007, Steve Jobs introduced the iPhone to an eager audience crammed into San Francisco’s Moscone Center.1 A beautiful and brilliantly engineered device, the iPhone blended three products into one: an iPod, with the highest-quality screen Apple had ever produced; a phone, with cleverly integrated functionality, such as voicemail that came wrapped as separately accessible messages; and a device to access the Internet, with a smart and elegant browser, and with built-in map, weather, stock, and e-mail capabilities. It was a technical and design triumph for Jobs, bringing the company into a market with an extraordinary potential for growth, and pushing the industry to a new level of competition in ways to connect us to each other and to the Web.
This was not the first time Steve Jobs had launched a revolution. Thirty years earlier, at the First West Coast Computer Faire in nearly the same spot, the twenty-one-year-old Jobs, wearing his first suit, exhibited the Apple II personal computer to great buzz amidst “10,000 walking, talking computer freaks.”2 The Apple II was a machine for hobbyists who did not want to fuss with soldering irons: all the ingredients for a functioning PC were provided in a convenient molded plastic case.
It looked clunky, yet it could be at home on someone’s desk. Instead of puzzling over bits of hardware or typing up punch cards to feed into someone else’s mainframe, Apple owners faced only the hurdle of a cryptic blinking cursor in the upper left corner of the screen: the PC awaited instructions. But the hurdle was not high. Some owners were inspired to program the machines themselves, but true beginners simply could load up software written and then shared or sold by their more skilled or inspired counterparts. The Apple II was a blank slate, a bold departure from previous technology that had been developed and marketed to perform specific tasks from the first day of its sale to the last day of its use.
The Apple II quickly became popular. And when programmer and entrepreneur Dan Bricklin introduced the first killer application for the Apple II in 1979—VisiCalc, the world’s first spreadsheet program—sales of the ungainly but very cool machine took off dramatically.3 An Apple running VisiCalc helped to convince a skeptical world that there was a place for the PC at everyone’s desk and hence a market to build many, and to build them very fast.
Though these two inventions—iPhone and Apple II—were launched by the same man, the revolutions that they inaugurated are radically different. For the technology that each inaugurated is radically different. The Apple II was quintessentially generative technology. It was a platform. It invited people to tinker with it. Hobbyists wrote programs. Businesses began to plan on selling software. Jobs (and Apple) had no clue how the machine would be used. They had their hunches, but, fortunately for them, nothing constrained the PC to the hunches of the founders. Apple did not even know that VisiCalc was on the market when it noticed sales of the Apple II skyrocketing. The Apple II was designed for surprises—some very good (VisiCalc), and some not so good (the inevitable and frequent computer crashes).
The iPhone is the opposite. It is sterile. Rather than a platform that invites innovation, the iPhone comes preprogrammed. You are not allowed to add programs to the all-in-one device that Steve Jobs sells you. Its functionality is locked in, though Apple can change it through remote updates. Indeed, to those who managed to tinker with the code to enable the iPhone to support more or different applications,4 Apple threatened (and then delivered on the threat) to transform the iPhone into an iBrick.5 The machine was not to be generative beyond the innovations that Apple (and its exclusive carrier, AT&T) wanted. Whereas the world would innovate for the Apple II, only Apple would innovate for the iPhone. (A promised software development kit may allow others to program the iPhone with Apple’s permission.)
Jobs was not shy about these restrictions baked into the iPhone. As he said at the iPhone’s launch:

We define everything that is on the phone. . . . You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work anymore. These are more like iPods than they are like computers.6
No doubt, for a significant number of us, Jobs was exactly right. For in the thirty years between the first flashing cursor on the Apple II and the gorgeous iconized touch menu of the iPhone, we have grown weary not with the unexpected cool stuff that the generative PC had produced, but instead with the unexpected very uncool stuff that came along with it. Viruses, spam, identity theft, crashes: all of these were the consequences of a certain freedom built into the generative PC. As these problems grow worse, for many the promise of security is enough reason to give up that freedom.
* * *
In the arc from the Apple II to the iPhone, we learn something important about where the Internet has been, and something more important about where it is going. The PC revolution was launched with PCs that invited innovation by others. So too with the Internet. Both were generative: they were designed to accept any contribution that followed a basic set of rules (either coded for a particular operating system, or respecting the protocols of the Internet). Both overwhelmed their respective proprietary, non-generative competitors, such as the makers of stand-alone word processors and proprietary online services like CompuServe and AOL. But the future unfolding right now is very different from this past. The future is not one of generative PCs attached to a generative network. It is instead one of sterile appliances tethered to a network of control. These appliances take the innovations already created by Internet users and package them neatly and compellingly, which is good—but only if the Internet and PC can remain sufficiently central in the digital ecosystem to compete with locked-down appliances and facilitate the next round of innovations. The balance between the two spheres is precarious, and it is slipping toward the safer appliance. For example, Microsoft’s Xbox 360 video game console is a powerful computer, but, unlike Microsoft’s Windows operating system for PCs, it does not allow just anyone to write software that can run on it. Bill Gates sees the Xbox as at the center of the future digital ecosystem, rather than at its periphery: “It is a general purpose computer. . . . [W]e wouldn’t have done it if it was just a gaming device. We wouldn’t have gotten into the category at all. It was about strategically being in the living room. . . . [T]his is not some big secret. Sony says the same things.”7
It is not easy to imagine the PC going extinct, and taking with it the possibility of allowing outside code to run—code that is the original source of so much of what we find useful about the Internet. But along with the rise of information appliances that package those useful activities without readily allowing new ones, there is the increasing lockdown of the PC itself. PCs may not be competing with information appliances so much as they are becoming them. The trend is starting in schools, libraries, cyber cafés, and offices, where the users of PCs are not their owners. The owners’ interests in maintaining stable computing environments are naturally aligned with technologies that tame the wildness of the Internet and PC, at the expense of valuable activities their users might otherwise discover.
The need for stability is growing. Today’s viruses and spyware are not merely annoyances to be ignored as one might tune out loud conversations at nearby tables in a restaurant. They will not be fixed by some new round of patches to bug-filled PC operating systems, or by abandoning now-ubiquitous Windows for Mac. Rather, they pose a fundamental dilemma: as long as people control the code that runs on their machines, they can make mistakes and be tricked into running dangerous code. As more people use PCs and make them more accessible to the outside world through broadband, the value of corrupting these users’ decisions is increasing. That value is derived from stealing people’s attention, PC processing cycles, network bandwidth, or online preferences. And the fact that a Web page can be and often is rendered on the fly by drawing upon hundreds of different sources scattered across the Net—a page may pull in content from its owner, advertisements from a syndicate, and links from various other feeds—means that bad code can infect huge swaths of the Web in a heartbeat.
If security problems worsen and fear spreads, rank-and-file users will not be far behind in preferring some form of lockdown—and regulators will speed the process along. In turn, that lockdown opens the door to new forms of regulatory surveillance and control. We have some hints of what that can look like. Enterprising law enforcement officers have been able to eavesdrop on occupants of motor vehicles equipped with the latest travel assistance systems by producing secret warrants and flicking a distant switch. They can turn a standard mobile phone into a roving microphone—whether or not it is being used for a call. As these opportunities arise in places under the rule of law—where some might welcome them—they also arise within technology-embracing authoritarian states, because the technology is exported.
A lockdown on PCs and a corresponding rise of tethered appliances will eliminate what today we take for granted: a world where mainstream technology can be influenced, even revolutionized, out of left field. Stopping this future depends on some wisely developed and implemented locks, along with new technologies and a community ethos that secures the keys to those locks among groups with shared norms and a sense of public purpose, rather than in the hands of a single gatekeeping entity, whether public or private.
The iPhone is a product of both fashion and fear. It boasts an undeniably attractive aesthetic, and it bottles some of the best innovations from the PC and Internet in a stable, controlled form. The PC and Internet were the engines of those innovations, and if they can be saved, they will offer more. As time passes, the brand names on each side will change. But the core battle will remain. It will be fought through information appliances and Web 2.0 platforms like today’s Facebook apps and Google Maps mash-ups. These are not just products but also services, watched and updated according to the constant dictates of their makers and those who can pressure them.
In this book I take up the question of what is likely to come next and what we should do about it.