How a band of engineers anticipated the cloud and remade the internet

January 21, 2026
Computer scientists at Princeton helped build a globally distributed system of servers that was instrumental in the development of cloud computing and streaming. Illustration by Kouzou Sakai for the Office of Engineering Communications

By Julia Schwarz

On December 26, 2004, an earthquake in the Indian Ocean triggered a global tsunami that obliterated parts of Indonesia, wiped out villages in Thailand, and hammered the coastlines of Sri Lanka and India, leaving catastrophic damage in 17 countries. As the water retreated, televised broadcasts of the devastation shocked the world.

The disaster spurred international initiatives to better warn and prepare coastal communities. It also unexpectedly revealed a new mode of communication. 

Well before the television cameras arrived, a new lens into the tragedy appeared: amateur footage of the event, taken largely by tourists. Caught up in the disaster, they used digital cameras and camcorders — which had only recently become affordable for everyday use — to record footage of what they saw. The grainy videos revealed not only the destruction left behind but also surreal scenes of idyllic beaches being swallowed by an enormous wave.

Today, videos like these could be uploaded and distributed instantly across every channel of the internet. But in 2004, sharing video files was not simple. There was no YouTube or Dropbox, no Instagram, Twitter or Vimeo. Facebook was less than a year old. The iPhone would not appear for another two and a half years.

The best way to share a video in 2004 was to put it on an individual website. But while some news outlets and large corporations had websites that could handle spikes in traffic, most sites were not that well equipped, according to Michael Freedman, a professor of computer science at Princeton.

Michael Freedman in 2012. Photo by Frank Wojciechowski

"At the time, most websites were just a server running in somebody's closet," Freedman said. Even a modest spike in internet traffic could easily make a website crash. Freedman, then a graduate student at NYU, had been working on ways to solve this problem. In early 2004, he launched a free service that allowed anyone to store website content on a globally distributed system of servers. Using it was simple: you just added the suffix “nyud.net” to the URL domain of any website.

This system made it possible to share website content with a large audience for free. It was an early example of an internet service called a content delivery network. Within months of its launch, news aggregator websites like Slashdot, which often directed traffic to smaller websites, began using Freedman’s network to prevent those smaller sites from crashing.

By year’s end, thousands of people were using the network to distribute videos of the tsunami. Because the videos were hosted on a system of servers distributed around the world, the content remained accessible despite a major jump in internet traffic. Several years later, Freedman’s network would handle tens of millions of user requests per day.

Today, content is shared this way all the time: a video posted to YouTube or Instagram is hosted on a distributed system of servers and can be watched by millions. This is now called the cloud, and it is operated by companies like Amazon, Microsoft and Google. It powers almost everything we do online, from file sharing to video streaming to shopping.

But in 2004, the cloud did not exist. Freedman’s content delivery network was instead built on PlanetLab, a globally distributed system of servers run by a consortium of research universities and technology companies. The first of its kind, PlanetLab was developed by Princeton’s Larry Peterson and his collaborators as a shared laboratory: a way to test ideas and build new, innovative computer systems and networks.

“PlanetLab did not create the cloud, but it anticipated the cloud,” said Peterson, now a professor emeritus of computer science.

A shared computing infrastructure

In early 2002, two years before Freedman launched his content delivery network, Peterson was organizing a meeting in Berkeley, California, with his colleague David Culler.

Larry Peterson in 2007. Photo by Frank Wojciechowski

The meeting, hosted at the offices of Intel Research, brought together 30 computer scientists who were all frustrated by the same problem: they had no common way to run widely distributed network experiments at scale.

While the internet had made it possible to communicate across long distances, there was no corresponding way to share computing power. This meant that researchers were only able to test their ideas in the server rooms of their individual universities or companies, said Peterson.

Running tests across geographically distributed servers is critical to experimentation in systems and networking, said David Tennenhouse, former director of research at Intel. Computer programs can behave differently when operated over long distances, he said, and researchers need to grapple with latency issues like speed-of-light delays and round-trip times.

It’s also important to run tests on shared infrastructure, Tennenhouse added, to be able to benchmark progress. A shared testbed for research substantially lowered the bar for doing these experiments and comparing the results.

Researchers were relying on networks of colleagues to run informal tests. “People would call up their friends and say, ‘Hey, could I have an account on your computer so I can run something on mine and yours at the same time?’” Peterson said. “That approach didn’t scale. Even if three friends said yes, that still didn’t scale.”

So Peterson and Culler, with the support of Tennenhouse and Intel, organized the meeting in Berkeley to discuss a new idea: What if research universities pooled resources to create a shared computing infrastructure for experimentation?

PlanetLab would grow to include 1,353 server nodes at 717 sites spanning 48 countries. Image from planetlab.cs.princeton.edu

After the initial meeting in March 2002, the idea grew quickly. By 2005, PlanetLab had 500 servers running at dozens of universities across North America, South America, Europe and Asia. A few years later, the consortium would grow to 1,353 servers at 717 sites in 48 countries.

While most of the PlanetLab server nodes were hosted by universities, industry and government partners were essential. Intel donated the first 100 servers and deployed a team of engineers to manage operations. Hewlett Packard and Google also joined the project early on, and the National Science Foundation provided key funding.

A large company like Intel might have set up a distributed system on its own, said Tennenhouse, but the goal was to see if it was possible to create a common shared infrastructure. “We wanted to put this together for use by the broader research community,” he said. 

Three key innovations of PlanetLab

In 2004, the team at Princeton took over operations of PlanetLab from Intel. Building the tools to operate a global platform was itself a monumental research challenge, Peterson said.

It was no small feat to keep a system operating 24 hours a day, seven days a week, for years, said Tennenhouse. “People lose track of the miracle of having this thing run continuously. That was an unappreciated triumph. Larry Peterson and the team at Princeton deserve a lot of credit for that.”

Three central technical innovations came out of PlanetLab. The first, Peterson said, was the construction of the system itself.

To operate PlanetLab, hundreds of physical servers had to be set up around the world and then virtualized, a process in which multiple virtual machines are created on a single physical server. Then, thousands of researchers had to be given access to these virtual machines, which were grouped into coordinated slices of computing power distributed around the world. These slices also needed to be isolated from one another, so that researchers sharing the same physical servers could not access each other’s work.
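As a rough illustration of the slice idea, here is a minimal sketch in Python of the kind of bookkeeping such a platform performs; the class and function names are invented for this example and do not reflect PlanetLab’s actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    node: str        # the physical server hosting this VM, e.g. a university site
    slice_name: str  # the research slice the VM belongs to

@dataclass
class Slice:
    """A named set of virtual machines, one per participating physical server.

    Illustrative only: the real platform also enforced resource limits and
    isolation so that slices sharing a server could not see each other's work.
    """
    name: str
    vms: list[VirtualMachine] = field(default_factory=list)

def allocate_slice(name: str, nodes: list[str]) -> Slice:
    # Create one virtual machine per physical node and group them into a slice.
    return Slice(name=name, vms=[VirtualMachine(node=n, slice_name=name) for n in nodes])

# Hypothetical node names; a researcher's experiment then runs inside the
# slice's virtual machines on every node at once.
experiment = allocate_slice(
    "cdn-experiment",
    ["site-a.example.edu", "site-b.example.edu", "site-c.example.edu"],
)
print([vm.node for vm in experiment.vms])
```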

Virtualization as an idea existed long before PlanetLab, and there are many technical approaches that can be used to create virtual machines. But PlanetLab was larger in scale than any distributed system that came before it. Building virtual machines at this scale in real-world conditions required engineering solutions that had not been tried before. Some of these engineering solutions, like a technique to deploy and package software, are still widely used in today’s cloud.

The lessons that were learned would turn out to be critical, Peterson said. “Virtual machines are the technology at the core of the cloud.”

Connecting servers all over the world resulted in the second technical innovation: measuring and monitoring the speed and performance of the internet. Doing this accurately, Peterson said, requires taking measurements close to the point of inquiry. “Having multiple points of computing presence is really important,” he said, “because the farther across the internet you go, the more factors interfere with your bandwidth.”
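As a simple illustration of why the vantage point matters, here is a minimal sketch, using Python’s standard library, of one common latency probe: timing a TCP handshake to a server. The target hostnames are placeholders; a PlanetLab-style measurement system runs probes like this from many geographically distributed nodes and compares the results, rather than relying on a single machine.

```python
import socket
import time

def tcp_handshake_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the time, in milliseconds, to complete a TCP handshake with host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; the handshake round trip is complete
    return (time.perf_counter() - start) * 1000.0

# Placeholder targets for illustration; results vary with distance and network path.
for target in ["example.com", "example.org"]:
    print(f"{target}: {tcp_handshake_ms(target):.1f} ms")
```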

Larry Peterson, far left, with KyoungSoo Park, Vivek Pai, and Marc Fiuczynski in 2007. Together they co-founded CoBlitz, a content delivery network built on PlanetLab. The startup was later acquired by a large telecommunications company. Photograph by Mark Czajkowski

Open a browser now and search for “how fast is my internet,” and a link to Measurement Lab will most likely appear. Click the link, and in less than 10 seconds it will report the speed and quality of your internet connection. Measurement Lab, an open-source project supported by Google, was developed by researchers working on PlanetLab. It allows anyone, from the Federal Communications Commission to the average consumer, to check the quality of their internet.

The third technical achievement of PlanetLab was the dozens of services prototyped on it. Researchers made innovations in peer-to-peer sharing, distributed infrastructure, file transfer and location services, all pillars of the modern internet.

Other content delivery networks were created on PlanetLab by researchers at Hebrew University, the University of Wisconsin, UC Irvine and UC Berkeley. Peterson created one, CoBlitz, with Vivek Pai, a Princeton colleague; the startup was later acquired by a large telecommunications company.

PlanetLab changed the way videos are streamed on the internet. Amateur footage of the 2004 tsunami can still be watched today on YouTube, a service that relies on one of the largest content delivery networks ever built. Millions of people can now watch the same video at the same time with no buffering or delays.

Thousands of professors and graduate students would go on to use PlanetLab before the consortium officially shut down in 2020. The cumulative experience of these researchers likely had a greater impact than the published research. “The researchers that got experience on PlanetLab are the ones that went on to Google and Microsoft and Facebook and so on,” Peterson said, “then invent the cloud as we know it today.”