Virulent Word of Mouse

April 23, 2014

The Fault Lines Along Fast Lanes

Until recently, a fast lane from a broadband ISP was a remote possibility in the US. ISPs had to give data equal treatment, regardless of the source, and could not offer faster delivery for a higher price while giving slower service as a default.

Although regulators allowed fast lanes in wireless networks a few years ago, the carriers hesitated to offer them. In December 2013, AT&T Wireless broke with the norm and announced just such a program. FCC regulations forbidding fast lanes at landline broadband ISPs had also prevented them, but in January 2014 a US appeals court struck down those regulations.

Is that a good or bad trend? The answer depends on who’s talking. Critics of government regulation despise the rules forbidding fast lanes, whereas net neutrality supporters view the presence of fast lanes as a nightmare come to life.

Legal and political aspects of this topic typically get most of the attention, as do the implications for the variety of speech online. Most reporters find these aspects interesting and understand them. The economics of fast lanes, however, receives less attention. That is surprising, because the economics is not very difficult, and it is worth understanding. It illuminates the fault lines between many different points of view.

Mirrors and servers

The public Internet has evolved considerably since the days when the design for packet networks presumed that the message did not have to arrive at an inbox immediately. Users today prefer and expect speedier services. That goes for more than just IP telephony and video chat, where users notice the smallest delay. It also holds true for video, such as YouTube and many online games. Many providers believe it also affects the bottom line—namely, that users switch services if they do not get fast delivery of data.

Long before fast lanes became a real possibility, many participants in the Internet made investments aimed at reducing delays. For example, for some time now, Akamai has sold a well-known approach to improving speed. Its service also defines the first fault line, so this is a good place to start the discussion. Opponents of net neutrality ask why Akamai can operate a business to speed up data delivery but a carrier cannot.

Akamai's service places servers inside ISPs' networks, closer to households. Any seriously large Internet content firm must buy these services; they are considered a cost of doing business online. Many ISPs like working with Akamai, because their customers experience better service without much investment from the ISP.

That is not the only method for speeding up data. For example, Google has bypassed Akamai’s charges in many locations by building its own data network to ISPs. Netflix has recently sought to do the same, though it is not quite done (because it has not successfully negotiated a presence with every US ISP). Any gathering of more than three Internet engineers will generate discussion of even more potential solutions in the cloud. Amazon built a content delivery network with enormous geographic range. Microsoft has similar investments and aspirations, as does IBM. The list goes on.

That leads to the deeper question. The last few years have witnessed robust experimentation among distinct approaches to functional improvement, and these might be either complements to, or substitutes for, each other. Accordingly, carriers have had two roles. They act as a firm whose users benefit from faster delivery, and they act as a supplier that could choose to cooperate—or refuse to cooperate—with solutions offered by others.

When a carrier had no investments in fast lanes, it had every reason to cooperate with solutions offered by others. Will that change if the carrier has its own fast lane?

The answer defines a fault line between points of view. Some observers dismiss this as a possibility that might never arise. They want a regulatory response only when a problem emerges, and otherwise they anticipate that a regulator will err. Net neutrality supporters think regulators have an obligation to protect the Internet. Advocates worry that introducing fast lanes messes with a system that already works well. They do not trust carriers to cooperate with solutions that might substitute for a fast lane business or threaten an investment in some way.

Competition and monopoly

The next fault line has to do with the role of money. Defenders of fast lanes expect them to become a cost of doing business for content firms, and forecast that fast lanes will be profitable and generate more investment. Opponents have the same forecast about profitability, but a different interpretation. They worry that fast lanes will lead to an Internet where only rich firms can deliver their content effectively.

This concern tends to get plenty of press, and a few rhetorical questions illuminate the fault line. Will the default speeds offered by ISPs be good enough for startups or for small specialty websites? One side believes that the defaults will be good enough, whereas the other believes that fast lanes will lead ISPs to neglect investing in their slow services.

One’s point of view about the state of competition for ISPs has a big role in interpreting the role of money. Some believe a competitive ISP market would melt away most problems. Others argue that belief about competitive ISP markets is a fantasy and masks many dangers.

The belief in competition is not a belief in magic; rather, this side views competition as a painful process, and it is worth examining. In competitive markets, customers substitute into alternatives if they do not like what a supplier does, so suppliers hesitate to do things that make their users angry. In other words, ISPs would compete for customers by offering better fast lanes. In this view, users would get angry if they perceived that carriers were slowing down content from firms they cared about, and angry users would find another carrier.

Where is the fault line? Recognize the two key factors that make ideal competitive markets operate well—namely, transparency and the availability of many user options.

Just about everybody is in favor of transparency, but not necessarily in favor of rules requiring more of it. Those with faith in competitive processes tend to see merit in nothing more than a few light-handed requirements, such as programs to facilitate measuring the speed of different ISPs. The other side asks for much more, such as the publication of all fast lane contracts (more on that later).

As for the second concern, about options, consider the key open question: Do users have many options available to them, or do they face de facto monopoly ISP markets? Once again, there are different beliefs about the preponderance of competition and monopoly found throughout locales of the US. Those who presume that competition is inadequate have little sympathy for leaving ISPs alone, unlike those who presume it is adequate.

That also leads to different interpretations of how lucrative fast lanes will be. Supporters of fast lanes say that ISPs should charge whatever the market will bear, and competition will discipline pricing. Opponents say that the monopolies emerged from grants of public franchises and use of public rights of way, and they characterize high prices as misuse of utility franchises.

A classic debate about government merger policy also arises. Net neutrality supporters argue that fast lanes give ISPs artificial incentives to consolidate in order to increase their bargaining leverage with content providers, thus concentrating economic power in ISPs. Net neutrality opponents do not see anything wrong with large ISPs. In a competitive market, size is irrelevant.

Mixed incentives

The foregoing leads into the last fault line in discussions about fast lanes—namely, views about mixed incentives at carriers. A mixed incentive arises when a carrier distributes a service that substitutes for one available on the public Internet.

Many broadband ISPs have a thriving broadband service, provide video on demand, and make a pretty good margin on both. Will most cable firms want to sell a fast lane service to Netflix at a low price? If a carrier did not make money on video on demand, its price for a fast lane for Netflix would be lower, and the same goes for entrepreneurial firms offering video services. That begins to suggest the intuition behind the concern that cable firms will tilt their other actions against online video to protect their existing businesses.
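To make that intuition concrete, here is a minimal sketch with entirely hypothetical numbers (the function, the $1 delivery cost, the $8 video-on-demand margin, and the 25% cannibalization rate are all invented for illustration): the lowest fast-lane price a carrier will accept rises with the margin it expects to lose on its own video business.

```python
# Illustrative sketch (hypothetical numbers): why a carrier that profits
# from video on demand would quote a rival video service a higher
# fast-lane price than a carrier with no video business would.

def min_fast_lane_price(delivery_cost, vod_margin_per_sub, cannibalization_rate):
    """Lowest monthly price per subscriber the carrier would accept.

    delivery_cost        -- incremental cost of carrying fast-lane traffic
    vod_margin_per_sub   -- margin the carrier earns on its own video on demand
    cannibalization_rate -- fraction of that margin lost once the rival's
                            video becomes fast and attractive
    """
    return delivery_cost + vod_margin_per_sub * cannibalization_rate

# A carrier with no video business loses nothing to cannibalization.
no_vod = min_fast_lane_price(delivery_cost=1.00,
                             vod_margin_per_sub=0.00,
                             cannibalization_rate=0.00)

# A cable firm earning $8/month of video margin, expecting to lose a
# quarter of it, needs triple the price to break even on the same lane.
cable = min_fast_lane_price(delivery_cost=1.00,
                            vod_margin_per_sub=8.00,
                            cannibalization_rate=0.25)

print(no_vod, cable)  # 1.0 3.0
```

The numbers are made up, but the structure is the point: the fast-lane quote embeds an opportunity cost, so two carriers with identical networks can rationally quote very different prices.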

Mixed incentives also come up in discussions about scrutinizing carrier contracting practices. To put this fault line in perspective, consider a hypothetical scenario: What would happen after a carrier sells a fast lane to, say, ESPN? Can anyone else expect the same terms, even Netflix? Yet again, one side argues that competition will solve these issues, and the other sees a need for regulatory intervention to make terms of fast lane contracts public.

A mixed incentive also can emerge when a carrier has an economic incentive to protect a partner's business in which it gets a cut. In other words, is it okay if ESPN gets a better deal than Fox Sports because an ISP made a deal with the local team that competes with something Fox Sports does? The same fault line as before appears: should competition solve this question, or should governments intervene to publish fast lane contracts? Should ISPs be required to give the same terms to all takers?

To summarize, the fault lines between perspectives hinge crucially on several beliefs about the economics. Forecasts depend on whether the observer sees a preponderance of competitive or monopoly markets for ISP services. They also depend on whether transparency resolves potential problems.


Copyright held by IEEE. To view the original, see here.


January 30, 2014

Google and Motorola in the Wake of Nortel

Google has announced a plan to sell Motorola to Lenovo for just under three billion dollars. Google paid more than twelve billion only two years ago, and many commentators have declared that this is Larry Page’s first big bet, and potentially his first big experiment to go sour.

Even the best reporters characterize the strategy incorrectly, however, and forget the motivation. They recognize that the acquisition had several motives, but still use wishy-washy language to discuss the priorities. Here is the language of the New York Times, for example:

“The deal is not a total financial loss for the extremely wealthy Google, which retains patents worth billions of dollars, but it is a sign of the fits and starts the company is experiencing as it navigates business in the mobile age, which has upended technology companies of all types.

In addition to using Motorola’s patents to defend itself in the mobile patent wars, Google pledged to reinvent mobile hardware with Motorola’s new phones, and directly compete with Apple by owning both mobile hardware and software.”

I have a bone to pick here. Even the best reporters are not recalling the sequence of events. Public policy shares some of the blame, and viewed from that perspective, much of this looks like a waste of resources. Let’s get that interpretation on the table by doing a bit of a flashback, shall we?

December 31, 2013

End the broadband panic meme

Filed under: Editorial, Internet economics and communications policy — Shane Greenstein @ 9:22 am


It happens about every twelve months, maybe with more frequency recently. Another reporter writes about how the US is falling behind international rivals in the supply of broadband. I am growing very tired of this meme, and answering emails from friends wondering if it is so. There are serious issues to debate, but this standard meme takes attention away from them.


The latest version of this article came from the New York Times. It had the title “US Struggling to Keep Pace in Broadband Service,” and it brought out the usual concern that US growth will fall behind if the country does not have the fastest broadband in the world. If you are curious, read this.


Why is this tiring? Let me count the ways.


First, while it is irritating to have slow service at home, US productivity does not depend much on that. Household broadband is less important for economic growth than broadband to businesses, and what really matters for productivity is speed to business. The number of minutes it takes a household to download Netflix is statistically irrelevant for productivity growth in comparison to the time it takes to download information to conduct business transactions with employees, suppliers, and customers. We get measures of broadband speed to homes because that is what we can easily measure, not because it really matters.


Is there any sense that US business Internet is too slow? Well, perhaps the speed of a household’s Internet says something about the speed of business Internet, but I doubt it. In all the major cities of the US there is no crisis at all in the provision of broadband. Broadband speeds in downtown Manhattan are extraordinary, as they are on Wall Street. The Silicon Valley firms that need fast speeds can get them. Same with the firms in Seattle. Hey, the experiments with Google Fiber in Kansas City raise questions about whether entrepreneurship will follow the installation of super high speeds, but that is an open question. It is an interesting question too, but not a crisis.


These issues do arise, however, in some small and medium cities in the US, and in a few rural areas where there is no broadband. In some places satellite is the best available, or some fixed wireless solutions are available. These can be OK but not great for many business needs, and they can limit what a business can do. These issues also have been present for a while, so most of the businesses that really needed the speed simply left the areas where speeds were slow. As a country we just let that happen many years ago, and, frankly, it will be hard to reverse at this point. (It made me sad at the time; I even spent some time doing research on the topic, though I have stopped in the last few years.) Again, this is an interesting question, but it is only a crisis in the places where it matters, not at a national level.


Second, as for household speeds, many people simply don’t want high speeds and do not want to pay for them. There is plenty of evidence that those high-speed Korean lines did not get used right away, and lots of fiber goes to waste. Having said that, there are some interesting open questions here as well, namely, what speeds are people willing to pay for at their homes? Let’s not get panicked over supply if there is little demand, ok?

The last serious study of the willingness to pay for speed was done at the end of 2009, as part of the national broadband plan. The study was definitive at the time: only a few households were willing to pay for high speeds. But, of course, that was a while ago. What has changed since then? Well, arguably, demand for data-intensive stuff has risen. That is not coming from the growth in torrent traffic; recent data are pretty clear about that. It is coming from Netflix, YouTube, and Facebook. Once again, that is a great open question, but panic about speed does nothing to focus on it. Instead, let’s study demand and whether it goes unsatisfied.


Third, if we study demand, can we all acknowledge that demand is very skewed in the US? 10% of the users account for far more than 50% of the data to households, and on most systems 20% of the users account for more than 80% of the data use. Usage is growing at every level from the median to the top of the skew, so there is good reason to think demand for data is growing for all major users. Will there be capacity to handle those intensive users of data? The answer is unclear.
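The skew is easy to express concretely. Here is a minimal sketch with made-up usage figures (the ten households and their gigabyte totals are invented for illustration, not data) showing how a single heavy streamer dominates the total:

```python
# Illustrative sketch with invented usage data: computing the share of
# household traffic attributable to the heaviest users.

def top_share(usage, top_fraction):
    """Fraction of total data used by the top `top_fraction` of households."""
    ranked = sorted(usage, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

# Ten hypothetical households (GB/month); one streamer dwarfs the rest.
usage = [300, 120, 60, 30, 20, 15, 10, 8, 5, 2]

print(round(top_share(usage, 0.10), 2))  # top household alone: 0.53
print(round(top_share(usage, 0.20), 2))  # top two households: 0.74
```

With a distribution this skewed, average speed or average price per household says little; what matters for capacity planning is what the top decile does.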


That hints at an open question that is worth debating. Not everyone pays the same price because flat rate pricing has been so common across the US. The top 10% of users pay very low prices per megabit. Even if total expenditure per month for the biggest users is twice as high in the US as in other countries, it is still pretty cheap. Just to be clear, I am not saying it is too high or too low, nor am I making any comment about whether markets are competitive enough in the US. I am just saying that the international comparisons are flawed for big users in the US.


That hints at an even more challenging question. For better or worse, it is these high-intensity users, especially households with young adults or teenagers, who seem to be the early users of new services. So the US entrepreneurial edge might actually be coming from the low prices and high speeds our biggest users have enjoyed all these years. Are we in danger of ending that? That is the provocative question to ask, and it is not about the general speed in the country. It is about the highest speeds to select users.


Finally, my last problem with this meme: it’s old, tired, and potentially irrelevant. Maybe this concern about wireline is all a tempest in a teapot. Many observers believe wireless is the new frontier for innovative applications. Maybe five years from now everybody will look back on this panic and just shake their heads. How can we have an entire article about broadband speeds to households and not a peep about the experience most people have on a daily basis, which is determined by wireless speeds?


Just something to think about.



September 27, 2013

Digital Public Goods

Precisely how does the online world provide public goods? That is the question for this column.

Public goods in the digital world contain some of the same features as those in the offline world. Yet, there are some key differences in the boundaries between public and private, and that shapes what arises and what does not.

That will need an explanation.

August 20, 2013

The economic policy of data caps

It is the one-year anniversary of the Open Internet Advisory Committee (as noted earlier). Today the committee issued a report of its work over the last year. You can access it here. Today’s post discusses the report about data caps, which was written by the Economic Impacts working group.

I am a member of the committee and the Economic Impacts Working Group, and I like the work we did. I chair the group. “Chair” is a misleading title for what I really do, which is take notes of the group’s discussions and transcribe them. Every now and again, I do a little more. As one of the members without any stakes in the outcome, occasionally I offer a synthesis or compromise between distinct views.

The report aims to analyze data caps in the context of the Open Internet Report and Order. The Order discusses usage-based pricing (UBP), but does not expressly mention data caps except by implication, in that data caps can be considered a form of UBP. The Order left open the possibility of many experiments in business models and pricing.

Moreover, the Internet had evolved over time, and the Order anticipated that the Internet would continue to evolve in unexpected ways. The Order set up the advisory group to consider whether aspects of the Order remain consistent in their effects as the Internet evolves, and it is in that spirit that this conversation was undertaken.

July 14, 2013

The Open Internet Advisory Committee at year one

Today I would like to make a little shout-out for recent work at the FCC to improve policy making for the Internet. To do that I need to put my preferences front and center.

There are policy debates, and then there is actual policy making. The former grabs headlines on occasion, while the latter rarely does. Both need to take place in order to make progress, though it is a rare person who has the patience and taste for both.

I have little patience for the grandstanding that goes with policy debates, and I do not take much pleasure from the staging and entertainment behind political posturing. I prefer policy making, especially the quieter and more challenging parts of it, and I love being engaged in challenging policy conversations that do not get much publicity.

Just so we are clear, this post will discuss policy making. Policy debate will largely remain in the background. That is unusual for most public discussions about policy for the open Internet, but it seems appropriate for today’s post.

It is the one-year anniversary of the Open Internet Advisory Committee. In approximately two weeks the committee will release its first big report, a kind of year-in-review. I am not a neutral observer of this committee. I am a member. I am especially impressed by what the committee did in its first year.

If you think I am biased, then you are right. That is the point of this blog post.

I have been happy to be part of this committee, and contribute to public policy discussions through participation. And whatever else the posturing political world says, I want to be the first to say loudly that this committee has done wonderful work to support policy making, and, until two weeks from now, largely out of the public’s eye.

January 9, 2013

The FTC and Google: Did Larry Learn his Lesson?

The FTC and Google settled their differences last week, putting the final touches on an agreement. Commentators began carping from all sides as soon as the announcement came. The most biting criticisms have accused the FTC of going too easy on Google. Frankly, I think the commentators are only half right. Yes, it appears as if Google got off easy, but, IMHO, the FTC settled at about the right place.

More to the point, it is too soon to throw a harsh judgment at Google. This settlement might work just fine, and if it does, then society is better off than it would have been had some grandstanding prosecutor decided to go to trial.

Why? First, public confrontation is often a BIG expense for society. Second, as an organization Google is young and it occupies a market that also is young. The first big antitrust case for such a company in such a situation should substitute education for severe judgment.

Ah, this will take an explanation.

July 6, 2012

Tiered Broadband Pricing

Filed under: Broadband, Internet economics and communications policy — Shane Greenstein @ 5:21 pm

Kellogg Insight’s Editor, Tim De Chant, and I sat down to discuss tiered pricing for broadband. It was a pretty interesting conversation, and Tim distilled it into a blog post. If you are curious to see the original post and other posts by Tim, see his blog, Expertly Wrapped. With Tim’s permission, here is a reposting:


Consumers and startups to be affected by metered broadband, By Tim De Chant

As more people are looking forward to a future overflowing with data—Facebook, Twitter, Netflix, YouTube, a seemingly limitless number of websites, and more—broadband providers are looking to limit the amount of data they provide. And with those limits will come new—and most likely higher—prices.

Data caps aren’t new—they have been widely implemented by wireless providers—but they haven’t been common among traditional broadband providers. Now, that seems to be changing. Many providers complain that their networks are overloaded, and that the costs of upgrading them to handle the added traffic are prohibitive. To maintain a quality service for their customers, providers say they have to raise prices.

Unfortunately, it’s not that clear cut, said Shane Greenstein, a professor of management and strategy and expert on internet economics. There are a number of issues clouding the matter, one of which is time of day. Overuse “usually does not matter most of the day. It generally only matters between 7 PM and 10 PM, when use is highest,” he said. “The usual justification for usage based pricing (or caps, for that matter) appear quite weak outside the 7 to 10 PM window.”

June 17, 2012

What does the average surfer know about Creative Commons?

Filed under: Academic Research, Internet economics and communications policy — Shane Greenstein @ 9:47 pm

What do you know about Creative Commons, the legal frameworks that support many web-based activities, such as Wikipedia, Flickr, or YouTube? You probably do not know too much, if you are like most people. Most users do not know the legal details behind the web – and that is a fact, as you will see in a moment.

You might reasonably respond that it does not matter what users know. Knowing the legal details makes no difference to enjoying and using the services. Indeed, the general ignorance of users shows just how sophisticated and easy to use many of the leading web services have become. You also might respond in a contrary fashion: most users are inviting disaster by remaining ignorant. Knowing the details does not matter on 99 days of pain-free use, but there will come a day when it matters, and not knowing will bite users hard.

Those two opposing responses are both reasonable answers, I believe, because the state of the discussion remains in flux. No good answer to these questions dominates the topic for now. At present it is enough to ask the question, and recognize that the answer is open.


April 8, 2012

The Craigslist Killer and Online Privacy

Let’s discuss the Craigslist killer, online privacy, and police procedures.

Why has this old case from 2009 gotten new attention? The murder itself was rather gruesome and unusual, and the events grabbed considerable attention at the time, especially in the Boston area where they took place. However, it all happened several years ago. Why remember them now? As it turns out, the Boston Police recently released a range of documents concerning the case (which is a good thing – kudos to the Boston Police for being transparent). A few reporters have looked closely at these documents. This has generated a series of online comments about how the police used information technology — ISPs, cell phones, Facebook, email — to connect the murder to the suspect.

Let’s bring the conversation to the attention of readers of this space. It shows how technical progress lowers the cost of new technical capabilities, which generates new possibilities for action. A big part of the online privacy debate concerns a simple policy question: how best can society use this new capability? The question is not new, to be sure, but it is hard to appreciate without understanding just what is possible. This example offers a good illustration of what online technology made very cheap and what police departments do with it.

On one level there is nothing shocking here. As it turns out, when Facebook receives a subpoena it complies. So do ISPs. So do cell phone companies. Anything anyone does from home leaves an online trace, and any determined police department can deploy subpoenas to associate that online trace with an individual. Police use this routinely when they have a good lead, and it can be useful in catching murderers.

More to the point, online privacy debates are best illustrated in the situations where the debate matters least, such as a successful criminal investigation of a murder. That is because these are the types of situations in which everyone cooperates. As the case illustrates, using comparatively routine processes to trace the suspect’s actions online, police could accomplish some impressive things.

In brief, the case makes clear why police should have the ability to use these capabilities, and it makes clear how easy it is to do. The latter observation might be novel for many readers.

Recap and remark

In this instance, the murderer is called the Craigslist killer because he used Craigslist to find his victims. For our purposes, the case has one distinctive feature: despite being a medical student at Boston University, which surely suggests he had some sort of brain on his shoulders, the Craigslist killer really did not understand how many online clues he was leaving for the police.

The facts of the case are straightforward, albeit gruesome. Back in the spring of 2009 a second-year medical student at Boston University medical school got into financial problems – due to gambling, it seems. He hatched a scheme to pay his debts through robbery. His potential victims were masseuses he solicited on Craigslist. They did not know him, and he contacted them with new email accounts and temporary cell phones. Once he met them, he would handcuff them at gunpoint and rob them. He did this three times before he was caught. The second of these went badly, and he shot the poor victim three times, murdering her in an upscale downtown Boston hotel. (If you want to know all the details about the Craigslist killer, read it here.)

Reading this account I was reminded of a sardonic rule of thumb communicated to me by an old friend, who was a professional prosecutor: it is a good thing that most criminals are so stupid, otherwise they would never get caught. He meant the following: it is rather difficult for prosecutors to catch criminals, but many law-breakers make the task much easier by doing a range of things that connect them to the crime, namely, by NOT covering their tracks very smartly. From the prosecutor’s perspective, a thoughtful criminal need take only a small set of actions to become much harder to catch. Yet, most of them never think to do so.

The Craigslist killer’s behavior illustrates a few such lapses, especially online. These are remarkable because of the contrast with other actions he took. He was smart enough to find vulnerable victims on Craigslist, and to contact them in ways that made it challenging to identify him. He did that mainly by buying prepaid cell phones (which made them hard to trace to him in particular).

As an example of one of the dumb things he did: after the murder he kept one of the cell phones at his residence (hidden, presumably, from his companion). The police searched his residence and found it. Let’s just say it: such physical evidence is pretty damning, so it is pretty darn stupid to keep the phone at home. I am no expert — but, I dunno — it might have been a good idea to throw away the cell that contacted a victim.

Here is another example. Though the killer successfully committed his first robbery, he committed the second one (which led to the murder) in a hotel across the street the next day. He also used exactly the same method, giving the police a pretty good clue that they were dealing with the same individual (which made identifying him much easier). He committed the third one the following night, 45 minutes away from Boston (again, using the same method), in spite of the massive publicity surrounding the murder (which, again, made identifying him much easier).

Anyway, all of this looks pretty stupid to the prosecutors. This guy took action to make his cell phone use anonymous, and then lost a lot of anonymity through his choice of time and place. A little spacing across police jurisdictions, and a little patience, and he would have been much harder to find.

But, really, his email and Facebook behavior was clueless, so let’s focus on that. It did lead to the loss of anonymity, and that is worth understanding in detail.

The Craigslist killer acted in ways that tied him directly to his emails. The emails went between him and his victim. If anonymity was the goal – and clearly he had some inkling of its importance through his cellphone purchases – then why did he not extend it to his email behavior?

He did not behave as if he realized what a trace he was leaving. For example, he acquired his email account the day before he used it to contact his victim, and he did it from his home. From his home — whoa, that is stupid. Working from home made it easy to trace. The email provider and the ISP both have access to the same IP address, and the police used subpoenas to connect one with the other.

This association is one of the more remarkable details of the case precisely because the ISP was almost uncooperative. Here is what happened. The police sent a subpoena to the ISP asking for the address affiliated with the IP address they obtained from the email provider. The police got the email address from — no surprise — the victim. In this case, the email provider was Microsoft, and the firm seems to have complied comparatively quickly. In contrast, the ISP — Comcast, in this case — gave a somewhat more bureaucratic answer. They said, in effect, that it would be a couple weeks, unless the police gave them a good reason to be in a rush. Given the high profile of the case, the police had no problem doing that. Then the ISP made an exception to its default behavior, which is a slow answer, and complied quickly.
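The record matching at the heart of this step is mundane. Here is a minimal sketch in Python, in which every account, IP address, date, and street address is invented for illustration, showing how a login log from an email provider and an IP-assignment record from an ISP combine to name a subscriber:

```python
# Illustrative sketch (all records invented): the matching step that
# connects an email account to a street address once both subpoenas
# are answered.

# From the email provider: IP addresses that accessed the suspect account.
email_provider_log = [
    {"account": "suspect@example.com", "ip": "203.0.113.7",
     "time": "2009-04-13T21:05"},
]

# From the ISP: which subscriber held which IP address during which dates.
isp_assignments = [
    {"ip": "203.0.113.7", "time_range": ("2009-04-10", "2009-04-20"),
     "subscriber": "Apartment 4B, Quincy, MA"},
    {"ip": "203.0.113.9", "time_range": ("2009-04-10", "2009-04-20"),
     "subscriber": "Elsewhere"},
]

def trace(log, assignments):
    """Match each login IP to the subscriber who held it at that time."""
    hits = []
    for entry in log:
        for a in assignments:
            start, end = a["time_range"]
            # ISO dates compare correctly as strings.
            if a["ip"] == entry["ip"] and start <= entry["time"][:10] <= end:
                hits.append((entry["account"], a["subscriber"]))
    return hits

print(trace(email_provider_log, isp_assignments))
# [('suspect@example.com', 'Apartment 4B, Quincy, MA')]
```

Once both firms hand over their records, the join is a few lines of work; the legal process, not the technology, is the slow part.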

Notice how important the online piece was. Once the police had that address they could stake out the place. That eventually let them get an ID on the individual as well as fingerprints. They also were able to get photos (from Facebook, and from records at Boston University), which they could then show to the other victims. That allowed them to solve the case in less than a week.

Summing up

There is something deeper running throughout the recent release of documents. On one level, the documents illustrate something that has become almost a standard refrain among the more experienced and sophisticated Internet research community, namely, that there is less privacy online than in typical offline life. This is so despite the attempts of many lawyers to make the online world less vulnerable to government snooping.

The case makes that refrain very apparent: with a search warrant, government prosecutors can find out quite a lot about just about any suspect who has an active online life.

The documents also illustrate another rule of thumb about privacy online. There are two kinds of surfers: those who behave as if they DO NOT comprehend the lack of privacy online, and those who are wary that the Internet will become big brother-ish. The Craigslist killer seems to have been the former.

Looking beneath the surface, one other theme runs throughout this case. Nobody other than the killer did anything wrong. The police got it right. They followed proper civil procedure. The firms cooperated. A murder case got solved. The entire experience should make any sensible person want to say “Hurray for civil society.”

Yet, not trivially, the situation also showcases that the improvement in information technology in the last decade is not an unalloyed improvement. Indeed, less restrained governments and police forces can easily use information technology in ways that may have little to do with enforcing criminal law. Tracing emails to political dissidents should be easy. Censoring unwanted communication is no problem. Shutting down the leadership of an electronic communication network also appears comparatively trivial. I am no lawyer, but these events give me additional respect for the importance of subpoenas and other processes to ensure that police use them only when criminal behavior provides probable cause.
