Virulent Word of Mouse

February 2, 2012

What would you say to David Cameron about Google?

Filed under: Editorial,Internet economics and communications policy — Shane Greenstein @ 11:05 pm

Why was Google invented in the US and not the UK? Jonathan Haskell, a professor at Imperial College London, asked that question in his most recent blog post. What motivated him to ask it? He got a little nudge from his Prime Minister, David Cameron, who asked the same question.

Haskell justifiably hesitates to put too much emphasis on any single factor. At the same time, he wants to use the example to suggest that aspects of copyright law play a role. In particular, he stresses that the US has a legal notion called “fair use” while the UK lacks such a notion.

The argument stresses that fair use eliminates the need to contract every time a new use or user builds an incremental innovation on a small part of copyrighted material. This matters for certain online innovations — such as innovative search tools. More generally, fair use reduces the costs of innovations that make use of lots of little bits of copyrighted material. In the absence of fair use, the innovator would have to contract with every copyright holder, which can be cumbersome or prohibitively expensive. Haskell’s argument stresses that the equivalent UK notion is much narrower, which raises contracting costs and thus discourages experimentation in many online activities.

I do not have any reason to disagree with this insight. The characterization of US copyright law is reasonable for this argument. However, not being an expert on UK copyright law, all I can say is that Haskell’s argument sounds plausible to me.

I would like to add one observation and pose two questions.

The observation summarizes something I said in a prior post about Google’s early history. Google’s success did not arise from a single epiphany. It came from the accumulation of many innovations. That success had many fathers, including Google’s imitation of, and improvement over, innovations first made by Overture. It was accomplished with multiple inventions, including PageRank, as well as investment in more speed and reliability. It also included further development of Google’s second-price, quality-weighted position auction. NSF funding paid for the initial advance, and Silicon Valley’s ecosystem played a big role too. The efforts of many clever computer scientists played a role, as did the efforts of many bloggers.
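
For readers who have not seen that last mechanism before, here is a minimal sketch, in Python, of how a quality-weighted second-price position auction allocates ad slots and sets prices. It illustrates the general idea only; the function name, bidder names, bids, quality scores, and zero reserve price are made up for this example and are not a description of Google’s production system.

```python
# Minimal sketch of a quality-weighted second-price position auction.
# Ads are ranked by bid * quality ("ad rank"); each winner pays the lowest
# per-click price that keeps it ahead of the next-ranked ad.

def run_position_auction(bidders, num_slots):
    """bidders: list of (name, bid, quality). Returns (name, slot, price) tuples."""
    # Rank bidders by quality-weighted bid, highest first.
    ranked = sorted(bidders, key=lambda b: b[1] * b[2], reverse=True)
    results = []
    for slot in range(min(num_slots, len(ranked))):
        name, bid, quality = ranked[slot]
        if slot + 1 < len(ranked):
            _, next_bid, next_quality = ranked[slot + 1]
            # Pay just enough to match the next ad's quality-weighted bid.
            price = next_bid * next_quality / quality
        else:
            price = 0.0  # no competitor below: pay the reserve (zero here)
        results.append((name, slot + 1, round(price, 2)))
    return results

if __name__ == "__main__":
    bidders = [("A", 3.00, 0.9), ("B", 4.00, 0.5), ("C", 1.50, 0.8)]
    for name, slot, price in run_position_auction(bidders, num_slots=2):
        print(f"{name} wins slot {slot}, pays {price} per click")
```

In this toy example, bidder A wins the top slot despite bidding less than B, because its higher quality score gives it a larger quality-weighted bid, and each winner’s price depends on the bidder below it rather than on its own bid. That is the sense in which the auction is “second price.”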

Now for the first question. Does US law for safe harbors play a role? Does the UK have anything equivalent? The US rules were largely defined in the DMCA, which passed in 1998. While I do not think the US safe harbor rules played a role in Google’s early growth, those processes certainly played a role in YouTube’s experience. Their importance has been widely recognized too, coming up prominently in recent debates about reforming copyright law in the face of piracy.

The argument for safe harbors — that adopting and executing routine procedures for taking down copyrighted material limits a hosting site’s liability — goes something like this: a well-defined procedure for avoiding liability helps innovators by giving them legal certainty about what does and does not violate another copyright holder’s rights. Does the UK have something equivalent?

The second question concerns antitrust. Does the application, or lack of application, of antitrust law play any role in the difference between the US and UK experience? I usually think of US antitrust law as friendly to innovators, and particularly focused on keeping channels open, which helps entrepreneurs. It also leads to deconcentration of ownership. How does that compare with the UK?

I also ask this question partly as a result of a recent court decision in France. Yes, France has nothing to do with the UK, but this example is just too weird to go without mention, so please forgive the lack of segue. The French court found that Google violated antitrust law because it gave away its maps for free. A French map maker complained and won its suit, apparently by convincing a judge that free maps violated France’s antitrust laws. I have seen some wacky court decisions over the years, but on the surface this one sure seems inexplicable. Is the UK as wacky as all this?

To summarize, Jonathan Haskell asks a great question, motivated by his Prime Minister’s question. Why did Google start in the US and not the UK? He and I agree that multiple factors ultimately played a role. Haskell also suggests the definition of fair use has something to do with it. I wonder if safe harbors and antitrust also play a role.

What do you think?

January 29, 2012

Invasion of the Internet Body Snatchers

Filed under: Editorial,Internet economics and communications policy — Shane Greenstein @ 11:05 pm

If you have been musing about the misguided policies in SOPA and PIPA that generated protests, what do you make of misguided international governance of the Internet? This article in Politico raises an interesting possibility: that the ITU will insert itself into Internet governance, ostensibly to coordinate security and taxation across countries. As is well known, numerous countries would like to see this happen because it would let them use the ITU to indirectly control pieces of the Internet.

I bet the same people who protested SOPA and PIPA would view this decision-making body with about the same paranoia as Donald Sutherland in the remake of “Invasion of the Body Snatchers.” Like Sutherland, they will want to stay awake forever, lest the aliens come in while they are asleep and steal the independence of the Internet. (All right, maybe that stretches the metaphor a tad, but you get the idea.)

Of course, there is a key difference. The ITU is one of those international organizations that does not have to answer to anybody in particular. None of its decision makers have to stand for reelection. None of the leaders have much to fear from any web-based protest.

I do not know about you, but if the ITU sticks its nose into Internet governance I do not see this turning out well.

Don’t get me wrong. I have met several people from the ITU over the years. All of them have been very polite and thoughtful and well-spoken. But that is still not the same as being held accountable. 

How would the Internet community react to more international governance, such as from the ITU? If I had to guess — and this is not going out on much of a limb — the same people who mistrust a few Hollywood lobbyists with the text of a law about piracy will trust the decisions of many non-US governments even less. Will they bend their behavior to abide by a directive that emerged from negotiations between a government in Paris and a government in Beijing or Moscow? How about, say, Kinshasa or Caracas? Yeah, right.

I am just saying. The same instincts that led Sergey Brin and Larry Page to defy Beijing — and, mind you, at some financial loss to their firm — are the instincts that fueled the SOPA and PIPA revolt. These sentiments exist widely.

It is nothing personal, nor foreign-phobic. These sentiments have been around for quite some time. For as long as I have been watching policy making in this space — which is approximately two decades — there has always been a big and vocal community that guards its independence. This community is thoughtful and a bit defiant, and, importantly, suspicious of any bottleneck or concentration of authority.

As David Clark so succinctly and graciously summarized the sentiment in 1993:

We reject: kings, presidents and voting. We believe in: rough consensus and running code.

Sure, the venue for the recent protests is new, and so is the instrument for protesting. But read the online chatter about SOPA and PIPA. It has the same tone and sensibility, less a revolution than an evolution in the target and means. The ITU would get as much revolt today as any other authority.

Here is what I mean. Over the years various firms and authorities have become the target of this sensibility. Roughly two decades ago, around the time of Clark’s speech, the targets were the largest telephone companies, especially AT&T in New Jersey, and the global standards bodies trying to coordinate technical developments across countries in the early 1990s. Among the many concerns at the time, there was deep suspicion of the way any one decision maker might impose its interests too strongly, ruining the accomplishments of the community.

These same instincts would resist the ITU, should it try to insert itself.  Different venue, but the same protest.

In the article, Phil Weiser gets it right on target: “Part of the challenge is to defend the bottom-up governance model.”

Donald Sutherland understood the problem with the defense in “Invasion of the Body Snatchers.” It means never going to sleep.

December 11, 2011

Platforms and a visit to Japan

Filed under: Academic Research,Internet economics and communications policy — Shane Greenstein @ 10:09 pm

During the first week of December I visited Tokyo, Japan, and spoke about platforms. This was my first visit to Japan.  Accordingly, this post mixes commentary with a bit of travelogue.

A platform is a reconfigurable base of components on which participants build applications. Platforms have a long history in computing and electronics, with examples going back to IBM, Microsoft, and Intel, among many others. Google and Apple are recent practitioners, and their prominence has renewed interest in platform strategies. It is, however, not entirely transparent to a non-expert how the (newer) discussions about platforms relate to the (familiar) analyses of standardization. My talk pointed out some of those links.

Background to set the scene: I stayed at Hitotsubashi University, a lovely campus in a residential neighborhood a train ride out from downtown Tokyo. I traveled there at the invitation of Professor Reiko Aoki, a member of an advisory group for the government on technology policy. She arranged for a presentation at the university, and another at the Research Institute of Economy, Trade and Industry (RIETI), a part of METI, the government agency with many experts in industrial policy. Professor Aoki and I share an interest in standards. Sadao Nagaoka, also from Hitotsubashi and an expert in technology policy, provided commentary.

(more…)

November 25, 2011

Mobile mergers and insider baseball conversations

Here is a fact. The FCC recently announced it would move to have a hearing about the AT&T and T-Mobile merger. In response, AT&T withdrew its application from the FCC, delaying the hearing indefinitely (or until AT&T resubmits the application).

What is that all about? At a procedural level it is just a detail — the FCC reviews mergers involving the transfer of licenses. The Department of Justice (DOJ) has a review process too, but with a different standard of review. The DOJ uses antitrust law, while the FCC considers whether the merger is in the “public interest.” Even if the FCC delays its review, the DOJ must continue to do its review. The first hearing in front of a judge takes place in February.

Today’s post provides a little insider baseball about these reviews (the Wiktionary definition of insider baseball: “matters of interest only to insiders”), trying to explain the chess moves to a wider audience. Seemingly small procedural moves provide a window on the likely outcome of this merger. To paraphrase Robin Bienenstock and Craig Moffett of Bernstein Research, AT&T has not thrown in the towel, but it is acting like a firm that understands the odds of success are low. I prefer to think of it this way: economic substance does matter. This requires a brief explanation. (more…)

November 10, 2011

Limits to broadband diffusion?

Filed under: Broadband,Internet economics and communications policy,Short observations — Shane Greenstein @ 12:20 pm

The National Telecommunications and Information Administration just published the findings from its latest survey of Internet use within US households. In case you missed it, here is a summary: broadband adoption among US households went up, but not by much.

Actually, that is not entirely fair. Viewed at short intervals, broadband adoption will appear to be a slow-moving process. However, stepping back a little from the short-run headlines reveals both good news and bad news in this report. That is the point of this post. (more…)

October 21, 2011

US Broadband in Maps, Graphs, and some Bars

Filed under: Broadband,Internet economics and communications policy,Maps — Shane Greenstein @ 10:34 am

To be sure, most of us do not use government statistical reports as anything more than bedtime reading for inducing soporific reactions. It is cheaper than a sleeping pill.

But those expectations would be too harsh for the most recent broadband report from the FCC. It contains a great deal of data, and it is really quite informative. I would go even further. It is a useful vehicle for learning about the basic economics of broadband. For that purpose, however, it has one drawback: it is a wee bit too long, as in 88 pages.

This post will save you some time. Most of the key insights can be summarized in three pictures — a map, a graph, and some bars. The post will start with the map, then go to the graph, then end with the bars. (For those keeping score at home, these pictures are taken from pages 62, 78, and 79 of the report.)

(more…)

September 26, 2011

Should Google Go Back to Only Organic?

Filed under: Internet economics and communications policy — Shane Greenstein @ 10:10 pm

If you have a couple of hours to burn on some political theater, go and watch the Senate hearings about Google. Here is a link.

Actually, as someone who foresaw the inevitability of this event, I was rather disappointed. This hearing was pretty anti-climactic. To find this interesting you have to be a serious junkie of antitrust policy in innovative industries.

There just were not many moments of drama. Rather, the hearing resembled the verbal equivalent of a tennis match that rarely left the baseline, volleying back and forth. There were long stretches between the high points, and those stretches did not contain much tension. Nothing kept the audience glued to their seats, as if they were concerned about missing some important moment.

Except once. But that moment came so unexpectedly, and after such a long stretch of nothing.

Actually, upon reflection, that moment illustrated what was wrong with the hearing. The hearing focused on the wrong issues. That is the point of this post. (more…)

September 20, 2011

Puzzling over big wireless carrier mergers: An Editorial

Filed under: Editorial,Internet economics and communications policy — Shane Greenstein @ 10:20 pm

Let’s talk about AT&T’s proposal to merge with T-Mobile. Why do the parties involved still consider this merger viable?

Executives at AT&T seemed to think this merger was a good idea many months ago. For all I know, that might have been the right conclusion with the information they had then. But that was then, and this is now, and too much information has come to light to hold that conclusion any longer. Based on what we know now the proposal does not make business sense.

This blog post will argue what should be obvious to any close observer of events, and certainly to the management at AT&T: there is no longer a viable business case for this merger.

This blog post will also argue that executives at T-Mobile should begin planning to run their business as a stand-alone entity. They always had a viable business, and that holds even more so now, since they will get a reasonable infusion of cash from the break-up of this deal.

How did the executives at AT&T get into the present pickle? They took a strategic gamble with the US legal system and lost. In their own internal deliberations today they should be acknowledging the loss and — for lack of a better phrase — simply moving on. That is what their business needs.

So I am puzzled. Why haven’t all the parties declared victory and gone home? This post will consider the question. (more…)

September 17, 2011

Smartphone patents and platform wars

Firms in the smartphone market have been suing one another over patent violations. I cannot recall any other platform war that involved as many intellectual property disputes.

Look, society grants patents as part of a trade-off. A patent enhances the incentive to generate new inventions by giving the inventor a temporary monopoly. That trade-off should never be far from the top of the discussion. Let me say that another way: artificial monopolies are clearly bad for the economy. There is no reason to grant them unless society gets something in return, such as more invention.

It is easy to speculate that something is amiss. Was society still on the good side of this trade-off when a non-practicing entity sued RIM (BlackBerry) for hundreds of millions of dollars, even though the dispute involved patents whose inventor never got close to putting them into a viable business? Was society on the right side of this trade-off when a consortium spent four billion dollars in bankruptcy court for the patents of Nortel, a firm that made some very bad bets during the dot-com boom and ran itself into the ground? Was society on the same side of the trade-off when Google felt so cornered that it bought Motorola for its patents, and, after the deal was announced, very few analysts saw any reason to point out that Google also received tens of thousands of talented engineers as part of the deal?

This is a way of introducing a recent article, “Owning the stack: The legal war to control the smartphone platform,” which appeared in Ars Technica. I recommend it. It brought considerable clarity to events by explaining the actions and motives of various players in the recent patent wars involving smartphones. It was written by James Grimmelmann of New York Law School, and recommended to me by David Laskowski, a student from a prior class at Kellogg (thanks, David!).

This post passes on that recommendation and offers a few comments.

(more…)

August 10, 2011

An Honest Policy Wonk

Captured regulators routinely take the blame for the ills of regulatory policy in electricity, telephony, and broadcasting. “Captured regulator” has been a pejorative term in these industries for decades.

It’s hard to say when it happened exactly, but this conversation migrated into electronics and the commercial Internet in the past decade, as both industries melded with communications and media businesses. Pieces of the topic even show up in the net neutrality debate.

Quite a bit of nuance got lost in the migration. While many episodes in the history of telephony and broadcasting illustrate regulatory capture, even the theory’s proponents know about exceptions—namely, situations that ought to have been captured but were not. For example, consider the Internet’s birth. There were numerous opportunities for regulatory capture in the Internet’s transfer out of government hands, and yet capture theory explains only part of the events, not the entire outcome.

I will refer to other episodes below, but for now, take that example as motivation for modifying the popular theory of regulatory capture. When does the regulatory environment work despite the tendencies toward regulatory capture? As best I can tell, the explanation has something to do with the presence of an honest policy wonk.

(more…)
