Virulent Word of Mouse

September 6, 2014

HuffPo and the Loss of Trust

Filed under: Editorial,Online behavior — Shane Greenstein @ 10:59 am

You may not have noticed, but recently the Huffington Post has been the poster child for lack of journalistic integrity. The details may appear small to many people, but not to me. HuffPo made a sloppy journalistic error, publishing a historically inaccurate story built on a claim many experts have proven wrong. The organization does not seem willing to retract it. I will never trust this source again.

This post will get into the details in a moment, but this is a blog about digital economics, so let’s review the relevant economics. Let’s start with the economics of trust. Trust does not arise out of nowhere. Readers learn to trust a source of information. Said another way, trust arises because a news source invests in accuracy and quality. It is one of the greatest assets of a news source.

Trust is an unusual asset. It possesses an asymmetric property. It takes many little acts to build up its value, and very few bad acts to destroy it. Once lost it is also hard to regain.

As online news sources grabbed the attention of readers, there has been concern about the loss of the type of quality reporting found in traditional news outlets. That is why many commentators have wondered whether online news sources like HuffPo could recreate the reputations of traditional newspapers and news magazines, which invested so heavily in journalists with deep knowledge about their topics. So went the adage: a high quality journalist could sniff out a lie or an incomplete claim. A high quality reporter would defend the reputation of the news source. Readers trusted those organizations as a result of those investments.

That is also why journalistic integrity receives so much attention from managers at traditional newspapers. There are good reasons why newspapers react severely to ethical lapses and violations, such as plagiarism. Once trust is lost in a reporter, why would a reader trust that organization again? Why would a news organization put its trust further at risk by retaining that reporter? The asymmetries of trust motivate pretty harsh penalties.

So the concern went something like this: online news sources get much of their content for free or for very little money. That could be a problem because these sources do not have the resources to invest in quality reporting. How will they behave when quality suffers? Will readers punish them for lower quality material?

That is what gets us back to HuffPo’s behavior. Its reputation is on the line, but it is not acting as if it recognizes that it has lost my trust and the trust of several other readers. This behavior suggests it has not invested in quality, which aligns with the fears just expressed.

Now for the details: HuffPo published a multipart history of email that is historically inaccurate. Yes, you read correctly. More specifically, a few of the details are correct, but those are placed next to some misleading facts, and all of it is embedded in a thoroughly misleading historical narrative. The whole account cannot be trusted.

The account comes from one guy, Shiva Ayyadurai, who did some great programming as a teenager. He claims to have invented electronic mail in 1978 when he was fourteen. He might have done some clever programming, but electronic mail already existed by the time he did his thing. Independent invention happens all the time in technological history, and Shiva is but another example, except for one thing. He had his ideas a little later than others, and the other ideas ended up being more influential on subsequent developments. Shiva can proudly join the long list of geeky teenagers who had some great technical skills at a young age, did some cool stuff, and basically had little impact on anybody else.

Except that Shiva won’t let it go. This looks like nothing more than Shiva’s ego getting in the way of an unbiased view.

Look, it is extremely well established that the email systems in use today descended from a set of inventors who built on each other’s inventions. They did their work prior to 1978. For example, it is well documented that the “@” in every email first showed up in 1971. Ray Tomlinson invented that. Others thought it was a good idea, and built on top of the @. We all have been doing it ever since. Moreover, this is not ancient history. Tomlinson has even written about his experiences, and lots of people know him. This is easy to confirm.

Though Ayyadurai’s shenanigans were exposed a few years ago, he persists. In the HuffPo piece he yet again pushes the story in which his inventions played a central role in the history of electronic mail. This time he has a slick infographic telling his version of things, and he managed to get others to act as shills for his story. He also now accuses others of fostering a conspiracy against his views in order to protect their place in history and deny him his. As if. “A teenager invented electronic mail” might be a great headline, and it might sound like a great romantic tale, but this guy is delusional.

One teenager invented the fundamental insights that we all use today? No, no, and many times no. This is just wrong.

BTW, I have met some of these inventors, and interviewed some of them too (for a book I am writing), and, frankly, the true inventors deserve all the credit they can get. This guy, Ayyadurai, deserves credit for being clever at a young age, and nothing more.

Look, if you do not believe me, then read the experts. Many careful historians have spent considerable time exposing the falsehoods in this tale. If you are curious, read this by Tom Haigh, a respected and established computer industry historian, or this and this and this by Mike Masnick, who writes the Techdirt blog about various events in tech (including this Huffington Post episode). These two lay out the issues in a pretty clear way, and from different angles, so they cover the territory thoroughly.

Look at the dates of those posts. These falsehoods were exposed two years ago, and are online. This is not news. Because these two have done the hard work, it takes approximately fifteen to twenty minutes to figure out what happened here.

And that is where we are today. HuffPo published the BS about this guy, authored by a few shills. According to Masnick, who makes it his business to follow this sort of thing, HuffPo has been informed of its error. Yet HuffPo has done nothing to disavow the story.

If I had to guess, there simply is nobody at HuffPo with enough time or energy to check on the accuracy of a story. The staff probably has moved on to other things and does not want to be bothered with a little historical article. That is the thing about quality: it is costly to keep it up everywhere, even on articles few readers really care about.

At the end of the day, Huffington Post published another story, one among many, on a niche topic: the history of electronic mail. Does HuffPo lose very much from publishing one historically inaccurate story? No, not really. Only a few of us know the truth, and only a few of us are sufficiently disgusted and angry. HuffPo’s reputation will take a hit with only a few readers.

But I will never trust them again. They have lost my trust completely. It will be very difficult to earn back.

You probably guessed how this post would end, so here it is: I suggest that you not trust HuffPo ever again. Maybe if enough people react to this stupidity, HuffPo will invest in some journalistic integrity. Or maybe it will just lose readers a little bit at a time on hundreds or thousands of stories, each with its own little issues, and die a slow death from its own carelessness. Maybe.

****************

1:22pm, 9/6/2014

Post script: Sometime after this was written, Huffington Post took down the offending material. That raises an interesting question about whether I should trust them again. On the one hand, I totally respect them for acting. Let’s give them credit. On the other hand, those posts were up for several weeks. I admit that it will be hard to shed this sense of skepticism. You can make up your own mind. SG


January 30, 2014

Google and Motorola in the Wake of Nortel

Google has announced a plan to sell Motorola to Lenovo for just under three billion dollars. Google paid more than twelve billion only two years ago, and many commentators have declared that this is Larry Page’s first big bet, and potentially his first big experiment to go sour.

Even the best reporters characterize the strategy incorrectly, however, and forget the motivation. The best recognize that the acquisition had several motives, but still use wishy-washy language to discuss the priorities. Here is the language of the New York Times, for example:

“The deal is not a total financial loss for the extremely wealthy Google, which retains patents worth billions of dollars, but it is a sign of the fits and starts the company is experiencing as it navigates business in the mobile age, which has upended technology companies of all types.

In addition to using Motorola’s patents to defend itself in the mobile patent wars, Google pledged to reinvent mobile hardware with Motorola’s new phones, and directly compete with Apple by owning both mobile hardware and software.”

I have a bone to pick here. Even the best reporters are not recalling the sequence of events. Public policy shares some of the blame, and viewed from that perspective, much of this looks like a waste of resources. Let’s get that interpretation on the table by doing a bit of a flashback, shall we?

December 31, 2013

End the broadband panic meme

Filed under: Editorial,Internet economics and communications policy — Shane Greenstein @ 9:22 am

It happens about every twelve months, maybe more frequently of late. Another reporter writes about how the US is falling behind international rivals in the supply of broadband. I am growing very tired of this meme, and of answering emails from friends wondering if it is so. There are serious issues to debate, but this standard meme takes attention away from them.

The latest version of this article came from the New York Times. It ran under the title “US Struggling to Keep Pace in Broadband Service,” and it brought out the usual concern that US growth will fall behind if the US does not have the fastest broadband in the world. If you are curious, read this.

Why is this tiring? Let me count the ways.

First, while it is irritating to have slow service at home, US productivity does not depend much on that. Household broadband is less important for economic growth than broadband to business. What really matters for productivity is speed to business. The number of minutes it takes a household to download Netflix is statistically irrelevant for productivity growth in comparison to the time it takes to download information to conduct business transactions with employees, suppliers, and customers. We get measures of broadband speed to homes because that is what we can easily measure, not because it really matters.

Is there any sense that US business Internet is too slow? Well, perhaps the speed of a household’s Internet says something about the speed of business Internet, but I doubt it. In all the major cities of the US there is no crisis at all in the provision of broadband. Broadband speeds in downtown Manhattan are extraordinary, as they are on Wall Street. The Silicon Valley firms that need fast speeds can get them. Same with the firms in Seattle. Hey, the experiments with Google Fiber in Kansas City raise questions about whether entrepreneurship will follow the installation of super high speeds, but that is an open question. It is an interesting question too, but not a crisis.

These issues do arise, however, in some small and medium cities in the US, and in a few rural areas where there is no broadband. In some places satellite is the best available option, or some fixed wireless solutions are available too. These can be OK but not great for many business needs, and they can also limit what a business can do. These issues have been present for a while, so most of the businesses that really needed the speed simply left the areas where speeds were slow. As a country we just let that happen many years ago, and, frankly, it will be hard to reverse at this point. (It made me sad at the time; I even spent some time doing research on the topic for a while, though I have stopped in the last few years.) Again, this is an interesting question, but it is only a crisis in the places where it matters, not at a national level.

Second, as for household speeds, many people simply do not want high speeds and do not want to pay for them. There is plenty of evidence that those high speed Korean lines did not get used right away, and lots of fiber goes to waste. Having said that, there are some interesting open questions here as well, namely, what type of speeds are people willing to pay for at their homes? Let’s not get panicked over supply if there is little demand, OK?

The last serious study of the willingness to pay for speed was done at the end of 2009, as part of the national broadband plan. The study was definitive at the time: only a few households were willing to pay for high speeds. But, of course, that was a while ago. What has changed since then? Well, arguably, demand for data-intensive stuff has risen. That is not coming from growth in torrents; recent data are pretty clear about that. It is coming from Netflix, YouTube, and Facebook. Once again, that is a great open question, but panic about speed does nothing to focus on it. Instead, let’s study demand and whether it goes unsatisfied.

Third, if we study demand, can we all acknowledge that demand is very skewed in the US? The top 10% of users account for far more than 50% of household data, and on most systems the top 20% of users account for more than 80% of data use. Usage is growing everywhere from the median to the top of the skew, so there is good reason to think demand for data is growing among all major users. Will there be capacity to handle those intensive users of data? The answer is unclear.

That hints at an open question that is worth debating. Because flat rate pricing has been so common across the US, not everyone pays the same price per megabit. The top 10% of users pay very low prices per megabit. Even if total expenditure per month for the biggest users is twice as expensive in the US as in other countries, it is still pretty cheap. Just to be clear, I am not saying it is too high or too low, nor am I making any comment about whether markets are competitive enough in the US. I am just saying that the international comparisons are flawed for big users in the US.

That hints at an even more challenging question. For better or worse, it is these high-intensity users, especially households with young adults or teenagers, who seem to be the early users of new services. So the US entrepreneurial edge might actually be coming from the low prices and high speeds our biggest users have enjoyed all these years. Are we in danger of ending that? That is the provocative question to ask, and it is not about the general speed in the country. It is about the highest speeds to select users.


Finally, my last problem with this meme: it is old, tired, and potentially irrelevant. Maybe this concern about wireline is all a tempest in a teapot. Many observers believe wireless is the new frontier for innovative applications. Maybe five years from now everybody will look back on this panic and just shake their heads. How can we have an entire article about broadband speeds to households without a peep about the experience most people have on a daily basis, which is determined by wireless speeds?

Just something to think about.


April 21, 2013

Crowd-Sourcing and Crowd-Hunting and the Boston Marathon Bomb Brothers.

Filed under: Editorial,Uncategorized — Shane Greenstein @ 9:03 pm

How did the Boston Marathon bombing brothers get caught? The release of videos played a key role. The decision to release the videos has been called many things: a risky decision, a calculated bet, a crucial turning point, and a fortunate use of crowd-sourcing.

Let’s not get sloppy with the use of modern lingo. The release of the video might have been risky and calculated, and it even might have been crucial, but let’s not get carried away.

Crowd-sourcing had little to do with what happened. Collective intelligence comes in many different sizes and flavors, but let’s not give it credit when it does not deserve it.

Crowd-hunting is a more appropriate term. This will take a minute to explain.

Look, this is partly a reaction to a lovely article in the Sunday New York Times, which contained a wonderful recounting of this decision (written by Michael S. Schmidt and Eric Schmitt). “Manhunt’s Turning Point Came in the Decision to Release Suspect’s Images,” said the headline.

Paragraph six contains one sentence. Here is a partial quote: ”The decision…was one of the most crucial turning points in a remarkable crowd-sourcing manhunt for the plotters of a bombing that killed three people and wounded more than 170.”

Remarkable? Yes. Crowd-sourcing? No.

According to the online version of Merriam-Webster’s dictionary, crowd-sourcing is “the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people and especially from the online community rather than from traditional employees or suppliers.”

In practice it is also a cooperative activity. Usually a person or firm poses the problem, solicits and manages the help provided by the crowd, and takes care of the other details, such as making the contest rules, if any. Sometimes there are explicit awards and sometimes not.

Such as it was, the crowd was cooperative in Boston, to be sure. Everyone wanted to help if they could. Many sent in their videos of the finish line and tried to help the investigation.

But there were crucial differences between what happened after the Boston Marathon Bombing and crowd-sourcing.

• Most crucially, the cooperation only went so far. The suspects did not want to be found. The definition for crowd sourcing includes nothing about the “solution” putting up active resistance.

• Here is another difference. There also was (sort of) a leader soliciting ideas and managing the contributions, but it was hardly well organized. To be sure, the feds and the state of Massachusetts and the city of Boston cooperated in some news conferences, and in the strategies to release video and photos. Every participant described this as chaotic. Not because anybody wanted it that way; that is just how things are in a major event.

• Also, more trivially, only a small part of this employed online methods and communications. The news media had a huge role, not just one web site releasing details and collecting suggestions. And it was not just CNN prattling away on every little detail. Some of the media were perfectly happy to amplify any little thing, even false rumors. For example, the New York Post ran a headline “Bag Men” with a circle around the picture of some poor guy who had nothing to do with the bombing. The competitive dynamic between the various news outlets played a key role in blowing many facts out of proportion, and in setting the crowd off in both right and wrong directions.

• There is also this little problem: the actual facts don’t fit the label of successful crowd-sourcing. After all, the big break came when the brothers hijacked a car, and released the owner after driving with him for a while. Not killing the car-owner showed that the brothers still had some measure of humanity in them, but releasing him also showed they were not thinking clearly. They had talked about the bombing in front of the car-owner. Once he was released he called 911, and police put out an all-points bulletin. The owner gave lots of details about his own car. The police spotted it a few minutes later, and that directly led to the death of the older brother.

• Facts get in the way again on the second big break. After the shooting on Thursday and the chase, the governor asked everyone to stay inside on Friday. This was supposed to help the police locate the second brother. This draconian measure was lifted after an entire day because law enforcement concluded it had failed. No clue had emerged as to the second brother’s whereabouts. Ten minutes later the owner of a boat in Watertown went outside to get a breather and found the injured brother in the boat in his backyard. In other words, this success was a byproduct of giving up on the lock-down, not a strategic or deliberate use of crowds at all. The police were no longer using sourcing. Indeed, sourcing had not been allowed to work all day on Friday, since everyone had been asked to stay inside, which is quite the opposite.

The most we can say is that there was an attempt to use sourcing to gather information in order to identify the suspects. The release of the photo did yield many useful clues, and set events in motion. It also probably played a role in the events at MIT, which led to the tragic death of a police officer. In other words, crowd-sourcing acted as a catalyst, but it did not play much of a role beyond that.

Crowd-hunting is a more appropriate term to describe what transpired in Boston. A working definition might be the following: “The practice of obtaining needed services, ideas, or content related to an unsolved crime by soliciting contributions from a large group of people, often involving one or more government actors, typically using a variety of media to communicate needs and relay updated information to the public.”


January 9, 2013

The FTC and Google: Did Larry Learn his Lesson?

The FTC and Google settled their differences last week, putting the final touches on an agreement. Commentators began carping from all sides as soon as the announcement came. The most biting criticisms have accused the FTC of going too easy on Google. Frankly, I think the commentators are only half right. Yes, it appears as if Google got off easy, but, IMHO, the FTC settled at about the right place.

More to the point, it is too soon to throw a harsh judgment at Google. This settlement might work just fine, and if it does, then society is better off than it would have been had some grandstanding prosecutor decided to go to trial.

Why? First, public confrontation is often a BIG expense for society. Second, as an organization Google is young and it occupies a market that also is young. The first big antitrust case for such a company in such a situation should substitute education for severe judgment.

Ah, this will take an explanation.

May 20, 2012

A dumb compromise to save the ACS and Economic Census

Filed under: Editorial,Uncategorized — Shane Greenstein @ 9:02 pm

Last week I commented in this space about the Tea Party’s desire to make a symbolic cut in government by eliminating the American Community Survey and the Economic Census at the US Census Bureau. This would change economic statistics in the US, upending a system that has been in place since the end of World War II. And it really makes no sense for pro-business Republicans to be leading the charge, since business is one of the primary beneficiaries of all this data about the US population and business.

Over the weekend, the economic correspondent for the New York Times wrote an opinion piece. She pointed out how many businesses had come out against this change, including the United States Chamber of Commerce, the National Retail Federation and the National Association of Home Builders.

The article did give a hint about what might actually be going on. To quote the article:

“Republicans may hope that when the Senate and House bills go to a conference committee, a final compromise will keep the survey, but make participation in it voluntary. Under current law, participation is mandatory.”

That observation is rather amazing, since there is no mystery to the answer. That question has been studied. Let me quote from the summary of a report on the consequences from imposing voluntary participation:

* “A dramatic decrease occurred in mail response when the survey was voluntary. The mail cooperation rate fell by over 20 percentage points and the final response rate after all three modes of data collection was about four percentage points lower…”
* “The estimated annual cost of implementing the ACS would increase by at least 38 percent if the survey was voluntary and the survey maintained the current reliability levels.”
* “The use of voluntary collection methods had a negative impact on traditionally low response areas that will compromise our ability to produce reliable data for these areas and for small population groups such as Blacks, Hispanics, Asians, and American Indians and Alaska Natives.”

Lower reliability at higher cost seems like a dumb thing to aspire to produce. Like I said last week, this proposal is just stupid.

May 13, 2012

Do not cut the American Community Survey: an editorial

Filed under: Editorial — Shane Greenstein @ 10:16 pm

The House Republicans recently voted to remove funding from the US Census Bureau. According to news reports, this action was motivated by a mix of Tea-Party symbolism and the legacy of a long-standing fight with the Census Bureau.

This post will present a short editorial. While I have sympathy for part of the motivation for this action – namely, the desire of every household to be left alone – it seems overwhelmed by everything pointing in the other direction.

Let me put it this way. Though I tend to be a man of moderate language, removing funding looks very stupid. In this case I can bring many professional and personal observations to the topic.


February 2, 2012

What would you say to David Cameron about Google?

Filed under: Editorial,Internet economics and communications policy — Shane Greenstein @ 11:05 pm

Why was Google invented in the US and not the UK? Jonathan Haskel, a professor at Imperial College London, asked that question in his most recent blog post. What motivated him to ask it? He got a little nudge from his Prime Minister, David Cameron, who asked the same question.

Haskel justifiably hesitates to put too much emphasis on any single factor. At the same time, he wants to use the example to suggest that aspects of copyright law play a role. In particular, he stresses that the US has a legal notion called “fair use” while the UK lacks such a notion.

The argument stresses that fair use eliminates the need to contract every time a new use or user builds an incremental innovation using a small part of copyrighted material. This matters for certain online innovations, such as innovative search tools. More generally, fair use reduces the costs of innovations that make use of lots of little bits of copyrighted material. In the absence of fair use the innovator would have to contract with every copyright holder, which can be cumbersome or prohibitively expensive. Haskel’s argument stresses that the equivalent UK notion is much narrower, which raises contracting costs and, thus, stifles experimentation in many online activities.

I do not have any reason to disagree with this insight. The characterization of US copyright law is reasonable for this argument. However, not being an expert on UK copyright law, all I can say is that Haskel’s argument sounds plausible to me.

I would like to add one observation and pose two questions.

The observation summarizes something I said in a prior post about Google’s early history. Summarizing that earlier post, Google’s success did not arise from a single epiphany. It came from the accumulation of many innovations. Google’s success had many fathers, including Google’s imitation of, and improvement over, innovations done by Overture. That was accomplished with multiple inventions, including PageRank, as well as investment in more speed and reliability. It also included further development of its second-price quality-weighted position auction. NSF funding paid for the initial advance, and Silicon Valley’s ecosystem played a big role too. The efforts of many clever computer scientists played a role, as did the efforts of many bloggers.

Next, let’s pose a question. Does the US law for safe harbor play a role? Does the UK have anything equivalent? The US rules largely were defined in the DMCA, which passed in 1998. While I do not think the US safe harbor rules played a role in Google’s early growth, these processes certainly played a role in YouTube’s experience. Their importance has been widely recognized too. It has come up prominently in recent issues about reforming copyright law in the face of piracy.

The argument for safe harbors (adopting and executing routine procedures for taking down copyrighted material limits a hosting site’s liability) goes something like this: a well defined procedure for avoiding liability helps innovators by giving them legal certainty about what does and does not violate another copyright holder’s rights. Does the UK have something equivalent?

The second question concerns antitrust. Does the application, or lack of application, of antitrust law play any role in the difference between the US and UK experience? I usually think of US antitrust law as friendly to innovators, and particularly focused on keeping channels open, which helps entrepreneurs. It also leads to deconcentration of ownership. How does that compare with the UK?

I also ask this question partially as a result of a recent court decision in France. Yes, France has nothing to do with the UK, but this example is just too weird to go without mention, so please forgive the lack of segue. The French court found that Google violated antitrust law because it gave away its maps for free. A French mapmaker complained and won its suit, apparently by convincing a judge that free maps violated France’s antitrust laws. I have seen some wacky court decisions over the years, but on the surface this one sure seems inexplicable. Is the UK as wacky as all this?

To summarize, Jonathan Haskel asks a great question, motivated by his Prime Minister’s question. Why did Google start in the US and not the UK? He and I agree that multiple factors ultimately played a role. Haskel also suggests the definition of fair use has something to do with it. I wonder if safe harbors and antitrust also play a role.

What do you think?

January 29, 2012

Invasion of the Internet Body snatchers

Filed under: Editorial,Internet economics and communications policy — Shane Greenstein @ 11:05 pm

If you have been musing about the misguided policies in SOPA and PIPA that generated protests, what do you make of misguided international governance of the Internet? This article in Politico raises an interesting possibility: that the ITU will insert itself into Internet governance, ostensibly to coordinate security and taxation across countries. As is well known, numerous countries would like to see this happen because it would allow them to use the ITU to indirectly control pieces of the Internet.

I bet the same people who protested SOPA and PIPA would view this decision-making body with about the same paranoia as Donald Sutherland in the remake of “Invasion of the Body Snatchers.” Like Sutherland, they will want to stay awake forever, lest the aliens come in while they are asleep and steal the independence of the Internet. (Alright, maybe that stretches the metaphor a tad, but you get the idea.)

Of course, there is a key difference. The ITU is one of those international organizations that does not have to answer to anybody in particular. None of its decision makers have to stand for reelection. None of the leaders have much to fear from any web-based protest.

I do not know about you, but if the ITU sticks its nose into Internet governance I do not see this turning out well.

Don’t get me wrong. I have met several people from the ITU over the years. All of them have been very polite and thoughtful and well-spoken. But that is still not the same as being held accountable. 

How would the Internet community react to more international governance, such as from the ITU? If I had to guess — and this is not going out on much of a limb — the same people who mistrust a few Hollywood lobbyists with the text of a law about piracy will trust the decisions of many non-US governments even less. Will they bend their behavior to abide by a directive that emerged from negotiations between a government in Paris and a government in Beijing or Moscow? How about, say, Kinshasa or Caracas? Yeah, right.

I am just saying. The same instincts that led Sergey Brin and Larry Page to defy Beijing — and, mind you, at some financial loss to their firm — are the ones that fueled the SOPA and PIPA revolt. These sentiments exist widely.

It is nothing personal, nor foreign-phobic. These sentiments have been around for quite some time. For as long as I have been watching policy making in this space — which is approximately two decades — there has always been a big and vocal community that guards its independence. This community is thoughtful and a bit defiant, and, importantly, suspicious of any bottlenecks or concentration of authority.

As David Clark so succinctly and graciously summarized the sentiment in 1992:

We reject: kings, presidents and voting. We believe in: rough consensus and running code.

Sure, the venue for the recent protests is new, and so is the instrument for protesting. But read the online chatter about SOPA and PIPA. It has the same tone and sensibility; less revolution, more evolution in target and means. The ITU would get as much revolt today as any other authority.

Here is what I mean. Over the years various firms and authorities have become the target for this sensibility. More than two decades ago (at the time of Clark’s speech) the targets were the largest telephone companies, especially AT&T in New Jersey, and the global standards bodies trying to coordinate technical developments across countries in the early 1990s. Among the many concerns at the time, there was deep suspicion of the way any one decision maker might impose its interests too strongly, ruining the accomplishments of the community.

These same instincts would resist the ITU, should it try to insert itself.  Different venue, but the same protest.

In the article, Phil Weiser gets it right on target: “Part of the challenge is to defend the bottom-up governance model.”

Donald Sutherland understood the problem with the defense in Invasion of the Body Snatchers. It means never going to sleep.

September 20, 2011

Puzzling over big wireless carrier mergers: An Editorial

Filed under: Editorial,Internet economics and communications policy — Shane Greenstein @ 10:20 pm

Let’s talk about AT&T’s proposal to merge with T-Mobile. Why do the parties involved still consider this merger viable?

Executives at AT&T seemed to think this merger was a good idea many months ago. For all I know, that might have been the right conclusion with the information they had then. But that was then, and this is now, and too much information has come to light to hold that conclusion any longer. Based on what we know now the proposal does not make business sense.

This blog post will argue what should be obvious to any close observer of events, and certainly to the management at AT&T: there is no longer a viable business case for this merger.

This blog post also will argue that executives at T-Mobile should begin planning to run their business as a stand-alone entity. They always had a viable business, and that holds even more so now, since they will get a reasonable infusion of cash from the break-up of this deal.

How did the executives at AT&T get into the present pickle? They took a strategic gamble with the US legal system and lost. In their own internal deliberations today they should be acknowledging the loss and — for lack of a better phrase — simply moving on. That is what their business needs.

So I am puzzled. Why haven’t all the parties declared victory and gone home? This post will consider the question.

