Virulent Word of Mouse

September 6, 2014

HuffPo and the Loss of Trust

Filed under: Editorial,Online behavior — Shane Greenstein @ 10:59 am

You may not have noticed, but recently the Huffington Post has been the poster child for lack of journalistic integrity. The actual details may appear small to many people, but not to me. HuffPo made a sloppy journalistic error, publishing a historically inaccurate story built on a claim many experts have already proven wrong. The organization does not seem willing to retract it. I will never trust this source again.

This post will get into the details in a moment, but this is a blog about digital economics, so let’s review the relevant economics. Let’s start with the economics of trust. Trust does not arise out of nowhere. Readers learn to trust a source of information. Said another way, trust arises because a news source invests in accuracy and quality. It is one of the greatest assets of a news source.

Trust is an unusual asset. It possesses an asymmetric property. It takes many little acts to build up its value, and very few bad acts to destroy it. Once lost it is also hard to regain.

As online news sources grabbed the attention of readers there has been concern about the loss of the type of quality reporting found in traditional news outlets. That is why many commentators have wondered whether online news sources like HuffPo could recreate the reputations of traditional newspapers and news magazines, which invested so heavily in journalists with deep knowledge of their topics. So went the adage: A high quality journalist could sniff out a lie or incomplete claim. A high quality reporter would defend the reputation of the news source. Readers trusted those organizations as a result of those investments.

That is also why journalistic integrity receives so much attention by managers in traditional newspapers. There are good reasons why newspapers react severely to ethical lapses and violations, such as plagiarism. Once trust is lost in a reporter, why would a reader trust that organization again? Why would a news organization put its trust further at risk by retaining that reporter? The asymmetries of trust motivate pretty harsh penalties.

So the concern went something like this: online news sources get much of their content for free or for very little money. That could be a problem because these sources do not have the resources to invest in quality reporting. How will they behave when quality suffers? Will readers punish them for lower quality material?

That is what gets us back to HuffPo’s behavior. Its reputation is on the line, but it is not acting as if it recognizes that it has lost my trust and the trust of several other readers. This behavior suggests it has not invested in quality, which aligns with the fears just expressed.

Now for the detail: HuffPo published a multipart history of email that is historically inaccurate. Yes, you read correctly. More specifically, a few of the details are correct, but those are placed next to some misleading facts, and these are embedded in a deeply misleading historical narrative. The whole account cannot be trusted.

The account comes from one guy, Shiva Ayyadurai, who did some great programming as a teenager. He claims to have invented electronic mail in 1978 when he was fourteen. He might have done some clever programming, but electronic mail already existed by the time he did his thing. Independent invention happens all the time in technological history, and Shiva is but another example, except for one thing. He had his ideas a little later than others, and the other ideas ended up being more influential on subsequent developments. Shiva can proudly join the long list of geeky teenagers who had some great technical skills at a young age, did some cool stuff, and basically had little impact on anybody else.

Except that Shiva won’t let it go. This looks like nothing more than Shiva’s ego getting in the way of an unbiased view.

Look, it is extremely well established that the email systems in use today descended from the work of a set of inventors who built on one another’s inventions. They did their work prior to 1978. For example, it is well documented that the “@” in every email first showed up in 1971. Ray Tomlinson invented that. Others thought it was a good idea, and built on top of the @. We all have been doing it ever since. Moreover, this is not ancient history. Tomlinson has even written about his experiences, and lots of people know him. This is easy to confirm.

Though Ayyadurai’s shenanigans were exposed a few years ago, he persists. In the HuffPo piece he yet again pushes the story in which his inventions played a central role in the history of electronic mail. This time he has a slick infographic telling his version of things, and he managed to get others to act as shills for his story. He also now accuses others of fostering a conspiracy against his views in order to protect their place in history and deny him his. As if. “A teenager invented electronic mail” might be a great headline, and it might sound like a great romantic tale, but this guy is delusional.

One teenager invented the fundamental insights that we all use today? No, no, and many times no. This is just wrong.

BTW, I have met some of these inventors, and interviewed some of them too (for a book I am writing), and, frankly, the true inventors deserve all the credit they can get. This guy, Ayyadurai, deserves credit for being clever at a young age, and nothing more.

Look, if you do not believe me, then read the experts. Many careful historians have spent considerable time exposing the falsehoods in this account. If you are curious, read this by Tom Haigh, a respected and established computer industry historian, or this and this and this by Mike Masnick, who writes the Techdirt blog about various events in tech (including this Huffington Post episode). These two lay out the issues in a pretty clear way, and from different angles, so they cover the territory thoroughly.

Look at the dates of those posts. These falsehoods were exposed two years ago, and are online. This is not news. Because these two have done the hard work, it takes approximately fifteen to twenty minutes to figure out what happened here.

And that is where we are today. HuffPo published the BS about this guy, authored by a few shills. According to Masnick, who makes it his business to do this sort of thing, HuffPo has been informed of its error. Yet, HuffPo has done nothing to disavow the story.

If I had to guess, there simply is nobody at HuffPo with enough time or energy to check on the accuracy of a story. The staff probably has moved on to other things, and does not want to be bothered with a little historical article. That is the thing about quality; it is costly to keep it up everywhere, even on articles few readers really care about.

At the end of the day, Huffington Post published another story, one among many, on a niche topic – the history of electronic mail. Does HuffPo lose very much from publishing one historically inaccurate story? No, not really; only a few of us know the truth, and only a few of us are sufficiently disgusted and angry. HuffPo’s reputation will take a hit with only a few readers.

But I will never trust them again. They have lost my trust completely. It will be very difficult to earn back.

You probably guessed how this post would end, so here it is: I suggest that you should not trust HuffPo ever again. Maybe if enough people react to this stupidity, HuffPo will invest in some journalistic integrity. Or maybe they will just lose readers a little bit at a time on hundreds or thousands of stories, each with little issues, and die a slow death from their own carelessness. Maybe.

****************

1:22pm, 9/6/2014

Post script: Sometime after this was written Huffington Post took down the offending material. That raises an interesting question about whether I should trust them again.  On the one hand, I totally respect them for acting. Let’s give them credit. On the other hand, those posts have been up for several weeks. I admit that it will be hard to lose this sense of skepticism. You can make up your own mind. SG


August 28, 2014

Baking the Data Layer

The cookie turned 20 just the other day. More than a tasty morsel of technology, two decades of experimentation have created considerable value around its use.

The cookie originated with the ninth employee of Netscape, Lou Montulli. Fresh out of college in June 1994, Montulli sought to embed a user’s history in a browser’s functions. He added a simple tool, keeping track of the locations users visited. He called his tool a “cookie” to relate it to an earlier era of computing, when systems would exchange data back and forth in what programmers would call “magic cookies.” Every browser maker has included cookies ever since.
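The mechanics are simple enough to sketch. In rough terms, a site's server asks the browser to remember a small piece of state, and the browser hands it back on every later visit to that site. The names and values below are invented for illustration.

```python
# Minimal sketch of the cookie exchange; names and values are made up.
from http.cookies import SimpleCookie

# 1. The server's response includes a Set-Cookie header.
set_cookie_header = "session_id=abc123; Path=/; Max-Age=86400"

# 2. The browser stores it...
jar = SimpleCookie()
jar.load(set_cookie_header)

# 3. ...and sends it back on the next request to the same site.
next_request_header = "; ".join(
    f"{name}={morsel.value}" for name, morsel in jar.items()
)
print("Cookie:", next_request_header)   # -> Cookie: session_id=abc123
```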

The cookie had an obvious virtue over many alternatives: It saved users time, and provided functionality that helped complete online transactions with greater ease. All these years later, very few users delete them (to the disappointment of many privacy experts), even in the browsers designed to make it easy to do so.

Montulli’s invention baked into the Web many questions that show up in online advertising, music, and location-based services. Generating new uses for information requires cooperation between many participants, and that should not be taken for granted.

The cookie’s evolution

Although cookies had been designed to let one firm track one user at a time, in the 1990s many different firms experimented with coordinating across websites in order to develop profiles of users. Tracking users across multiple sites held promise; it let somebody aggregate insights and assemble a broad picture of a user’s preferences. Knowing a user’s preferences held the promise of more effective targeting of ads and sales opportunities.

DoubleClick was among the first firms to make major headway into such targeting based on observation at multiple websites. Yet, even its efforts faced difficult challenges. For quite a few years nobody ever targeted users with any precision, and overpromises fueled the first half-decade of experiments.

The implementation of pay-per-click and the invention of the keyword auction—located next to an effective search engine—brought about the next great jump in precision. That, too, took a while to ripen, and, as is well known, Google largely figured out the system after the turn of the millennium.
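For readers who have not seen one, here is a toy sketch of a keyword auction that uses a simple second-price rule, where the winner pays roughly the runner-up's bid. It captures the flavor of these systems without reproducing any particular firm's implementation; the advertisers and bids are invented.

```python
# Toy second-price keyword auction; advertisers and bids are invented.
def run_keyword_auction(bids):
    """bids: dict of advertiser -> bid per click (in dollars)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    # Second-price rule: the winner pays (about) the runner-up's bid.
    price_per_click = ranked[1][1] if len(ranked) > 1 else top_bid
    return winner, price_per_click

bids = {"flower_shop_a": 1.50, "flower_shop_b": 1.20, "florist_c": 0.80}
winner, price = run_keyword_auction(bids)
print(winner, "wins the ad slot and pays", price, "per click")
# -> flower_shop_a wins the ad slot and pays 1.2 per click
```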

Today we are awash in firms involved in the value chain to sell advertising against keyword auctions. Scores stir the soup at any one time, some using data from cookies and some using a lot more than just that. Firms track a user’s IP addresses and the user’s MAC address, and some add additional information from outside sources. Increasingly, the ads know about the smartphone’s longitude and latitude, as well as an enormous amount about a user’s history.

All the information goes into instantaneous statistical programs that would make any analyst at the National Security Agency salivate. The common process today calculates how alike one individual is to another, assesses whether the latest action alters the probability the user will respond to a type of ad, and makes a prediction about the next action.
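A deliberately simplified stand-in for the kind of calculation just described, scoring a new user against past users and predicting a response from the most similar ones, might look like this (the features and data are invented):

```python
import math

# Invented behavioral features: counts of visits by site category.
past_users = [
    ({"sports": 9, "finance": 1, "travel": 0}, 1),  # 1 = clicked the ad
    ({"sports": 0, "finance": 7, "travel": 3}, 0),
    ({"sports": 5, "finance": 0, "travel": 5}, 1),
]

def cosine(a, b):
    """Cosine similarity between two sparse feature dictionaries."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_click_probability(new_user, history, k=2):
    # Average the outcomes of the k most similar past users.
    scored = sorted(history, key=lambda h: cosine(new_user, h[0]), reverse=True)
    top = scored[:k]
    return sum(outcome for _, outcome in top) / len(top)

print(predict_click_probability({"sports": 8, "finance": 2, "travel": 1}, past_users))
```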

Let’s not overstate things. Humans are not mechanical. Although it is possible to know plenty about a household’s history of surfing, such data can make general predictions about broad categories of users, at best. The most sophisticated statistical software cannot accurately predict much about a specific household’s online purchase, such as the size of expenditure, its timing, or the branding.

Online ads also are still pretty crude. Recently I went online and bought flowers for my wedding anniversary and forgot to turn off the cookies. Not an hour later, a bunch of ads for flowers turned up in every online session. Not only were those ads too late to matter, but they flashed later in the evening after my wife returned home and began to browse, ruining what was left of the romantic surprise.

Awash in metadata

Viewed at a systemic level, the cookie plays a role in a long chain of operations. Online ads are just one use in a sizable data-brokerage industry. That data also shapes plenty of the marketing emails a typical user receives, as well as plenty of offline activities.

To see how unique that is, contrast today’s situation with the not-so-distant past.

Consider landline telephone systems. Metadata arises as a byproduct of executing normal business processes. Telephone companies needed the information for billing purposes—for example, the start and stop time for a call, and the area codes and prefixes that indicate where a call originated and terminated. It has limited value outside of the stated purpose to just about everyone except, perhaps, the police and the NSA.
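In data terms, such a billing record is tiny. A hypothetical sketch:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    """Hypothetical billing record for a landline call."""
    originating_number: str   # area code and prefix identify where the call began
    terminating_number: str
    start: datetime
    end: datetime

    def billable_minutes(self) -> float:
        return (self.end - self.start).total_seconds() / 60

cdr = CallDetailRecord("312-555-0100", "847-555-0199",
                       datetime(2014, 8, 1, 9, 0), datetime(2014, 8, 1, 9, 12))
print(cdr.billable_minutes())  # -> 12.0
```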

Now contrast with a value chain involving more than one firm, again from communications, specifically, cellular phones. Cell phone calls also generate a lot of information for their operations. The first generation of cell phones had to triangulate between multiple towers to hand off a call, and that process required the towers to generate a lot of information about the caller’s location, the time of the call, and so on.

Today’s smartphones do better, providing the user’s longitude and latitude. Many users enable their smartphone’s GPS because a little moving dot on an electronic map can be very handy in an unfamiliar location (for example). That is far from the only use for GPS.

Cellular metadata has acquired many secondary values, and achieving that value involves coordination of many firms, albeit not yet at an instantaneous scale suggestive of Internet ad auctions. For example, cell phone data provides information about the flow of traffic in specific locations. Navteq, which is owned by the part of Nokia not purchased by Microsoft, is one of many firms that make a business from collecting that data. The data provide logistics companies with predictable traffic patterns for their planning.
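A stylized sketch of that repurposing, aggregating location pings generated for network operations into average speeds by road segment and hour, might look like this (all data and segment names are invented):

```python
from collections import defaultdict

# Invented pings: (road_segment, hour_of_day, observed_speed_mph)
pings = [
    ("I-90_mile_12", 8, 22), ("I-90_mile_12", 8, 18), ("I-90_mile_12", 14, 61),
    ("US-41_mile_3", 8, 35), ("US-41_mile_3", 8, 31),
]

def average_speeds(pings):
    """Aggregate raw pings into (segment, hour) -> mean observed speed."""
    totals = defaultdict(lambda: [0.0, 0])
    for segment, hour, speed in pings:
        totals[(segment, hour)][0] += speed
        totals[(segment, hour)][1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

for (segment, hour), speed in sorted(average_speeds(pings).items()):
    print(f"{segment} at {hour}:00 -> {speed:.1f} mph")
```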

Think of the modern situation this way: One purpose motivated collecting metadata, and another motivated repurposing the metadata. The open problem focuses on how to create value by using the data for something other than its primary purpose.

Metadata as a source of value

Try one more contrast. Consider a situation without a happy ending.

New technologies have created new metadata in music, and at multiple firms. Important information comes from any number of commercial participants—ratings sites, online ticket sales, Twitter feeds, social networks, YouTube plays, Spotify requests, and Pandora playlists, not to mention iTunes sales, label sales, and radio play, to name a few.

The music market faces the modern problem. This metadata has created a great opportunity. The data has enormous value to a band manager making choices in real time, for example. Yet, the entire industry has not gotten together to coordinate use of metadata, or even to coordinate on standard reporting norms.
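To see what a standard reporting norm would even require, consider a toy sketch that maps differently shaped feeds into one shared schema. The field names and numbers are invented; the hard part is not the code but getting everyone to agree on the format.

```python
# Invented feeds with inconsistent field names, as different services might report them.
spotify_feed = [{"track": "Song A", "artist": "Band X", "streams": 120_000}]
youtube_feed = [{"title": "Song A", "channel": "Band X", "views": 45_000}]
itunes_feed  = [{"song": "Song A", "performer": "Band X", "units_sold": 900}]

def normalize(record, mapping, source):
    """Rename fields into a shared schema: track, artist, count, source."""
    return {
        "track": record[mapping["track"]],
        "artist": record[mapping["artist"]],
        "count": record[mapping["count"]],
        "source": source,
    }

unified = (
    [normalize(r, {"track": "track", "artist": "artist", "count": "streams"}, "spotify") for r in spotify_feed]
    + [normalize(r, {"track": "title", "artist": "channel", "count": "views"}, "youtube") for r in youtube_feed]
    + [normalize(r, {"track": "song", "artist": "performer", "count": "units_sold"}, "itunes") for r in itunes_feed]
)
print(unified)
```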

There are several explanations for the chaos. Some observers want to blame Apple, as it has been very deliberate about which metadata from iTunes it shares, and which it does not. However, that is unfair to Apple. First, they are not entirely closed, and some iTunes data does make it into general use. Moreover, Apple does not seem far out of step with industry practices for protecting one’s own self-interest, which points to the underlying issue, I think.

There is a long history of many well-meaning efforts being derailed by narrow-minded selfishness. For decades, sampling another performer’s song at any significant length has triggered a copyright issue that seemingly should have been easy to resolve. Instead, the industry has moved to a poor default solution, requiring samplers to give up a quarter of royalties. With those types of practices, there is very little sampling. That seems suboptimal for a creative industry.

Composers and performers also have had tussles for control over royalties for decades, and some historical blowups took on bitter proportions. The system for sharing royalties in the US today is not some grand arrangement in which all parties diplomatically compromised to achieve the greater good. Rather, the system was put in place by a consent decree settling an antitrust suit.

If this industry had a history of not sharing before the Internet, who thought the main participants would share metadata? Who would have expected the participants to agree on how to aggregate those distinct data flows into something useful and valuable? Only the most naive analyst would expect a well-functioning system to ever emerge out of an industry with this history of squabbling.

More generally, any situation involving more than a few participants is ripe for coordination issues, conflict, and missed opportunity. It can be breathtaking when cooperation emerges, as in the online advertising value chain. That is not a foregone conclusion. Some markets will fall into the category of “deals waiting to be done.”

 

The systems are complicated, but the message is simple. Twenty years after the birth of the cookie, we see models for how to generate value from metadata, as well as how not to. Value chains can emerge, but should not be taken for granted.

More to the point, many opportunities still exist to whip up a recipe for making value from the new data layer, if only the value chain gets organized. On occasion, that goal lends itself to the efforts of a well-managed firm or public efforts, but it can just as easily get neglected by a squabbling set of entrepreneurs and independently minded organizations, acting like too many cooks.

Copyright held by IEEE. To view the original, see here.


May 26, 2014

Did the Internet Prevent all Invention from Moving to one Place?

The diffusion of the internet has had varying effects on the location of economic activity, leading to both increases and decreases in geographic concentration. In an invited column at VoxEU, Chris Forman, Avi Goldfarb and I present evidence that the internet worked against increasing concentration in invention. This relationship is particularly strong for inventions with more than one inventor, and when inventors live in different cities. Check out the post here.


 

April 23, 2014

The Fault Lines Along Fast Lanes

Until recently, a fast lane from a broadband ISP was a remote possibility in the US. ISPs had to give data equal treatment, regardless of the source, and could not offer faster delivery for a higher price while giving slower service as a default.

Although fast lanes were allowed by regulators a few years ago in wireless networks, the carriers hesitated to offer them. In December 2013, AT&T Wireless broke with the norm and announced just such a program. FCC regulations forbidding fast lanes at landline broadband ISPs had also prevented them, but a January 2014 US appeals court decision struck down those regulations.

Is that a good or bad trend? The answer depends on who’s talking. Critics of government regulation despise the rules forbidding fast lanes, whereas net neutrality supporters view the presence of fast lanes as a nightmare come to life.

Legal and political aspects of this topic typically get most of the attention, as do the implications for the variety of speech online. Most reporters find these aspects interesting, and understand them. However, the economics of fast lanes receives less attention. That is a surprise, because the economics is not very difficult, and it’s worth understanding. It illuminates the fault lines between many different points of view.

Mirrors and servers

The public Internet has evolved considerably since the days when the design for packet networks presumed that the message did not have to arrive at an inbox immediately. Users today prefer and expect speedier services. That goes for more than just IP telephony and video chat, where users notice the smallest delay. It also holds true for video, such as YouTube and many online games. Many providers believe it also affects the bottom line—namely, that users switch services if they do not get fast delivery of data.

Long before fast lanes became a real possibility, many participants in the Internet made investments aimed at reducing delays. For example, for some time now, Akamai has sold a well-known approach to improving speed. Their service also defines the first fault line, so this is a good place to start the discussion. Opponents to net neutrality ask why Akamai can operate a business to speed up data delivery but a carrier cannot.

Akamai’s service places servers inside ISPs’ networks, closer to households. Any seriously large Internet content firm must buy these services, and it is considered a cost of doing business online. Many ISPs like working with Akamai, because their customers experience better service without much investment from the ISP.

That is not the only method for speeding up data. For example, Google has bypassed Akamai’s charges in many locations by building its own data network to ISPs. Netflix has recently sought to do the same, though it is not quite done (because it has not successfully negotiated a presence with every US ISP). Any gathering of more than three Internet engineers will generate discussion of even more potential solutions in the cloud. Amazon built a content delivery network with enormous geographic range. Microsoft has similar investments and aspirations, as does IBM. The list goes on.
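The logic behind all of these approaches is the same and easy to sketch: among the servers holding a copy of the content, route the request to the one the user can reach fastest. The sketch below is purely illustrative, not any particular CDN's algorithm; the latency numbers are invented.

```python
# Hypothetical latency measurements (ms) from a household to candidate mirrors.
mirrors = {
    "origin_datacenter_us_west": 95,
    "edge_server_inside_isp": 8,
    "edge_server_nearby_city": 21,
}

def pick_mirror(latencies_ms):
    """Route the request to the lowest-latency copy of the content."""
    return min(latencies_ms, key=latencies_ms.get)

print(pick_mirror(mirrors))  # -> edge_server_inside_isp
```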

That leads to the deeper question. The last few years have witnessed robust experimentation among distinct approaches to functional improvement, and these might be either complements to, or substitutes for, each other. Accordingly, carriers have had two roles. They act as a firm whose users benefit from faster delivery, and they act as a supplier that could choose to cooperate—or refuse to cooperate—with solutions offered by others.

When a carrier had no investments in fast lanes, it had every reason to cooperate with solutions offered by others. Will that change if the carrier has its own fast lane?

The answer defines a fault line between points of view. Some observers label this a possibility that might never arise. They want a regulatory response only when a problem emerges, and otherwise they anticipate that a regulator will err. Net neutrality supporters think regulators have an obligation to protect the Internet. Advocates worry that introducing fast lanes messes with a system that already works well. They do not trust carriers to cooperate with solutions that might substitute for a fast lane business or threaten an investment in some way.

Competition and monopoly

The next fault line has to do with the role of money. Defenders of fast lanes expect them to become a cost of doing business for content firms, and forecast that fast lanes will be profitable and generate more investment. Opponents have the same forecast about profitability, but a different interpretation. They worry that fast lanes will lead to an Internet where only rich firms can deliver their content effectively.

This concern tends to get plenty of press, and a few rhetorical questions illuminate the fault line. Will the default speeds offered by ISPs be good enough for startups or for small specialty websites? One side believes that the defaults will be good enough, whereas the other believes that fast lanes will lead ISPs to neglect investing in their slow services.

One’s point of view about the state of competition for ISPs has a big role in interpreting the role of money. Some believe a competitive ISP market would melt away most problems. Others argue that belief about competitive ISP markets is a fantasy and masks many dangers.

The belief in competition is not a belief in magic, so it is worth examining. Rather, this side views competition as a painful process. In competitive markets, customers substitute into alternatives if they do not like what a supplier does. Suppliers hesitate to do things that make their users angry. In other words, ISPs would compete for customers by offering better fast lanes. In this view, users would get angry if they perceived that carriers were slowing down content from firms they cared about, and angry users would find another carrier.

Where is the fault line? Recognize the two key factors that make ideal competitive markets operate well—namely, transparency and the availability of many user options.

Just about everybody is in favor of transparency, but not necessarily in favor of rules that require more of it. Those with faith in competitive processes tend to see the merits in nothing more than a few light-handed requirements, such as programs to facilitate measuring the speed of different ISPs. The other side asks for much more, such as the publication of all fast lane contracts (more on that later).
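Measuring an ISP's delivered speed is, at its core, just timing a known transfer. A rough sketch follows; the URL is a placeholder, and real measurement programs control for many confounds this ignores.

```python
import time
import urllib.request

def measure_download_mbps(url, max_bytes=5_000_000):
    """Download up to max_bytes and report throughput in megabits per second."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        data = response.read(max_bytes)
    elapsed = time.monotonic() - start
    return (len(data) * 8 / 1_000_000) / elapsed

# Placeholder test file; a real program would use a dedicated measurement server.
print(measure_download_mbps("http://example.com/testfile.bin"))
```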

As for the second concern about options, consider the key open question: Do users have many options available to them, or do they face de facto monopoly ISP markets? Once again, there are different beliefs about the preponderance of competition and monopoly found throughout locales of the US. Those who presume that competition is inadequate lack sympathy for leaving ISPs alone (versus those who presume it is adequate).

That also leads to different interpretations of how lucrative fast lanes will be. Supporters of fast lanes say that ISPs should charge whatever the market will bear, and competition will discipline pricing. Opponents say that the monopolies emerged from granting public franchises and use of public rights of way, and characterize high prices as misuse of utility franchises.

A classic debate about government merger policy also arises. Net neutrality supporters argue that fast lanes give ISPs artificial incentives to consolidate in order to increase their bargaining leverage with content providers, thus concentrating economic power in ISPs. Net neutrality opponents do not see anything wrong with large ISPs. In a competitive market, size is irrelevant.

Mixed incentives

The foregoing leads into the last fault line in discussions about fast lanes—namely, views about mixed incentives at carriers. A mixed incentive arises when a carrier distributes a service that substitutes for one available on the public Internet.

Many ISPs have a thriving broadband service, provide video on demand, and make a pretty good margin on both services. Will most cable firms want to sell a fast lane service to Netflix at a low price? If the carrier did not make money on video on demand, then its price for a fast lane for Netflix would be lower, and the same goes for entrepreneurial firms offering video services. That also begins to suggest the intuition behind the concern that cable firms will tilt their other actions against online video to protect their existing businesses.

Mixed incentives also come up in discussions about scrutinizing carrier contracting practices. To put this fault line in perspective, consider a hypothetical scenario: What would happen after a carrier sells a fast lane to, say, ESPN? Can anyone else expect the same terms, even Netflix? Yet again, one side argues that competition will solve these issues, and the other sees a need for regulatory intervention to make terms of fast lane contracts public.

A mixed incentive also can emerge when a carrier has an economic incentive to protect its partner’s business in which it gets a cut. In other words, is it okay if ESPN gets a better deal than Fox Sports because an ISP made a deal with the local team who competes with something done by Fox Sports? The same fault line as just mentioned: should competition solve this question, or should governments intervene to publish fast lane contracts? Should ISPs be required to give the same terms to all takers?

To summarize, the fault lines between perspectives hinge crucially on several beliefs about the economics. Forecasts depend on whether the observer sees a preponderance of competitive or monopoly markets for ISP services. They also depend on whether transparency resolves potential problems.

 

Copyright held by IEEE. To view the original, see here.


March 22, 2014

USPTO public hearing on Attributable Ownership.

Filed under: Announcements — Shane Greenstein @ 12:12 pm

Attributable Ownership Public Hearing in San Francisco on March 26, 2014: Testimony and Written Comments Invited

The USPTO announces a public hearing on Wednesday, March 26, 2014 at U.C. Hastings College of Law in San Francisco from 9 a.m. until noon to receive feedback about proposed rules concerning the ownership of patents and applications (aka “attributable ownership proposed rules”). The public is invited to attend the hearing in person or via Webcast. Additionally, the public is invited to give testimony in person at the hearing and/or to submit written comments about the proposed rules.

To request to give testimony, please send an email to: aohearingrequest@uspto.gov. To submit written comments, please email: AC90.comments@uspto.gov.

The attributable ownership proposed rules require that the attributable owner, including the ultimate parent entity, be identified during the pendency of a patent application and at specified times during the life of a patent. The goal of the proposed rules is to increase the transparency of patent ownership rights. More details about the attributable ownership proposed rules are available here: http://www.gpo.gov/fdsys/pkg/FR-2014-01-24/pdf/2014-01195.pdf

Hearing Logistics:
• Wednesday, March 26, 2014, from 9 a.m. until noon (PT)
U.C. Hastings College of the Law
Louis B. Mayer Lounge
198 McAllister Street
San Francisco, CA 94102

LiveStream Access Information:
https://new.livestream.com/uspto/usptopublichearing
An agenda for the hearing is available here: http://www.uspto.gov/patents/init_events/ao_agenda_san_francisco_3-26-2014.pdf


March 11, 2014

Podcast about bias and slant on Wikipedia

Filed under: Academic Research — Shane Greenstein @ 9:13 pm

The web site, Surprisingly Free, organized a podcast about my recent paper, Collective Intelligence and Neutral Point of View: The Case of Wikipedia, coauthored with Harvard assistant professor Feng Zhu. Click here.

The paper takes a look at whether Linus’ Law applies to Wikipedia articles. Do Wikipedia articles have a slant or bias? If so, how can we measure it? And, do articles become less biased over time, as more contributors become involved?
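The measurement idea can be sketched in a few lines: count phrases that one side uses disproportionately more than the other and compare the totals. The paper builds a far more careful phrase-based index; the toy word lists below are placeholders for illustration.

```python
# Toy slant score: positive leans one way, negative the other.
# These phrase lists are invented placeholders, not the paper's actual index.
left_coded_phrases = ["estate tax", "undocumented workers"]
right_coded_phrases = ["death tax", "illegal aliens"]

def slant_score(article_text):
    text = article_text.lower()
    left = sum(text.count(p) for p in left_coded_phrases)
    right = sum(text.count(p) for p in right_coded_phrases)
    total = left + right
    return 0.0 if total == 0 else (right - left) / total  # in [-1, 1]

print(slant_score("Critics call it a death tax; supporters call it an estate tax."))
# -> 0.0 (one phrase from each list)
```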

Jerry Brito conducts the interview. This is sponsored by the Mercatus Center at George Mason University. In the podcast we discuss the findings of the research.

Click here.


March 7, 2014

The Irony of Public Funding

Misunderstandings and misstatements perennially pervade any debate about public funding of research and development. That must be so for any topic involving public money, almost by definition, but arguments about funding for scientific research and development contain a special irony.

Well-functioning government funding is, by definition, difficult to assess, because of two criteria common to R&D subsidies in virtually all Western governments: governments seek to fund activities yielding large benefits, and those activities should be ones the private sector would not otherwise undertake.

The first criterion leads government funders to avoid funding scientific research with low rates of return. That sounds good because it avoids wasting money. However, combining it with the second criterion does some funny things. Because private firms fund only the scientific R&D whose rate of return can be measured precisely, government funding tends to flow to activities where returns are imprecisely measured.

That is the irony of government funding of science. Governments tend to fund scientific research in precisely the areas where the returns are believed to be high, but where there is little data to confirm or refute the belief.

This month’s column will illustrate the point with a little example, the server software Apache. As explained in a prior column (“How Much Apache?”), Apache was born of government funding. Today, it is rather large and taken for granted. But how valuable is it? What was the rate of return on this publicly funded invention? It has been difficult to measure.


January 30, 2014

Google and Motorola in the Wake of Nortel

Google has announced a plan to sell Motorola to Lenovo for just under three billion dollars. Google paid more than twelve billion only two years ago, and many commentators have declared that this is Larry Page’s first big bet, and potentially his first big experiment to go sour.

Even the best reporters characterize the strategy incorrectly, however, and forget the motivation. The best recognize that the acquisition had several motives, but still use wishy-washy language to discuss the priorities. Here is the language of the New York Times, for example:

“The deal is not a total financial loss for the extremely wealthy Google, which retains patents worth billions of dollars, but it is a sign of the fits and starts the company is experiencing as it navigates business in the mobile age, which has upended technology companies of all types.

In addition to using Motorola’s patents to defend itself in the mobile patent wars, Google pledged to reinvent mobile hardware with Motorola’s new phones, and directly compete with Apple by owning both mobile hardware and software.”

I have a bone to pick here. Even the best reporters are not recalling the sequence of events. Public policy shares some of the blame, and viewed from that perspective, much of this looks like a waste of resources. Let’s get that interpretation on the table by doing a bit of a flashback, shall we?

January 12, 2014

How Much Apache?

Filed under: Academic Research,Essays,Internet economics — Shane Greenstein @ 4:48 pm

Almost with inexorable momentum, the Internet hurls itself into new territory. Some time ago, more than two billion humans had adopted at least one Internet-enabled device in some form, and nobody doubts that another two billion will accrue soon. New webpages increasingly find ways to inform readers, as more information in a variety of formats continues to be layered on the basic system of data internetworking.

That growth has been measured in a variety of dimensions. Today I would like to report on some research to measure one aspect of the Web’s growth, which I did with Frank Nagle, a doctoral student at Harvard Business School. We sought to figure out how much Apache served web surfers in the United States.

That is not a misprint. Apache is the name for the most popular webserver in the world. It is believed to be the second most popular open source project after Linux.

Why do this? Measuring Apache is a key step in understanding the underlying economics. Because it’s free, Apache’s value is easy to mismeasure, and that makes its economics easy to misunderstand.
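One common way to identify whether a site runs Apache is to look at the Server header it advertises in HTTP responses. The sketch below illustrates only that idea, not the procedure from the research, and the host is a placeholder.

```python
import urllib.request

def server_software(host):
    """Return the advertised Server header for a site, if any."""
    request = urllib.request.Request(f"http://{host}/", method="HEAD")
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.headers.get("Server", "unknown")

# Placeholder host; a real survey would probe a large sample of sites.
print(server_software("example.com"))  # e.g. "Apache/2.4.x" or "nginx"
```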

December 31, 2013

End the broadband panic meme

Filed under: Editorial,Internet economics and communications policy — Shane Greenstein @ 9:22 am

 

It happens about every twelve months, maybe with more frequency recently. Another reporter writes about how the US is falling behind international rivals in the supply of broadband. I am growing very tired of this meme, and of answering emails from friends wondering if it is so. There are serious issues to debate, but this standard meme takes attention away from them.

 

The latest version of this article came from the New York Times. It had the title “US Struggling to Keep Pace in Broadband Service,” and it brought out the usual concern that all US growth will fall behind if the US does not have the fastest broadband in the world. If you are curious, read this.

 

Why is this tiring? Let me count the ways.

 

First, while it is irritating to have slow service at home, US productivity does not depend much on that. Household broadband is less important for economic growth than broadband to business. And what really matters for productivity? Speed to business. The number of minutes it takes a household to download Netflix is statistically irrelevant for productivity growth in comparison to the time it takes to download information to conduct business transactions with employees, suppliers, and customers. We get measures of broadband speed to homes because that is what we can easily measure, not because it really matters.

 

Is there any sense that US business Internet is too slow? Well, perhaps the speed of a household’s internet says something about the speed of business Internet, but I doubt it. In all the major cities of the US there is no crisis at all in the provision of broadband. Broadband speeds in downtown Manhattan are extraordinary, as are those on Wall Street. The Silicon Valley firms who need fast speeds can get them. Same with the firms in Seattle. Hey, the experiments with Google Fiber in Kansas City raise questions about whether entrepreneurship will follow the installation of super high speeds, but that is an open question. It is an interesting question too, but not a crisis.

 

These issues do arise, however, in some small and medium cities in the US, and a few rural areas where there is no broadband. In some places satellite is the best available, or some fixed wireless solutions are available. These can be OK but not great for many business needs, and they can also limit what a business can do. These issues have also been present for a while, so most of the businesses that really needed the speed simply left the areas where speeds were slow. As a country we just let that happen many years ago, and, frankly, it will be hard to reverse at this point. (It made me sad at the time; I even spent some time doing research on the topic, though I have stopped in the last few years.) Again, this is an interesting question, but it is only a crisis in the places where it matters, not at a national level.

 

Second, as for household speeds, many people simply don’t want them and do not want to pay for them. There is plenty of evidence that those high speed Korean lines did not get used right away, and lots of fiber goes to waste. Having said that, there are some interesting open questions here as well, namely, what type of speeds are people willing to pay for at their homes? Let’s not get panicked over supply if there is little demand, ok?

The last serious study of the willingness to pay for speed was done at the end of 2009, as part of the national broadband plan. The study was definitive at the time: only a few households were willing to pay for high speeds. But, of course, that was a while ago. What has changed since then? Well, arguably, demand for data-intensive stuff has risen. That is not coming from growth in torrent traffic. Recent data are pretty clear about that. It is coming from Netflix, YouTube, and Facebook. Once again, that is a great open question, but panic about speed does nothing to focus on it. Instead, let’s study demand and whether it goes unsatisfied.

 

Third, if we study demand, can we all acknowledge that demand is very skewed in the US? The top 10% of users account for far more than 50% of household data traffic, and on most systems the top 20% account for more than 80% of data use. And use is growing at every level of the skew, from the median to the very top, so there is good reason to think demand for data is growing among all major users. Will there be capacity to handle those intensive users of data? The answer is unclear.
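To make the skew concrete, here is a tiny calculation of the share of total traffic generated by the heaviest users; the usage numbers are invented.

```python
def top_share(usage_gb, fraction):
    """Share of total usage accounted for by the top `fraction` of users."""
    ranked = sorted(usage_gb, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return sum(ranked[:cutoff]) / sum(ranked)

# Invented monthly usage (GB) for ten households.
usage = [310, 120, 45, 30, 22, 15, 11, 8, 5, 4]
print(f"top 10% share: {top_share(usage, 0.10):.0%}")  # -> 54%
print(f"top 20% share: {top_share(usage, 0.20):.0%}")  # -> 75%
```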

 

That hints at an open question that is worth debating. Not everyone pays the same price, because flat-rate pricing has been so common across the US. The top 10% of users pay very low prices per megabit. Even if total monthly expenditure for the biggest users is twice as high in the US as in other countries, it is still pretty cheap. Just to be clear, I am not saying prices are too high or too low, nor am I making any comment about whether markets are competitive enough in the US. I am just saying that the international comparisons are flawed for big users in the US.

 

That hints at an even more challenging question. For better or worse, it is these high-intensity users, especially households with young adults or teenagers, who seem to be the early adopters of new services. So the US entrepreneurial edge might actually be coming from the low prices and high speeds our biggest users have enjoyed all these years. Are we in danger of ending that? That is the provocative question to ask, and it is not about the general speed in the country. It is about the highest speeds to select users.

 

Finally, and my last problem with this meme: it’s old and tired and potentially irrelevant. Maybe this concern about wireline is all a tempest in a teapot. Many observers believe wireless is the new frontier for innovative applications. Maybe five years from now everybody will look back on this panic and just shake their heads. How can we have an entire article about broadband speeds to households and not a peep about the experience most people have on a daily level, which is determined by wireless speeds?

 

Just something to think about.


 
