Virulent Word of Mouse

August 28, 2014

Baking the Data Layer

The cookie turned 20 just the other day. More than a tasty morsel of technology, it has inspired two decades of experimentation that created considerable value around its use.

The cookie originated with the ninth employee of Netscape, Lou Montulli. Fresh out of college in June 1994, Montulli sought to embed a user’s history in a browser’s functions. He added a simple tool, keeping track of the locations users visited. He called his tool a “cookie” to relate it to an earlier era of computing, when systems would exchange data back and forth in what programmers would call “magic cookies.” Every browser maker has included cookies ever since.
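
To make the mechanism concrete, here is a minimal sketch using Python’s standard http.cookies module. The cookie name and value are invented for illustration; this is not Montulli’s original implementation, only the general pattern that every browser has followed since.

```python
from http.cookies import SimpleCookie

# Server side: attach a small piece of state to the HTTP response.
# The name and value here are invented for illustration.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"
cookie["visitor_id"]["path"] = "/"
print(cookie.output())   # Set-Cookie: visitor_id=abc123; Path=/

# Browser side, conceptually: the browser stores the pair and echoes it back
# on the next request, which is how a site recognizes a returning visitor.
returned = SimpleCookie()
returned.load("visitor_id=abc123")
print(returned["visitor_id"].value)   # abc123
```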

The cookie had an obvious virtue over many alternatives: it saved users time and provided functionality that made online transactions easier to complete. All these years later, very few users delete cookies (to the disappointment of many privacy experts), even in browsers designed to make it easy to do so.

Montulli’s invention baked into the Web many questions that show up in online advertising, music, and location-based services. Generating new uses for information requires cooperation between many participants, and that should not be taken for granted.

The cookie’s evolution

Although cookies had been designed to let one firm track one user at a time, in the 1990s many different firms experimented with coordinating across websites in order to develop profiles of users. Tracking users across multiple sites held promise; it let somebody aggregate insights and achieve a survey of a user’s preferences. Knowing a user’s preferences held the promise of more effective targeting of ads and sales opportunities.

DoubleClick was among the first firms to make major headway into such targeting based on observation at multiple websites. Yet even its efforts faced difficult challenges. For quite a few years nobody targeted users with much precision, and overpromising fueled the first half-decade of experiments.

The implementation of pay-per-click and the invention of the keyword auction—paired with an effective search engine—brought about the next great jump in precision. That, too, took a while to ripen, and, as is well known, Google largely figured out the system after the turn of the millennium.

Today we are awash in firms involved in the value chain to sell advertising against keyword auctions. Scores stir the soup at any one time, some using data from cookies and some using a lot more than just that. Firms track a user’s IP address and MAC address, and some add information from outside sources. Increasingly, the ads know the smartphone’s longitude and latitude, as well as an enormous amount about a user’s history.

All the information goes into instantaneous statistical programs that would make any analyst at the National Security Agency salivate. The common process today calculates how alike one individual is to another, assesses whether the latest action alters the probability that the user will respond to a type of ad, and makes a prediction about the next action.
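
Here is a minimal sketch of the kind of arithmetic involved, with invented features, weights, and numbers; real ad systems use far richer data and models.

```python
import math

# Hypothetical feature vectors: how often a user touched each content category.
# The categories and counts are invented for illustration.
user_a = {"sports": 5, "travel": 1, "flowers": 0}
user_b = {"sports": 4, "travel": 2, "flowers": 1}

def cosine_similarity(x, y):
    """How alike two users look, based on their browsing counts."""
    keys = set(x) | set(y)
    dot = sum(x.get(k, 0) * y.get(k, 0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in x.values()))
            * math.sqrt(sum(v * v for v in y.values())))
    return dot / norm if norm else 0.0

def click_probability(score, weight=3.0, bias=-2.0):
    """Logistic-style guess at whether a user responds to a given type of ad."""
    return 1 / (1 + math.exp(-(weight * score + bias)))

sim = cosine_similarity(user_a, user_b)
print(f"similarity: {sim:.2f}, estimated click probability: {click_probability(sim):.2f}")
```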

Let’s not overstate things. Humans are not mechanical. Although it is possible to know plenty about a household’s history of surfing, such data supports, at best, general predictions about broad categories of users. The most sophisticated statistical software cannot accurately predict much about a specific household’s online purchases, such as the size of an expenditure, its timing, or the brand.

Online ads also are still pretty crude. Recently I went online and bought flowers for my wedding anniversary and forgot to turn off the cookies. Not an hour later, a bunch of ads for flowers turned up in every online session. Not only were those ads too late to matter, but they flashed later in the evening after my wife returned home and began to browse, ruining what was left of the romantic surprise.

Awash in metadata

Viewed at a systemic level, the cookie plays a role in a long chain of operations. Online ads are just one use within a sizable data-brokerage industry, which also shapes many of the marketing emails a typical user receives, as well as plenty of offline activity.

To see how unique that is, contrast today’s situation with the not-so-distant past.

Consider landline telephone systems. Metadata arises as a byproduct of executing normal business processes. Telephone companies needed the information for billing purposes—for example, the start and stop times for a call, and the area codes and prefixes that indicate where a call originated and ended. Outside that stated purpose, the data has limited value to just about everyone except, perhaps, the police and the NSA.
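
A stylized example of the kind of record a landline biller keeps. The fields, numbers, and rate below are invented for illustration, not drawn from any actual billing system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallRecord:
    # The metadata a biller actually needs: who called whom, and when.
    originating_number: str
    dialed_number: str
    start: datetime
    stop: datetime

def bill(record: CallRecord, rate_per_minute: float = 0.05) -> float:
    """Charge by duration; in practice the area code and prefix would set the rate."""
    minutes = (record.stop - record.start).total_seconds() / 60
    return round(minutes * rate_per_minute, 2)

call = CallRecord("312-555-0100", "617-555-0199",
                  datetime(1994, 6, 1, 9, 0), datetime(1994, 6, 1, 9, 12))
print(bill(call))  # 0.6
```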

Now contrast that with a value chain involving more than one firm, again from communications, specifically cellular phones. Cell phone calls also generate a lot of information in the course of operations. The first generation of cell networks had to triangulate among multiple towers to hand off a call, and that process required the towers to record a lot of information about the caller’s location, the time of the call, and so on.

Today’s smartphones do better, providing the user’s longitude and latitude. Many users enable their smartphone’s GPS because, for example, a little moving dot on an electronic map can be very handy in an unfamiliar location. That is far from the only use for GPS data.

Cellular metadata has acquired many secondary values, and achieving that value involves coordination among many firms, albeit not yet at the instantaneous scale of Internet ad auctions. For example, cell phone data provides information about the flow of traffic in specific locations. Navteq, which is owned by the part of Nokia not purchased by Microsoft, is one of many firms that make a business of collecting that data. The data provide logistics companies with predictable traffic patterns for their planning.

Think of the modern situation this way: one purpose motivated collecting the metadata, and another motivated repurposing it. The open problem is how to create value by using the data for something other than its primary purpose.

Metadata as a source of value

Try one more contrast. Consider a situation without a happy ending.

New technologies have created new metadata in music, and at multiple firms. Important information comes from any number of commercial participants—ratings sites, online ticket sales, Twitter feeds, social networks, YouTube plays, Spotify requests, and Pandora playlists, not to mention iTunes sales, label sales, and radio play.

The music market faces the modern problem. This metadata has created a great opportunity; the data has enormous value to a band manager making choices in real time, for example. Yet the industry has not gotten together to coordinate the use of metadata, or even to agree on standard reporting norms.

There are several explanations for the chaos. Some observers want to blame Apple, as it has been very deliberate about which metadata from iTunes it shares and which it does not. However, that is unfair to Apple. First, it is not entirely closed, and some iTunes data does make it into general use. Moreover, Apple does not seem far out of step with industry practices for protecting one’s own self-interest, which points to the underlying issue, I think.

There is a long history of well-meaning efforts being derailed by narrow-minded selfishness. For decades, sampling another performer’s song at any significant length created a seemingly trivial copyright violation that should have been easy to resolve. Instead, the industry settled on a poor default solution, requiring samplers to give up a quarter of their royalties. With that kind of practice, there is very little sampling. That seems suboptimal for a creative industry.

Composers and performers also have tussled for control over royalties for decades, and some historical blowups took on bitter proportions. The system for sharing royalties in the US today is not some grand arrangement in which all parties diplomatically compromised to achieve the greater good. Rather, the system was put in place by a consent decree that settled an antitrust suit.

If this industry had a history of not sharing before the Internet, who thought the main participants would share metadata? Who would have expected the participants to agree on how to aggregate those distinct data flows into something useful and valuable? Only the most naive analyst would expect a well-functioning system to emerge out of an industry with this history of squabbling.

More generally, any situation involving more than a few participants is ripe for coordination issues, conflict, and missed opportunity. It can be breathtaking when cooperation emerges, as in the online advertising value chain. That is not a foregone conclusion. Some markets will fall into the category of “deals waiting to be done.”

 

The systems are complicated, but the message is simple. Twenty years after the birth of the cookie, we see models for how to generate value from metadata, as well as how not to. Value chains can emerge, but should not be taken for granted.

More to the point, many opportunities still exist to whip up a recipe for making value from the new data layer, if only the value chain gets organized. On occasion, that goal lends itself to the work of a well-managed firm or to public efforts, but it can just as easily get neglected by a squabbling set of entrepreneurs and independently minded organizations, acting like too many cooks.

Copyright held by IEEE. To view the original, see here.


April 23, 2014

The Fault Lines Along Fast Lanes

Until recently, a fast lane from a broadband ISP was a remote possibility in the US. ISPs had to give data equal treatment, regardless of the source, and could not offer faster delivery for a higher price while giving slower service as a default.

Although regulators allowed fast lanes in wireless networks a few years ago, the carriers hesitated to offer them. In December 2013, AT&T Wireless broke with the norm and announced just such a program. FCC regulations forbidding fast lanes at landline broadband ISPs had also prevented them, but in January 2014 a US appeals court struck down those regulations.

Is that a good or bad trend? The answer depends on who’s talking. Critics of government regulation despise the rules forbidding fast lanes, whereas net neutrality supporters view the presence of fast lanes as a nightmare come to life.

Legal and political aspects of this topic typically get most of the attention, as do the implications for the variety of speech online. Most reporters find these aspects interesting, and understand them. However, the economics of fast lanes receives less attention. That is a surprise, because the economics is not very difficult, and it’s worth understanding. It illuminates the fault lines between many different points of view.

Mirrors and servers

The public Internet has evolved considerably since the days when the design of packet networks presumed that a message did not have to arrive in an inbox immediately. Users today prefer and expect speedier services. That goes for more than just IP telephony and video chat, where users notice the smallest delay. It also holds true for video, such as YouTube, and for many online games. Many providers believe it also affects the bottom line—namely, that users switch services if they do not get fast delivery of data.

Long before fast lanes became a real possibility, many participants in the Internet made investments aimed at reducing delays. For example, for some time now, Akamai has sold a well-known approach to improving speed. Its service also defines the first fault line, so this is a good place to start the discussion. Opponents of net neutrality ask why Akamai can operate a business to speed up data delivery but a carrier cannot.

Akamai’s service places servers inside ISPs, closer to households. Any seriously large Internet content firm must buy such services; they are considered a cost of doing business online. Many ISPs like working with Akamai, because their customers experience better service without much investment from the ISP.
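
Here is a toy sketch of the idea behind placing content closer to households. The servers, latencies, and selection rule are all invented for illustration; Akamai’s actual system is far more elaborate.

```python
# Hypothetical round-trip times (in milliseconds) from a household to candidate
# copies of the same video file. The numbers are invented; the point is only
# that the copy cached inside the local ISP wins.
candidate_servers = {
    "origin-datacenter.example.com": 85.0,    # content firm's own data center
    "regional-cache.example.net": 40.0,       # cache at a regional exchange point
    "edge-inside-local-isp.example.net": 8.0, # cache placed inside the ISP
}

def pick_fastest(servers: dict) -> str:
    """Send the request to whichever copy answers quickest."""
    return min(servers, key=servers.get)

print(pick_fastest(candidate_servers))  # edge-inside-local-isp.example.net
```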

That is not the only method for speeding up data. For example, Google has bypassed Akamai’s charges in many locations by building its own data network to ISPs. Netflix has recently sought to do the same, though it is not quite done (because it has not successfully negotiated a presence with every US ISP). Any gathering of more than three Internet engineers will generate discussion of even more potential solutions in the cloud. Amazon built a content delivery network with enormous geographic range. Microsoft has similar investments and aspirations, as does IBM. The list goes on.

That leads to the deeper question. The last few years have witnessed robust experimentation with distinct approaches to functional improvement, and these might be either complements to, or substitutes for, each other. Accordingly, carriers play two roles. They act as firms whose users benefit from faster delivery, and as suppliers that can choose to cooperate—or refuse to cooperate—with solutions offered by others.

When a carrier had no investments in fast lanes, it had every reason to cooperate with solutions offered by others. Will that change if the carrier has its own fast lane?

The answer defines a fault line between points of view. Some observers see this as a possibility that might never arise. They want a regulatory response only when a problem emerges; otherwise, they anticipate that a regulator will err. Net neutrality supporters, in contrast, think regulators have an obligation to protect the Internet. They worry that introducing fast lanes messes with a system that already works well, and they do not trust carriers to cooperate with solutions that might substitute for a fast lane business or threaten an investment in some way.

Competition and monopoly

The next fault line has to do with the role of money. Defenders of fast lanes expect them to become a cost of doing business for content firms, and forecast that fast lanes will be profitable and generate more investment. Opponents have the same forecast about profitability, but a different interpretation. They worry that fast lanes will lead to an Internet where only rich firms can deliver their content effectively.

This concern tends to get plenty of press, and a few rhetorical questions illuminate the fault line. Will the default speeds offered by ISPs be good enough for startups or for small specialty websites? One side believes that the defaults will be good enough, whereas the other believes that fast lanes will lead ISPs to neglect investing in their slow services.

One’s point of view about the state of competition for ISPs has a big role in interpreting the role of money. Some believe a competitive ISP market would melt away most problems. Others argue that belief about competitive ISP markets is a fantasy and masks many dangers.

The belief in competition is not a belief in magic, so it is worth examining. Rather, this side views competition as a painful process. In competitive markets, customers substitute into alternatives if they do not like what a supplier does. Suppliers hesitate to do things that make their users angry. In other words, ISPs would compete for customers by offering better fast lanes. In this view, users would get angry if they perceived that carriers were slowing down content from firms they cared about, and angry users would find another carrier.

Where is the fault line? Recognize the two key factors that make ideal competitive markets operate well—namely, transparency and the availability of many user options.

Just about everybody is in favor of transparency, but not necessarily in favor of more of it when rules require it. Those with faith in competitive processes tend to see merit in nothing more than a few light-handed requirements, such as programs to facilitate measuring the speeds of different ISPs. The other side asks for much more, such as the publication of all fast lane contracts (more on that later).

As for the second factor, the availability of options, consider the key open question: Do users have many options available to them, or do they face de facto monopoly ISP markets? Once again, there are different beliefs about the preponderance of competition and monopoly across locales of the US. Those who presume that competition is inadequate have little sympathy for leaving ISPs alone; those who presume it is adequate have plenty.

That also leads to different interpretations of how lucrative fast lanes will be. Supporters of fast lanes say that ISPs should charge whatever the market will bear, and that competition will discipline pricing. Opponents say that the monopolies emerged from grants of public franchises and use of public rights of way, and they characterize high prices as misuse of utility franchises.

A classic debate about government merger policy also arises. Net neutrality supporters argue that fast lanes give ISPs artificial incentives to consolidate in order to increase their bargaining leverage with content providers, thus concentrating economic power in ISPs. Net neutrality opponents do not see anything wrong with large ISPs. In a competitive market, size is irrelevant.

Mixed incentives

The foregoing leads into the last fault line in discussions about fast lanes—namely, views about mixed incentives at carriers. A mixed incentive arises when a carrier distributes a service that substitutes for one available on the public Internet.

Many broadband ISPs have a thriving broadband service, provide video on demand, and make a pretty good margin on both. Will most cable firms want to sell a fast lane service to Netflix at a low price? If a carrier did not make money on video on demand, its price for a fast lane for Netflix would be lower, and the same goes for entrepreneurial firms offering video services. That begins to suggest the intuition behind the concern that cable firms will tilt their other actions against online video to protect their existing businesses.
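
A back-of-the-envelope illustration of that intuition, with entirely made-up numbers: a carrier whose video-on-demand margin shrinks when streaming grows will want a higher fast-lane price than a carrier that sells broadband alone.

```python
def breakeven_fastlane_price(cost_per_subscriber: float,
                             vod_margin_lost_per_subscriber: float) -> float:
    """Lowest fast-lane fee (per streaming subscriber) the carrier would accept.

    All figures are hypothetical. A carrier with no video-on-demand business
    loses nothing when streaming grows, so its floor is just the delivery cost.
    A carrier that also sells video on demand adds the margin it expects to lose.
    """
    return cost_per_subscriber + vod_margin_lost_per_subscriber

print(breakeven_fastlane_price(2.00, 0.00))  # broadband-only carrier -> 2.0
print(breakeven_fastlane_price(2.00, 3.50))  # carrier that also sells VOD -> 5.5
```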

Mixed incentives also come up in discussions about scrutinizing carrier contracting practices. To put this fault line in perspective, consider a hypothetical scenario: What would happen after a carrier sells a fast lane to, say, ESPN? Can anyone else expect the same terms, even Netflix? Yet again, one side argues that competition will solve these issues, and the other sees a need for regulatory intervention to make terms of fast lane contracts public.

A mixed incentive also can emerge when a carrier has an economic incentive to protect a partner’s business in which it gets a cut. In other words, is it okay if ESPN gets a better deal than Fox Sports because an ISP made a deal with a local team that competes with Fox Sports programming? The same fault line appears: should competition solve this question, or should governments intervene to publish fast lane contracts? Should ISPs be required to give the same terms to all takers?

To summarize, the fault lines between perspectives hinge crucially on several beliefs about the economics. Forecasts depend on whether the observer sees a preponderance of competitive or monopoly markets for ISP services. They also depend on whether transparency resolves potential problems.

 

Copyright held by IEEE. To view the original, see here.


January 30, 2014

Google and Motorola in the Wake of Nortel

Google has announced a plan to sell Motorola to Lenovo for just under three billion dollars. Google paid more than twelve billion only two years ago, and many commentators have declared that this is Larry Page’s first big bet, and potentially his first big experiment to go sour.

Even the best reporters characterize the strategy incorrectly, however, and forget the motivation. The best recognize that the acquisition had several motives, but still use wishy-washy language to discuss the priorities. Here is the language of the New York Times, for example:

“The deal is not a total financial loss for the extremely wealthy Google, which retains patents worth billions of dollars, but it is a sign of the fits and starts the company is experiencing as it navigates business in the mobile age, which has upended technology companies of all types.

In addition to using Motorola’s patents to defend itself in the mobile patent wars, Google pledged to reinvent mobile hardware with Motorola’s new phones, and directly compete with Apple by owning both mobile hardware and software.”

I have a bone to pick here. Even the best reporters are not recalling the sequence of events. Public policy shares some of the blame, and viewed from that perspective, much of this looks like a waste of resources. Let’s get that interpretation on the table by doing a bit of a flashback, shall we? (more…)

August 20, 2013

The economic policy of data caps

It is the one-year anniversary of the Open Internet Advisory Committee (as noted earlier). Today the committee issued a report on its work over the last year. You can access it here. Today’s post discusses the report about data caps, which was written by the Economic Impacts Working Group.

I am a member of the committee and the Economic Impacts Working Group, and I like the work we did. I chair the group. “Chair” is a misleading title for what I really do, which is take notes of the group’s discussions and transcribe them. Every now and again, I do a little more. As one of the members without any stake in the outcome, I occasionally offer a synthesis or compromise between distinct views.

The report aims to analyze data caps in the context of the Open Internet Report and Order. The Order discusses usage-based pricing (UBP) but does not expressly mention data caps, except by implication, in that a data cap can be considered a form of UBP. The Order left open the possibility of many experiments in business models and pricing.

Moreover, the Internet had evolved over time, and the Order anticipated that it would continue to evolve in unexpected ways. The Order set up the advisory group to consider whether aspects of the Order remain consistent in their effects as the Internet evolves, and it is in that spirit that this conversation was undertaken. (more…)

July 14, 2013

The Open Internet Advisory Committee at year one

Today I would like to give a little shout-out to recent work at the FCC to improve policy making for the Internet. To do that I need to put my preferences front and center.

There are policy debates, and then there is actual policy making. The former grabs headlines on occasion, while the latter rarely does. Both need to take place in order to make progress, though it is a rare person who has the patience and taste for both.

I have little patience for the grandstanding that goes with policy debates, and I do not take much pleasure from the staging and entertainment behind political posturing. I prefer policy making, especially the quieter and more challenging parts of it, and I love being engaged in challenging policy conversations that do not get much publicity.

Just so we are clear, this post will discuss policy making. Policy debate will largely remain in the background. That is unusual for most public discussions about policy for the open Internet, but it seems appropriate for today’s post.

It is the one-year anniversary of the Open Internet Advisory Committee. In approximately two weeks the committee will release its first big report, a kind of year-in-review. I am not a neutral observer of this committee. I am a member. I am especially impressed by what the committee did in its first year.

If you think I am biased, then you are right. That is the point of this blog post.

I have been happy to be part of this committee and to contribute to public policy discussions through participation. And whatever else the posturing political world says, I want to be the first to say loudly that this committee has done wonderful work to support policy making, and, until two weeks from now, largely out of the public’s eye. (more…)

June 20, 2013

Differentiated Platforms

Differentiation is a standard concept for analyzing competition. It describes a common situation, where one firm develops the ability to serve one type of customer in a market—say, buyers who will pay a lot to save time—while a competing firm serves another—say, budget-conscious buyers who are patient.

Differentiation can describe common competitive behavior in technology markets. A chip firm might develop particular attributes—say, faster, energy-hungry electronics for a particular purpose—while its rival might specialize in slower chips that use little energy. This differentiation can earn each firm loyalty from buyers with different preferences.

That motivates today’s question: Can platforms differentiate? Platforms have played an increasingly important role in technology markets in the last decade—in mobile devices, in web services, you name it. A platform comprises a mix of standards, which complement the many other firms that build services upon those standards.

At first blush the answer appears to be yes. Think of attributes associated with common platforms, such as Windows, Android, Linux, Facebook, or the iPhone. These platforms differ from one another in the marketplace and set themselves apart from near rivals, in ways that earn the loyalty of particular users.

That first impression makes it worth a deeper look. There is more here than meets the eye, and smart firms shape their strategies with subtle thoughtfulness. (more…)

April 4, 2013

The On Line Honesty Box

Filed under: Considering topical questions, Essays, Internet economics — Shane Greenstein @ 8:43 pm

Many vendors give away free services, but usually there is a catch. For example, while Google has given away search services for more than a decade, no user has any illusions as to why: advertising buys the space to reach readers. As another example, for many years US cellular carriers came close to giving away handsets to customers (until expensive smartphones reduced the practice). Buyers knew these subsidies came with two-year commitments, and they could anticipate paying the carrier high service fees.

Free services without any apparent catch are rare, but they do turn up with “honesty boxes.” It has always been so with street musicians. A listener can walk away or drop any amount into an open hat—from nothing to any denomination of bill. Public campsites have relied on honesty boxes for years, letting campers fill out their permits and pay on their honor. Office coffee pools frequently use honesty boxes as well.

What about the online world? There have been experiments with online honesty boxes. The lessons are quirky, but too interesting to ignore. Today’s column describes two—one from Radiohead, and another from Cards Against Humanity.

(more…)

February 15, 2013

Gaming Structure

For several years, commentators have forecast that the rise of smartphones and tablets, as well as Facebook, would upend the structure of the gaming market. A variety of novel, adroit aliens and irascible animals symbolically represent the new order, while new companies from new genres alter the identities of suppliers.

Methinks that all the talk of restructuring is exaggerated. The names have changed, but the same factors still matter for market leadership. The old structure had a number of economic determinants that haven’t gone away. For example, ongoing product development by independent firms continues apace, and all parties must manage the unknowable. Today, as in the past, independent firms cooperate with established publishers when it suits both parties.

If you ask me, we’re transitioning to the same structure with (at most) a new set of players. That’s because two factors used to matter most in gaming—uncertainty and market frictions—and they still do.

(more…)

January 9, 2013

The FTC and Google: Did Larry Learn his Lesson?

The FTC and Google settled their differences last week, putting the final touches on an agreement. Commentators began carping from all sides as soon as the announcement came. The most biting criticisms have accused the FTC of going too easy on Google. Frankly, I think the commentators are only half right. Yes, it appears as if Google got off easy, but, IMHO, the FTC settled at about the right place.

More to the point, it is too soon to throw a harsh judgment at Google. This settlement might work just fine, and if it does, then society is better off than it would have been had some grandstanding prosecutor decided to go to trial.

Why? First, public confrontation is often a BIG expense for society. Second, as an organization Google is young and it occupies a market that also is young. The first big antitrust case for such a company in such a situation should substitute education for severe judgment.

Ah, this will take an explanation. (more…)

December 27, 2012

Technology Awards for 2012

Filed under: Amusing diversions, Considering topical questions — Shane Greenstein @ 10:23 pm

It is time to end this year by giving out technology awards! This post contains a baker’s dozen. They go to firms and managers who took notable actions in technology markets in 2012.

There are no fixed categories of awards. Some categories are recycled from last year’s awards, but some are new. Just like last year’s awards, there are three criteria. The winner had to do something in 2012. The action had to involve information and communications technology. It had to be notable.

The awards come with plenty of sarcasm, and they do not come with a statue. The prize is a virtual badge called a “Sally,” affectionately named for Sally Field, famous for playing the Flying Nun and for her cry at the Oscars, “You like me, you really like me!”

If you do not like this year’s awards, please use the commentary section to make additional suggestions.

Also, one last note: None of this should be taken seriously. Most of these awards are given with tongue firmly in cheek. The exceptions come near the end, in awards 11 and 12, which contain a preachy tone. Sorry, but not all of life is fun. (more…)
