As technology rapidly transforms many traditional sectors, the Fed and other public service providers are increasingly concerned about standards. The process for standards setting is complex with competing players and perspectives, and the fundamental economic principles are not clear cut.
On May 13–14, 2004, the Federal Reserve Bank of Chicago and Northwestern University cosponsored a conference titled “Standards and Public Policy.” The conference brought together about 40 experts on standards and public policy, including economists from academia, the Federal Reserve System, and industry. This Chicago Fed Letter summarizes the conference presentations, which focused on the economics of standards competition, committees and standards organizations, compatibility and standards policy, and governmental approaches to standards policy. All papers presented at the conference can be accessed at: www.chicagofed.org/news_and_conferences/conferences_and_events/2004_standards_emerging_payments_agenda.cfm.
In his introductory remarks, Michael Moskow, president and chief executive officer at the Chicago Fed, emphasized that as technology rapidly transforms many traditional sectors, the Fed and other public service providers are increasingly concerned about standards. Moskow noted that standards setting is often an uncertain and complex process, with competing players and perspectives. Furthermore, the fundamental economic principles are not clear cut and have only recently begun to receive significant attention. He pointed out that the payments market, in particular, has seen a transition from checks to new electronic payment instruments, highlighting the need for common interoperability standards. Therefore, the Fed is confronted with the particularly difficult task of finding the proper public policy position in the payments market, while balancing its dual roles as public service provider and market participant. Because the research presented at the conference balances real-world concerns with critical academic thinking, it may provide helpful insights to those interested in making informed public policy decisions on standards.
The economics of standards competition
In the first session, Timothy F. Bresnahan, Stanford University, presented a paper, coauthored with Pai-Ling Yin of Harvard Business School, that studied both economic and technical forces affecting the diffusion of web browsers. The authors focused their analysis on the late 1990s, when both personal computer (PC) sales and web use exploded.
Bresnahan noted that people were “rationally ignorant” when it came to selecting browser software: By and large, they used the browser that came with their computer. Hence, the primary force behind the adoption of new browser versions was the diffusion of new PCs, rather than any improvements in browsers themselves. He suggested that looking at the diffusion process, rather than just the determinants of a shift in technology, adds significant insight to our understanding of the overall economics of technical change.
Marc Rysman, Boston University, presented a joint paper with Shane Greenstein, Northwestern University, on the early 56K-modem market, highlighting the coordination costs of resolving a standards war. The standards war in the 56K-modem market involved two very similar network technologies. The standard-setting organization (SSO), the International Telecommunication Union (ITU), was apparently helpful in resolving the conflict between the technologies by establishing a focal point for the industry. However, the development of focal points carries costs: in this case, membership, meeting, submission, and negotiation costs associated with the standard-setting process, as well as implicit costs that can make it difficult to reach an effective consensus. For example, when participants have imperfect information, confusion and misunderstandings may delay the process. The voting environment also has implications for the resolution process. The ITU uses a consensus voting system. Since all firms in the market are members, each firm can delay the process if its own concerns are not met.
Rysman concluded that the ITU acted in a way that produced net benefits. In his view, it is unlikely that the alternatives of regulation or the market would have overcome the social costs of coordination any more easily.
Next, Richard Langlois, University of Connecticut, discussed his research on institutional structure as a competitive weapon. Specifically, he looked at the U.S. cluster tool industry, which manufactures the equipment used to produce semiconductors.
The market for these tools is divided between a large vertically integrated firm, Applied Materials, which manufactures according to its own specifications, and a fragmented fringe of smaller, more specialized competitors. The fringe has responded to the competition from Applied Materials by “creating a common set of technical interface standards.”
Rather than calling this a standards battle, Langlois noted that it is better thought of as a battle of alternative development paths: the closed systemic approach of Applied Materials versus the open modular system of the competitive fringe. He analyzed the trade-off between the benefits of system innovation and internal economies of scale and scope on the one hand, and the benefits of modular innovation and external economies of standardization on the other.
While this case provides an interesting example of an industry where diverse approaches to standardization may coexist, the industry is starting to change. Langlois noted that the industry may see a transformation to a more common structure, where “several larger firms adhere to common standards and become broadly compatible systems integrators which outsource manufacturing to specialized suppliers of subsystems.”
Neil Gandal, Tel Aviv University and Michigan State University, presented joint research with Nataly Gantman, Tel Aviv University, and David Genesove, Hebrew University and Centre for Economic Policy Research, that focused on the interaction between patenting and standardization committee participation in the U.S. modem industry. Gandal explained that network effects are inherent in the modem market because internet users and internet service providers benefit as more of them adopt compatible technology; furthermore, interoperability is crucial for the seamless transmission of data.
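To fix ideas, a minimal textbook-style formulation of such network effects (an illustration for exposition, not the authors’ estimated model) writes the utility that adopter $i$ derives from a technology as increasing in the number of compatible adopters:

\[
u_i(n) = v_i + \theta n, \qquad \theta > 0,
\]

where $v_i$ is the stand-alone value of the technology to adopter $i$, $n$ is the number of users and providers on the compatible technology, and $\theta$ measures the strength of the network effect. Under this formulation, a common interoperability standard raises $n$ for everyone, which is why compatibility is so valuable in markets like modems.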
Gandal noted that while over 200 companies in this market attended standardization meetings from 1990 to 1999, and around the same number received patents from 1976 to 1999, only 45 firms did both. Using statistical tests, the authors showed that while patenting is predicted by participation in earlier standardization meetings, meeting participation is not predicted by earlier patenting. One possible explanation for this finding is that firms with pending, but not yet granted, patents attend committee meetings to have the standard incorporate their intellectual property. There are, of course, other possible explanations as well, which the authors are continuing to explore.
Charles Steinfield, Michigan State University, presented a joint study with Rolf Wigand, University of Arkansas, M. Lynne Markus, Bentley College, and Gabe Minton, Mortgage Bankers Association of America, focusing on vertical information systems (IS) standards in the U.S. mortgage industry. These standards may address product identification, data definitions, standardized business documents, and/or business process sequences.
Their case study identifies three processes as important in this standard-setting environment: the way that the standardization process is structured to facilitate participation and consensus, the approaches used to promote adoption of open standards, and the steps taken to ensure the ongoing maintenance and integrity of the standard. The authors’ results emphasize the importance of company and individual incentives to contribute to the process, formal and informal governance mechanisms used to minimize conflict and reach consensus, inclusive and proactive policies regarding membership, limited scope of standardization activities, explicit intellectual property rights policy, and efforts to institutionalize the entire standardization process into a formal structure.
Committees and standards organizations
In the following session, Joel West, San Jose State University, presented a paper on open standards. West defined a standard as “open” if the “rights to the standard [are] made available to economic actors other than [the] sponsor.” He indicated that this transfer can occur if rights are waived or conceded, licensed to other organizations, or are unprotectable. He pointed out that while open product compatibility standards are often viewed as socially optimal, the reality is that not all open standards are really open. His paper defines different measures for openness and their implications for adoption, competition, and public policy.
West argued that it is important to determine who has access to the standard, including customers, complementers, and competitors. Next, it is necessary to specify what rights are made available to those who have access to the standard, such as creating the specification and using the specification in development and in practice. Overall, access to the standard can be limited through membership requirements on the creator side or use rights on the user side.
West suggested that policymakers could address the deficiencies in openness in several ways, including direct regulation, procurement, intellectual property law, and competition policy.
Josh Lerner, Harvard Business School, presented a joint paper with Jean Tirole, Massachusetts Institute of Technology and Institute of Industrial Economics (IDEI), focusing on how sponsors of a standard choose which SSO will certify their technology. Lerner posed a situation in which certifiers (SSOs) vary in terms of “toughness,” where tougher SSOs are less likely to ratify the standard; ratification increases consumer adoption, which benefits sponsors. This toughness may be offset, however, by the extent to which an SSO is attuned to the sponsor’s interests. Sponsors’ choices among SSOs are affected by these factors, as well as the inherent strength or quality of the standard.
Lerner said that in general, sponsors of weaker standards will choose more credible SSOs. Stronger standards allow the sponsor to retain more control over the certified standard, that is, to make fewer concessions. In general, users benefit when the sponsor has a stronger downstream presence. Lerner also suggested that standards competition induces sponsors to apply to more credible SSOs. In addition, his research model shows that regulation cannot improve on private choices in the case of mildly strong standards, and that partial regulation reduces social welfare when standards are strong.
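The underlying trade-off can be summarized with a simple illustration (a stylized rendering, not the authors’ exact model). A sponsor choosing among SSOs of varying toughness $\tau$ solves

\[
\max_{\tau} \; p(\tau)\, B(\tau),
\]

where $p(\tau)$, the probability that the SSO ratifies the standard, falls as toughness rises, while $B(\tau)$, the sponsor’s payoff conditional on ratification, rises with $\tau$ because tougher certification is more credible and induces greater user adoption. A weak standard needs the credibility boost, pushing its sponsor toward a tough SSO despite the lower chance of ratification; a strong standard is likely to be ratified anywhere, so its sponsor can opt for a friendlier SSO and make fewer concessions.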
Tim Simcoe, University of California, Berkeley, presented his research examining the time it takes SSOs to reach consensus on new standards. He studied the Internet Engineering Task Force (IETF)—the organization that issues the technical standards used to operate the internet. The period of his analysis, between 1992 and 2000, is interesting because “rapid commercialization of the internet led to some dramatic changes in its size, structure, and demographic composition.”
Simcoe examined the relationship between the composition of IETF committees and the time to reach consensus. He described several factors that influence the time it takes to reach consensus, including the number of participants on a committee, the complexity of the underlying technology and its interdependency with other standards, the set of design alternatives available to the committee, the economic significance of the specification, and the rules governing the consensus decision-making process.
Simcoe showed that there was a significant slowdown in IETF standard setting between 1992 and 2000. Over this period, the median time from first draft to final specification more than doubled, growing from seven to 15 months. He stated that cross-sectional variation in size, complexity, and indicators of distributional conflict for individual working groups or proposals explains only a small portion of the overall slowdown. He attributes the remaining increases to “changes in IETF-wide culture and bottlenecks in the later stages of the review process.”
Then, Carl Cargill, Sun Microsystems, presented a paper coauthored with Sherri Bolin, The Bolin Group, on recent trends in the organization and performance of SSOs. Cargill noted that over the last decade, the standard-setting process as embodied by many SSOs has become dysfunctional. He noted two sources for this deterioration. First, it is too easy for private-sector entities to form SSOs and “stack” them with members from organizations with similar interests. This results in an overproliferation of SSOs organized by competing interests, which is not much better than market-based competition between standards and may even be worse. Second, Cargill noted that the U.S. government has been remiss in not defining clear jurisdictional and procedural rules for SSOs.
Cargill suggested a policy remedy for the latter problem in particular. He recommended that the government establish clearer and more open rules for membership and participation in SSOs. He believes such a change would reduce the incentives for overproliferation in SSO formation.
Compatibility and standards policy
In the next conference session, Jeff MacKie-Mason, University of Michigan, and Janet Netz, ApplEcon, L.L.C., presented a paper on using interface standards as an anticompetitive strategy at the component level. The authors described a strategy they call “one-way” standards and discussed the conditions under which it can be anticompetitive.
While economists often assume that standards reduce barriers to entry, consortia can create entry barriers through a number of avenues: delaying publication of the standard to gain a first-mover advantage, manipulating standards to require other firms to use royalty-bearing intellectual property, and creating “one-way” standards. The last barrier drew the most attention from the authors.
In this strategy, a consortium creates an extra technology layer or a “translator.” If the consortium publishes the information necessary to manufacture compliant components on only one side of the translator, it can move the boundary separating systems away from mix-and-match competition and exclude competition on the private side—while appearing open by enabling component competition on the public side.
Joe Farrell, University of California, Berkeley, discussed the appropriateness of government policies that force compatibility between competing systems or standards. Such compatibility shifts the level of competition from the “system” to the individual components that comprise the standard.
Farrell noted that despite considerable attention devoted to this issue, the overall benefits of shifting competition to the component level remain ambiguous. In many cases, systems competition can be quite beneficial. Owners of systems often have strong incentives to pursue aggressive pricing policies that benefit consumers. System competition can also intensify price competition relative to component competition. Despite these possibilities, Farrell expressed the opinion that component competition is more often than not more efficient than system competition, largely because it increases consumers’ choices.
From a policy standpoint, Farrell noted further difficulties. He mentioned that even if policymakers favor component competition, implementing policies to dissolve systems can be problematic. In summary, Farrell said that many key issues involving compatibility policy remain unresolved.
Government approaches to standards policy
Luis Cabral, New York University, presented a joint study with Tobias Kretschmer, London School of Economics. In this paper, the authors focused on a policymaker’s choice between competing standards. They also considered the timing of intervention.
Cabral said that policymakers may be impatient, caring solely about the welfare of current adopters of a standard, or patient, caring exclusively about the welfare of future adopters. In this paper, the authors assumed that the policymaker can influence which standard is chosen. When a policymaker is very impatient, the authors showed that it is optimal to act promptly and support the leading standard. It is better for a patient policymaker, however, to delay intervention and eventually support the lagging standard.
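The patience distinction can be captured with a simple discounted-welfare objective (a stylized sketch, not the authors’ exact specification):

\[
W(\delta) = \sum_{t=0}^{\infty} \delta^{t} w_t,
\]

where $w_t$ is the welfare of adopters in period $t$ and $\delta$ is the policymaker’s discount factor. The impatient case corresponds to $\delta \to 0$, so only current adopters’ welfare $w_0$ matters and prompt support for the leading standard delivers network benefits immediately; the patient case corresponds to $\delta \to 1$, where the welfare of future adopters dominates, which in the authors’ analysis makes delayed intervention in favor of the lagging standard optimal.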
Cabral noted that their model is only appropriate for extreme specifications of policymakers’ preferences. In reality, policymakers typically fall somewhere in between completely impatient and perfectly patient. Furthermore, Cabral suggested that policymakers do not always choose the superior standard given their preferences.
Marco Ottaviani, London Business School, presented a paper coauthored with Jerome Adda, University College London, which models the public policy issues surrounding the transition from analog to digital television standards in the U.K. This case presents a typical coordination problem; that is, viewers’ adoption depends on broadcasters’ and manufacturers’ support for the digital platform, which in turn depends on viewer adoption. Ottaviani stated that policymakers can affect the speed of digital television diffusion by controlling the quality of the signals and the content of public service broadcasters, providing subsidies to various users in the digital equipment market, or setting a firm switch-off date for the analog signal.
Ottaviani’s model used survey data on U.K. consumers’ stated preferences for television characteristics. The U.K. government would prefer to switch off analog services sometime between 2006 and 2010; by that date, most consumers would need digital TVs or digital set-top boxes. The results from the estimated model showed that more than 95% of viewers would adopt digital technology before a perceived switch-off date, provided they viewed that date as inevitable. In general, consumers face high costs in switching from analog to digital; however, their preference for television is very strong. Thus, Ottaviani noted, his model implies that if the U.K. government were to commit credibly to a termination date for analog services, it would likely meet its objective for the transition; however, the government has not yet made such a commitment.
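A stylized adoption condition illustrates why a credible switch-off date matters (again, a sketch under assumed notation rather than the estimated model). A consumer adopts at time $t$ if

\[
\sum_{s=t}^{\infty} \delta^{s-t} v_d - c \;\ge\; \sum_{s=t}^{T-1} \delta^{s-t} v_a,
\]

where $v_d$ and $v_a$ are the per-period values of digital and analog viewing, $c$ is the one-time switching cost, $\delta$ is the consumer’s discount factor, and $T$ is the perceived switch-off date, after which analog viewing yields nothing. A firm, credible $T$ caps the right-hand side, and as $T$ draws nearer the value of remaining on analog shrinks toward zero, so even consumers with high switching costs adopt before the deadline, consistent with the finding that over 95% of viewers would switch ahead of a date they viewed as inevitable.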
Conclusion
Standards have become increasingly important as a way of spurring technological change, enabling better products and services to be produced in the marketplace. As a whole, the case studies and research presented at this conference suggested that the operations of markets in the current antitrust environment were successful in producing beneficial standards; that is, no broadly recognized difficulties were identified. Some participants further supported this conclusion by highlighting that it would not be easy to change this system or design a new regime that would work better than the current one. There have been some cases, however, where standards were manipulated to the advantage of certain market participants, often in areas where rights to intellectual property were not well defined. Taken together, the work presented at this conference lays a foundation for those wishing to better understand the processes governing the evolution of standards.1
Notes
1 The concluding remarks represent the authors’ interpretations and do not necessarily reflect the views of all the conference participants.