Computational & Mathematical Organization Theory 6:3 (2000): 227–247
© 2000 Kluwer Academic Publishers. Manufactured in The Netherlands.
A Computational Model of Technological Innovation at the Firm Level

DANIEL TEITELBAUM
Department of Engineering and Public Policy, Carnegie-Mellon University, Pittsburgh, PA 15213, USA; Bios Group, Inc., 317 Paseo de Peralta, Santa Fe, NM 87501, USA
email: [email protected]

HADI DOWLATABADI
Department of Engineering and Public Policy, Carnegie-Mellon University, Pittsburgh, PA 15213, USA
email: [email protected]
Abstract

The factors which speed and slow technological innovation have been of interest to policy makers since at least the mid-1960s. Since that time, many theoretical models of innovation at the firm level and at the industry level have been proposed. Because of computational constraints, nearly all of these models have assumed a single, representative firm type. Very few have systematically investigated the implications of markets with a variety of firm types. With increases in computing power and the advent of agent-based modeling, interactions between agent types can now be explored. In this paper, a computational model of innovative firms in competitive markets is presented. Firms devote resources to R&D, which can lead to new, improved products that allow firms to steal market share from their competitors. Two types of firms, differentiated by the strategies they use in pursuing new innovations, are allowed to coexist. One type pursues exclusively radical innovations, while the other pursues exclusively incremental innovations. It will be demonstrated that under certain conditions a synergy exists between firms of different types which allows heterogeneous populations of firms to earn more than homogeneous ones.

Keywords: agent-based model, technical change, endogenous growth, simulation
Introduction

An improved understanding of innovation and technological change is important to policy makers for at least two reasons. First, innovation has been shown to affect standards of living. Griliches (1986) showed that innovation contributes to U.S. productivity growth, and productivity growth has been shown to contribute to real wages. Baily and Chakrabarti (1988, p. 39) write, "The bottom line, therefore, is that R&D spending by industry and the federal government together contributed between 0.53 and 1.20 percent a year to multifactor productivity growth in the 1960s. This represents between 30 and 69 percent of the total." Since, for most Americans, real wages are the most important factor in determining the standard of living, government should take an active role in the promotion of innovation wherever it can affordably and effectively do so.

Second, an understanding of innovation is necessary for improving the accuracy of economic forecasting tools. A large part of the early inaccuracy of economic forecasts of the
future availability of food, fuel, and a host of natural resources and other factors was due, in large part, to an inadequate understanding of the economics of technological change and innovation. Technological change is still not well understood, and there remains a great deal of uncertainty in predicting the future availability of these factors.

In this study, an agent-based model of firm behavior in competitive markets is presented. Firms compete against one another by creating new products and processes which allow the innovative firms to steal market share from other firms. Because this model uses agent-based simulations of firms, experiments can be performed by systematically varying one variable of interest while holding all other variables constant (a luxury that is impossible in the real world). The purpose of these experiments is to explore the effects of firm-level decision making on technological progress at the industry level and to explore interactions between firms with different predispositions towards innovative strategy.

Literature Review

Orthodox Analysis

Modern economic analysis of technological change begins with Schumpeter (1942), who argued that technological progress was the engine driving the U.S. economy and that it had largely been neglected in economic analysis. Schumpeter coined the phrase "Creative Destruction" to describe the way that, in capitalist economies, new methods of production drive out old ones and new innovations make old products obsolete. Solow (1957) followed Schumpeter's work by taking standard production function analyses of economic growth and calling the residual "technical change". In Solow's formulation, technology grows over time, exogenously to economic output, which depends only on capital and labor. Schmookler (1966) argued that technological change should not be completely exogenous to production functions and that, in fact, firms themselves influence the rate and direction of technological change. Since Schmookler, the field has mushroomed, with a host of empirical and theoretical economists analyzing the factors that facilitate and hinder technological progress.

More recently, economists have begun to explicitly model the creative destruction outlined by Schumpeter. A model of endogenous growth through product innovation by Romer (1990) explicitly incorporates the number of product designs. These new designs (i.e., horizontal innovations) are never close substitutes for existing goods, which, Romer himself points out, precludes Schumpeterian destruction. Five of the studies that have built on Romer's work, adding product obsolescence, are Segerstrom et al. (1990), Segerstrom (1991), Aghion and Howitt (1992), and Grossman and Helpman (1991a, 1991b). Each of these papers advances models in which firms compete against one another through vertical product innovations. Grossman and Helpman coin the phrase "quality ladder" to describe the stages in a product's life: from undiscovered, to discovered cutting-edge product, to obsolete as newer products are discovered. Interestingly, in adding obsolescence to Romer's framework, these models have abandoned horizontal innovation altogether, although Grossman and Helpman (1991b) show that their vertical innovation model shares an identical reduced form with Romer's horizontal innovation model for some variants.
When a firm wins an R&D race (innovates), it achieves a monopoly (either temporary or permanent) and accrues the associated monopoly rents. Segerstrom assumes that all firms pursue the same innovation simultaneously, until one firm reaches it. Then, all firms proceed together to the next innovation. Thus each innovation is pursued by the entire economy in sequence. Aghion and Howitt assume that an innovator gains monopoly power in all industries simultaneously. Innovation is the source of economic growth and is the result of research performed by firms. In each model, an equilibrium is reached in which the rate of innovation is constant. While equilibria offer modelers analytical tractability, the notion of steady-state rates of innovative progress is intuitively unappealing. Indeed, work by Stein (1997) suggests that innovations tend to come in waves.

The models by Grossman and Helpman (1991a, 1991b), Segerstrom (1991) and Segerstrom et al. (1990) closely examine North-South issues in product innovation. Only industrialized countries, the North, have the knowledge capital necessary for product R&D. Less industrialized countries, the South, can then duplicate these innovations and compete through lower labor costs. Pepall (1995) uses a game-theoretic model to investigate the effect of expanding the late entrant firm's capabilities from simple, 'copy-cat' duplication to imitation. Firms can vary the similarity of their challenger product to the incumbent product. The more similar the challenger is to the incumbent, the lower the development costs, but also the greater the price competition. Pepall finds that close imitation is more likely in markets that are large (wealthy) and less likely in markets that are small (poor).
Agent-Based Analysis

One of the earliest successful applications of agent-based modeling to social science problems was accomplished by Thomas Schelling. His work is nicely summarized in his now classic book, Micromotives and Macrobehavior (1978). Some of the earliest applications of evolutionary models to technological innovation were done by Winter (1964) and Nelson and Winter (1982). Nelson and Winter characterize firms not as production functions (optimally turning inputs into outputs) but as collections of rules for determining which economic actions to take when faced with various situations. In this way, Nelson and Winter emancipate themselves from many of the constraints of orthodox theory. Firms cease to be profit maximizing and become profit seeking. Neo-classical production functions apply only in equilibrium; decision rules can apply universally. Due to computational complexity, orthodox models can rarely accommodate interactions between heterogeneous firm types.
Model Description¹

The model presented here consists, conceptually, of four objects: products, consumers, firms, and an environment.
Products

Each product is represented as a binary vector. Each bit in the vector represents both a product attribute (e.g., driver-side air bag, low gas mileage, turbocharger) and the engineering skill needed to produce it (e.g., safety engineering, aerodynamics, engine design). A 'one' bit in a product's skill vector signifies that the product contains a particular attribute which the firm hopes will be attractive to consumers, and that the firm required a corresponding engineering skill to produce that attribute. All skills which are not part of a given innovation are denoted by 'zeros' in their respective bits. Thus, each unique product is characterized by a genetic code that denotes the skills that were required to build it and the attributes from which consumers derive utility. Characterizing goods as a collection of attributes in this way dates back to Lancaster (1966), who broke away from the traditional approach that treats goods as atomic objects of utility.

Innovations, then, are products with genetic strings which differ from every other product on the market. Consequently, the total space of all product innovations is defined, a priori, by the modeler as all the possible permutations of 0's and 1's in the product's skill vector. The genetic strings in each of the experiments in this study are all twelve bits long, so the total number of possible innovations is 2¹² − 1 = 4,095 (the zero vector cannot be considered an innovation). The process of innovation, therefore, refers simply to trying new genetic strings.

Products can enter the marketplace in one of two ways: as radical innovations or as incremental innovations. Products which are identical to extant products are not allowed. There is general agreement that radical and incremental innovations are sufficiently different in nature to merit distinct analyses. Freeman and Perez (in Dosi et al., p. 46) write, "Although their combined effect is extremely important in the growth of productivity, no single incremental innovation has dramatic effects, and they may sometimes pass unnoticed and unrecorded." Incremental innovations lead to products which compete with extant products, i.e., substitutes. Radical products, on the other hand, induce changes in demand. These changes in demand create new markets, which creates an incentive for firms to pursue incremental innovations in the newly created market. Radical innovations are more difficult to achieve but offer greater rewards because they create new markets for themselves. The number of markets in the simulation, then, grows endogenously over time. The rate at which the number grows is determined by the rate at which radical products are tried and the rate at which radical products are accepted as marketable (see Table 2).

Radical innovations are extremely different from anything on the market at their birth. Thus, they are unlikely to be accepted by consumers. The metric for measuring the degree of similarity between products is Hamming² distance. An innovation's chance of acceptance in the marketplace is inversely proportional to its Hamming distance from its nearest competitor raised to the power of 1.28.³ An incremental innovation, on the other hand, must share its market with other, similar products. The probability of a product being an incremental innovation decreases with Hamming distance; the probability of a product being a radical innovation increases with Hamming distance.
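Since products are bit strings, the core geometry of the model is the Hamming metric. The following C++ sketch (our own illustration, not the authors' code; names such as ProductVector are hypothetical) shows one natural encoding of a twelve-bit product and its Hamming distance:

    #include <bitset>
    #include <cstddef>

    // One possible encoding of the twelve-bit product/skill vector described
    // above (illustrative names, not the authors' code).
    constexpr std::size_t kVectorLength = 12;  // product vector length (Table 2)
    using ProductVector = std::bitset<kVectorLength>;

    // Hamming distance: the number of bit positions in which two vectors
    // differ (note 2). XOR marks the differing bits; count() tallies them.
    std::size_t hammingDistance(const ProductVector& a, const ProductVector& b) {
        return (a ^ b).count();
    }

With this encoding, a product identical to an extant one has distance 0 and is disallowed, while maximally dissimilar products have distance 12.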
The algorithm for determining marketability works as follows.
    if (number of extant products == 0) {
        p = 14 * (number of skills in product vector);   // distance from the zero vector
        r = rand() % 100;
        if (r <= p) then marketable;
    }
    else {                                               // at least one extant product
        p = 20 / (hamming distance to nearest product);  // goes down as distance goes up
        r = rand() % 100;
        if (r <= p) then marketable;
    }
    if (marketable) {
        pb = 14 * (hamming distance to nearest product); // goes up as distance goes up
        r = rand() % 100;
        if (r <= pb) then breakthrough product;
        else incremental product;
    }

The following example is presented to help clarify the two types of innovations. Consider a marketplace in which there is one extant product of the following form:⁴

    000110010100

The following product would have a low chance of acceptance in the market; but, should it be accepted, it would have a high chance of being a radical product because it is genetically dissimilar from the extant product (Hamming distance = 8):

    110001100000

The following product, on the other hand, has a much higher chance of marketability, but would most likely be an incremental product (Hamming distance = 1):

    100110010100

Each time a firm tries unsuccessfully to market a specific bit string, that string is saved to a list of failures. A failure may not later become a success: should any other firm try to market the identical bit string, it must also fail. This has two effects. First, the technological frontier shrinks over time as the list of failures grows. Second, firms cannot simply market and remarket the same bit string. This forces firms to hire new engineers.
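For concreteness, here is a compilable C++ rendering of the pseudocode above. It is a sketch under our own naming: the thresholds 14 and 20 and the percent draws come from the paper, while the function and enum names are ours. Note that the prose describes acceptance falling with the Hamming distance raised to the 1.28 power, whereas the published pseudocode uses plain division; the sketch follows the pseudocode.

    #include <cstdlib>

    enum class Outcome { Failure, Incremental, Radical };

    // Direct translation of the published pseudocode (names are ours).
    Outcome classifyProduct(int skillsInVector,  // set bits in the product vector
                            int distToNearest,   // Hamming distance, >= 1
                            bool anyExtantProducts) {
        int p = anyExtantProducts
                    ? 20 / distToNearest      // acceptance falls with distance
                    : 14 * skillsInVector;    // first product: distance from zero vector
        if (std::rand() % 100 > p) return Outcome::Failure;  // not marketable

        int pb = 14 * distToNearest;          // radical chance rises with distance
        return (std::rand() % 100 <= pb) ? Outcome::Radical : Outcome::Incremental;
    }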
Once a product is marketable, in subsequent periods it continues to generate revenues for the firm that created it until the firm dies or the simulation ends. The size of these revenues is determined by the number of other products which share the same market, and is thus subject to change over time.

Consumers

Each consumer has a per-period endowment of wealth, W, that she spends on the products produced by firms. This per-period endowment is proportional to the number of "needs" (radical innovations) that have been successfully marketed in all past periods. So, for instance, if there has been only one radical innovation, each consumer would divide W dollars each period between the radical product and all incremental products. The instant a new radical innovation is marketed, consumers spend 2W dollars each period: W on the radical product, which has created its own market, and the remaining W divided between products in the existing market.⁵ This model of growth is similar in spirit to others in which growth results exclusively from endogenous technological progress, which in turn results from firms performing R&D.

Within any particular market, consumers allocate their wealth according to Cobb-Douglas preferences, always preferring more recent products to older ones. There is nothing inherently superior about a given product vector; more recent innovations are simply assumed to be superior to extant products. For simplicity, there is no uncertainty in consumers' behavior. They are completely deterministic. However, their preferences are not known to the firms.

As an example, consider the case where one firm has been successfully marketing a radical product with no competition in the radical market until an innovation to this first product is successfully marketed. Consumer spending in that market goes from 100% spending on the one extant product to 33% spending on the first product and 67% spending on the innovation. The logic behind this can be thought of in two ways. Either 67% of the consumers buy the new product and 33% stick with the old product, or, in order to compete with the innovation, the older, inferior product must lower its price until its duopoly profits are 33% of its monopoly profits. As new products enter the market, an extant product's market share gradually diminishes until, eventually, the market share reaches zero and the product generates no revenue. The schedule for depreciation of market share for a sequence of similar products is shown in Table 1. The share garnered by the most recent product appears on the diagonal.

Thus, this model incorporates both horizontal and vertical product innovations. Radical innovations, which do not compete with extant products, are horizontally distinct from extant products à la Romer (1990). Incremental (vertical) innovations do compete with extant products, sending older products into obsolescence.
Table 1. Depreciation of market share for a sequence of similar products.

                         Number of similar products
    Share of          0      1      2      3      4      5      6      7
    1st product     1.000  0.333  0.200  0.143  0.111  0.091  0.077  0.067
    2nd product            0.667  0.267  0.171  0.127  0.101  0.084  0.072
    3rd product                   0.533  0.229  0.152  0.115  0.093  0.078
    4th product                          0.457  0.203  0.139  0.107  0.087
    5th product                                 0.406  0.185  0.128  0.099
    6th product                                        0.369  0.170  0.119
    7th product                                               0.341  0.159
    8th product                                                      0.318
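Read column-wise, each column of Table 1 sums to one and gives the split of a single market among products ordered oldest to newest. A minimal C++ lookup of the published schedule (the helper name is our own; the values are copied verbatim from the table) might look like this:

    #include <cstddef>
    #include <vector>

    // Table 1 as a lookup: column n-1 gives the market shares when n similar
    // products share one market, ordered oldest to newest.
    std::vector<double> marketShares(std::size_t nProducts) {
        static const std::vector<std::vector<double>> kTable = {
            {1.000},
            {0.333, 0.667},
            {0.200, 0.267, 0.533},
            {0.143, 0.171, 0.229, 0.457},
            {0.111, 0.127, 0.152, 0.203, 0.406},
            {0.091, 0.101, 0.115, 0.139, 0.185, 0.369},
            {0.077, 0.084, 0.093, 0.107, 0.128, 0.170, 0.341},
            {0.067, 0.072, 0.078, 0.087, 0.099, 0.119, 0.159, 0.318},
        };
        return kTable.at(nProducts - 1);  // throws for 0 or more than 8 products
    }

A product's per-period revenue in a market is then its share times the $250 market size from Table 2; e.g., the newest of three products earns 0.533 × $250 ≈ $133 per period.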
Firms

Each firm starts with some positive endowment of money,⁶ no salable products, and no engineers. Each firm must pay some fixed cost at the beginning of each period. This cost is referred to as the firm's cost of living. The only way firms can earn profits is by marketing products. With no salable products, each firm begins its life with no immediate means of earning a profit. Because of the cost of living, any firm that fails to earn a profit is doomed to eventual bankruptcy. This assures that only firms that can create salable products can survive.

In order to create marketable products, firms must hire engineers. Firms may hire no more than one engineer per turn. For the purposes of the experiments described here, each engineer has exactly one skill, which is known to all firms. Thus, if a firm whose skill vector is

    100000100000

decides that it would like to produce a product which requires a 'one' in the second (from the right) bit, it spends the necessary money to hire an engineer, and its skill vector becomes:

    100000100010

Furthermore, an infinite supply of engineers with each skill is assumed, so that no skill is in short supply and relatively more valuable than any other. This assumption makes it impossible to protect an innovation by cornering the market on engineers with a given skill. The issues of human capital, labor markets, and creation of skills through pursuit of radical innovations can be explored when other assumptions and dynamics are assumed for engineers.

Firm Decision Making

Strategies. Firms are faced with several decisions at each turn. First, they must decide whether to hire a new engineer. If they decide to hire an engineer, they must decide which skill she should possess. In order to determine this, they must decide what kind of product they intend to market, radical or incremental, based on what they know about the products that are already on the market. Once the firm's skills are determined for the turn, the firm must decide which subset of its skills to use in creating a product. On any given turn, a firm may be able to choose from a portfolio of product vectors to market. Firms can then select the degree to which the product vector they do attempt to market
actually differs from extant products. The firm's last decision, the degree of difference from extant products, is quite similar to the one described in Pepall (1995). It is important to point out that even products which the firm intends to be radical have a non-zero chance of becoming incremental products (the product's distance to its nearest similar product is great, but consumers still decide that the product is not different enough to be a breakthrough), and that products which are intended to be incremental have a non-zero chance of becoming radical innovations. There is no direct cost associated with marketing a product. There are indirect costs, because in order to create products a firm must employ engineers, which costs money; but the act of attempting to market a product is costless, so firms always try to market a product.

Firms make these decisions based on their internal rule structures, which are often called 'strategies'. Simple examples of strategies available to firms for determining resource allocation include "always allocate all available resources to new product innovations" or "divide resources equally between pollution abatement, advertising, and new product innovations." Even the statement "allocate resources to new product innovation" is more complicated than it may appear at first glance. Firms can attempt to market two kinds of products, as discussed earlier; thus each firm must have a strategy which determines the type of innovation it is likely to pursue. Firms may, for instance, pursue simple incremental innovations exclusively, or pursue some mixture of both types.

Environment

The environment is the medium, designed by the modeler, through which all transactions take place. One such transaction is the transfer of products from firms to consumers in exchange for money. Another is the hiring of engineers. An engineer sits in a "soup" with all the other engineers, where she waits until she is hired by a firm seeking her particular skills. Firms then pay the appropriate salary, and the engineer's skills are added to the firm's skills. The environment also serves as the space through which firms gather information about the success and failure of earlier products, and the innovation strategies pursued by other firms. The environment may also be used to impose conditions on firm behavior, such as taxes and regulations. This aspect of the environment is not explored in this paper.

Frequently in agent-based simulations, the environment is a lattice⁷ on which agents have a specific location and a distance from other agents. In the environment used here, there are no spatial considerations (although the Hamming distances between products are relevant). Each firm interacts with the consumer and every other firm. On each turn in the simulation, a new firm is randomly selected to act first, so that order of play within the firms is not a consideration.
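As a sketch of how such product-selection rules might be encoded (our construction, reusing the ProductVector and hammingDistance helpers from the Products sketch; the maximize/minimize orientations are the ones detailed under Ideal Strategy Distribution below):

    #include <algorithm>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Distance from a candidate product to its nearest extant product.
    std::size_t nearestDistance(const ProductVector& candidate,
                                const std::vector<ProductVector>& extant) {
        std::size_t best = std::numeric_limits<std::size_t>::max();
        for (const auto& e : extant)
            best = std::min(best, hammingDistance(candidate, e));
        return best;
    }

    // Radicalists market the candidate farthest from its nearest extant
    // product; incrementalists market the candidate closest to one.
    ProductVector chooseProduct(const std::vector<ProductVector>& candidates,
                                const std::vector<ProductVector>& extant,
                                bool radicalist) {
        ProductVector pick = candidates.front();
        std::size_t pickDist = nearestDistance(pick, extant);
        for (const auto& c : candidates) {
            std::size_t d = nearestDistance(c, extant);
            if (radicalist ? d > pickDist : d < pickDist) {
                pick = c;
                pickDist = d;
            }
        }
        return pick;
    }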
Virtual Experiments

Using the model described above, three virtual experiments will be performed. These are (1) Ideal Strategy Distribution, (2) Effect of Spillovers, and (3) Adaptive Firms.
Ideal Strategy Distribution

In this experiment, the world is divided exclusively into two types of firms: radicalist and incrementalist. Radicalist firms pursue exclusively radical innovations; incrementalist firms pursue exclusively incremental innovations. Recall that radical innovations are much less likely to be marketable than incremental innovations, but they also have a much higher payoff for the innovator, at least before rival firms market similar products. If we were interested in studying first-mover advantages (i.e., more established products retain more of the market due to name recognition), we would simply reverse the depreciation schedule in Table 1.

When selecting which subset of their skillset to bring to market, radicalist firms choose the skillset which has the greatest Hamming distance from the nearest extant product. Incrementalist firms choose the skillset which has the minimum Hamming distance from its nearest rival. When selecting which skills are attractive among the available engineers they may hire, radicalist firms select the engineer whose skills will allow the firm to market the product that maximizes Hamming distance from the nearest competitor. Conversely, incrementalist firms choose skills which will allow the firm to market innovations which are as similar as possible to competitors' products.

The results will be presented in terms of the final size of the economy. The model has been designed to grow the economy whenever a radical product is successfully marketed. Successful marketing of incremental products is not rewarded with any consequent growth in the economy. Incrementalist firms do grow the economy through exploration of products at close Hamming distances which happen to be received as radical products by the market, but the chances of such an event are much smaller than for radicalist firms (see Table 2). Thus, when comparisons are made between different compositions of the economy in the number of incremental and radical firms, the results may be biased in favor of radical firms.

The first question we explored using the model was: How does the mixture of radicalist and incrementalist firms affect the total output and total technological growth?
Table 2. Key parameters and their assigned values.

    Parameter                                          Assigned value
    Cost of living/turn                                $12
    Cost of engineer                                   $50
    Consumer endowment                                 $250
    Total endowment (all firms)                        $15,300
    Size of new market                                 $250
    Turns                                              70
    Product vector length                              12
    P(incremental success) for Hamming distance = 1    0.172
    P(radical success) for Hamming distance = 1        0.028
    Number of consumers                                1
    Spillover                                          100%
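For reference, Table 2 can be collected into a single configuration record; the struct and field names below are our own shorthand, with values copied from the table:

    // Table 2 as a configuration record (field names are ours).
    struct SimulationParams {
        double costOfLivingPerTurn = 12.0;     // dollars per firm per turn
        double costOfEngineer      = 50.0;     // dollars per hire
        double consumerEndowment   = 250.0;    // dollars per market per turn
        double totalEndowment      = 15300.0;  // dollars, split equally across firms
        double sizeOfNewMarket     = 250.0;    // dollars
        int    turns               = 70;
        int    productVectorLength = 12;
        double pIncrementalAtDist1 = 0.172;    // P(incremental success), distance 1
        double pRadicalAtDist1     = 0.028;    // P(radical success), distance 1
        int    numberOfConsumers   = 1;
        double spillover           = 1.0;      // fraction of information visible
    };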
Model runs were performed for 4, 6, and 8 firms, and the percentage of firms of a given strategy was varied from 100% to 75% to 50% to 25% to 0% (in the 6-firm case, from 100% to 66% to 50% to 33% to 0%). The total endowment, $15,300, divided equally among the firms, was held constant in each model run. See Table 2 for all relevant parameter settings. Notice that even in the 8-firm case, each firm starts with enough money that it is extremely unlikely to go bankrupt. The purpose of this experiment is not to investigate changes in the populations of firms, but rather to investigate the effect of industry composition on total economic growth.

The number of turns is set to 70 because if it were larger, the model would take too long to run (length of time to completion increases more than linearly with each additional turn), and if it were smaller, there would not be enough time for firms to differentiate themselves. It takes time to see the effect of the firm's strategy on the firm's skillset. Similarly, the product vector must be long enough for the exponentiated Hamming distances to grow large enough to have a noticeable effect on the probability of success. The probabilities of successful incremental innovation and successful radical innovation and the consumer endowment were chosen such that a population of two incrementalist firms earned the same amount of money as a population of two radicalist firms. These values were not validated by comparison to real-world data. However, the results obtained do not depend on the exact values of these parameters. General inferences can still be drawn concerning the effect of strategy distributions on firm performance by keeping these parameters constant while varying firm strategies.

Sequence of Events. On each turn, the order in which firms act is set randomly. Figure 1 shows the sequence of events for each firm on each turn. Firms must first pay their fixed cost of living. If a firm's savings are less than its cost of living, that firm goes bankrupt and is, for all intents and purposes, dead. Cost of living is the only expenditure that can bankrupt a firm. All other expenditures are voluntary, and a firm will not go bankrupt voluntarily. Also, for these experiments, each firm starts out with a large enough endowment that issues of risk aversion are not important.

If the firm survives the cost of living, it then begins that turn's R&D. First, it checks whether its current set of skills allows it to market any untried products. It does this by checking every subset of its skills against all the successful and failed products which have come before. If yes, it does not hire any new engineers. If no, it decides to hire an engineer. If the firm is radicalist, it will hire the engineer who will allow it to market the product which is most distant from its nearest competitor (a competitor is a successfully marketed product). If the firm is incrementalist, it will hire the engineer who will allow it to market the product which is most similar to its nearest competitor. Ties are broken by a random draw. Firms only consider the skills of products that have been tried, either successfully or unsuccessfully. They take no account of the skills that other firms possess when making decisions. The firm then markets the product which is most different or most similar to its nearest competitor, depending upon its orientation.

Figure 1. Sequence of decisions for each firm on each turn.
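The flow in Figure 1 can be compressed into a short compilable sketch; all types and members here are our own stand-ins, not the authors' code:

    // Minimal stand-ins so the per-turn sequence can be written out.
    struct Firm {
        double savings;
        bool   alive = true;
        bool   radicalist;
    };

    // One firm's turn, following Figure 1: pay the cost of living (the only
    // involuntary expenditure), do R&D, then attempt to market one product.
    void takeTurn(Firm& firm, double costOfLiving, bool canMarketUntriedProduct) {
        if (firm.savings < costOfLiving) {  // bankruptcy check
            firm.alive = false;
            return;
        }
        firm.savings -= costOfLiving;
        if (!canMarketUntriedProduct) {
            // Hire one engineer: a radicalist picks the skill maximizing, an
            // incrementalist the skill minimizing, Hamming distance to the
            // nearest successfully marketed product (ties broken at random).
            firm.savings -= 50.0;           // cost of engineer (Table 2)
        }
        // Marketing is costless, so the firm always attempts one product here.
    }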
Effect of Spillovers

In the previous experiments, all firms were allowed complete access to information on the products marketed and the failed attempts of rival firms. In the real world, firms take steps to
keep this information secret, thereby reducing the efficiency of rival firms' R&D efforts. In this experiment, we explore the effect of limiting firms' access to this information by adding a random spillover parameter, s, which determines the likelihood with which a firm can successfully access the genetic code of both failed products and successful products. s is varied from 0 to 100 percent while keeping all the parameters from the previous experiment constant. The number of firms is held constant as well, at six.

Adaptive Firms

In the previous two experiments, no learning occurred either within firms or in the population as a whole. In this experiment, firms were allowed to mimic more successful firms. On any given turn, a firm could scan all the other firms, determine which was richest at that point, and switch its innovation strategy to match that of the richest firm. The question to be addressed was: what effect will these dynamic strategy changes have on firm behavior?

Adaptation here is distinct from that described in Carley and Svoboda (1996) in two ways. First, their model investigates adaptation at both the firm and employee level. Here, employees are omitted from the model save for their role as skills in the firm skillset. As such, there is no role for employee adaptation. In addition, at the firm level, Carley and Svoboda use a simulated annealing⁸ model of adaptation, whereas here, firms simply mimic richer firms.

Results and Discussion

In presenting our results, we generally rely on the size of the economy after 70 turns as the measure of relative success in product innovation and marketing. Our simulations result in final economies which are an order of magnitude larger than the initial condition. This comes about from the successful marketing of radical products through time. In general, for the economy to grow from its initial size of $15,300 to $150,000, roughly 15 radical products must be marketed successfully. Along the way there would have been anywhere from 280 to 560 attempted product innovations, depending on the number of firms. These rough estimates are provided to help the reader interpret the results presented below.

Ideal Strategy Distribution

In each of the three cases, mixed populations of radicalist and incrementalist firms outperformed homogeneous populations of either type. For each simulation, runs were performed until p values below 0.05 were achieved. In each of the three cases, firms sampled without bias across the skill vector. The following graphs show the total dollars accumulated by all the firms, holding the starting endowment constant. In figure 2 we see that a population mixture of 75 percent incrementalist and 25 percent radicalist firms outperforms all other mixtures. In estimating the significance of these findings, a one-way classification fixed-effects Monte Carlo simulation with five levels was used. Eighty runs per level were needed to achieve p < 0.05.
Figure 2. Total earnings for four firms with 80 runs used to estimate the range of outcomes for each level of strategy shares.
With six firms in figure 3, again, an interior maximum is found, this time at 66 percent incrementalist firms. A one-way classification fixed effects Monte Carlo simulation with five levels was used. Forty-eight runs per level were needed to achieve p < 0.05. With eight firms, in figure 4, an interior maximum occurs at 50 percent incrementalist firms. Forty-three runs per level were needed to achieve p < 0.05. For each of the three experiments (4, 6 and 8 firms), a mixture of radicalist and incrementalist firms out-earned either type alone. Why should this be the case? If we regard the set of all permutations of the product vector as the problem space, then firms can be said to be searching this space for marketable vectors. Incrementalist firms are, in essence, hill climbing. Once they find a successful string, they stay there, or as near to the successful vector as possible. Radicalist firms, on the other hand, search as far away as possible from the successful strings. When the two firm types search the space together, an effective mixture of hill climbing and near random jumping across the problem space is achieved. This is analogous to simulated annealing algorithms. Simulated annealing has been shown to outperform hill-climbing and random search for a wide variety of problems (Rutenbar, 1989). A mixture of between 50 and 75 percent incrementalist firms leads to the greatest economic growth.
Figure 3. Total earnings for six firms, with 48 runs used to estimate the range of outcomes for each level of strategy shares.
This result, moreover, is consistent with stylized facts from the empirical literature. While no study has been able to classify firms by the type of R&D they perform and the types of innovations they market, firm strategy has been shown to correlate with firm size, with large firms conducting mostly incremental R&D and small firms conducting more radical R&D (Cohen, 1995). Combining this with the following two results gives a conclusion very similar to the one found here: (a) Cohen and Klepper (1992) write, "Note that this greater diversity does not emerge from any superior creativity on the part of smaller firms. It is simply the result of having a greater number of firms in the industry choosing which approaches to innovation to pursue. . . . there is a tradeoff between the advantages associated with small firms and those associated with large firms. In order to have more approaches to innovation pursued, it is necessary to sacrifice some intensity of effort for each approach, and vice versa"; and (b) the consensus that, for achieving rapid technological progress, the ideal industry would contain a mixture of large and small firms (Scherer and Ross, 1990; Jewkes et al., 1969).

One of the model's strengths is also its most serious drawback. The firms in this model are not optimizing. An optimizing firm would choose whether to pursue a radical or an incremental innovation based on the expected profit stream from each multiplied by the respective
Figure 4. Total earnings for eight firms, with 43 runs used to estimate the range of outcomes for each level of strategy shares.
probability of success. In order to do this, a firm would need to scan all existing markets and determine the largest market share that a successful incremental innovation would capture. Here, firms' innovation strategies are determined exogenously. Consequently, comparisons between the model's results and the steady-state equilibrium results of Segerstrom, Romer, and Grossman and Helpman are impossible.

Spillovers

Again using a Monte Carlo simulation with the same experimental design as before, runs were performed holding the number of firms constant while varying firms' access to product and failure information (spillovers). Runs were performed at 100%, 50%, and 0% spillover. The results are shown in figure 5. The results in the 100% spillover case are identical to the six-firm results in the previous experiment. Forty-eight runs were used for all cases. Figure 5 shows that lowering the information spillover has two effects: it reduces the total money earned over the course of the simulation run, and it changes the slope of the curve. There is no clear internal maximum in either the 50 percent spillover or the no-spillover case.
Figure 5. Total earnings vs. strategy share at three levels of spillover for 6 firms.
In figure 5 we can observe three effects. First, the final size of the economy is greater the higher the level of spillover. Second, the spillover advantage is not uniform across different levels of firm strategy shares: the higher the ratio of incrementalist firms, the greater the influence of spillover on the size of the economy. Third, at lower spillovers, no internal maxima for strategy shares can be discerned. These results are discussed in more detail below.

Spillover and final size of the economy. Economic growth is proportional to the number of successful products marketed. Each firm has a chance to market one product per turn. In the 100% spillover case, no firm ever duplicates another firm's historic failures. Each firm can check the list of all failures and know not to duplicate past failed attempts. As spillovers are reduced from 100%, there is a growing chance that a firm will spend a turn marketing a product which duplicates an historic failure and therefore fails. While this clearly indicates the importance of spillovers in abetting technological growth, very little in the way of normative implications can be inferred from this, because firms here are not profit maximizing. Firms here are "hard-wired" to innovate as much as possible. If firms were profit maximizing, downstream improvements on their original innovations would make early innovating less attractive. In the model currently,
spillovers help downstream innovators, while upstream innovators ignore this. The clear effect of reducing spillovers in this model, then, is to slow growth.

Spillovers are more valuable to incrementalist firms. The chances of duplicating past failures are higher for incrementalist firms. They search for new products at close Hamming distances and increasingly saturate the space around successfully marketed products. Hence, with 100% incrementalist firms, the value of the spillover is much greater than when radicalist firms are present. The chance that a radicalist firm would seek to market a product previously marketed unsuccessfully by another firm is, by construction, much smaller. This phenomenon is visible in the graph as the difference between the 100% spillover case and the 0% spillover case when all of the firms are incrementalist and when none of the firms are incrementalist. When all firms are incrementalist, the difference is about $1.3 × 10⁵; when none of the firms are incrementalist, the difference is about half that, at roughly $0.6 × 10⁵.

Spillovers eliminate internal maxima. A third effect is that no clear internal maxima appear. In the complete spillover condition, mixtures of incrementalist and radicalist firms repeatedly prove to be optimal. When spillovers are reduced, the differential effect of spillovers on the different firm strategies serves to push up the performance of radicalist firms relative to incrementalist firms, obscuring any clear maxima.

An Adaptive Firm

Figure 6 shows the mixture of radicalist and incrementalist firms over time for a single model run. The model was run for 130 turns. No attractor state (either all incrementalist or all radicalist) was reached; the percentage of radical firms oscillates over time. In nearly all cases, adaptive firm behavior leads to all firms adopting the same innovation strategy. As the model progresses, firms scan the other firms, and if another firm is richer, the scanning firm changes its innovation strategy; the firm which is richest at time t0 is the most likely to be the richest at any later time t > t0. Every so often, however, situations arise where changes in firm strategies cause changes in market structures. For example, consider a case where the richest firm is incrementalist. It may happen that a radicalist firm switches to incrementalist, creates a product which competes with the incrementalist's product, and allows another radicalist firm to become the new richest firm. This is rare, but an example of this behavior is shown in figure 6. Each time the balance changes from 50% radicalist and 50% incrementalist, the change allows a new firm to usurp the richest firm's position. Then, the next time a firm does a scan, it finds the new richest firm and changes to its strategy, which returns the population to 50-50.

Conclusions and Future Work

There are three main findings worth reiterating. The first is that, in this model formulation, a synergy exists between firms of different strategies, and that these kinds of synergistic relationships have not been adequately explored in either the orthodox economic literature or the empirical technical change literature. In each case, the optima lie between 50% and 75% incrementalist firms.
Figure 6. Percentage of radical firms over time (4 firms).
Some of the more xenophobic economists and policy makers have described the relation between the United States and Japan as similar to the relation here between radicalist and incrementalist firms, claiming that U.S. firms are responsible for most of the breakthrough innovations and that Japanese firms then take those breakthrough innovations and improve upon them, gradually forcing U.S. firms out of the market. This has been the rationale often used by those endorsing protectionist economic policies. An interesting follow-on question will be to see if there are conditions under which firms of one type can be made better off by the addition of firms of another type; e.g., can four incrementalist firms actually earn more money when a fifth, radicalist, firm is allowed to compete against them? If the answer is "yes", it would be an endorsement of those who support free trade.

A second policy implication is that government, when possible, should foster diversity. It is not yet clear how this should be done. In future work, we intend to investigate the effects of various policy levers (e.g., research credits and subsidies) on the heterogeneity of the population of firms.

One of the major limitations of the model is that firms are non-optimizing. This precludes meaningful comparison to orthodox models of Schumpeterian competition. It also precludes any meaningful investigation of innovation subsidies, since firms currently innovate as much as possible. We intend in the near future to add optimizing firms which can attempt
to market any kind of product based on profit-maximizing criteria. One question of interest is: will homogeneous populations of optimizing firms outperform mixtures of radicalist and incrementalist firms?

The second interesting finding is that this internal maximum disappears with less than complete spillover. Since, in the real world, firms in many industries use secrecy to protect their innovations, we would expect to find less evidence of synergy in industries where secrecy is an important protection mechanism.

The third interesting finding is that radicalist firms outperform incrementalist firms in runs with less than complete spillover (see figure 5), whereas the opposite is true when spillovers are 100 percent. This would clearly suggest that U.S. firms should foster secrecy in their competition with Japanese firms, except that secrecy reduces the total economic growth of the entire population. Whether the radicalist firms without spillovers can be made better off than radicalist firms with spillovers remains to be tested.

Notice that all three of these results derive from the fact that the model is agent-based. Agent-based modeling is a powerful new tool to add to the policy maker's toolkit, which already includes cost-benefit analysis, Von Neumann-Morgenstern expected value theory, and utility maximization. In some ways, however, the old tools are ill equipped to handle the types of problems that policy makers must routinely address. A particularly thorny example of such a policy problem is the threat of global climate change. The stakes of this problem are extremely high, as are the uncertainties. The impacts of climate change will be felt by people all over the globe. These impacts will not affect all people equally. The magnitude, and even the sign, of the effect will vary from place to place, and people's ability to adapt will vary widely from region to region, country to country, and generation to generation. The full effects of climate change may not be felt for hundreds of years, yet it may be that the first steps towards averting these problems must be taken now. Finally, the problem, and more so our understanding of the problem, may change over time. Technologies may evolve that will make us laugh that we ever worried about problems so trivial.

Policy modelers have not been able to handle climate change and other such complicated issues in their full complexity. They have needed to simplify. This is well and good, as a solution which does not simplify is not much use. Friedman (1953, p. 39) writes, "The reason is simple. A hypothesis is important if it explains much by little, that is if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid prediction on the basis of them alone." There are times, however, when these simplifications are made not as an attempt to separate the explanatory from the superfluous, but because the modeler's toolkit is ill equipped to handle the problem in its unsimplified state. One such complexity is heterogeneity of stakeholders and decision makers. Kandlikar and Morgan (1995) write:

"Unfortunately, many climate/impact assessment models ignore the role of different individuals and collective values and report their results using a single economic variable, most frequently, the sum of impacts and abatement costs reported as a fraction of the
global GDP (Nordhaus, 1992; Peck and Teisberg, 1992; Dowlatabadi and Morgan, 1993). In doing so, they assume that decisions will be made by a global "commoner" with robust monetary valuations for aggregate market and ecological impacts, as well as net costs of abatement policies. In fact, however, these decisions will be made in a globally distributed manner by scores of national governments, millions of private and public sector managers, and billions of citizens. None of these actors can be expected to make decisions based on globally averaged values; instead, each will decide on the basis of the specific costs they incur and the choices they face."

Here, such simplifications smooth over important, policy-relevant results. Herein lies the role of agent-based models: the ease of handling agent heterogeneity is one of the strengths of the agent-based modeling approach. In summary, the strengths of the agent-based approach relative to the orthodox economic approach are: (1) the ease of incorporating heterogeneous populations of agents, (2) the ease of incorporating boundedly rational agents facing uncertainties instead of the omniscient profit maximizers one normally finds in economic models, (3) the control over outside variables, (4) the ease of investigating out-of-equilibrium and unstable-equilibrium phenomena, and (5) the accessibility of the results.

Acknowledgments

The authors wish to thank Kathleen Carley and Rob Axtell for numerous helpful comments and guidance. All remaining errors are due to the authors' limitations. This research was made possible through support from the following grants and cooperative agreements (NSF, SBR-9711498, SBR-9521914; DOE, DEFG02-95ER62105).

Notes

1. The model is written in C++ for MetroWerks CodeWarrior. The code is available from Daniel Teitelbaum ([email protected]).
2. The number of bits in which two binary n-vectors differ is their Hamming distance (Schneeweiss, p. 24).
3. Powers between one and two give the desired properties of increasing returns and decreasing marginal returns.
4. Note that this does not represent the binary number 6, but simply the skills used in building the product.
5. Note that even in the case where only one product is created, the simulated economy will still realize a small amount of growth. This happens because this one product is profitable.
6. The term 'money' here is used in the conventional sense. Because of the theoretical nature of this paper, any unit of value or wealth could be substituted.
7. An excellent example of this is Epstein and Axtell's (1996) Sugarscape model.
8. See Rutenbar (1989) for a discussion of simulated annealing.
References

Aghion, P. and P. Howitt (1992), "A Model of Growth Through Creative Destruction," Econometrica, 60(2), 323–351.
Baily, M. and A. Chakrabarti (1988), Innovation and the Productivity Crisis. The Brookings Institution Press, Washington, D.C.
Carley, K.M. and D.M. Svoboda (1996), "Modeling Organization Adaptation as a Simulated Annealing Process," Sociological Methods & Research, 25(1), 138–168.
Cohen, W. (1995), "Empirical Studies of Innovative Activity and Performance," in Stoneman (Ed.) Handbook of the Economics of Innovation and of Technological Change. Basil Blackwell Ltd, Oxford, U.K.
Cohen, W. and S. Klepper (1992), "The Tradeoff Between Firm Size and Diversity for Technological Progress," Journal of Small Business Economics, 4(1), 1–14.
Epstein, J.M. and R. Axtell (1996), Growing Artificial Societies: Social Science from the Bottom Up. Brookings Institution Press, Washington, D.C.; The MIT Press, Cambridge, Mass. and London, England.
Friedman, M. (1953), Essays in Positive Economics. University of Chicago Press, Chicago.
Griliches, Z. (1986), "Productivity, R&D, and Basic Research at the Firm Level in the 1970's," American Economic Review, 76(1), 92–116.
Grossman, G.M. and E. Helpman (1991a), "Quality Ladders in the Theory of Growth," Review of Economic Studies, 58, 43–61.
Grossman, G.M. and E. Helpman (1991b), "Quality Ladders and Product Cycles," Quarterly Journal of Economics, 106, 557–586.
Jewkes, J., D. Sawers and R. Stillerman (1969), The Sources of Invention. W.W. Norton, New York.
Kandlikar, M. and G. Morgan (1995), "Addressing the Human Dimensions of Global Change: A Multi-Actor, Multi-Metric Approach," Human Dimensions of Global Change Quarterly, 1(3), 183–208.
Lancaster, K.J. (1966), "A New Approach to Consumer Theory," Journal of Political Economy, 74, 132–157.
Nelson, R. and S. Winter (1982), An Evolutionary Theory of Economic Change. The Belknap Press of Harvard University Press, Cambridge, Mass.
Pepall, L. (1995), "Imitative Competition and Product Innovation in a Duopoly Model," Economica, 64, 265–269.
Romer, P.M. (1990), "Endogenous Technological Change," Journal of Political Economy, 98(5), S71–S102.
Rutenbar, R. (1989), "Simulated Annealing Algorithms: An Overview," IEEE Circuits and Devices Magazine, 5, 12–26.
Schelling, T.C. (1978), Micromotives and Macrobehavior. W.W. Norton & Company, New York.
Scherer, F. and D. Ross (1990), Industrial Market Structure and Economic Performance. Houghton Mifflin Company, Boston.
Schmookler, J. (1962), "Economic Sources of Inventive Activity," Journal of Economic History, 22(1), 1–20.
Schneeweiss, W.G. (1989), Boolean Functions with Engineering Applications and Computer Programs. Springer-Verlag, Berlin, Germany.
Schumpeter, J. (1942), Capitalism, Socialism, and Democracy. Harper and Row, New York.
Segerstrom, P.S. (1991), "Innovation, Imitation and Economic Growth," Journal of Political Economy, 99, 807–827.
Segerstrom, P.S., T.C.A. Anant and E. Dinopoulos (1990), "A Schumpeterian Model of the Product Life Cycle," American Economic Review, 80, 1077–1091.
Solow, R. (1957), "Technical Change and the Aggregate Production Function," Review of Economics and Statistics, 39, 312–320.
Stein, J.C. (1997), "Waves of Creative Destruction: Firm-Specific Learning-by-Doing and the Dynamics of Innovation," Review of Economic Studies, 64, 265–288.
Winter, S. (1994), "Economic 'Natural Selection' and the Theory of the Firm," in P.E. Earl (Ed.) Behavioural Economics, Vol. 1. Elgar, Aldershot, U.K.; Gower, Brookfield, Vt., pp. 108–148. Schools of Thought in Economics Series, No. 6.

Daniel Teitelbaum holds a B.S. in electrical engineering and a B.A. in painting from the University of Illinois. He holds a Ph.D. in Engineering and Public Policy from Carnegie Mellon University.
His research has focused on the representation of firm-level decision making, technical change, and pollution control using artificial adaptive agent techniques. He continues to use these techniques in application to innovation diffusion of consumer products.

Hadi Dowlatabadi holds B.Sc. and Ph.D. degrees in physics from Edinburgh and Cambridge Universities, respectively, in the UK. Since 1996, he has directed the Center for Integrated Study of the Human Dimensions of Global Change, an NSF Center of Excellence at Carnegie Mellon University. The Center involves over 40 scientists from 18 institutions collaborating in interdisciplinary research on the interactions between managed and natural systems. Dowlatabadi's research is geared towards developing more realistic representations of problems faced by humanity, so that individual and institutional responses to a changing and uncertain environment can be improved.