Artificial Intelligence and Metaphor Making: Some Philosophic Considerations

Harold D. Carrier
This article argues that the metaphors presently employed in describing artificial intelligence represent the use of personification and anthropomorphism. They attempt to develop an isomorphic relationship between the human mind and a computer's logic. It is suggested that an analogic metaphor is more appropriate in describing this relationship and is more epistemologically correct.
Artificial Intelligence (AI) and expert systems have not lived up to their initial expectations, either as tools for modern business or in some of the more esoteric experiments in visual and voice reproduction (McDermott, 1981; Sutherland, 1986). Part of the problem is the initial assumption that intelligence would be easily modeled and that there would be some immediate benefit from AI research. The complexity of the problem, however, is staggering. As a result, expert systems, for example, achieve only about 70 percent agreement with the decisions of the experts whose heuristics they have attempted to capture (Dreyfus & Dreyfus, 1985). The intent of this article is to explore some of the philosophical foundations of AI and expert systems and to create new metaphors of understanding in the hope that a broader, less mechanistic view can bring a fresh perspective. The first assumption is that artificial or machine intelligence is not intelligence at all, but rather a tool to be used to extend the productivity of human intelligence. This regrettable misnomer has led to misconceptions not only among the naive, but also to a profound anthropomorphization on the part of some developers of AI themselves (Pylyshyn, 1981). The problem perhaps extends back to the initial work in AI by Turing (1950), whose test, or criterion of machine intelligence, may itself be a lesson in personification. The method chosen for this inquiry is the philosophy of knowledge; AI and expert systems rest on inherent logical and philosophical foundations that must be counted among the assumptions of these decision technologies. This analysis hopes to develop some insight into the philosophic a priori of AI and expert systems, in the expectation of building a foundation that has so far been found lacking.

Dr. Carrier is an assistant professor of organizational behavior at the School of Management, Rensselaer Polytechnic Institute, Troy, New York 12180. His current research interests include the behavioral and philosophic aspects of artificial intelligence and expert computer systems in work situations, and the behavioral perceptions of decision processes in the workplace.

Knowledge in Society: The International Journal of Knowledge Transfer, Spring 1990, Vol. 3, No. 1, pp. 46-61.

The second assumption of this article follows from the first: AI and expert systems are decision aids. Decision aid technology is involved in the process of knowing reality. Its purpose is to gather and interpret data and to present some facet of "reality" to the users of information. Just as telescopes are designed to extend the sensory capacity of humans, decision aids are designed to extend human cognitive capacity. This statement has two meanings. First, there is a cognitive theory, an epistemology, and a metaphysics implicit in all decision aid systems (Hirschheim, 1983); a theory of what comprises knowing, and hence what reality is, is implicit in all decision aid technologies. Second, if a decision aid is to accurately extend human cognitive capacities, its assumptions about how we perceive and what reality is must correspond to the actual performance of human knowing itself. One of the tasks of a research agenda would be to render explicit these cognitive assumptions and their correspondence to current decision aid technology. This focus of research would bring a clarity to the literature that does not exist at present.
Current decision technology is divided into three types: the prescriptive, inferential, and rule-based views (operations research, statistics, and expert systems), each modeling a different aspect of the way humans come to understand and know reality. This division is rather artificial, since the three types of technologies are not independent and mutually exclusive; they are divided here as a means of orientation. Additionally, these three technologies do not include all the technologies used in decision support systems, but represent three of the major areas of inquiry. The argument made here is for an integrated perspective on the cognitive assumptions put forth in these models, in the belief that such a perspective is necessary to specify the value and limits of each model. In the absence of the critical control offered by an integrative perspective, the limitations and partial nature of the separate models tend to be overlooked and the models regarded as normative in and of themselves. Part of the problem is the proliferation and availability of software, especially in the hands of naive modelers who have little or no appreciation of the underlying assumptions, both philosophic and mathematical, of the decision technologies. This problem has been expressed by Winner (1975), who quotes an employee as saying, "I don't comprehend it, but I'm sure the system does" (Winner, 1975, p. 65). This approach leads to problems of validity, especially Type III errors (using the right empirical method for the wrong problem; Mitroff & Betz, 1972).

The above example illustrates the necessity of understanding the underpinnings of science. This article assumes a strong Kuhnian stance (Kuhn, 1972), believing that science must have strong philosophical and methodological components. Traditionally, the AI field has been broken into two camps: proponents of a "strong AI," led by Newell and Simon (1972), and those who prefer a "weaker" AI (Dreyfus, 1981; McDermott, 1981; Searle, 1981; Dennett, 1985). Here the existence of a third camp is assumed, one dominated by the view, "I don't care what the foundations are. If it works, I will use it." Carrier and Wallace (1990) have argued that this belief could lead to the totally inappropriate use of tools, resulting in problems with validity. Many of these problems can be avoided by an underlying comprehension of the assumptions inherent in decision making. Mason and Mitroff (1973), working from Churchman (1971) and Jung (1922), presented the philosophical underpinnings of inquiry systems in light of the nature of individual differences and the structuring of problems. Mitroff and Kilmann (1978) have looked pragmatically at the bias inherent in science. They maintained that scientists see the world through a particular psychological perspective that often creates bias, and they delineated four types of scientific researchers: the analytic scientist, the conceptual theorist, the conceptual humanist, and the particular humanist. Their analysis investigated the internal and external qualities of these classifications, demonstrating the differences in the logic employed, the modes of inquiry, the character of science and its goals, and the values of the scientists themselves.
Crucial to the development of AI is the fact that the psychological attributes of domain experts in expert systems, and of other persons associated with the development of AI, have not been fully explored. This perspective is critical, since most individuals associated with the development of AI and expert systems have a strong engineering background that influences not only the development of AI and expert systems, but also their outlook on the goals and foundations of the science of AI itself. This is not to imply that most professional modelers do not understand, at least implicitly, the philosophic implications of their models; but as early as Simon (1954), decision scientists have been urged to be cautious in applying decision aids. Morris (1963, 1967) cautioned that the focus in using decision aids should have its foundations in logic, since the difference between the context of discovery and the context of justification is essentially the differentiation between induction and deduction. Carrier and Wallace (1990) expanded this notion and suggested that the differences among decision technologies can be broken down by their logical bases. Inferential statistics is based primarily on induction, and its more normative counterpart, operations research, is rooted in deductive thought. They suggest that expert systems, which employ both induction and deduction, have an implicit, tacit logic. The proposed philosophic underpinning is important, especially for the naive modeler, in establishing a coherent framework, or in other words a pre-decisional context, to serve as a basis from which the right tools can be chosen and a proper problem formulated. This approach can be thought of as a metadecision process wherein procedures and decision rules are actively formulated before the decision process is undertaken (Morris, 1972).
Using Metaphor as a Method of Inquiry

We know our world through our own unique perception and perspective. Anything we know is recognized as something and is constructed from a point of view. Metaphor, in the broadest sense, is a tool that sees something from the viewpoint of something else. It involves the transfer (or carrying over) of a specific term from one system of meaning or dimension to another (Brown, 1977). Metaphor is the key to model building; a metaphor is a model whose implications have been spelled out. It is an important tool in understanding AI: the complexity of reproducing or imitating the logic of thought and decision processes makes metaphor necessary to improve understanding. The problem is that many empiricists, and many who have a strong positivist bent, are uncomfortable with the notion of metaphor (Brown, 1977). This discomfort, however, has not stopped the use of metaphor in the physical sciences. Both Einstein and Niels Bohr used metaphor to describe the inner workings of the atom. Bohr's solar-system model of the atom was a classical attempt to describe behaviors in an unseen world. This example is an iconic or isomorphic metaphor, attempting to establish a one-to-one correspondence between two realities. Many of the metaphoric examples in AI are of this variety: computers "think," "analyze," and do "cognitive"-like tasks. The argument could be made that an analogic metaphor would be more appropriate in this context. Analogic models take meaning from one context and employ it in another; iconic models, in contrast, create new objects through apparently different frames of reference. The difference, therefore, is the creation of contrast or comparison versus the creation of objects or entities (Brown, 1977). In other words, the iconic metaphor is holistic and systemic, while the analogic represents the relations among systems or elements.
Another problem with the use of metaphor is that it is not as precise as scientific language; rather, it is a heuristic whose central task is that of reference, and it can aid in the attempt to acquire knowledge (Boyd, 1979). Because of this imprecision it could be argued that some inappropriate metaphors have been created in AI and expert systems. The relationship of machine and human intelligence is not a continuum. We may speak of a continuum within the animal kingdom (we can compare the intelligence of higher primates and humans), but the relationship between humans and their machines is discontinuous. A more appropriate metaphor of continuity, perhaps, is the succession of higher-level languages down to LISP processing machines. It can be reasoned, therefore, that analogic rather than iconic metaphors should be employed.
Epistemology and Artificial Intelligence

What we perceive to be reality often cannot be expressed in words. We have knowledge about love or courage, yet we might be hard pressed to give them an adequate definition. Nevertheless, we believe that we do have some notion of what truth and these other concepts represent. Fuzzy areas of knowledge often do need to be made more explicable. This is especially true when discussing the confusing issue of the knowledge of how and what we know. Philosophic inquiry begins from such moments of reflection; epistemology, the philosophy of how and what we know, is an attempt to make explicit and demonstrable what we know, thereby justifying our beliefs. Descartes (1979) started his reflection from doubt, confronted uncertainty, and attempted to make more explicit his relationship to what he knew. The result was his cogito: he knew that he existed because he thought. The origins of this article began with reflection on human beings as makers of tools. Humans have the particular ability to make and use tools; homo faber is distinguished from those species that preceded it on the evolutionary tree. Often it appears that our society has lost the notion of computers as tools and has given the computer its own personal identity. Newspapers and popular magazines have reveled in stories of computer foul-ups, with responsibility for errors residing deep in "the mind of the machine," as if the machine were a Chaplinesque character who was the brunt of a collective Freudian projection. Of course computer problems are human problems, with the responsibility residing in the logic of homo faber. Part of the problem with artificial intelligence specifically, and computer technology generally, is the tendency to anthropomorphize the function and process of computing machinery into the behavior and cognitive functions of human beings. In a sense, a metaphor is created to explain how the machine works.
However, it is in fact the mirror image of the metaphor: that is, we created the machine and then gave it our innate qualities, rather than it actually possessing those qualities itself. What is needed is a new metaphor that will encompass the domain of AI, putting it in a perspective free of anthropomorphic projections, or where these projections are conscious, metaphorical attempts to understand "machine intelligence." This metaphor is a double-edged sword, for not only do we couch the machine in terms of human intelligence, but we also reference the mind in terms of a machine.
A very coherent picture of this type of metaphor is painted by Lakoff and Johnson (1980, p. 28), who describe this analogy in terms of an "ontological metaphor": "The machine metaphor gives us a conception of the mind as having an on-off state, a level of efficiency, a productive capacity, an internal mechanism, a source of energy, and an operating condition." Ontological metaphors attempt to establish boundaries of description or categorization; it is difficult to view a nonphysical entity such as the mind as a thing in itself, necessitating the creation of metaphors of description. Logic would suggest that the metaphor of the computer as a mind flows from a similar rationale. The computer's sense of logic, although rational and constructed by human beings, is sufficiently abstract to demand a metaphor of comprehensibility. Additionally, AI as a field of inquiry has problems with the development of standards; the ascertainment of truth or falsity is less of a question than whether or not a certain algorithm meets its goals. "If there is no demonstrative certainty for the conclusions of science, their 'truth,' or at any rate their acceptability as scientific results, must be established by convention: through a consensus of experts in the field, and the fulfillment of certain methodological and professional canons, the rules of the scientific game" (Majone, 1980, p. 152). Whether AI as a discipline has matured sufficiently to establish conventions through consensus is a matter of conjecture. Perhaps AI will influence and change our philosophy of science in the next century, ringing the death knell of the positivism and reductionism so prevalent in twentieth-century science (Hirschheim, 1985).

Towards New Metaphors for AI
When we consider the qualities of a computer algorithm we usually refer to it as a model or a representation "of" some reality; in fact, however, a computer program is a representation "to" or "for" someone (Dennett, 1981). Moreover, beyond its representational power as a logical way to arrive at a solution to a problem, logic must be seen as woven into the fabric of our intentionality. It is not enough to have a model; there must be a knower who puts meaning to it. The "knowledge of machines" is in fact the knowledge we ourselves have, reflected back from the machine. In reality a model is an abstraction, an icon of reality, which often has not been distinguished clearly as a representation and has become a thing in itself. It remains, however, within the basic relationship of intentionality, a conscious attempt for a "knower to know" the world. A model does not necessarily mirror the world; regression is a good analogy. Our predictions are subject to error variance, and often residual analysis is more telling than the relationship between the component sums of squares or the significance of particular betas.

Dennett (1981) suggested that although it is tempting to ascribe an isomorphic relationship between the decomposition of a computer's function and the decomposition of the cognitive functions of the human brain, it is a metaphor that definitely limps, for even the most atomistic of cognitive psychologists admit that human thought cannot be reduced to the binary logic of a digital computer. At best AI is a metaphoric process, never really sharing the actual cognitive process with us. Von Neumann (1958), in one of his last works, took up the task of comparing the brain with the structures of the computer. His thesis did not attempt to equate the intricate physio-neurochemical actions and reactions within the brain with the purely electromechanical processes of the computer, but rather stated clearly that the language of the computer (i.e., mathematical logic) was not the language of the brain. Sutherland (1986), in his critique of AI performance, suggested that it is untenable to assume morphological correlates between the neuro-complexity of the human brain and even the most sophisticated computer. This problem is inherently epistemological, for computers with their relatively linear logic cannot approximate the mind, which is a simultaneous multidimensional gestalt (Sutherland, 1986) and whose logic is not only nonlinear, but perhaps relativistic and non-Euclidean as well (Munevar, 1981). AI, having a checkered history, has never really lived up to its advance billing, always falling somewhat short of our anticipations and expectations (McDermott, 1981). In fact some have suggested that it nearly borders on quackery (Dreyfus, 1972; Lighthill, 1973). This perspective should not be surprising, for the history of the effectiveness of human systems has always produced both skeptics and unanswered questions. In many ways the faults and flaws of advanced AI systems are the faults and flaws of human intelligence. The essence of the problem lies in the difficulty of modeling human cognition when we have only a vague and tangled view of what it really constitutes.
Only recently have we placed an emphasis on brain research in an attempt to discover the process of cognition. The development of AI leads to some inherent philosophical traps, the central issues of which are epistemological, or perhaps metaphysical. One such issue is the difference between human cognition and decision making and the attempts to replicate them with machine logic. The principal problem, therefore, is the relationship between "the knower and the known." This relationship, moreover, is intensified by the complexity of the variables involved: the human brain with billions of interconnecting neurological components, and machines that can perform billions of arithmetic computations in a second. Perhaps the most convincing argument in this vein comes from Gödel's incompleteness theorem (Lucas, 1961; Winner, 1975; Hofstadter, 1979), suggesting that the computer is limited by the inherent finitude of mathematical logic itself. The limits of computation are just as real as the limits of the cognitive capacity of the brain. Although she is admittedly comfortable with a mechanistic root metaphor, Boden (1979) was aware of the anthropomorphic component in AI. She suggested that the "puniness" of AI's computational ability in comparison with the cognitive processes of the brain has led to iconic comparisons between computer algorithms and psychological terms. Additionally she claimed:

Second (and philosophically more controversial), it is often claimed that the possession of intrinsic interests (that is, interests whose existence cannot be explained by reference to the purposes of some other individual or agent) is essential to any creature that can merit the literal application of psychological terms, and that no imaginable computer program could possibly possess such interests. On such a view, terms like "intelligent" (or even "stupid") could never be applied in their full sense to any program, no matter how impressive its "thinking" abilities (Boden, 1979, p. 114).

Similar objections have been raised by Searle (1981), who suggested that a process of attribution is inherent in the description of mechanical processes. Thermostats "perceive" temperature changes, photoelectric doors "know" when to open, computers "understand" or are "able." "Our tools are extensions of our purposes, and we find it natural to make metaphorical attributions of intentionality to them. . . . The sense in which an automatic door 'understands instructions' from its photoelectric cell is not at all the sense in which I understand English" (p. 288). Quoting McCarthy (1979), who posited a belief system for machines as simple as thermostats, Searle suggested that it is a fatal error in logic to assume rational qualities for mechanical entities. This approach stands in sharp contrast to that of Newell and Simon (1972), who supported the notion that machine and human understanding are isomorphic, using symbols as the source of their intelligent action. Searle's conviction is that the Newell-Simon hypothesis is physicalism, that is, the view that "mental states are in fact identical with physical states of an organism's central nervous system" (Torrance, 1984, p. 17).
Support for Searle's argument can be found in Lucas (1961) and Dreyfus (1981), who doubted whether mental qualities are inherent in computers; at best computers are "mind-like," suggesting a simile or metaphor. Torrance (1984), criticizing Wittgenstein (1953), gave evidence that contemporary theorists are moving away from the positivistic and behavioristic approaches of Newell and Simon to a "cognitive science" rooted in the generative grammar of Chomsky (1959) and in the linguistics of Fodor (1975), who concluded that philosophical considerations are more relevant than "postulating an internal representations code to which all human cognitive activity is computationally related" (Torrance, 1984, p. 12). Alice Mary Hilton (1963) described the problem in attributional terms: we have forgotten who has created the intelligence of our machines. Arguing from Stoddard (1943), she characterized the seven attributes of intelligence: the ability to handle difficulty, complexity, abstractness, economy, adaptiveness to a goal, social value, and originality. These attributes share not only a hierarchical dimension (as in the case of dealing with difficulty), but also breadth or area (as denoted in complexity).
There is an intricate two-way relationship between difficulty and complexity, for complexity increases with the number of difficult tasks undertaken successfully, and difficulty rises with the ability to accomplish each difficult task. High intellectual accomplishment is a pyramid: complexity is its base and difficulty is its height. The ability to undertake activities with the attribute of complexity, consisting merely of an accumulation of facts from many fields, however, is not necessarily intelligence. "Intelligent behavior always involves the ability to relate parts to the whole, to make generalizations, to classify, and finally to abstract" (Hilton, 1963, p. 11). She is not convinced that hardware, despite its sophistication, has an innate ability to think intelligently. Torrance (1984) casts the problem in the perspective of the history of Western thought. He argued that there has been, since Aristotle, a traditional theme of an essence of humanity which separates the human species from the kingdom of animals and inanimate objects. This essence has been our innate intellectual capacity. It was Aristotle's telos, the rational activity which enables the species to achieve goals. This teleological argument was further enhanced by Aquinas, until three centuries ago Descartes boasted that he could know that he existed simply because he thought. Descartes' dualism between the mental or spiritual qualities and the material was a distinction of higher to lower, allowing him to posit what was essentially human as opposed to that which was merely animal or mechanical. Searle (1981) suggested that the two camps within the AI community, those who support a "strong" AI (the Newell-Simon camp) and those who argue for a "weak" AI, are in strong disagreement.
He, along with Dreyfus (1981) and McDermott (1981), agreed with Torrance's (1984) assertion that researchers have been too quick to assume that the achievements of AI in pattern recognition, theorem proving, and natural language programming were achievements in machine intelligence, rather than the achievements of the investigators themselves. Perhaps Searle's most convincing argument follows Husserl's notion of intentionality:

It is not because I am the instantiation of a computer program that I am able to understand English and to have other forms of intentionality . . . but as far as we know it is because I am a certain sort of organism with a certain biological (i.e., chemical and physical) structure, and this structure under certain conditions is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality. . . . But the main point . . . is that no purely formal model will ever be sufficient for intentionality, because the formal properties are not themselves constitutive of intentionality, and they have themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running (p. 299).
Those who support a strong version of AI, Searle concluded with an allusion to Plato's cave, take as reality the shadows cast upon the wall, ignoring the real "thinker" while positing intellectual power in an artifact. Formal symbol manipulations are just that: "They have syntax but do not necessarily have semantics" (Searle, 1981). Dennett (1981) struck an argument similar to Searle's, based on intentionality, but further concluded that AI research is not based on empirical experimentation; its experiments are rather gedanken experiments, or what could best be called experimental epistemology. Computers with their awesome speed and memory capacity have left an impression on us all, but perhaps our awe should be reflected back upon us and the neuro-complexity of the brain, which dwarfs our hardware by comparison. The strong and weak AI factions are only a part of the AI and expert systems community. Perhaps the vast majority subscribe to a more pragmatic view: "If it works, use it." These individuals may not see a purpose in developing underpinnings for science, whether philosophical or linguistic. This article does not argue against a pragmatic stance; on the contrary, without pragmatics nothing would ever get done. Yet there is a strong foundation in Western scientific culture for both philosophical and linguistic underpinnings. Kuhn (1970) on the paradigmatic structure of scientific change, Churchman (1961, 1971) on the philosophical foundations of operations research and information systems, and Burrell and Morgan (1979) on the underpinnings of organization and social structure all point to the need to examine these foundations carefully. Carrier and Wallace (1990) have gone so far as to suggest that a failure to understand these foundations, at least implicitly, could lead to errors of the third type. An analogy may be drawn to the logical foundations of mathematics.
A mathematician with no background or interest in logic would perhaps be more prone to errors in analysis or in theoretical development; at least a tacit, implicit knowledge of logic is essential in the development and proof of theorems. Those who would argue that the philosophy of knowledge and an appreciation of linguistics are not important to the knowledge engineer and AI development specialist are in the same tenuous position as the mathematician with little interest in logic. The same argument can be made about the nature of metaphor. Those who would deny the value of metaphorical analysis are ignoring the process orientation of the brain in favor of a strictly production view. Perhaps the paradox of their argument is that if we limit the brain, we simultaneously limit what the computer can do.
Analogical Contrasts Between Human and Machine Intelligence

Throughout this critique the argument has been made that iconic or isomorphic comparisons of human and machine intelligence are inadequate and lead to an anthropomorphic view of computer hardware. Can analogic metaphors be created to highlight the strengths and weaknesses of machine-intelligent systems? Table 1 shows a comparison of the characteristics of human versus machine intelligence.

Table 1
A Comparison of the Characteristics of Human Versus Machine Intelligence

HUMAN INTELLIGENCE        MACHINE INTELLIGENCE
Stochastic                Ordered
Process (praxis)          Production (poesis)
Analog                    Digital
Kosmos                    Taxis
Divergent thought         Convergent thought
Spontaneous               Non-spontaneous
Tacit knowledge           Explicit knowledge
Discovery                 Invention
Solidary                  Solitary
Reflective                Unreflective
Intentional               Automatic (indeliberate)
Deep structure            Surface structure
Artificer                 Artifice

These differentiations begin with stochastic versus ordered processes. Human intelligence depends upon processing a stream of events, some random and others nonrandom, and human creativity is dependent on both streams. New and unforeseen events, coupled with our knowledge and experience, produce the new ideas necessary for human growth and survival (Bateson, 1979). While a machine may simulate random processes through the generation of random numbers, it cannot handle a stream of random events coming from the external environment. In fact a computer does not have access to its environment except for the input relationship with the human who controls the entry of data and feedback (Bolter, 1984). This leads to the next comparison, process versus production. While both humans and machines have a goal orientation, the computer is strictly programmed to produce results (Harmon & King, 1985); we need not know in detail how the electronics of the machine performs its various tasks. Human beings are process oriented: a decision to give up smoking, for example, is very much an internal process, with many variables of personality and life environment coming into play in this one decision. This contrast is best exemplified as the difference between praxis and poesis. Praxis is a theory of internal action where the goal is within the individual (Bernstein, 1971; Ihde, 1979). It is a distinct activity from production, or poesis, which has external output as its goal. Carrier and Wallace
(1990) suggested that such a differentiation is important for expert systems, for the knowledge engineer attempts to capture the internal heuristics of the expert and turn them into production- rather than process-based information.

The next comparison is the difference between analog and digital. Computers process information discretely, in binary fashion, in sharp contrast to human thought, where there is a synthesis between what is known and new information coming in from the environment. Humans make contrasts and comparisons between the incoming data received through the senses and what is already known, and then make new judgments.

This comparison leads to the next. Campbell (1982) made the distinction between kosmos and taxis. Kosmos is a form of ordering which is spontaneous, where not all the rules are known; much of human knowledge is not so much rule-based as conceptual. Taxis, on the other hand, has known rules which are deliberately written into a system, be it an organization or a machine algorithm or heuristic. Kosmos is general and abstract; taxis is specific and concrete.

Because of our capacity to develop analogies and the general ordering of information, human thought is capable of divergent thought, in contrast to the convergent thinking of machines (Bateson, 1979). This can be readily seen in expressions such as: "The machine took x number of iterations to converge to the solution." Divergent thinking is unpredictable in comparison with convergent thought. This contrast is important, for we want our human thinkers to be spontaneous, yet we do not want any surprises from our machines.

Humans also know tacitly: we may not be able to put into words the knowledge we have. A master cabinetmaker knows just from the feel of the wood how it should be cut and shaped. This knowledge comes from years of experience and cannot be readily expressed in an explicit way to an apprentice.
Polanyi (1966) carried out an impressive investigation into the nature of tacit knowledge. He suggested that we are able to integrate awareness about reality, yet simultaneously we may not be able to identify the particulars that make up that reality. He drew a sharp contrast using the difference between the German "wissen" and "können," or "knowing what" and "knowing how." "These two aspects of knowing," he says, "have similar structure and neither is ever present without the other. This is particularly clear in the art of diagnosing which intimately combines skillful testing with expert observation" (Polanyi, 1966). Computers do not know tacitly: the knowledge stored or converged upon is explicit information. Collins, Green, and Draper (1985) have applied the notion of tacit knowledge to expert systems. They indicated strongly that it is a mistake to believe that knowledge elicitation techniques are adequate and sufficiently well defined to uncover the whole of an expert's knowledge. The comparisons made thus far suggest the difference between discovery and invention (Bolter, 1984).
Discovery is a divergent, tacit process with general rules, dependent upon random information to enlighten the discoverer; it is a gestalt process (Mayer, 1983). Computers, like inventors, have a goal to produce some worthwhile object, proceed in an orderly fashion, and follow explicit rules and knowledge in order to converge on a solution. By way of analogy, perhaps we could speak of Einstein's discovery that energy equals mass times the square of a constant (E = mc²), versus Edison's repeated trials with various materials to invent a filament that would make the electric light practical. It is interesting that the same analogy holds in the difference between a priori and a posteriori thought, although humans make a posteriori judgments as well.

As Table 1 continues, there is an obvious connection with the next two contrasts. Humans are solidary creatures; we live in communities and depend on societal cooperation. If I have a thought, I might share it with a colleague to see if my interpretation or perception of a problem is correct. Machine intelligence, on the contrary, merely processes the information given by the human. It is solitary and unreflective. Humans reflect upon the information they receive from their environment; they behave intentionally, with deliberation and self-awareness. In contrast, machine intelligence is indeliberate and proceeds automatically, with no reflection or awareness of itself. This dichotomy parallels Chomsky's notion of deep versus surface structure: humans invest in developing elaborate theories as well as in surface practicalities, whereas computers do not develop theoretical insight but rather perform analysis and make projections based on the information they have been given.

Perhaps the last comparison is the most cogent differentiation, for it summarizes the others in a concise fashion. Humans create tools; homo faber is a species of tool makers. The computer is a reflection of that capacity.
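Several of the contrasts above (explicit knowledge, rules deliberately written into a system, convergence on a solution) come together in the production rules of an expert system. The sketch below is a minimal forward-chaining illustration, not any particular system discussed in this article; the facts and rules about the cabinetmaker's craft are invented for the example. Everything such a program can conclude must first be spelled out explicitly; whatever remains tacit simply does not exist for it.

```python
# Hypothetical production rules of the form (conditions, conclusion).
# Each piece of "knowledge" must be made explicit before the machine has it.
rules = [
    ({"wood_is_oak", "grain_is_straight"}, "use_rip_cut"),
    ({"use_rip_cut", "board_is_thin"}, "set_blade_shallow"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"wood_is_oak", "grain_is_straight", "board_is_thin"}, rules)
```

The procedure is pure taxis: it converges on a fixed set of conclusions, and the master's feel for the wood, never having been stated as a rule, can play no part.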
Rather than looking for iconic explanations of machine intelligence, perhaps analogic comparisons are more valid: machine intelligence mirrors, in an analogous sense, part of the human thought process, but we should not carry our personifications too far. A computer is a tool, a human tool to increase our cognitive capacities. To posit that it has an existence in itself is mistaken ontology.
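The earlier point that a machine can only simulate random processes can be demonstrated directly. A computer's "random" numbers are produced by a deterministic algorithm, so re-seeding the generator replays the identical sequence: ordered production rather than stochastic process, and no surprises. (A minimal Python illustration; the seed value is arbitrary.)

```python
import random

# Seed the generator and draw five "random" numbers.
random.seed(1990)
first_run = [random.random() for _ in range(5)]

# Re-seeding with the same value replays the identical sequence.
random.seed(1990)
second_run = [random.random() for _ in range(5)]

print(first_run == second_run)  # prints True
```

Genuine randomness, like the stream of unforeseen events that feeds human creativity, would have to come from outside the machine through the human-controlled input relationship.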
Conclusion

This article makes an argument for developing an appreciation of the philosophic and metaphoric foundations of AI. It is further argued that many of the metaphoric views of machine intelligence have been based on an isomorphic or iconic relationship between human and machine intelligence. Comparing the neurocomplexity of the human brain with a machine that performs billions of arithmetic calculations a second is, moreover, no trivial task; it is argued that a more analogic group of metaphors can place the relationship between humans and machines in proper perspective.
A series of contrasts and comparisons is made in the hope of demonstrating the nonisomorphic properties of human versus machine intelligence. The emphasis is placed on the flexibility of human intelligence rather than on its symbol processing capability. This is not a denial of the symbol processing capacity of the brain, but rather an attempt to view human intelligence in a broader, more encompassing way. The stochastic, random inputs of the human environment, coupled with a strong process orientation and a tacit, generic rule system, make human intelligence fundamentally more divergent in thought. Finally, it was maintained that the intentional, reflective capacity of the brain promotes discovery and makes humans creators of artifacts.

The computer generally, and AI more specifically, is a tool. As the microscope has extended our ability to see the substructures of life, perhaps machine intelligence can extend our notion of cognition and human thought processes. But at present it would be as much a mistake to posit machine intelligence as cognition as it would be to posit a microscope as a specialized type of eye.

References

Bernstein, R.J. (1971). Praxis and action. Philadelphia, PA: University of Pennsylvania Press.
Boden, M.A. (1979). The computational metaphor in psychology. In N. Bolton (Ed.), Philosophical problems in psychology. New York: Methuen.
Boyd, R. (1979). Metaphor and theory change: What is "metaphor" a metaphor for? In A. Ortony (Ed.), Metaphor and thought. Cambridge: Cambridge University Press.
Brown, R.R. (1977). A poetic for sociology. Cambridge: Cambridge University Press.
Burrell, G., & Morgan, G. (1979). Sociological paradigms and organizational analysis: Elements of the sociology of corporate life. London: Heinemann.
Campbell, J. (1982). Grammatical man: Information, entropy, language, and life. New York: Simon & Schuster.
Carrier, H.D., & Wallace, W.A. (1990). A philosophic comparison of decision aid techniques for the policy analyst. Evaluation and Program Planning (in press).
Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.
Churchman, C.W. (1961). Prediction and optimal decision: Philosophical issues of a science of values. Englewood Cliffs, NJ: Prentice-Hall.
Churchman, C.W. (1971). The design of inquiring systems. New York: Basic Books.
Dennett, D.C. (1981). Intentional systems. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence. Cambridge, MA: MIT Press.
Dreyfus, H.L. (1981). From micro-worlds to knowledge representation: AI at an impasse. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence. Cambridge, MA: MIT Press.
Dreyfus, H.L., & Dreyfus, S.E. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. New York: Free Press.
Descartes, R. (1979). Meditations on first philosophy. Indianapolis, IN: Hackett Publishing Co.
Fodor, J.A. (1975). The language of thought. New York: Thomas Y. Crowell.
Harmon, P., & King, D. (1985). Expert systems: Artificial intelligence in business. New York: John Wiley & Sons.
Hilton, M.A. (1963). Logic, computing machines, and automation. New York: The World Publishing Company.
Hirschheim, R.A. (1985). Information systems epistemology: An historical perspective. In E. Mumford, R.A. Hirschheim, G. FitzGerald, & T. Wood-Harper (Eds.), Research methods in information systems. Amsterdam: North-Holland.
Hofstadter, D.R. (1979). Gödel, Escher, Bach: An eternal golden braid. New York: Basic Books.
Ihde, D. (1979). Technics and praxis. Boston, MA: D. Reidel.
Jung, C.G. (1922). Psychological types. London: Routledge and Kegan Paul.
Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press.
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago, IL: University of Chicago Press.
Lighthill, J. (1973). Artificial intelligence: A general survey. In Artificial intelligence: A paper symposium. London: Great Britain Research Council.
Lucas, J. (1963). Minds, machines and Gödel. In K. Sayre & F. Crosson (Eds.), The modeling of mind. Notre Dame, IN: University of Notre Dame Press.
McCarthy, J. (1977). Epistemological problems of artificial intelligence. Proceedings of the 5th International Joint Conference on Artificial Intelligence. Cambridge, MA: Massachusetts Institute of Technology.
McDermott, D. (1981). Artificial intelligence meets natural stupidity. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence. Cambridge, MA: MIT Press.
Majone, G. (1980). Policies as theories. OMEGA: The International Journal of Management Science, 8, 151-162.
Mason, R.O., & Mitroff, I.I. (1973). A program for research on management information systems. Management Science, 19, 475-487.
Mitroff, I.I., & Betz, F. (1972). Dialectical decision theory: A meta-theory of decision-making. Management Science, 19, 11-24.
Mitroff, I.I., & Kilmann, R.H. (1982). On evaluating scientific research: The contribution of the psychology of science. Journal of Technological Forecasting and Social Change, 6, 389-402.
Morris, W.T. (1963). Management science in action. Homewood, IL: Richard D. Irwin.
Morris, W.T. (1967). On the art of modeling. Management Science, 13, 707-716.
Morris, W.T. (1972). Management for action: Psycho-technical decision making. Reston, VA: Reston Publishing.
Munevar, G. (1981). Radical knowledge: A philosophical inquiry into the nature and limits of science. Indianapolis, IN: Hackett Publishing.
Neumann, J. von. (1966). Lecture on redundancy at University of Illinois. In W. Burks (Ed.), Theory of self-reproducing automata. Urbana, IL: University of Illinois Press.
Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Polanyi, M. (1966). The tacit dimension. Garden City, NY: Doubleday.
Putnam, H. (1967). The mental life of some machines. In H. Castaneda (Ed.), Intentionality, minds and perception. Detroit, MI: Wayne State University Press.
Pylyshyn, Z. (1981). Complexity and the study of artificial intelligence. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence. Cambridge, MA: MIT Press.
Searle, J.R. (1981). Minds, brains, and programs. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, artificial intelligence. Cambridge, MA: MIT Press.
Simon, H.A. (1954). Some strategic considerations in the construction of social science models. In P. Lazarsfeld (Ed.), Mathematical thinking in the social sciences. Glencoe, IL: The Free Press.
Sutherland, J.W. (1986). Assessing the artificial intelligence contribution to decision technology. IEEE Transactions on Systems, Man, and Cybernetics, SMC-16, No. 1.
Torrance, S.B. (1984). The mind and the machine. New York: John Wiley & Sons.
Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.
Winner, L. (1975). Complexity and the limits of human understanding. In T.R. LaPorte (Ed.), Organized social complexity. Princeton, NJ: Princeton University Press.
Wittgenstein, L. (1953). Philosophical investigations. New York: Macmillan.