Synthese DOI 10.1007/s11229-012-0240-6
Folk psychology as science

Martin Roth
Received: 19 July 2012 / Accepted: 14 December 2012 © Springer Science+Business Media Dordrecht 2012
Abstract There is a long-standing debate in the philosophy of action and the philosophy of science over folk psychological explanations of human action: do the (perhaps implicit) generalizations that underwrite such explanations purport to state contingent, empirically established connections between beliefs, desires, and actions, or do such generalizations serve rather to define, at least in part, what it is to have a belief or desire, or perform an action? This question has proven important because of certain traditional assumptions made about the role of law-statements in scientific explanations. According to this tradition, law-statements take the form of generalizations, and the laws we find in well-established sciences are contingent and empirical; as such, if the kinds of generalizations at work in folk psychological explanations of human action act like definitions, or state conceptual connections, then such generalizations could not play the kind of explanatory role we find in mature sciences. This paper argues that the aforementioned way of framing the debate reflects a still powerful but impoverished conception of the role laws play in scientific explanations, a conception that, moreover, cannot be reconciled with a good deal of actual scientific practice. When we update the philosophy of science, we find the concerns that are raised for folk psychological explanations largely evaporate or are found not to be specific to such explanations.

Keywords Folk psychology · Laws · Models · Explanation · Functionalism
M. Roth (B) Drake University, 2507 University Avenue, Des Moines, IA 50311, USA
e-mail: [email protected]
1 Introduction

There is a long-standing debate in the philosophy of action and the philosophy of science over folk psychological explanations of human action: do the (perhaps implicit) generalizations that underwrite such explanations purport to state contingent, empirically established connections between beliefs, desires, and actions, or do such generalizations serve rather to define, at least in part, what it is to have a belief or desire, or perform an action? This question has proven important because of certain traditional assumptions made about the role of law-statements in scientific explanations. According to this tradition, law-statements take the form of generalizations, and the laws we find in well-established sciences are contingent and empirical; as such, if the kinds of generalizations at work in folk psychological explanations of human action act like definitions, or state conceptual connections, then such generalizations could not play the kind of explanatory role we find in mature sciences. This paper argues that the aforementioned way of framing the debate reflects a still powerful but impoverished conception of the role laws play in scientific explanations, a conception that, moreover, cannot be reconciled with a good deal of actual scientific practice. When we update the philosophy of science, we find the concerns that are raised for folk psychological explanations largely evaporate or are found not to be specific to such explanations.

The paper proceeds as follows: Section 2 presents a well-known argument against the possibility of turning folk psychology into serious science and reveals the role that a commitment to a deductive-nomological conception of explanation and a Humean approach to causation plays in the argument. Though the argument is ultimately inconclusive, Sect. 3 shows how a commitment to a deductive-nomological conception of explanation and a Humean approach to causation can be used to formulate a second, more powerful argument against turning folk psychology into science. However, if successful, this argument would render illegitimate a good deal of obviously legitimate scientific explanation. Section 4 appeals to a model-based approach to science to resolve this tension, and in doing so, reveals folk-psychological explanations to be on par with many of the explanations we find in the rest of science.

2 Against folk psychological laws, part one

One debate over the status of explanations of human action turns on how we should interpret the following principle of folk psychology:

[L] If any agent, x, wants some outcome, d, and believes that an action, a, is a means to attain d under the circumstances, then x does a.1

1 Rosenberg (2012). As stated, [L] is a clear non-starter. To make it at all plausible, we need to add that there is no obvious better means than a available, a has to be reasonably low risk compared to the benefit of securing d, and so on. While there are serious doubts about whether we can fill out [L] to make it both plausible and non-trivial, the issue won't matter to the arguments of this paper.

Superficially, [L] looks like the kind of law-statement familiar in science: just as Boyle's law reveals how the pressure and volume of a gas are connected, and Ohm's law reveals how current, voltage, and resistance are connected, [L] reveals the manner
in which beliefs, desires, and actions are connected. As such, [L] appears to be at least a candidate law in a mature science of psychology.

The thought that a generalization like [L] would be needed in order for folk psychological explanations of human action to be scientifically legitimate derives largely from a commitment to a deductive-nomological account of explanation (Hempel and Oppenheim 1948; Hempel 1966). According to the deductive-nomological account, individual events are explained when they are deduced from laws—understood as universal generalizations—and a statement of initial (or boundary) conditions. Applied to the present case, the events to be explained are actions, and an action is explained just in case it can be deduced from [L] plus initial (or boundary) conditions. The availability of a generalization like [L] is not enough to establish that folk psychological explanations are on all fours with other explanations in science, of course, for Boyle's law and Ohm's law appear to state contingent connections. However, the contingency of folk psychological generalizations looks to be secured by the thought that folk psychological explanations cite causes, and causal connections are contingent. So, for example, Fodor writes, "causal powers are a function of your contingent connections, not of your conceptual connections. As, indeed, Uncle Hume taught us" (1991). Let's call the dual commitment to a deductive-nomological account of explanation and Humean causation the Hempel-Hume account of explanation (hereafter 'H-H').

A salient feature of [L] is that the explanations it is supposed to generate make actions intelligible: an action "makes sense" in light of the beliefs and desires that are said to produce the action. Furthermore, it has been argued that this connection between explanation and intelligibility is not accidental, but a constraint on intentional ascription and explanation as such.2 Not surprisingly, philosophers who think that intentional explanations are causal explanations tend to reject the intelligibility constraint; in principle, we could discover that a creature had beliefs and desires and performed actions, even if such a creature were wildly irrational, by our lights. This simply follows from the commitment to a Humean account of causation. If [L] asserts causal connections between beliefs, desires, and actions, those connections need to be confirmed empirically, and the falsity of [L] would not entail that there were no beliefs, desires, and actions. So, for example, in the course of criticizing Daniel Dennett's position that intentional ascription presupposes rationality, Fodor and Lepore (1991) claim, "We'd get some predictive value out of belief ascription even if it worked, say, 87 percent of the time that a creature that believes (p → q and p) believes q." In response, Dennett (1991) declares:

This superficially plausible claim presupposes the independent discoverability of individual beliefs, whereas I hold that it is the assumption of rationality that makes belief-ascription 'holistic'…individual belief attributions [are] not anchorable in this imagined way because one always encounters what we might call the Quinian circle: the very best evidence there could be that you made a mistake in attributing to someone the beliefs that p and p → q was the evidence that you should also attribute the belief that not-q.

2 This is often called the principle of charity (Quine 1960; Dennett 1978; Davidson 1980), a principle that says that, in attributing beliefs and other propositional attitudes to a creature, we need to assume that the creature is by and large rational.
Dennett goes on to point out that looking in the brain for a "belief box" will not help either, if what we are trying to do is identify a "belief box" independently of identifying other boxes—the holism just reappears when we look in the head. Applied to [L], the suggestion would be that instead of stating contingent connections that we can empirically test, [L] tells us (in part) what it is to have beliefs and desires and to perform actions. We could never discover that [L] was false, for any reason we had for thinking [L] was false would be a better reason for thinking that we had mischaracterized the beliefs, desires, or actions in the alleged counter-example. Alexander Rosenberg (2012) puts the point this way:

Every explanation of a human action is in fact tantamount to a redescription of the event to be explained. It does not identify other distinct and logically independent events, states, or conditions that determine the action. If a statement like [L] is essential in the explanation of human action, it is because [L] is part of what we need to make the redescribing explanations. [L] defines what it is to be an action in terms of the notions of desire, belief. [L]'s function is to show us what counts as having a reason for doing something and to show us when a movement of the body is an action.

Since [L] is required for the identification of beliefs, desires, and actions, the argument goes, the connection between beliefs, desires, and actions [L] specifies looks to be conceptual or logical, not contingent. This, in turn, seems to undermine [L]'s status as a possible causal law.

If the debate over the contingency of [L] turns on whether we should accept the rationality constraint on belief ascription, however, the friends of contingency look to have the upper hand. For one thing, even if [L] guides our ascriptions of beliefs, desires, and actions, [L] isn't the only thing in play. If you have been at a party for an hour, milling around with the guests, one of whom is a full-grown elephant, it is a pretty safe bet that you believe there is an elephant at the party. It is a pretty safe bet that you desire less pain than you are having when you have just slammed your finger in a door. And so on. Once all of this is added in, ascriptions of belief, desire, and action get pretty messy, and nothing like "any argument for irrationality is a better argument for mistranslation or misattribution" will be a very convincing trump card. Worse still, if we accept the rationality constraint on belief ascription, then it looks like we couldn't understand irrationality at all, and we do (Stich 1985). Not only do we know about failures to take base rates into account (Kahneman and Tversky 1973), we realize that because they cannot inhibit action, patients with certain kinds of frontal lobe damage don't act in accordance with their beliefs (Bechara et al. 1994).3

3 A holist could say that [L] is just incomplete: if you had a "complete" theory, it would accommodate the elephant and frontal lobe patients, and so on. But this move makes it plain that the resulting complete theory would be an empirically backed theory of a certain part of human psychology.

3 Against folk psychological laws, part two

Here's the plot so far: we began with the issue of whether folk psychological explanations are akin to the kinds of explanations we find in science. According to the
H-H picture of scientific explanation, explanations require law-like contingent generalizations. [L], a principle of folk psychology, was presented as a candidate law-like contingent generalization. We then considered an argument against the contingency of [L] based on a rationality constraint on belief, desire, and action ascription, and found it wanting. The friends of contingency appear to be on firm ground.

The fault lines here run deep, however, and the threat of a quake comes into focus when we consider the familiar, widely accepted functionalist interpretation of folk psychology. For example, in their well-known defense of folk psychology, Horgan and Woodward (1985) write:

Folk psychology is a network of principles which constitutes a sort of commonsense theory about how to explain human behavior. These principles provide a central role to certain propositional attitudes, particularly beliefs and desires. The theory asserts, for example, that if someone desires that p, and this desire is not overridden by other desires, and he believes that an action of kind K will bring it about that p, and he believes that such an action is within his power, and he does not believe that some other kind of action is within his power and is a preferable way to bring it about that p, then ceteris paribus, the desire and the beliefs will cause him to perform an action of kind K. The theory is largely functional, in that the states it postulates are characterized primarily in terms of their causal relations to each other, to perception and other environmental stimuli, and to behavior.

According to functionalism, what makes some state a state of, e.g., belief, is the role that state plays in the larger cognitive economy of an organism. Fodor (1987) himself says that the generalization "if you want that P and you believe that not-P unless Q, then all else being equal, you try to bring about that Q" is the kind of generalization that functionalists should be prepared to accept, and Fodor happily concedes that a belief state may be defined as one that causally interacts with desires and actions in the way specified by the generalization. But if generalizations like [L] inter-define belief, desire, and action, [L] can hardly be a contingent law statement that is subject to empirical testing. Rather, if our common sense mental concepts are broadly functional concepts, then certain connections between mental states and actions are conceptually guaranteed. Robert Cummins expresses this point nicely when he writes, "When we describe causal interactions between functionally characterized components, the relevant causal generalizations pretty much come for free, since a functionally characterized component is a component identified by its relevant causal powers" (2002). Insofar, then, as there are folk psychological laws, these laws trivially follow from functionalism about the mental. The moral of the story is clear: functionalist folk psychology is incompatible with H-H.

This moral hasn't been lost on certain critics of functionalism. For example, Rupert (2006) relies on the incompatibility to argue that functional explanations in psychology are vacuous:

The problem comes into focus more clearly when we consider the law of a supposedly autonomous empirical psychology meant to underwrite the single-case explanation: A subject that is in some state or other that causes, among
other things, aversion behavior exhibits aversion behavior. Such a 'law' appears vacuous, stating no more than that states that cause e cause e. Metaphysical necessity—not empirical law—grounds this claim.

Here we find a clear commitment to H-H: science is (or should be) in the business of providing causal explanations of single events by subsuming those events under contingent, empirical laws. The "laws" that functionalism generates, however, are trivial. Thus, if folk psychological concepts are functional concepts and [L] expresses (in part) the functional roles of belief and desire, then [L] cannot factor into the kind of explanations required by H-H. Rupert's rejection of functionalism starts with the incompatibility of functionalism and H-H, and because he affirms the latter, the former has to go. But function talk is rife in science, engineering, and everyday life, and terms like 'gene,' 'catalyst,' 'resistor,' 'brain,' 'heart,' and 'force' are all, in some of their uses, functional. If Rupert is right, however, it appears that the functional explanations in which these terms figure are all vacuous. Something has gone seriously wrong here, and we need to find a way to get the fly out of the fly-bottle.

4 Folk psychology as (non-H-H) science

Assuming that functional explanation is legitimate in general, two lines of response to our predicament present themselves:

(1) The predicament is genuine. Interpreting functional explanations along H-H lines renders functional explanations vacuous. But functional explanations are not vacuous. Therefore, we cannot interpret functional explanations along H-H lines.

(2) The predicament is not genuine. Interpreting functional explanations along H-H lines does not render functional explanations vacuous; H-H is compatible with functional explanation after all.
In this section, I argue that closer inspection of how functional explanations work suggests the predicament is indeed genuine: functional explanation cannot be squared with H-H. I also show that if we approach functional explanations from the perspective of a model-based account of science, it will become clear why the "trivial" generalizations that functionalism yields do not render functional explanations vacuous.

Increasingly, philosophers have emphasized the centrality of models in science,4 and in his seminal Explaining Science, Ronald Giere argues that law-statements (or 'principles', as Giere prefers to call them) are best thought of as specifying models, rather than making direct claims about the world. As Giere points out, if we interpret law statements as asserting generalizations about the world, all of our laws turn out to be literally false. Philosophers of science have recognized this, of course, and some have tried to overcome the problem by articulating a notion of approximate truth (Popper 1972; Newton-Smith 1981). However, given the well-known problems with providing

4 The role of models in science has been widely examined, though philosophers of science differ in what they take models to be and how they think models function in science. See van Fraassen (1980, 1989), Cartwright (1983, 1999), Giere (1999, 2004, 2006), and Teller (2001, 2004) for excellent discussions of these issues.
a clear and coherent notion of approximate truth (Miller 1974; Laudan 1981), and given that models can do the required representational work, Giere advises us to give up trying to salvage approximate truth. Rather, we should regard law-statements as true "by definition" of the models they partially specify, with the empirical content of science consisting in the theoretical hypothesis that a particular model is similar, in degrees and respects, to some target domain. According to Giere, most of the models we find in science are abstract entities that possess all and only the properties that we characterize them as having, and instead of being true or false, these models are more or less accurate representations of the domains we assert them to represent. In the spirit of Giere (and Dennett), we can say that principles like [L] function to specify models of intentional systems.5

Giere's approach to understanding how conceptually guaranteed laws contribute to empirically grounded explanations emerges from his discussion of F = ma in Newtonian mechanics. Noting the long-standing dispute over whether this law should be construed as an empirical generalization or a definition, Giere (1999) claims that it should be understood as a principle for building models, models that can be brought into agreement, closely enough for our purposes, with other perceptual or theoretical phenomena. Furthermore, very general principles like F = ma do not specify models by themselves. Rather, a model is constructed from F = ma when we specify the relevant forces. When the forces are specified, we end up with a model of, e.g., a linear oscillator (Giere 1988). Teller's discussion of Hooke's law provides a nice example of the idea. Teller (2009) writes:

Hooke's law provides a simple example of what I have in mind. To what does it apply? What counts as a spring? Whenever the restoring force for a deformed material can be expressed as a Taylor series there is a linear first term. We count a material as a spring just in case, for ranges in which we are interested, the restoring force is well approximated by the linear first term of the Taylor expansion, that is, exactly if the material satisfies Hooke's law.

Applied to [L], the empirical question is not whether [L] is a true, contingent generalization about the world, but whether anything satisfies a model specified using [L]; to the extent that humans exhibit the effects predicted by such a model, to that extent we can regard humans as an intentional system of that kind. For present purposes, however, the key point is that if the failure to state a contingent generalization renders [L] explanatorily vacuous, then Hooke's law is equally vacuous: after all, if we count a material as a spring just in case the restoring force is well approximated by the linear first term of the Taylor expansion, it turns out that 'springs satisfy Hooke's law' isn't a contingent generalization, either. To understand why this does not entail that models

5 Both Maibom (2003) and Godfrey-Smith (2005) have appealed to Giere's account of models to resolve certain disputes concerning what the folk are doing, psychologically speaking, when they predict and explain the actions of others. According to both Maibom and Godfrey-Smith, our quotidian capacities to predict and explain human action are best understood in terms of facility with a model. However, the appeal to models in the current paper has an importantly different aim: Maibom and Godfrey-Smith are concerned with the psychology of mental state attribution, not whether principles like [L] are contingent or conceptual, or the possible role [L] might play in the science of psychology. Thus, while the arguments of the present paper may complement the views of Maibom and Godfrey-Smith, this paper is not an extension of their project.
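Teller's point can be put in symbols (a standard Taylor-expansion sketch, added here for illustration and not drawn from Teller's text). For a material displaced by x from equilibrium, the restoring force expands as

```latex
F(x) = F(0) + F'(0)\,x + \tfrac{1}{2}F''(0)\,x^{2} + \cdots,
\qquad F(0) = 0 \;\Rightarrow\; F(x) \approx F'(0)\,x = -kx,
\quad k \equiv -F'(0).
```

Since the restoring force vanishes at equilibrium, keeping only the linear first term yields Hooke's law, F = −kx; a material counts as a spring, in Teller's sense, exactly for the range of displacements in which this truncation is a good approximation.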
specified by Hooke's law—or [L]—are explanatorily empty, we need to clarify what a good many models explain, and how.

According to Giere (1988), science isn't primarily in the business of explaining events. This isn't to say that scientists aren't concerned with explaining events, of course, but only that this concern is largely secondary. To see what Giere has in mind, consider the physiologist who attempts to explain bodily capacities and effects by developing models of various systems, e.g., the nervous system or the circulatory system. A physiologist might develop a model that is highly accurate of some system in our body without explaining, of any particular event that occurs in our body, why it occurred. Her goal is not to explain this, but to construct a model of how our bodies work—what the various parts are, how the interaction of those parts reliably produces certain effects, and so forth. Many of the explanations we find in science, engineering, and everyday life are like this: instead of explaining particular events, these explanations consist in providing a functional analysis of a capacity or dispositional property (Cummins 1975, 1983). For example, imagine an electrical engineer's explanation of how a particular lighting system works. The engineer would specify a model of a circuit, a model that includes various components (e.g., resistors, power supplies) and how those components interact. In the model, whether a component is a resistor depends only on whether the component causes potential drops, and so 'resistors cause potential drops' will be true of the model by definition. The explanatory payoff of such a model is that, because components are characterized solely in terms of contributions relevant to analyzing the target capacity, it becomes transparent why any system satisfying the model will have the target capacity.

The worry that "analytic" generalizations like 'resistors cause potential drops' cannot figure into genuine explanations is an artifact of H-H's commitment to a deductive-nomological account of explanation. If we accept that explanation is subsumption under law, then we will be tempted to think that the job of a generalization like 'resistors cause potential drops' is to explain particular occurrences of potential drops (events). According to H-H, the generalization cannot perform this job if the generalization is not contingent—if a resistor just is something that causes potential drops, any explanation of potential drops that appeals to 'resistors cause potential drops' would look to be empty/vacuous. However, reflection on the circuit model should make it clear that the worries about contingency are misplaced, or at least premature. Functional analyses are principally geared towards explaining capacities, not events, so from the outset H-H misidentifies the primary explanatory targets of functional analysis. The role that the generalization 'resistors cause potential drops' plays isn't that it accounts for ("covers" or "subsumes") particular potential drops (individual events). By functionally characterizing components as resistors, we isolate effects of components in a way that makes them (potentially) relevant to an analysis of some capacity. The generalization makes explicit what those contributions are, of course, but it's the models in which functionally characterized components figure that are doing the explanatory work. If I know that a circuit model accurately represents some aspect of a physical system and know that a part, X, of the physical system corresponds to the resistor, then I am in a position to understand why, if a current passes through X, the characteristic effects of such a system occur, e.g. the lights go out. The model provides the
understanding of how certain aspects of this physical system work, and identifying part of the target as a resistor, e.g., using 'this resistor' to pick out a nickel-chrome wire, reflects an assumption that the circuit model is an accurate representation of certain aspects of this physical system (target). But the model itself does not guarantee that the nickel-chrome wire is a resistor, or that a potential drop occurs when a current passes through this wire, or that the lights go out when there is such a potential drop. The analyticity/metaphysical necessity is in the model, not the target. As far as explanation goes, the appeal to the model is thus both indispensable and non-vacuous.

These points apply, mutatis mutandis, to springs: in terms of the model, a spring is anything having a restoring force that is well approximated by the linear first term of the Taylor expansion, and so 'springs have a restoring force that is well approximated by the linear first term of the Taylor expansion' is true by definition. However, specifying components in a schematic diagram (model) as springs, i.e., as things that satisfy Hooke's law, may be precisely what is needed to understand how something, e.g. my child's pogo stick, works. To the extent that the model is accurate of some target system, this licenses calling an aspect of the target—a piece of coiled metal in my child's pogo stick, for example—a 'spring'. However, because the license rests on a contingent relation between model and target, the explanatory gains provided by the model do not entail that the necessity/analyticity in the model transmits to the target, i.e., it does not entail that 'this piece of coiled metal satisfies Hooke's law' is necessary/analytic. Linking elements of a model to the perceptual and theoretical phenomena we are trying to represent is also crucial to evaluating the accuracy of models, of course.
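The division of labor here can be made vivid with a toy sketch (the components, measurements, and tolerance below are invented for illustration, not drawn from the paper): in the model, 'resistors cause potential drops' holds by construction, while whether a particular physical part fits the model is a contingent, testable question.

```python
# Toy sketch: the "analyticity" lives in the model; the contingency lives
# in the fit between model and target.

def resistor_model(resistance_ohms):
    """In the model, a resistor just IS something obeying V = I * R,
    so 'resistors cause potential drops' is true here by definition."""
    def potential_drop(current_amps):
        return current_amps * resistance_ohms
    return potential_drop

def fits_model(measurements, model, tolerance=0.05):
    """The contingent, empirical question: does the physical part realize
    the model closely enough, in the respects and degrees we care about?"""
    return all(abs(model(i) - v) <= tolerance for i, v in measurements)

# Hypothetical measurements on a nickel-chrome wire: (current, observed drop).
measured = [(0.1, 0.52), (0.2, 1.01), (0.3, 1.49)]

wire_as_resistor = resistor_model(5.0)         # hypothesis: a 5-ohm resistor
print(fits_model(measured, wire_as_resistor))  # True: the wire counts as a resistor
```

Nothing in the model guarantees the result: change the measurements and the wire simply fails to count as a resistor, without 'resistors cause potential drops' thereby being falsified.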
In the examples of the lighting system and pogo stick, we can imagine that the model builder has a target system in mind when specifying a model. But a model of a lighting system, or a pogo stick, is not itself a lighting system, or a pogo stick; bringing a model to bear on some target system requires identifying aspects of the model with aspects of a target system (Giere 1988). How, then, can we test whether a specified model accurately represents a particular target system? Two factors are relevant to such a test: (i) the extent to which the model predicts the effects or regularities the system is known to exhibit, and (ii) the extent to which the functional characterizations employed by the model can be shown to be implemented in the target system.

To illustrate the general idea here, imagine that we want to explain water runoff patterns in a certain region. Generate a topographic map of the region, superimpose assumed rainfall quantities and locations on the map, and run a simulation. If the result of the simulation matches our runoff data, then this is evidence that our topographic map is accurate. We can also use the map to make novel predictions about runoffs by superimposing possible rainfall quantities and locations. If the predictions about runoff prove correct, this provides additional support for thinking the map accurate. If our simulations don't work, this might be the result of incomplete or inaccurate information about initial rainfall quantities and locations. If that is not the problem, the mismatch can suggest likely places where the map is inaccurate or incomplete. This hypothesis can be tested, and along with it the map itself. In any case, it is important to notice that the relations between the map's components are "analytic," or certainly not contingent; it is just geometry, after all. The contingency is in the fit between map and world (model and target), i.e., it is contingent whether the geometry actually is, or
approximates, the spatial properties of the target. Compare: Kepler's equal areas law is analytic in Euclidean space, but it is contingent (and actually false, but a very good approximation) that it holds of the planets in our solar system.

With these points in mind, consider Horgan and Woodward's claim that "It is satisfaction of the 'causal architecture' of FP, by some set of (possibly complex) events in the central nervous system, which is crucial to the truth of FP" (1985). If we think of the causal architecture of FP (folk psychology) as the content of a model of an intentional system (where [L] is true by definition of the model) and treat aspects of the central nervous system as our representational target, then the "truth" of FP turns on the fit, in specified degrees and respects, between the model and the central nervous system. The task of confirmation is thus the task of identifying elements and functional relations in the model with certain states of the central nervous system and causal relations among those states, and we can test these models in much the same way we can test topographic maps. The explanatory power of FP resides largely in the extent to which principles like [L] can be used to construct a family of models, models that can account for the wide range of behavioral capacities of humans. As with 'resistors cause potential drops' and 'springs have a restoring force that is well approximated by the linear first term of the Taylor expansion', that [L] is true by definition of these models reveals that we are identifying states solely in terms of specific contributions those states make to a system. Again, this is precisely what's needed in order for a functional analysis of some capacity to succeed, and if the price of this explanatory transparency is a trivialization of the resulting generalizations, this is a price well worth paying.
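The runoff example can likewise be put in schematic code (the terrain, rainfall figures, and one-dimensional routing rule are all invented for illustration): given the map, where the water ends up follows by geometry alone; whether real runoff matches the prediction is the contingent question.

```python
# Toy runoff simulation: the routing is "analytic" relative to the map;
# the contingency is whether predictions match observed runoff.

def simulate_runoff(heights, rainfall):
    """Route each cell's rain downhill to the nearest local minimum
    of a 1-D elevation profile."""
    pooled = [0.0] * len(heights)
    for i, rain in enumerate(rainfall):
        j = i
        while True:
            lower = [k for k in (j - 1, j + 1)
                     if 0 <= k < len(heights) and heights[k] < heights[j]]
            if not lower:
                break                      # local minimum: water pools here
            j = min(lower, key=lambda k: heights[k])
        pooled[j] += rain
    return pooled

heights  = [3.0, 2.0, 1.0, 2.5, 4.0]       # hypothetical elevation profile
rainfall = [1.0, 1.0, 1.0, 1.0, 1.0]       # assumed rainfall per cell
predicted = simulate_runoff(heights, rainfall)
print(predicted)                           # all water pools in the basin at index 2
observed  = [0.0, 0.0, 5.0, 0.0, 0.0]      # hypothetical runoff data
print(predicted == observed)               # the map fits, so far as this test goes
```

A persistent mismatch between `predicted` and `observed`, once rainfall inputs are ruled out, is evidence that the elevation profile itself is inaccurate, which is exactly the testing procedure the runoff passage describes.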
To repeat: because the generalizations are true by definition of models, not targets, and because these analyses do not subsume events under laws, trivializing the generalizations does not render the models explanatorily vacuous. Functional explanation of the sort familiar from psychology and biology typically consists in providing a functional analysis of a capacity or property, with the analysis constituting an abstract model of that capacity or property. The explanatory power of an analytic strategy resides largely in the extent to which it can be used to construct a family of models, models that can be brought into agreement with other theoretical or perceptual phenomena.

The history of cognitive science demonstrates Godfrey-Smith’s claim that “Cognitive science encounters the folk-psychological model as a starting point and as a source of structural ideas” (2005). In effect, early cognitive scientists were using principles like [L] to construct models of intentional systems and then applying them to adult humans, pre-verbal children, sub-personal systems, and non-human animals (Cummins et al. 2004). Though competing ideas about how to develop the details emerged, these models were unified insofar as each was constructed (tacitly or explicitly) in accordance with principles like [L]. Included among the challenges facing contemporary cognitive science are both figuring out the extent to which new theoretical models (e.g., those inspired by work in robotics, dynamical systems theory, connectionism, and/or neuroscience) can be adequately viewed as based on principles like [L] and, if they cannot be so viewed, determining whether these new models should be thought of as rivaling or complementing existing [L]-based models.
5 Conclusion

It ought to be a ground rule for discussions of intentional theories that problems which hold for scientific explanations in general should not be raised as difficulties for intentional explanations in particular. There is, to be sure, a problem about the dependence of the identification of a scientific construct upon the assumption that the theory which employs the construct is true. The tendency of the fundamental generalizations which structure a science to act like implicit definitions of its theoretical terms creates notorious problems about intertheoretical reference; about the distinction between conceptual and empirical truth; and about the nature of empirical confirmation. In extreme cases, it causes Idealism. For all I know, it causes sunspots, too. What hasn’t been shown, however, is that the dependence of identifications of propositional attitudes upon the truth of causal generalizations…amounts to anything more than a special case of this general difficulty. If it doesn’t, then it isn’t anything that we ought to worry about here. –Jerry Fodor (1981)

Fodor was right that the problems raised for intentional explanations hold for scientific explanations across the board. This paper has argued that a solution to the general problem requires a departure from the H-H account of explanation, and that, by moving toward a model-based approach, we can see how folk-psychological principles like [L] can play a role in explanations similar to the role other generalizations play in science.

References

Bechara, A., Damasio, A., Damasio, H., & Anderson, S. (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition, 50, 7–15.
Cartwright, N. (1983). How the laws of physics lie. Oxford: Clarendon Press.
Cartwright, N. (1999). The dappled world. Cambridge: Cambridge University Press.
Cummins, R. (1975). Functional analysis. The Journal of Philosophy, 72, 741–765.
Cummins, R. (1983). The nature of psychological explanation. Cambridge: MIT Press.
Cummins, R. (2002). Neo-teleology. In A. Ariew, R. Cummins, & M. Perlman (Eds.), Functions: New essays in the philosophy of psychology and biology. New York: Oxford University Press.
Cummins, R., Poirier, P., & Roth, M. (2004). Epistemological strata and the rules of right reason. Synthese, 141(3), 287–331.
Davidson, D. (1980). Mental events. In D. Davidson (Ed.), Essays on actions and events. Oxford: Clarendon Press.
Dennett, D. (1978). Brainstorms. Cambridge: MIT Press.
Dennett, D. (1991). Back from the drawing board. In B. Dahlbom (Ed.), Dennett and his critics. Cambridge: Blackwell.
Fodor, J. (1981). Representations. Cambridge: MIT Press.
Fodor, J. (1987). Psychosemantics. Cambridge: MIT Press.
Fodor, J. (1991). A modal argument for narrow content. The Journal of Philosophy, 88, 5–26.
Fodor, J., & Lepore, E. (1991). Is intentional ascription intrinsically normative? In B. Dahlbom (Ed.), Dennett and his critics. Cambridge: Blackwell.
Giere, R. (1988). Explaining science. Chicago: University of Chicago Press.
Giere, R. (1999). Science without laws. Chicago: University of Chicago Press.
Giere, R. (2004). How models are used to represent reality. Philosophy of Science, 71(Supplement), S742–S752.
Giere, R. (2006). Scientific perspectivism. Chicago: University of Chicago Press.
Godfrey-Smith, P. (2005). Folk psychology as a model. Philosopher’s Imprint, 5, 1–16.
Hempel, C. (1966). Philosophy of natural science. Englewood Cliffs, NJ: Prentice Hall.
Hempel, C., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15, 135–175.
Horgan, T., & Woodward, J. (1985). Folk psychology is here to stay. Philosophical Review, 94(2), 197–226.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.
Laudan, L. (1981). A confutation of convergent realism. Philosophy of Science, 48, 19–48.
Maibom, H. (2003). The mindreader as scientist. Mind and Language, 18, 296–315.
Miller, D. (1974). Popper’s qualitative theory of verisimilitude. British Journal for the Philosophy of Science, 25, 166–177.
Newton-Smith, W. H. (1981). The rationality of science. London: Routledge & Kegan Paul.
Popper, K. (1972). Objective knowledge. Oxford: Oxford University Press.
Quine, W. V. O. (1960). Word and object. Cambridge: MIT Press.
Rosenberg, A. (2012). Philosophy of social science (4th ed.). Boulder: Westview Press.
Rupert, R. (2006). Functionalism, mental causation, and the problem of metaphysically necessary effects. Nous, 40(2), 256–283.
Stich, S. (1985). Could man be an irrational animal? Synthese, 64(1), 115–135.
Teller, P. (2001). Twilight of the perfect model model. Erkenntnis, 55, 393–415.
Teller, P. (2004). How we dapple the world. Philosophy of Science, 71, 425–447.
Teller, P. (2009). Provisional knowledge. In M. Bitbol, P. Kerszberg, & J. Petitot (Eds.), Constituting objectivity. Netherlands: Springer.
van Fraassen, B. (1980). The scientific image. Oxford: Oxford University Press.
van Fraassen, B. (1989). Laws and symmetry. Oxford: Oxford University Press.