Minds & Machines
DOI 10.1007/s11023-017-9440-7

Still Autonomous After All

Andrew Knoll
Department of Philosophy, University of Maryland, College Park, MD, USA
[email protected]

Received: 1 February 2017 / Accepted: 10 June 2017
© Springer Science+Business Media B.V. 2017
Abstract  Recent mechanistic philosophers (in particular, Boone and Piccinini in Synthese 193(5):1287–1321, 2016) have argued that the cognitive sciences are not autonomous from neuroscience proper. I clarify two senses of autonomy, metaphysical and epistemic, and argue that cognitive science is still autonomous in both senses. Moreover, mechanistic explanation of cognitive phenomena is not, therefore, an alternative to the view that cognitive science is autonomous of neuroscience. If anything, it's a way of characterizing just how cognitive processes are implemented by neural mechanisms.

Keywords  Autonomy · Cognitive science · Mechanism · Neuroscience · Explanation
1 Introduction

Twenty years ago, despite widespread agreement with his view, Jerry Fodor (1997) argued that the cognitive sciences are autonomous. One hopes he is gratified that a number of philosophers (Piccinini and Craver 2011; Boone and Piccinini 2016; Bechtel 2016) continue to disagree with him. The dissenting views have it that, one way or another, the cognitive sciences proper (e.g., linguistics, perceptual psychology, etc.) are being supplanted by cognitive neuroscience. Thus, the generalizations promulgated by the former are not autonomous of those promulgated by neuroscience. These contentions are part of a broader movement to pull apart three doctrines that have traditionally gone hand in hand within the philosophy of mind: autonomy,
[email protected] 1
Department of Philosophy, University of Maryland, College Park, MD, USA
123
A. Knoll
multiple realization, and rejection of type–type identity theory. For instance, Polger and Shapiro (2016) have argued for maintaining autonomy, while embracing type-identity theory and jettisoning multiple realization. In contrast, Boone and Piccinini (2016) and Piccinini and Maley (2014) have argued in favor of retaining multiple realization, while jettisoning autonomy. For what it's worth, I expect the classic triad has things roughly correct. I don't have space here, however, to defend it from the opposition it's facing from all sides. Instead, I want to push back against the Boone and Piccinini criticism that an autonomous cognitive science is being replaced by cognitive neuroscience. That is, as long as you are willing to buy into multiple realization and reject type-identity theory, there isn't good reason to give up on the autonomy of cognitive science. So, I'll argue Fodor is still correct, at least about a large part of what counts as cognitive science. To the extent that explanations in cognitive science are couched in terms of computations over representations, these explanations are autonomous of neuroscientific explanations.

The following explains why. First, we'll have to sort out just what it means for cognitive science to be autonomous from neuroscience proper. The thesis that cognitive science is autonomous of neuroscience has both a metaphysical and an epistemic component, both of which come in different strengths. We'll take a look at the different strengths of metaphysical commitments, and then at the different strengths of epistemic commitments. At each stage, we'll see that there is no reason to give up on the autonomy of cognitive science in any interesting way.
2 Metaphysical Autonomy

To hear Fodor put it:

I will say that a law or theory that figures in bona fide empirical explanations, but that is not reducible to a law or theory of physics, is ipso facto autonomous (1997, p. 149).

We can generalize this to say that a law or theory that figures in empirical explanations, but that is not reducible to a law or theory of neuroscience, is ipso facto autonomous of neuroscience. Now, strictly speaking, Fodor really ought to have said that such conditions make for theories that are ipso facto metaphysically autonomous. After all, it may be that even if psychological generalizations are autonomous from neuroscience in the above sense, we can't come to know any of those generalizations without first coming to know certain neurological generalizations. Thus, psychological generalizations would not be epistemically autonomous. We'll address the extent to which cognitive science is so epistemically autonomous of neuroscience in Sect. 4 below.

So, the cognitive sciences are metaphysically autonomous just in case their generalizations are not reducible to generalizations of neuroscience or some other science. Now, just what this claim amounts to depends on what we're talking about
when we talk about reduction. One way to go is to say that cognitive science is metaphysically autonomous of neuroscience just in case the types cognitive science quantifies over cannot be type-identified with the types neuroscience quantifies over without loss of generalizations.[1] To take Boone and Piccinini's preferred statement from Fodor:

[I]n the language of neurology… presumably, notions like computational state and representation aren't accessible (Fodor 1998, p. 96, quoted in Boone and Piccinini 2016, p. 1512).

Usually, as I've noted, this commitment is taken to be tantamount to a denial of a type-identity theory that would identify each kind of entity quantified over by cognitive science with a kind quantified over by neuroscience. As has been oft remarked, denial of such type-identity reductions does not preclude the possibility of token-identity reductions. While a certain type of belief (say, that food is left of the nest) may not be identical to a certain type of neuronal activity (say, theta precession of a certain population of place cells), it may well be that any token belief that the food is left of the nest is identical to a token neuronal activity (e.g., neurons A-FF firing in theta precession for 50 ms in Rat A's brain at time t). This denial of type-identity and simultaneous embrace of token identity is characteristic of the received view that cognitive processes are instantiated by neural processes while nonetheless being metaphysically autonomous of them.

For what follows, however, we want to leave open the possibility that these views on multiple realizability and type-identity theory are correct even if cognitive science is not autonomous of neuroscience. Thus, for our purposes, we can take metaphysical autonomy just to be the doctrine that there are generalizations ranging over cognitive properties that can't be captured by generalizations ranging over neurophysiological properties.
[1] Note that this is a claim about explanatory efficacy, not existence. If you're so inclined to draw ontological conclusions from explanatory considerations, you are welcome to do so. If you think such an ontological program is misguided, I won't contradict you here. We have enough to worry about without getting bogged down in ontological disputes per se.

3 Challenges to Metaphysical Autonomy

Now, you might suppose that the metaphysical autonomy of the cognitive sciences was a helpful working hypothesis that has now become obsolete. Perhaps as we come to know more about neuroscience, we'll realize that cognitive science is not autonomous from neuroscience after all. There are several possible ways such a story might go, so we should look at each of them in turn. The moral, it turns out, is that any way you try it, the cognitive sciences remain metaphysically autonomous of neuroscience.

Here's one way our neuroscience might turn out. Suppose neuroscientists discover that there is a particular population of neurons in all human brains that is type-individuated in neuroanatomical terms—say, perhaps, those at a certain location of the Dentate Gyrus of the hippocampus. Further suppose that psychologists explain the phenomenon of humans flocking to a particular area of
Buenos Aires by attributing to these humans a belief that there is delicious choripan at a location south of the Rio de la Plata estuary. The psychologists type-identify this belief in terms of its representational content: to be such a belief just is to be a belief that there is choripan south of the Rio de la Plata estuary. Now, it could turn out that people believe that there is choripan south of the Rio de la Plata estuary just in case that particular population of neurons fires in a certain way. You might suppose that if such a relation exists between all representational kinds and neuronal kinds, we can replace cognitive science with neuroscience. If pain just is C-fibres firing, we may as well quantify over C-fibres rather than pains. If beliefs about the Costanera del Sur just are firings of neurons in the Dentate Gyrus, then we may as well quantify over those firings rather than the beliefs.

But, just because certain types of human neurobiological events are in fact perfectly correlated with instances of humans believing, it does not follow that quantifying over beliefs rather than such neurobiological events has no explanatory virtue. Quantifying over beliefs may allow for broader generalizations than quantifying over neural events. For example, by hypothesis, it may be that ceteris paribus humans have beliefs about choripan just in case a certain population of neurons fires in the right way. But, this does not preclude, say, rats, cockroaches, and birds having beliefs about the location of choripan. It does not preclude humans with abnormal neuroanatomy from having such beliefs. It also does not preclude extra-terrestrials, silicon-based robots, humans with silicon brain prostheses, and others that lack our human neural structures from having beliefs about choripan. Quantifying over beliefs would allow us to explain the behavior of all these organisms in general terms, whereas quantifying over neurobiological structures unique to humans could not.

Cognitive science is no more concerned with characterizing the operation of human brains than physical science is concerned with characterizing the motion of the planets. This is not to say that neither science is concerned at all with either phenomenon. Newton's concern with celestial mechanics led to the revolutionary characterization of planetary motion as a special case—an instantiation, if you will—of principles that were both general and counterfactual-supporting. The same principles explained how both apples and planets moved, as well as how they would move were the planets different distances from the sun, or apples made of denser flesh.

Now, just what kinds of generalizations cognitive posits such as computations and representations allow for are likely multifarious, and it turns out to be a quite subtle issue just what sort of explanatory work things such as representations perform.[2] I'm taking it for granted that the anti-autonomists agree that there is at least some explanatory work for the posits of cognitive science, so we need not get bogged down too much in debates about just what that is at present.
[2] There's been lots of recent interest in the explanatory usefulness of representations, in particular whether representations with intentional content—or inherent aboutness or correctness conditions—buy you any explanatory work at all. For negative views, see Stich (1983), Chomsky (2000), Collins (2007), Egan (2010), Ramsey (2007), and Orlandi (2014), inter alia. Georges Rey and I (Knoll and Rey in press) and Tyler Burge (2010), all erstwhile defenders of the explanatory efficacy of intentional content, concede that some cognitive explanations do not appeal to intentional contents.
Suffice it to say that generalizations over representations and computations may buy you many different explanatory virtues. They may be more stable, specific, proportional, etc.[3] They may be more stable insofar as we can characterize perceptual constancies such that a visual system could continue to represent something as blue despite changes to its distal and proximal input.[4] They may be more proportional insofar as they allow for explanations of token behaviors that isolate different counterfactuals. To modify an example from Yablo (1992), we can explain a pigeon as pecking at a square because it represented the square as red or because light from the square triggered a series of nerve firings that resulted in the pecking. Giving the former explanation helps capture the counterfactual that the pigeon would peck at squares of many different shades of red, as long as they triggered the relevant representation. They may be more specific insofar as they allow for the kind of "Galilean abstraction" (about which more below) that points to abstract relationships that hold across circumstances despite numerous other factors influencing the process. For example, we might fail to produce speech sounds that would be categorized as nasal fricatives not because we have computational constraints on representing such properties, but because humans have an articulatory apparatus that makes carrying out the motor instructions eventuated by such representations next to impossible.[5] Along similar lines, they may help characterize not just similarities, but differences amongst organisms. Neither rats nor humans tend to generate sounds such as "buffalo buffalo buffalo buffalo buffalo," but in the rat's case this is due to lack of a representational capacity, whereas humans have the relevant representational capacity but lack the memory resources that would aid in such constructions.

In general, if cognitive neuroscience succeeds in discovering neural structures that have representational content, that content will only be explanatorily efficacious if explanations continue to quantify over states type-individuated in terms of that content. Otherwise, we can eliminate representational content from cognitive science. It's like this (Fodor 1987): we can discover that subatomic particles are heads-particles whenever my lucky quarter is heads-up and tails-particles whenever my lucky quarter is tails-up. But, if we can make all the generalizations of our best science without appealing to the property of being heads- or tails-particles, then those properties are explanatorily inert.[6] If we can make all the generalizations of cognitive science in neurobiological terms, then representational properties are explanatorily inert.

[3] See Woodward (2010) for an overview of how such factors feature in biological explanation.
[4] Cf. Burge (2010).
[5] Hale and Reiss (2008, pp. 154–56); Knoll (under review).
[6] I'll leave it to the metaphysicians to figure out whether the properties still exist.

Now, as a matter of fact, it may turn out that it's simply impossible for anything without the relevant neural structures to have contentful representations. For example, it could turn out, say (contrary to Fodor's protestations that it's merely sufficient), that it's necessary for something to asymmetrically co-vary with a
property to have representational content as of that property. It could further turn out that only certain neurobiological structures are able to asymmetrically co-vary with belief contents. In this case, one might have good reason to eliminate explanation in terms of representational content in favor of explanation in terms of neuroanatomical features. But, notice, even if all human beliefs thus studied perfectly co-vary with certain neuroanatomical events, that's not sufficient evidence—or even all things considered good evidence—to suppose that being a certain neuroanatomical structure is a necessary condition on having a particular representational content. So, it may turn out that, as a matter of contingent fact, human mental states are not multiply realized by multiple kinds of neuroanatomical features. But, even if this is true, it is not good reason to suppose that cognitive science can somehow be replaced by neuroscience while preserving explanatory power.

If someone still wanted to insist, in light of these considerations, that cognitive science should be eliminated, one could just promulgate the social convention that the proper scope of enquiry for those who call themselves "psychologists" and "cognitive scientists" ought to be how human brains, with their own particular neuroanatomical features, function in their normal, de facto circumstances. If scientists wish to explore how minds—and indeed human brains—could or would function across counterfactual conditions, they are not "cognitive scientists," but something else—"Schcognitive scientists," say. Such a person may have insisted to Newton that he was welcome to make generalizations subsuming stars, apples, possible planetary motions, and hypothetical celestial objects if he'd like. But, true astronomers would restrict themselves to the study of how the actual planets in fact move. One could promulgate such a social convention. But, notice, the convention does not change the fact that the counterfactual generalizations made by cognitive science in terms of representational states that can be multiply realized are, well, more general, and thus more explanatorily virtuous and powerful. They can't be eliminated, though they may be ignored.

So, if cognitive science is not autonomous of neuroscience, it's not because it has been eliminated in favor of neuroscience. There still seems good reason to suppose that there are powerful generalizations that can be captured in terms of representational states, but not neurobiological features. I take everything up to here to be simply review. Hopefully we're all on the same page. But, it will be useful to keep the preceding considerations in mind when evaluating Boone and Piccinini's novel, less vulgar challenges to the metaphysical autonomy of cognitive science.

Admirably, Boone and Piccinini (2016) do not make the mistakes surveyed above. They claim that explanation in terms of representational states cannot be eliminated in favor of explanation in terms of neurobiological features. Their claim is that cognitive science simpliciter is being replaced by cognitive neuroscience. Both types of science make generalizations in terms of representational states (hence the "cognitive"). The difference is that cognitive neuroscience does not allow these generalizations to be autonomous of the neuroscience.
Notice that this claim must not amount to the claim that the representational states of cognitive science can be type-identified with the neurobiological features of neuroscience. We went down that road above, and realized that there just weren't any generalizations to be made in terms of representations at the end of it. Once you type-identify terms of the two sciences, you might as well get rid of one.[7] So, Boone and Piccinini must not be claiming that the representations quantified over by cognitive science broadly can be type-identified with the neurobiological terms of neuroscience.[8] Perhaps instead they are pressing the view that token representations can be identified with token neurobiological features. But, such token identification is not incompatible with the metaphysical autonomy thesis! Again, that thesis allows for generalizations of the form: ceteris paribus, humans represent places by implementing theta precession in the Dentate Gyrus. It just allows, in addition, generalizations of the form: both honey bees and humans represent places even though the bees don't have a Dentate Gyrus.

[7] Pace Polger and Shapiro (2016), who argue in favor both of the autonomy of psychology and a type-identity theory of its states (see, in particular, Chap. 10).
[8] Indeed, Piccinini and Craver (2011, p. 284, n. 2; pp. 288–89) claim as much.

Boone and Piccinini describe what they take to be a mechanistic alternative to both the elimination of cognitive explanation and the autonomy of cognitive explanation:

Cognitive neuroscience thus strives to explain cognitive phenomena by appealing to and analyzing (both separately and conjointly) multiple levels of organization within neural systems (p. 1515).

But, it's not clear how this ostensible alternative is at all inconsistent with the metaphysical autonomy thesis. Suppose it's a cognitive generalization that, ceteris paribus, when given evidence that there is food on the Costanera, both hungry bees and hungry humans will navigate toward the location of the food. The representation of the food location in humans might be a mechanism consisting of a particular set of neurons in the Dentate Gyrus that go into theta precession in a particular way. The representation in the bees might instead be a mechanism consisting of activities of Kenyon cells in the Mushroom Body (cf. Menzel 2012). Each of these mechanisms might in turn consist of various neuromolecular components and capacities. The standard cognitive autonomy story would have no problem with this characterization. The generalization that both bees and humans have place representations, which are responsive in similar ways to similar input, is autonomous of whether those place representations are instantiated by Dentate Gyrus mechanisms or Mushroom Body mechanisms. Thus, this multilevel mechanistic explanation does not seem to be an alternative to the standard autonomy view. If anything, it would seem to be an elaboration of how autonomous cognitive generalizations can be multiply realized by different physical structures.

Now, the defenders of multi-level mechanisms as incompatible with metaphysical autonomy might retrench to the claim that the relevant representation in each
case is constituted by the mechanism that explains it one level down. Thus, it's just not true that both bees and humans share a representation of the food's location, because they don't implement the same neurobiological mechanisms. But, notice, this strategy would just be to type-identify representational contents with neurobiological mechanisms. A different neurobiological mechanism is ipso facto a different representation. Even if we could do that while preserving explanatory power, we would have eliminated representational contents from the explanation. We can run all our explanations solely in terms of neurobiological mechanisms.

Perhaps the mechanist would want to get off the boat at this point. It may be that representational contents are constituted by the mechanisms that underlie them. But, so they may claim, that's not sufficient reason to eliminate representational content from neurocognitive explanations. After all, as Bechtel (2016) argues, neuroscientists sure seem to be in the business of identifying neurobiological features with representational contents. Now, we need to ask again just what kind of identification these cognitive neuroscientists are pursuing. Are they type-identifying representational contents with neurobiological features, or merely demonstrating how representational contents happen to be realized by certain kinds of neurophysical features in certain kinds of organisms? All the evidence Bechtel adduces for his claim is characteristic of the latter. For example, he points out that, in rats, the spiking of activity in place cells in the hippocampus relative to underlying theta rhythms "provided a more accurate representation of the rat's position than place cell activity alone" (p. 1304). The high correlation between this theta precession of place cell activity and the rat's location "serves as a finer grain representation of location" and thus "supports the claim… that the object of the research is identifying the representational vehicles and their content" (p. 1305).

Let's agree with Bechtel that place cell theta precession not only correlates with position, but in fact represents locations in rats.[9] This datum might give us some good reason to expect that similar neurobiological processes represent locations in humans and other mammals. But, it is less clear how the generalization could make sense of the location representations of honeybees, or other creatures with much different neuroanatomy. The rat evidence establishes at most that, in rats, location representations are sometimes realized by theta precession of place cells. It does not provide much evidence for the strong claim that location representations just are theta precessions of place neurons across counterfactual conditions.

[9] I'd argue that he is wrong to suppose that such place cell activity has representational content in any explanatorily useful sense. But, I'll not do so now. The current point is that even if we grant that this activity has representational content, it doesn't undermine the metaphysical autonomy thesis.

Again, identifying how cognitive properties are contingently realized in different organisms is fully consistent with the metaphysical autonomy thesis! And merely pointing out how certain representations are realized by particular kinds of neurobiological features in some kinds of organisms is neither sufficient nor good reason to suppose that representations are type-identical with those neurobiological features. So, multilevel mechanistic explanation doesn't seem to be an alternative to
the thesis that cognitive scientific generalizations are metaphysically autonomous of neuroscientific generalizations.

The general form of the above arguments has been that there's just no way to give up on the metaphysical autonomy of cognitive science without giving up on the theses that cognitive states are multiply realizable and not type-identical to neurophysiological states. But, perhaps there's a way for the anti-autonomists to have their cake and eat it too. Piccinini and Maley (2014) push back with an argument that cognitive science may well fail to be autonomous of neuroscience even though the properties of cognitive science are multiply realizable. The argument turns on a proprietary notion of multiple realizability. Their notion of multiple realizability has it that a higher-level property is realized by lower-level properties when the causal powers of the higher-level property are a proper subset of the causal powers of the lower-level properties (p. 133). Thus, the property of being a corkscrew has the causal power to pull corks out of wine bottles, and this causal power is one amongst many possessed by the configuration of metal and plastic molecules that compose the corkscrew. Insofar as the power to pull corks out of bottles is a power possessed by a wide variety of different configurations of different materials, the property of being a corkscrew is multiply realized. So too, presumably, the properties of representing ducks as such, or desiring chocolate cake, are properties that can be realized by neurophysiological states, by states of silicon circuits organized in various ways, or perhaps even by configurations of wooden gears. As long as amongst the causal powers of the gears, silicon circuits, and neural nets are the causal properties of desires and representations, those properties are multiply realized.

Nonetheless, Piccinini and Maley argue that explanations in terms of cognitive states are not autonomous of neuroscience. To be truly autonomous, these higher-level cognitive properties would need to have causal powers over and above those of their realizers. And, according to the view of multiple realization adopted by Piccinini and Maley, multiple realization doesn't buy you these extra causal powers. Indeed, quite the opposite: multiple realization entails that you have no causal powers over and above your realizers.

Or so it would seem. But, it's not so clear what we ought to say about the causal powers of multiply realizable properties, even on the rather precisified construal of multiple realization Piccinini and Maley are working with. Take, for example, a generalization like the following: activating a desire for food in a creature that believes that there is food two meters to the right will cause it to move two meters to the right (generally speaking, amongst some class of creatures). Suppose in one creature, the relevant belief and desire are realized by different neuronal processes. In another, they are realized by different silicon circuitry. Now, we might say that the desire in question has a causal power such that it can cause a creature with neuronally-realized beliefs to move and also cause a creature with silicon-realized beliefs to move. But, none of the realizers of that desire has this same causal property. The neuronal processes that realize the desire in the first creature cannot cause a creature with silicon-realized beliefs to move.
Similarly, the silicon circuitry that realizes the desire in the second creature cannot cause a creature with neuronally-realized beliefs to move. Thus, the property of having a
desire for food would seem to have a causal power that none of its realizers themselves have. Ergo, the cognitive property of desiring food is not multiply realized in the particular sense of Piccinini and Maley, in that its causal powers are not a proper subset of those of its realizers.

It's not clear what Piccinini and Maley would want to say about this case. What is clear, though, is that whatever they say, the case demonstrates that once again it's not open to them to deny the metaphysical autonomy of cognitive science in any interesting sense. One thing they might conclude is that I've just parsed the causal powers incorrectly in the above case. Yes, you could say that in a certain sense the property of desiring food has causal powers over and above the causal powers of its realizers. But, the causal powers of any given token desire do not outstrip the causal powers of its realizer. So, in that sense, desiring food is multiply realizable, explanation in terms of such desires is not autonomous in the right sense, and everything is fine. But, if this is the response, the autonomy thesis they are out to repudiate becomes something of a straw-man. The metaphysical autonomy thesis just becomes the thesis that token mental states have causal properties over and above those of their realizers. I suppose some dualists hold something like this view. But, no serious non-dualist defender of multiple realizability and autonomy would hold that a token mental state has causal powers in excess of those of the realizers to which it's identical! That would be a straightforward violation of Leibniz's law.

The other possible response would be to concede that generalizations such as those I've given above are violations of multiple realizability and do exhibit autonomy in the preferred senses. Thus, to deny that the cognitive sciences are metaphysically autonomous is to deny that they traffic in generalizations such as the above. But, this denial is false, and, more to the point, would seem to conflict with Piccinini's and his collaborators' own views about the nature of cognitive explanation. Piccinini and Maley explicitly commit to the sort of generalization described above as being a case of multiple realization in their preferred sense. They note (p. 140 ff.) that logic gates in a computer program can be realized in many different ways—e.g., by silicon or gallium-arsenide circuits, or wooden gears, or what have you. Different computers could thus have a type of logic gate (call it "G") realized variously by one of these sets of components, and it could be a generalization about these computers that their respective G gates cause input to be sent to another logic gate when they receive input of type I. This causal relation, of course, would be realized by causal relations amongst gears in one computer, amongst voltages over silicon circuits in another, etc. But, crucially, as Piccinini and Maley note, replacing the silicon circuit of one machine with the wooden gear configuration of the other would fail to maintain the generalization. For, the wooden gears just don't have the same causal powers vis-à-vis silicon circuits as they do vis-à-vis other configurations of wooden gears. Nonetheless, Piccinini and Maley take this to be a clear case of multiple realization.

So, even though there may be a sense in which logic gates have causal powers in excess of their realizers, it's evidently not a sense that Piccinini and Maley take to
preclude the property of being a logic gate from being multiply realized in their preferred sense. Thus, logic gates are multiply realized, and we can make generalizations about the above computers without attributing causal powers to logic gates that illicitly outstrip those of their multiple realizers. Therefore, generalizations about logic gates are autonomous of generalizations about silicon circuits or wooden gears—even in the specialized sense of "autonomy" favored by Piccinini and Maley.

This sort of autonomy strikes me as good enough for the traditional Fodorian autonomist. Recall, Fodor's notion of autonomy was one such that cognitive science is autonomous of neuroscience just in case there are generalizations ranging over cognitive properties that can't be captured by generalizations ranging over neurophysiological properties. So, we could talk about causal relations holding amongst representations (for example) that could not be captured in terms of causal relations holding amongst neurophysiological states—even if the causal properties of any token representation just are some subset of causal powers possessed by its realizers. But, notice, this sort of autonomy is also of just the right variety to run into all the problems we ran into previously when we try to get rid of it. In order to exorcise this sort of autonomy, you'd have to deny that cognitive states could ever possibly be realized by components that can't be swapped out for one another without gumming up the works. We could not generalize that bees and humans both represent the location of food because (I'm speculating here) Kenyon cells from a bee's Mushroom Body would not stand in the right causal relations to the cells representing that location in my hippocampus. You certainly couldn't build a mind out of silicon circuits that instantiated the same types of computations and representations as a human mind, for the same reason. If Piccinini and his collaborators are in for at least some modicum of multiple realizability of cognitive components, it seems they must accept the autonomy of cognitive science even in the specialized sense they advocate.

Now, perhaps I've haplessly got myself into a verbal dispute with Boone and Piccinini. I've been assuming that representational states are terms characteristic of non-neuroscientific cognitive explanation, whereas neuroanatomical terms are characteristic of neuroscientific explanation proper. Perhaps Boone and Piccinini agree with me that cognitive science in this sense is autonomous of neuroscience proper. Perhaps they are just marking the difference between what they call "psychology" or "cognitive science" broadly and "neuroscience" proper differently than I have been. They characterize the autonomy thesis as holding that:

psychological explanation captures cognitive functions and functional relations between cognitive states and capacities, whereas neuroscientific explanation aims at the structures that implement cognitive functions (p. 1512).

By contrast, "every level of a multilevel mechanism is both functional and structural, because every level contains structures performing functions" (p. 1520). So, perhaps the claim is that functional explanations of cognition are not metaphysically autonomous of non-functional, structural explanations.
But, if this is the claim, it seems to be going after something of a straw-person. It's true that the functionalist program had great success arguing that propositional attitude types, like beliefs and desires, were functionally individuated. But many—most notably Fodor himself—argued that other mental states could not be individuated functionally. His (1987) and much subsequent work argues that representations are type-individuated by their content—and that contents just cannot be individuated in functional terms.[10] Whether a representation is about choripan is not constitutively determined by the function it plays in mental processes. Nonetheless, representations are multiply realizable, autonomous states par excellence. We can explain the behavior of both bees and rats by attributing to them representations that there's honey at a particular location despite differences in their neurobiology.[11]

[10] The quick argument is that a content like 'sun' can function to token a false belief that the sun revolves around the earth, but can also function to token a true belief that the earth revolves around the sun. So, 'sun' must be individuated independently of its functional role in cognition.
[11] Fodor's treatment of atomic representations generally extends to other entities posited by cognitive scientists. For example, there's reason to think (Hale and Reiss 2008, chap. 5–7) that the distinctive features (e.g., [+velar], [-voice]) posited as mental states by generative phonology are individuated independently of their functional roles. See further discussion of such distinctive features below.

So, the received view would have it that both functionally defined and non-functionally defined mental states are autonomous of neurobiology. Moreover, the received view has it that such functionally defined and non-functionally defined mental terms interact with one another. How a belief interacts with a desire is in part determined by which representation each is related to. So, autonomous cognitive science is just as much committed to the interaction between functional and non-functional terms as the multi-level mechanist.

Neuroscientists may well identify mechanisms that relate structures and functions. I have no argument with that claim. As we saw Bechtel point out above, they could identify structures (e.g., place cells) that function in just such a way so as to represent places. They could further generate interesting counterfactual generalizations concerning how structural changes to the place cells could alter their function, and how changes in their function might alter their representational content—thus "understanding the complex interplay between structure and function" (Boone and Piccinini, p. 1522). But, that all is compatible with cognitive generalizations being metaphysically autonomous. Boone and Piccinini argue that, in contrast to the commitment of the autonomy thesis, "in our framework, functions constrain structures and vice versa" (p. 1522). But, the autonomy thesis is committed to this claim as well. Suppose there's a psychological commitment that food representations are constitutively functionally related in a certain way to taste representations. Any structures that instantiate food and taste representations—be they in bee brains, rat brains, or silicon circuits—must be functionally related to one another just so as to instantiate the functional relationship between the representations they instantiate. Functions constrain the structures that are capable of instantiating them.

But, perhaps Boone and Piccinini's point is that cognitive science, as autonomous of neuroscience, does not allow us to talk about how realizers interact with the
functions that they instantiate (and in light of the above, how they interact with structures, such as COW-representations and [+velar] states). For example, they press that a problem with type-identity theories is that they make it incoherent to ask questions of the kind: how do beliefs interact with neural firing? For a type-identity theory, beliefs just are a kind of neural firing. Autonomists don't make such questions incoherent, but you might worry that they set such questions aside: an autonomous cognitive science, you might think, just doesn't ask how beliefs interact with nerve firings. You just describe how beliefs and COW-representations interact and then presume there is some story to tell about how neural firings realize those interactions. So, perhaps the thinking is that multi-level mechanisms allow for explanations of these questions in a way that the traditional autonomist framework does not.

But, cognitive scientists working within the classical autonomist tradition have been concerned with such questions from the beginning. Take, for example, generative linguists' concern with strings such as "buffalo buffalo buffalo buffalo buffalo" or Fodor's (1975, p. 168) favorite, "Bulldogs bulldogs bulldogs fight fight fight." According to our best theory of language processing, such strings should be generable and parseable by the mental language faculty. But, in general, they don't seem to be. The solution, famously, is to distinguish between the competence and performance of the language faculty. The system has a competence to parse these strings as having, e.g., multiple center embeddings, but it can't perform this competence because of exogenous limits on our memory capacities. One way to think of the competence/performance distinction is precisely as a means of making clear which aspects of human speech processing are due to the vagaries of how representations of lexical items and syntactic categories are realized in the idiosyncratic human brain, and which can be studied in abstraction from such things as its memory and processing speed limitations. Thus, generative linguistics has long been concerned to answer questions such as: how do exogenous properties of brain structure influence the general competence of the language faculty? The memory limitations of the human brain make it difficult or impossible for humans to parse the above sentences. But, we can allow for counterfactual creatures that realize our language faculty in a substrate with greater memory capacity, and could thus parse such sentences with ease. Thus, memory limitations of the human brain interact with representations of lexical items and their functional relations so as to limit the strings that the language faculty can parse. In this way, cognitive science, as autonomous from neuroscience, has always been concerned to answer questions about how properties of realizers interact with the cognitive structures and functions they realize.

This sort of idealization goes on in cognitive domains outside of linguistics as well. For example, Carruthers (2015) argues that the main difference between human and animal cognition is not due to differences in the kinds of cognitive architecture, but is largely a difference in working memory capacity and the existence of the human faculty of language. It is the addition of these capabilities to a shared cognitive architecture that allows for our ability to combine representations from diverse domains in a way that many other animals cannot.
Thus, whether a particular cognitive architecture is instantiated by a brain that also possesses a language faculty or a high working memory capacity influences the inferences it is able to perform. All this is compatible with the idea that there are generalizations that can only be captured in terms of representations type-individuated in abstraction from either of these particular instantiations. That just is the metaphysical autonomy thesis.
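The competence/performance point above can be made concrete with a toy sketch of my own, not a model drawn from any of the works cited: one and the same recognition routine (the "competence") is run under two different memory bounds, and the multiply center-embedded string is parsed under the generous bound but not under the tight one. The grammar, the limits, and the function name are illustrative assumptions.

# A toy illustration, not a model of the human parser: the same recognition
# routine ("competence") run under two different memory bounds ("performance").
# Grammar, limits, and names are illustrative assumptions only.

def parse_center_embedded(tokens, memory_limit):
    """Recognize toy center-embedded strings of the form 'bulldogs'^k 'fight'^k,
    giving up if the parse stack ever exceeds memory_limit."""
    stack = []
    for word in tokens:
        if word == "bulldogs":
            stack.append(word)                 # each embedded subject is stored
            if len(stack) > memory_limit:      # exogenous resource bound reached
                return "parse breakdown (memory limit)"
        elif word == "fight":
            if not stack:
                return "ungrammatical"
            stack.pop()                        # each verb discharges one subject
        else:
            return "ungrammatical"
    return "grammatical" if not stack else "ungrammatical"

sentence = "bulldogs bulldogs bulldogs fight fight fight".split()

print(parse_center_embedded(sentence, memory_limit=2))   # parse breakdown (memory limit)
print(parse_center_embedded(sentence, memory_limit=10))  # grammatical

The generalization that the string is generable is stated independently of the memory bound; only the run with the more generous bound can exercise it. That division of labor is all the competence/performance distinction requires here.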
4 Epistemic Autonomy

Perhaps you've bought my story that cognitive science is metaphysically autonomous of neuroscience, but you want to press that, surely, at least, cognitive science is no longer epistemically autonomous of neuroscience. Only a very poorly read philosopher would think it was! The literature abounds with psychological theories fruitfully informed by neuroscientific information. If that's your view, I agree; you're correct—depending on just what you mean by "epistemic autonomy." I take it the idea comes in at least two flavors, and we should get clear on which we're talking about. Once we do, we'll see that no one ever seriously thought that cognitive science was strongly epistemically autonomous of neuroscience. Nonetheless, there seem to be good reasons to suppose that it's weakly epistemically autonomous of neuroscience.

What I'm calling strong epistemic autonomy is just the case in which the generalizations of cognitive science are impervious to the discoveries of neuroscience. Cognitive science is susceptible to only a narrow domain of evidence, and neuroscience is not part of it. If the Twentieth Century taught us anything, it's that scientific advancement is not well served by stipulating in advance the domain of evidence that will be allowed to bear on a theory. So, if anyone holds the strong epistemic autonomy thesis, I'm happy to agree that they're wrong.

It's true that many early defenders of mental autonomy couched their position in rather strong terms, which could easily be read as avowals of strong epistemic autonomy:

We could be made of Swiss cheese and it wouldn't matter (Putnam 1975, p. 291, quoted in Boone and Piccinini, p. 1513).

…it is often the case that whether the physical descriptions of the events subsumed by [special science] generalizations have anything in common is, in an obvious sense, entirely irrelevant to the truth of the generalizations… or, indeed, to any of their epistemically important properties (Fodor 1974, p. 103).

A charitable interpretation of such proclamations, however, does not commit them to the strong epistemic autonomy thesis. Putnam's point about cheese was simply that "two systems can have quite different constitutions and be functionally isomorphic" (p. 292). So, as all parties to the debate agree, in principle, functional relations can be multiply realized in both Swiss cheese and neural tissue. In principle, you might make a mind out of cheese—but that doesn't imply that were
we to open up human skulls and realize there's only Swiss cheese inside, we'd have no reason to reconsider our cognitive science. Fodor's point was about the special sciences generally, and thus a fortiori cognitive science. He simply takes cognitive science to be similar to special sciences such as economics insofar as:

A natural kind like a monetary exchange could turn out to be co-extensive with a physical natural kind; but if it did, that would be an accident on a cosmic scale (p. 104).

Whether monetary exchanges constitute a physical kind does not preclude us from making generalizations about them. This doesn't entail that we oughtn't revise our views about monetary exchanges if it turns out that, say, exchanges realized by silicon computer calculations over real numbers have properties that are different from exchanges realized by manual transfer of clam shells.

Fodor's concern just seems to be that whether psychological kinds are type-identical to physical—or neurophysiological—kinds ought to be irrelevant to psychological explanations. This point doesn't at all restrict us from jettisoning a psychological theory if it turns out to be unimplementable by the human brain, given what we know about its neurophysiology. Cognitive scientists indeed make this kind of argument all the time, even in regard to theories that otherwise might quite accurately capture certain psychological generalizations. For example, even though Optimality Theory (Prince and Smolensky 2004) does a relatively good job of generating the phonological alternations observed in human languages, many (e.g., Idsardi 2006) criticize it as being cognitively intractable. The computations it requires would take a brain an infinitely long time to process. It would be odd to attribute to Fodor in particular the view that such reasoning is somehow illicit. Fodor's epistemology, if nothing else, has always been thoroughly Quinean, even to the extent that it has made him despair of the ultimate success of cognitive science generally (2000). The sticking point is Quine's epistemic isotropy: in principle, evidence from any domain might be relevant to confirming a theory. It would thus be very odd for Fodor to stipulate from the outset that neuroscientific evidence is a priori irrelevant to determining whether psychological theories—such as Optimality Theory—are correct. The most charitable interpretation of his proclamations is that he takes psychological evidence to be independent of any potential type-reduction of psychology to neurophysiology.

It does seem reasonable, however, to attribute to Fodor and his ilk belief in a certain weak epistemic autonomy. The weak epistemic autonomy thesis is just that neuroscientific data are not necessary to explain generalizations in cognitive science. This weak sort of epistemic autonomy is correct—for roughly the same reasons the strong sort is incorrect. Insofar as scientific justification is holistic, we ought not restrict the domain of evidence pertinent to justifying hypotheses. This means we should not rule out neuroscientific evidence ad hoc, but also that we should not rule out non-neuroscientific evidence by stipulation.
In this respect, I take it that the weak epistemic autonomy thesis is not peculiar to cognitive science per se. It’s just the application of a more general scientific norm to the cognitive case: that any sort of evidence is potentially relevant to justifying an hypothesis.
5 Challenges to Epistemic Autonomy

Nonetheless, Boone and Piccinini seem to reject the weak autonomy thesis when it comes to cognitive neuroscience. One claim they make in this regard is the following:

For a functional hypothesis to prove correct, the structures that perform that function within the nervous system must be identified (p. 1522).

That is, neuroscientific knowledge is necessary to confirm a functional hypothesis about cognition. It's not clear what reason they have to think this claim is correct. It certainly doesn't seem to be correct when it comes to sciences other than the cognitive. Take the ideal gas laws. It seems there is ample justification to suppose that the volume of a gas varies inversely with its pressure at constant temperature without knowing anything about the particular molecular kinematics that underlie the generalization. It's not clear why cognition should be any different in this regard.

Even cognitive neuroscientists who pursue discovery of cognitive phenomena alongside characterization of neurophysiological processes don't seem to buy the claim that identification of neurophysiological realizers is necessary to prove cognitive science generalizations. Take for example Poeppel et al. (2008). They explicitly adopt the classical Marr-style 3-level framework of cognition in order to provide an account of speech perception that integrates insights from cognitive phonology and auditory neuroscience. Key to their reasoning are premises such as the following:

We commit to a specific representational theory, that of distinctive features… As decades of research show, phonological generalizations are stated over features… we assume that one of the central aspects of speech perception is the extraction of distinctive features from the signal (p. 2).

A central question is how to accomplish the mapping from a spectro-temporal, acoustic, representation to the lexical-phonological one (p. 3).

…we are convinced that one cannot do without the constraints derived from linguistics, and particularly phonology (p. 4).

The goal of the research is not to discover neural structures that serve as the realizers of distinctive feature representations in order to prove that such phonological posits exist. Rather, distinctive features are assumed to exist on the basis of non-neuroscientific research within cognitive phonology. The point of the
neuroscientific investigation is to discover how input arriving in auditory cortex might be formatted so as to play the functional role of distinctive feature representations:

Cellular and systems neuroscience teaches us essential facts about how acoustic signals are analysed in the afferent auditory pathways… [I]f we assume… that there are abstract internal representations that form the basis for linguistic representation and processing, there must be some stage at which auditory signals are translated into such representations (p. 4).

If lexical and distinctive feature representations are indeed a unique aspect of language processing, you might expect to find unique cortical processes that could be picked out as the realizers of these representations. But, Poeppel et al. are at pains to point out that failure to identify such unique neurophysiological structures should by no means in itself militate against the decades of phonological research they cite. They point out that, according to one way of thinking, the idea that speech representations are specialized

…presumably means that the cerebral machinery we have to analyse speech is specialized for speech signals… [but]… It is not obvious whether very much can be learned by focusing on whether or not there is this kind of extreme specialization (p. 12).

Even though all the neurophysiological areas recruited for speech perception are also involved in other tasks, this is not sufficient evidence to think that lexical and distinctive feature representations do not have the distinctive characters attributed to them by non-neuroscientific cognitive phonology and linguistics:

They may share properties with other cognitive representations, but they have a number of extremely specialized properties… some formal attribute of the representations must be such that they can participate in formal operations ranging from pluralization to compound generalization to phrase structure construction (p. 13).

I don't take Poeppel et al.'s statements above to be an argument for the weak epistemic autonomy of the cognitive sciences. They are merely illustrative of the sociological fact that some of the foremost practitioners of cognitive neuroscience (broadly construed) indeed embrace weak epistemic autonomy. The considerations Poeppel et al. point to are characteristic of the general embrace of "Galilean abstraction" advocated by Chomsky and other cognitive scientists working within the tradition that includes Marr, Pylyshyn, and generative linguistics (Chomsky 2002, p. 98; Hornstein 2005). The general argument here is that insofar as other sciences fruitfully abstract away from the vagaries of how, for example, gas laws may be implemented by molecular kinematics, or Newton's laws may be implemented by masses that are not perfect points in closed systems, cognitive sciences should be free to proceed without waiting on the neurophysiological details. That is just to embrace weak epistemic autonomy.

The fact that the planets do not move in perfect ellipses as Newtonian mechanics would predict of point masses in a closed system is not in itself sufficient reason to
argue that Newtonian mechanics be replaced by a "planetary Newtonian mechanics," which would look at how the principles of Newtonian mechanics are influenced by the interactions between the idiosyncratic compositions of Mars, Venus, various asteroids, space dust, etc. Rather, astronomers assume that all sorts of interactions will influence how particular planets will in fact move, but nonetheless extract interesting generalizations about how they would move in idealized conditions. The assumption is that any variation from the ideal should be liable to further explanation.[12] The burden is on those who think that cognitive science is different from astronomy in this regard to give reasons as to why it is. Insofar as weak epistemic autonomy seems to work well for many other sciences, it's unclear what reason we would have to deny it to cognitive science.

Perhaps Boone and Piccinini did not mean to make such a strong epistemic claim. Perhaps they just wished to make the metaphysical claim that for a cognitive function to be instantiated, the neurophysiological structures that instantiate it must be able to instantiate that function. That point seems uncontroversial, albeit irrelevant to whether cognitive science is autonomous of neuroscience. As we saw earlier, this seems both like a truism and perfectly consistent with both the metaphysical and the epistemic autonomy thesis. Either way, we as yet have no reason to deny the weak autonomy thesis.

They do press what seems to be an epistemic point: that the picture of cognitive science as autonomous from neuroscience fell apart in part because

new modeling and empirical techniques—including the emergence of neuroimaging methods—have provided more sophisticated ways to link cognitive capacities to the activities of specific neural systems (p. 1530).

Again, it's not clear why new sources of evidence would undermine either the metaphysical or epistemic autonomy thesis. That there are generalizations that cannot be made in neurobiological terms does not preclude us learning more about just how cognitive processes are implemented in particular kinds of brains. That we now have new sources of evidence about how cognitive processes are implemented in particular kinds of brains does not seem to disqualify using other sorts of evidence to justify cognitive generalizations.
6 One Last Alternative

Boone and Piccinini nonetheless continue to press that their multilevel mechanism account is an alternative to the claim that cognitive science is autonomous of neuroscience. One last difference they point to is that they take computational, representational accounts of cognition to describe only how minds might possibly work, not how they actually work:

The computational-level descriptions Marr and others sought are best construed as a valuable step along the way to integrated multilevel mechanistic explanations. It is no longer enough to simply home in on ways in which problems might be solved in the brain; contemporary cognitive neuroscience aims to understand how those problems are actually solved in the brain (p. 1520).

Now, it may be true that there are many computational explanations that aim only to characterize how, for example, the output of a system could be generated from a particular input. See, for example, Jones and Love (2011) for an argument against Bayesian theories of this form. What Jones and Love stress is that such theories should not pull research away from theories within cognitive science that instead seek to describe the computational processes minds in fact implement. There are indeed many computational, representational theories of cognition that describe how minds actually work, and that are both metaphysically and (weakly) epistemically autonomous of neuroscience.

Take, for example, the research program of Pietroski et al. (2009, 2011) and Lidz et al. (2011). This research seeks to figure out which of several possible algorithms humans in fact use to determine the truth of sentences like "most of the dots are yellow." There are several possible ways to assess whether most of the dots are yellow in a display of blue and yellow dots. One would be to calculate how many blue dots there are and how many yellow dots there are, and then assess whether the quantity of yellow dots is greater. Another would be to set up a one-to-one correspondence between the blue and yellow dots, and then determine whether any yellow dots remain. Pietroski et al. (2009) provide psychophysical reaction-time evidence that the former is the algorithm human minds actually deploy when determining whether the sentence "most of the dots are yellow" is true. Subsequent research has further precisified this algorithm (Lidz et al. 2011).

Notice that there are many procedures that could be used to calculate the cardinality of the set of non-yellow dots used in the above algorithm. One procedure would be to calculate the cardinality of the things that have the property of being dots but not the property of being yellow. Another would be to calculate the cardinality of the set of all the dots, as well as the cardinality of the set of yellow dots, and then subtract the second from the first. Lidz et al. obtained psychophysical evidence that humans deploy the latter procedure.

Such research is an example of cognitive science proceeding autonomously of neuroscience to characterize not just how human minds might possibly solve problems, but which of many possible algorithms human minds in fact deploy. The explanations are weakly epistemically autonomous in that they do not depend on neurophysiological evidence for justification, yet they remain defeasible in light of neurophysiological evidence indicating that neural tissue could not implement the algorithm they specify. The explanations are metaphysically autonomous in that they are described in terms of computations over numerical representations, in abstraction from how the algorithm might be implemented by neurophysiological processes, silicon circuitry, or what have you. So, the research determines precisely which algorithms human minds in fact compute, in a way that is both metaphysically and (weakly) epistemically autonomous of neuroscience.
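For concreteness, here is a minimal sketch of my own (ordinary Python; none of the names or details come from Pietroski et al. or Lidz et al.) of the two procedures for obtaining the non-yellow cardinality. Both return the same truth value for "most of the dots are yellow"; they differ only in the algorithm by which that value is computed, which is exactly the level of description the psychophysical evidence is meant to adjudicate.

```python
# Illustrative sketch only: two input-output equivalent ways to evaluate
# "most of the dots are yellow" that differ in how the non-yellow
# cardinality is obtained.

def most_yellow_by_selection(dots):
    """Enumerate the non-yellow dots directly, then compare cardinalities."""
    yellow = sum(1 for d in dots if d == "yellow")
    non_yellow = sum(1 for d in dots if d != "yellow")  # direct selection of non-yellow
    return yellow > non_yellow

def most_yellow_by_subtraction(dots):
    """Obtain the non-yellow cardinality by subtracting |yellow| from |dots|,
    the kind of subtraction procedure the Lidz et al. evidence favors."""
    total = len(dots)
    yellow = sum(1 for d in dots if d == "yellow")
    return yellow > total - yellow

display = ["yellow", "blue", "yellow", "yellow", "blue"]
print(most_yellow_by_selection(display), most_yellow_by_subtraction(display))  # True True
```

The point of the sketch is only that extensionally equivalent procedures can differ as algorithms; behavioral evidence is what decides between them, without any appeal to how either would be realized in neural tissue.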
Now, some mechanists may object that the sort of accounts Pietroski et al. provide are not so much explanations of how people assess whether most dots are yellow as explananda that are themselves in need of explanation. One might further think that explanations of, for example, why most humans deploy the algorithms characterized by Pietroski et al. will most plausibly be given in neurophysiological terms. Thus, the Pietroski research does not exemplify an autonomous explanation—because it's not an explanation at all! (Thanks to an anonymous reviewer for raising this possibility.)

Here's where I heartily agree with Boone and Piccinini: the Pietroski results need not count exclusively as explananda or as explanantia; they can be both. Boone and Piccinini are quite right to point out that under their multilevel mechanistic framework:

Multilevel neurocognitive mechanisms have an iterative structure: at any level, each component of the mechanism is in turn another mechanism whose capacities are explained by the organized capacities of its components; and each whole mechanism is itself a component part that contributes to the capacities of a larger whole (p. 1515).

Thus, it is perfectly correct to point out that the deployment of algorithms such as those described by Pietroski et al. requires an explanation of how they are in fact implemented by human neural tissue. At the same time, attributing the use of such algorithms to human minds helps explain how people assess sentences involving the word "most." An algorithmic level of explanation serves as both explanandum and explanans.

Neuroscientific evidence may well bear on deciding just which computational processes different brains implement. That does no damage to the weak epistemic autonomy of cognitive science. Cognitive neuroscientists may well discover lots of interesting information about how computational, representational processes are implemented by particular brains and types of brains. But that does no damage to the metaphysical autonomy of cognitive science. It's always a possibility that computational, representational explanations get eliminated entirely. But as long as that's not likely (and I agree with Boone and Piccinini that it's not), the autonomy of cognitive science seems here to stay. As a bonus, it seems completely compatible with mechanistic explanation.
References

Bechtel, W. (2016). Investigating neural representations: The tale of place cells. Synthese, 193(5), 1287–1321.
Boone, W., & Piccinini, G. (2016). The cognitive neuroscience revolution. Synthese, 193(5), 1509–1534.
Burge, T. (2010). Origins of objectivity. Oxford: Oxford University Press.
Carruthers, P. (2015). The centered mind: What the science of working memory shows us about the nature of human thought. Oxford: Oxford University Press.
Chomsky, N. (2000). New horizons in the study of language and mind. Cambridge: Cambridge University Press.
Chomsky, N. (2002). On nature and language. Cambridge: Cambridge University Press.
Collins, J. (2007). Meta-scientific eliminativism: A reconsideration of Chomsky's review of Skinner's Verbal Behavior. British Journal for the Philosophy of Science, 58(4), 625–658.
Egan, F. (2010). Computational models: A modest role for content. Studies in History and Philosophy of Science Part A, 41(3), 253–259.
Fodor, J. A. (1974). Special sciences (or: The disunity of science as a working hypothesis). Synthese, 28(2), 97–115.
Fodor, J. A. (1975). The language of thought. Cambridge: Harvard University Press.
Fodor, J. A. (1987). Psychosemantics: The problem of meaning in the philosophy of mind. Cambridge: MIT Press.
Fodor, J. A. (1997). Special sciences: Still autonomous after all these years. Noûs, 31, 149–163.
Fodor, J. A. (1998). Concepts: Where cognitive science went wrong. Oxford: Oxford University Press.
Fodor, J. A. (2000). The mind doesn't work that way: The scope and limits of computational psychology. Cambridge: MIT Press.
Hale, M., & Reiss, C. (2008). The phonological enterprise. Oxford: Oxford University Press.
Hornstein, N. (2005). Chomsky's natural philosophy. Foreword to N. Chomsky, Rules and representations. New York: Columbia University Press.
Idsardi, W. (2006). A simple proof that optimality theory is computationally intractable. Linguistic Inquiry, 37, 271–275.
Jones, M., & Love, B. C. (2011). Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34(4), 169–188.
Knoll, A. (under review). Intentional content in phonology.
Knoll, A., & Rey, G. (in press). Arthropod intentionality? In K. Andrews & J. Beck (Eds.), The Routledge handbook of philosophy of animal minds. Abingdon: Routledge.
Lidz, J., Pietroski, P., Hunter, T., & Halberda, J. (2011). Interface transparency and the psychosemantics of most. Natural Language Semantics, 19(3), 227–256.
Menzel, R. (2012). The honeybee as a model for understanding the basis of cognition. Nature Reviews Neuroscience, 13, 758–768.
Orlandi, N. (2014). The innocent eye: Why vision is not a cognitive process. New York: Oxford University Press.
Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183(3), 283–311.
Piccinini, G., & Maley, C. J. (2014). The metaphysics of mind and the multiple sources of multiple realization. In M. Sprevak & J. Kallestrup (Eds.), New waves in philosophy of mind (pp. 125–152). New York: Palgrave Macmillan.
Pietroski, P., Lidz, J., Hunter, T., & Halberda, J. (2009). The meaning of 'most': Semantics, numerosity and psychology. Mind and Language, 24(5), 554–585.
Pietroski, P., Lidz, J., Hunter, T., & Halberda, J. (2011). Seeing what you mean, mostly. Syntax and Semantics, 37, 181–218.
Pietroski, P., & Rey, G. (1995). When other things aren't equal: Saving ceteris paribus laws from vacuity. British Journal for the Philosophy of Science, 46(1), 81–110.
Poeppel, D., Idsardi, W. J., & van Wassenhove, V. (2008). Speech perception at the interface of neurobiology and linguistics. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 363(1493), 1071–1086.
Polger, T. W., & Shapiro, L. A. (2016). The multiple realization book. Oxford: Oxford University Press.
Prince, A., & Smolensky, P. (2004). Optimality theory: Constraint interaction in generative grammar. New York: Wiley.
Putnam, H. (1975). Philosophy and our mental life. In H. Putnam (Ed.), Mind, language and reality: Philosophical papers (Vol. 2, pp. 291–303). Cambridge: Cambridge University Press.
Ramsey, W. M. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
Stich, S. P. (1983). From folk psychology to cognitive science: The case against belief. Cambridge: MIT Press.
Woodward, J. (2010). Causation in biology: Stability, specificity, and the choice of levels of explanation. Biology and Philosophy, 25(3), 287–318.
Yablo, S. (1992). Mental causation. Philosophical Review, 101, 245–280.