AI & Soc DOI 10.1007/s00146-012-0390-6
25TH ANNIVERSARY VOLUME A FAUSTIAN EXCHANGE: WHAT IS TO BE HUMAN IN THE ERA OF UBIQUITOUS TECHNOLOGY?
The quest for artificial wisdom David Casacuberta Sevilla
Received: 11 July 2011 / Accepted: 6 January 2012 / © Springer-Verlag London Limited 2012
Abstract The term "contemplative sciences" refers to an interdisciplinary approach to the mind that aims at a better understanding of alternative states of consciousness, like those obtained through deep concentration and meditation, mindfulness and other "superior" or "spiritual" mental states. There is, however, a key discipline missing: artificial intelligence. AI has forgotten its original aim of creating intelligent machines that could help us better understand what intelligence is, and is now more concerned with pragmatic applications, so almost nobody in the field seems interested in joining this new effort of contemplative science. In this paper, I would like to accomplish the following: (1) to give a brief description of the field of "contemplative sciences"; (2) to argue why AI should actively join this new paradigm in the study of the mind; and (3) to set up a research program on artificial wisdom, that is, to design computational systems that can model at least some relevant aspects of human wisdom.

Keywords Wisdom · Cognitive sciences · Artificial wisdom · Contemplative sciences · Alternate states of consciousness · Meditation · Buddhism
1 Introduction

The three main aims of this paper are: (1) to give a brief description of the field of "contemplative sciences" and show how it is relevant to change the main emphasis of our research in the cognitive sciences, paying more attention to positive mental states like empathy or compassion and how to make them flourish; (2) to argue why AI should actively join this new paradigm in the study of the mind; and (3) to set up a research program on artificial wisdom, that is, to design computational systems that can model at least some relevant aspects of human wisdom, like the ability to analyze a situation from a non-goal-oriented perspective, to generate a global understanding of problems through a continuous coupling between the cognitive system and the environment, or to realize how positive, healing emotional states like empathy or compassion could play a role in a new way of understanding human–machine interaction.

D. Casacuberta Sevilla, Department of Philosophy, Universitat Autònoma de Barcelona, Catalonia, Spain. e-mail: [email protected]

1.1 Structure of the paper

The paper is divided into the following sections: The first section briefly presents the field of contemplative sciences, its main aims and methodological program. The second section discusses why AI is not a part of the contemplative sciences. The third section reviews the "classical" debate between Baltes and Ardelt on how to define wisdom. The next section dwells on what wisdom is and what artificial wisdom should look for in order to build a comprehensive and working theory of it. The fifth section argues for the usefulness of a symbolic approach to artificial wisdom, and the final section discusses how all these theoretical reflections could one day lead to implemented simulations of wise behavior.
2 What are contemplative sciences?

Contemplative sciences is a cluster of disciplines aimed at understanding those "alternative" states that so
far psychology and the cognitive sciences were not interested in, or able, to analyze. The term was introduced by meditation researcher Alan Wallace as a common label to bring together results from philosophy, psychology and the neurosciences on alternative mental states that were not covered in classical cognitive science. More specifically, it tries to understand scientifically the special states that are generated during meditation practice and to analyze both the neural and cognitive mechanisms that make those states possible, as well as to establish the effects that those mental states have on the person. Like the cognitive science enterprise of the 1950s, it is based on an interdisciplinary working model that includes cognitive psychology, anthropology, the neurosciences, philosophy, religious studies and psychotherapy, among many others. Therefore, contemplative sciences have a twofold aim: one theoretical, to understand those mental states, and one practical, to find ways to help human development. Among the different alternative mental states that can be analyzed under the "contemplative sciences" label, a key one is wisdom. Historically a subject studied only in philosophy and religion, we are now watching the first comprehensive steps toward understanding wisdom in a scientific light. The program and aims of contemplative sciences are largely based on the philosophical revolution that the rediscovery of Eastern religions brought in the 1950s and 1960s, especially the introduction of Tibetan and Zen Buddhism in the West. One can point to the foundation of the Naropa Institute by Chögyam Trungpa as the first milestone in the creation of contemplative sciences. One of the main differences between those countercultural 1960s and today's contemplative sciences is scientific rigor: back then, excess of enthusiasm as well as lack of deep knowledge produced scientific papers without proper control protocols, and too simplistic, all-or-nothing approaches to what meditation or wisdom was.
No distinctions were made between, let us say, Tibetan and Zen meditation procedures. Techniques such as fMRI did not exist, and there was too much trust in untested first-person reports from meditators to convince skeptics. As a result of that lack of precision, both conceptual and methodological, the movement was largely discredited until recently. Thanks to great efforts from organizations like the Mind and Life Institute, we are watching the rebirth of the contemplative sciences, now improved with a better philosophical understanding of the different religious traditions under study, better technological equipment and sounder experimental protocols. For this revolution, the philosophical and scientific models of neurophenomenology and enactivism were key. Thanks to the work of Varela and Thompson (Varela et al. 1991; Thompson 2007), it became possible to work with the "first-person perspective", basic to understanding meditative states or wise performances, in a scientific way, creating bridges
between the phenomenology of mental states and their neural basis. There is another important difference between the cognitive sciences and these new contemplative sciences that I want to stress: if one checks the list of disciplines related to contemplative sciences, it is easy to spot the difference. Artificial intelligence, so present and active in the cognitive science movement, is completely missing from this new research field. Nevertheless, I do think that AI has a lot to accomplish in the contemplative science realm, among other things, to help build a meaningful theory of what wisdom really is. To do so, we first need to see whether AI really fits within this new research paradigm.
3 Why is AI missing from the picture?

There is no deep theoretical or methodological reason why AI cannot join the contemplative sciences team. The main reasons are either historical or structural, and these can easily be reversed. Here is my own outline of such reasons:

1. In the early days of AI, the days of Minsky, McCarthy, Sloman or Simon, and even Dreyfus in opposition (Sloman 1971; McCarthy 1979; Simon 1997; Dreyfus 1972, 1994), AI was viewed as a theoretical science that aimed at making models of how the mind worked, a view shared by philosophers like Steven Pinker (Pinker 1997). Then came expert systems, next machine learning, and AI took a deeply engineering-oriented, problem-solving turn in which verisimilitude was no longer an aim. What really mattered was to solve problems, no matter whether we had to use brute force or any other algorithmic trick to produce a meaningful response. What counts for a face recognition program is its ability to recognize people, not to explain how the human brain is actually able to spot the face of a known person in a crowd. At the same time, computers became incredibly fast and powerful, and the Internet gave us enormous data sets for free, which could be put toward purely statistical approaches to AI. Google Translate is probably the best translating tool we have so far (Levy 2011), but it does not teach us anything about how minds and brains translate from one language to another.

2. Enactivism, interestingly enough one of the main forces behind the contemplative sciences movement, is at odds with the way most AI works. Following the work of Varela and Thompson, and then philosophers and computer scientists like Di Paolo or Froese (Di Paolo 2003, 2005;
Froese and Ziemke 2009), we are starting to realize important gaps in the way AI tries, or tried, to understand the mind. First, Varela (1996) pointed out that the symbolic approach in AI could not give us a real coupling with the external world, or embodiment. In a word, it could not give us sense-making (Dreyfus 2007), because those symbols were only driven by syntax, and semantics was superimposed by the external user. Neural networks seemed able to do that, or that was the common view, but first Varela and then Thompson in more detail (Thompson 2007) showed that neural networks were actually the result of a programmer's external decisions about what the training elements would be, so they could not really deliver sense-making either. Brooks made the case for autonomous AI (Brooks 1991, 1999), and Varela et al. (1991) were happy to present autonomous robots as a neurophenomenological approach to AI. However, Di Paolo and Froese (ibid.) have argued very soundly and precisely against that possibility. Briefly, the problem continues to be a lack of semantic understanding of the social world around us, a lack of real autonomy.

3. There is an absence of spirituality-related issues in AI. This is despite the fact that Alan Turing was interested in alternative states and, in his seminal "Computing Machinery and Intelligence" (Turing 1950), pondered what would happen if telepathy were used in a Turing test; despite the deep fascination that Douglas Hofstadter feels for Zen koans (Hofstadter 1979); and despite the Dalai Lama (Hayward and Varela 1992) having argued in favor of the possibility that a computer could be enlightened. Most people in the computer science business seem to be largely uninterested in spiritual issues. There are some exceptions, like the Wisdom 2.0 conference, in which mindfulness and meditation issues are discussed by the computer science crowd, mostly from a pragmatic point of view, though. But surely the lack of a tradition of studying spiritual issues in AI has stopped researchers from joining the contemplative sciences movement.
Reasons 1 and 3 above are just historical and structural. There is room for people and enough resources in the world to devote some thinking to theoretical issues in AI without losing any of the helpful practical inventions that AI gives us. After all, Brooks is responsible both for a major theoretical shift in AI and for the design of the vacuum-cleaning robot Roomba. Ditto for spiritual issues. Reason 2, however, sounds more like a theoretical problem. If none of the actual theoretical models in AI can help us to generate sense-making, should AI therefore be excluded from the scientific study of wisdom? I believe the contrary, and think that even the now so criticized symbolic approach can be very helpful for understanding wisdom. Before being able to state this, though, we need a better understanding of what wisdom is.
4 What is wisdom?

As the reader probably suspects, with such a slippery concept there is no common definition of what "wisdom" is. I would like to start with the definition proposed by Paul Baltes (Baltes and Staudinger 2000), the first psychologist who undertook a systematic approach to understanding wisdom scientifically. Then I will discuss some of the objections that can be raised, mostly following the famous Baltes/Ardelt debate, to show how difficult an AI implementation of wisdom sounds. Bear in mind that this is not a mathematical definition stating necessary and sufficient conditions. As usual in cognitive science, what we have is a prototype that can help us better understand what wisdom is. As Wittgenstein stated in his Philosophical Investigations (Wittgenstein 1999), in philosophical inquiries we cannot expect to get at the essence of a concept, to find all the necessary and sufficient conditions, but only a "family resemblance." We will follow Paul Baltes' "Wisdom: A Metaheuristic (Pragmatic) to Orchestrate Mind and Virtue Toward Excellence" (ibid.), where wisdom is defined as follows:

Wisdom in this paradigm is defined as an expert knowledge system concerning the fundamental pragmatics of life. The fundamental pragmatics of life include knowledge and judgment about the meaning and conduct of life and the orchestration of human development towards excellence while attending conjointly to personal and collective well-being.

In order to arrive at the above definition, Baltes and his team interviewed hundreds of people to find out what is common in our understanding of wisdom. They found the following characteristics:

(1) Wisdom is a concept that carries specific meaning that is widely shared and understood in its language-based representation. For instance, wisdom is clearly distinct from other wisdom-related psychological concepts such as social intelligence, maturity, or sagacity.
(2) Wisdom is judged to be an exceptional level of human functioning. It is related to excellence and ideals of human development.

(3) Wisdom identifies a state of mind and behavior that includes the coordinated and balanced interplay of intellectual,
affective, and emotional aspects of human functioning.

(4) Wisdom is viewed as associated with a high degree of personal and interpersonal competence, including the ability to listen, evaluate, and give advice.

(5) Wisdom involves good intentions. It is used for the well-being of oneself and others.

In other papers (Baltes 1993, 1999), he identified the following seven properties: (1) Wisdom represents a truly superior level of knowledge, judgment, and advice; (2) Wisdom addresses important and difficult questions and strategies about the conduct and meaning of life; (3) Wisdom includes knowledge about the limits of knowledge and the uncertainties of the world; (4) Wisdom constitutes knowledge with extraordinary scope, depth, measure, and balance; (5) Wisdom involves a perfect synergy of mind and virtue, that is, an orchestration of knowledge and character; (6) Wisdom represents knowledge used for the good or well-being of oneself and that of others; (7) Wisdom, though difficult to achieve and to specify, is easily recognized when manifested.

Based on these properties, Baltes finally proposed two basic criteria and three metacriteria. The two basic criteria are:

The two general basic wisdom criteria (factual and procedural knowledge) are characteristic of all types of expertise and stem from the tradition of research in expertise. Applied to the present subject area, these criteria are (1) rich factual (declarative) knowledge about the fundamental pragmatics of life and (2) rich procedural knowledge about the fundamental pragmatics of life. The factual knowledge part concerns knowledge about such topics as human nature, life-long development, variations in developmental processes and outcomes, interpersonal relations, social norms, critical events in life and their possible constellations, as well as knowledge about the coordination of the well-being of oneself and that of others.
Procedural knowledge about the fundamental pragmatics of life involves strategies and heuristics for dealing with the meaning and conduct of life; for example, heuristics for giving advice and for the structuring and weighing of life goals, ways to handle life conflicts and life decisions, knowledge about alternative back-up strategies if development were not to proceed as expected. The first metacriterion is lifespan contextualism: lifespan contextualism is meant to identify knowledge that considers the many themes and contexts of
life (e.g., education, family, work, friends, leisure, the public good of society, etc.), their interrelations and cultural variations, and, in addition, incorporates a lifetime temporal perspective (past, present, future). Another feature of lifespan contextualism is the historical and social location of individual lifespan development, as well as the idiographic or nonnormative events that operate in human development.

These are the other two metacriteria:

The second wisdom-specific metacriterion, relativism of values and life priorities, deals with the acknowledgment of and tolerance for value differences and the relativity of the values held by individuals and society. Wisdom, of course, is not meant to imply full-blown relativity of values and value-related priorities. On the contrary, it includes an explicit concern with the topic of virtue and the common good. The third metacriterion, the recognition and management of uncertainty, is based on the ideas that (1) the validity of human information processing itself is essentially limited (constrained), (2) individuals have access only to select parts of reality, and (3) the future cannot be fully known in advance. Wisdom-related knowledge and judgment are expected to offer ways and means to deal with such uncertainty about human insight and the conditions of the world, individually and collectively.

In case the reader is worried that criteria and metacriteria are mixed here, I would like to point out that this is something Baltes intended, since wisdom is mostly a matter of metacriteria: revising our views of the world and checking whether they are really functioning as expected. So far so good. Let us consider how we could implement Baltes' model in a research program on artificial wisdom.
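Baltes never proposes an implementation himself, but his two basic criteria and three metacriteria map quite naturally onto a rule-based system. The following toy sketch, in which every rule name, scenario field and threshold is my own invention rather than Baltes' formalization, shows the general shape such an expert system could take:

```python
# Hypothetical sketch of Baltes' criteria as a toy rule-based advisor.
# Every rule, field and threshold below is invented for illustration.

from dataclasses import dataclass

@dataclass
class Scenario:
    facts: set       # declarative knowledge about the situation
    contexts: set    # life domains involved (family, work, ...)
    uncertain: bool  # whether the outcome is fundamentally open

# Basic criteria: rich factual and procedural knowledge, as simple tests.
BASIC_RULES = {
    "factual_knowledge": lambda s: len(s.facts) >= 2,
    "procedural_knowledge": lambda s: len(s.contexts) >= 1,
}

# Metacriteria: metarules that qualify the advice rather than produce it.
METARULES = {
    "lifespan_contextualism": lambda s: len(s.contexts) > 1,
    "value_relativism": lambda s: "shared_values" not in s.facts,
    "uncertainty_management": lambda s: s.uncertain,
}

def advise(s: Scenario) -> str:
    if not all(rule(s) for rule in BASIC_RULES.values()):
        return "ask for more information"
    caveats = sorted(name for name, meta in METARULES.items() if meta(s))
    if caveats:
        return "hedged advice (revise: " + ", ".join(caveats) + ")"
    return "confident advice"

# Baltes' "Joyce" vignette (discussed below), encoded very crudely:
joyce = Scenario(facts={"new_business", "son_needs_childcare"},
                 contexts={"family", "career"},
                 uncertain=True)
print(advise(joyce))
```

Note how the metarules do not inspect the scenario to produce advice, but qualify the advice the basic rules would otherwise give; this mirrors Baltes' insistence that the metacriteria monitor and revise wisdom-related judgment.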
Baltes is actually thinking of a sort of expert system: we distill the rules a wise person should follow, create a hierarchy of them, relate those rules to real-life scenarios, monitor the rules with metarules, and that, mostly, is wisdom. So, if we implement Baltes' model, we can have a wise program, able to give us recommendations on how to endure the difficulties of life and to seek real happiness. Things, however, get trickier once we realize that Baltes' definition is not good enough to understand wisdom. Ardelt, in her analysis of wisdom as an expert knowledge system (Ardelt 2004), raises several objections to Baltes' definition: They '… define wisdom as expert knowledge in the fundamental pragmatics of life that permits
exceptional insight, judgment, and advice about complex and uncertain matters'. The Berlin group does not conceptualize wisdom as a personality characteristic or a combination of personality qualities but as an expert knowledge system, which belongs to the cognitive pragmatics of the mind […] Moreover, it means that a person cannot be wiser than the 'collectively anchored product' of wisdom. If this is correct, individuals who are presumed to be exceptionally wise by many people, such as Jesus of Nazareth and Gautama the Buddha, could not have been wiser than the collectively accumulated wisdom-related knowledge of their time.

Ardelt is not presenting religious figures in order to be politically correct, but simply looking for prototypes of wisdom. When common citizens are asked to give examples of wise people, they tend to cite religious and political figures. The main task of the psychologist, sociologist or anthropologist is to find the "family resemblances" among them to get a working definition of wisdom. So here we have the first problem. If we look closely at our intuitions, we will see that wisdom (despite catchy phrases like the "wisdom of crowds", Surowiecki 2004) is a property of individuals. Actually, Jesus and the Buddha were wiser than their societies, which is precisely why we consider them sages, wise men: they were able to look far beyond the shared knowledge of their time and propose a new understanding of things and a new approach to living. Therefore, wisdom is a characteristic of individuals, deeply related to the first-person view of the contemplative sciences, and cannot be studied merely as a sociological process. In the words of Ardelt:

Intellectual or theoretical knowledge is knowledge that is understood only at the intellectual level, whereas wisdom is understood at the experiential level.
It is only when an individual realizes (i.e., experiences) the truth of this preserved knowledge that the knowledge is re-transformed into wisdom and makes the person wise(r).

According to Ardelt, one of the main problems with Baltes' theory lies in the questionnaires presented to the experimental subjects, which probe wisdom through hypothetical life-review tasks and life-planning problems. To understand the situation better, here is an example of such a question:

Joyce, a widow aged 60 years, recently completed a degree in business management and opened her own business. She has been looking forward to this new challenge. However, she has just heard that her son
has been left with two small children to care for. Joyce is considering the following options: She could plan to give up her business and live with her son, or she could plan to arrange for financial assistance for her son to cover childcare costs. Formulate a plan that details what Joyce should do and should consider in the next 3–5 years. What extra pieces of information are needed? (Baltes et al. 1995, p. 159)

If you ponder the problem in detail, you will realize that the situation is too general to really invoke any wisdom. A wise person confronted with such a dilemma would first of all ask for a more detailed description of the problem. What is Joyce like? What are her priorities in life? How independent is her son? So, as Ardelt coherently suggests, those tests are not really measuring wisdom; they only capture our intellectual, social knowledge of what wise people do in a given situation. They are only assessing expertise in life-reviewing based on common social norms. Therefore, Baltes and his team are trying to quantify a field that is mostly based on qualitative answers, answers with depth that depend heavily on context and emotions. Moreover, wisdom is not just what one says, but mostly how one behaves (Clayton and Birren 1980; Strijbos 1995). As Ardelt points out: "A wise statement alone is not an indication of wisdom." Therefore, according to Ardelt:

the acquisition of wisdom is primarily influenced by a person's willingness to remain open to all kinds of experiences and to engage in reflection, self-examination, and self-awareness.
[…] It might be expected that the acquisition of wisdom is accompanied by a strengthening of positive personality characteristics, such as maturity, integrity, and generativity, and a weakening or transcendence of negative characteristics, such as neuroticism or self-centeredness.

Based on the above considerations, Ardelt proposed a new characterization of wisdom heavily dependent on the sense-making abilities of the person:

I propose that the simultaneous presence of cognitive, reflective, and affective personality characteristics is necessary but also sufficient for a person to be considered wise. All men and women of the past and present whom many people regard as wise, such as Jesus of Nazareth, the Buddha, Muhammad, Mahatma Gandhi, Christian saints, Zen masters, etc., seem to possess those three qualities. First, they seem to know something that eludes others. But this knowledge is more than a simple accumulation of facts that can be written down in a book and studied by their followers […] They perceive a deeper truth that had a profound effect on their personality and
conduct in life. Hence, they teach others as much by words as by personal example. Second, wise individuals are able to transcend their subjectivity and projections and look at events objectively and from many different perspectives. Third, people who lack sympathy and compassion for others are generally not considered wise.

So, when Ardelt's objections are properly considered, we realize that wisdom is not as easily computable as a reading of Baltes' approach alone would suggest. Let us see how wisdom looks now in the next section.
5 The elusiveness of wisdom

If we combine Ardelt's ideas with the main proposals of the contemplative sciences program, we can list the main characteristics of wisdom that need to be taken into account when planning an artificial wisdom research project. Here is my own proposal. There are probably more things to consider, but I do not think one can deliver a sound conception of wisdom without taking care of the following concepts:

1. Should versus how. Knowledge, intelligence, is about how to do something, what steps need to be taken in order to solve a specific problem. This is what cognitive science is about. Wisdom, on the contrary, is about whether doing something makes sense at all. As Ardelt has properly argued, wisdom is mostly related to action, not to canned sentences, and the main question is whether one should do anything at all, not how to do that something.

2. Embodiment. Because wisdom is, after all, about doing or not doing things in the real world, it must be in some way embodied in our physical structure and be the result of continuous coupling and decoupling with the environment. As Ardelt argued, wisdom is heavily dependent on context. Wisdom is a 24-hour task, and one needs to be continuously vigilant of changes in the environment in order to act wisely.

3. Self-relevance. In order to be considered wise, your wisdom has to affect your own living, helping you become a better person, avoid suffering and make your life flourish.
Embodiment and self-relevance are probably the two key features that make the task of actually creating a wise artificial creature so difficult. We are very far from creating autonomous robots that are embodied and enactive, that is, able to couple and decouple with the environment, to create sense-making from what happens in the outside world, and to have their own
relevant goals, plans and values. So far, only living creatures are really able to do that.

4. Empathy. A wise person can give good advice to other people because he or she can reason empathically, getting into someone else's shoes and being able not only to think as they think, but also to feel as they feel. That way, their advice is really helpful.
Empathy also poses a difficulty for creating artificially wise beings. We would need to solve the qualia conundrum in order to design a creature able to feel, if that is possible at all. Mere intellectual understanding of other people's suffering does not count as empathy; one needs to feel it, too. This aspect is mostly lacking in Baltes' approach, and not sufficiently stressed in Ardelt's work, but it follows closely from what they state.

5. Focus on the here and now. A wise person is in continuous touch with what happens right here, right now, and can always generate the most correct answer for the specific moment. That is, there are no truly stable answers in the world of the wise; what should or should not be done is posed by the context and can change every minute.

6. Interconnection. As has been described in Buddhism, probably the religious tradition most influential in contemplative sciences nowadays, the wise person is the one able to understand that things are void, that is, they do not have any inherent nature, because everything is connected to everything, and the way the weather, a cat, a book, a piece of software or I myself behave is the result of the fine and deep interconnections of millions of beings.

7. Non-reacting. If something comes out of a habit, as an automatic response, then it is not really wise, despite the fact that it might work well to solve a problem. The wise person is free from habits and never generates a canned reply, but always gives the right answer and proposes the right action, paying minute-by-minute attention to the context.
Not allowing canned replies seems to rule out symbolic approaches to AW. If we design an expert system to pour out some wisdom on a specific subject, for example, let us imagine a "global warming AW expert", and we design it with certain symbols and rules to be applied, then we are creating a system that produces pre-designed responses. So even if the system looks wise and is able to recommend the wisest action to a politician who is looking for the wisest way to cap CO2 emissions in his or her country, it is just a knowledge-based system, not really able to express wisdom. At least, that is what the usual understanding of wisdom seems to imply. However, I do not think this is a black-or-white issue. In the next section, I
would like to defend the possibility that symbolic AI can play a role in AW as well.
6 A symbolic approach to AW

Given the characteristics of wisdom according to Baltes and Ardelt, and the characteristics we have just outlined, it seems as if there is no room for any symbol-based approach to artificial wisdom. However, I want to argue that the opposite holds. In the 1990s, during the big conceptual battle between the symbolic and the connectionist approach over which theory was better suited to understand cognition, it was common to discuss the two theories from an ontological perspective. That is, the question was what lies within our skulls: either symbols processed by following syntactic rules, or subsymbols embedded in groups of neurons and processed in parallel. The two theories were trying to establish the ontological building blocks needed to understand how cognition is possible. Philosophers like Andy Clark (1989) or Tim van Gelder (1990) worked hard to show that you do not need symbols and syntactic rules to have models of what cognition is and what its primitive elements are. Entering the twenty-first century, a new candidate, enactivism, has gained popularity as another way to explain cognition. However, I do not think it succeeds: enactivist explanations, based on the continuous coupling and decoupling between the cognitive system and the environment (Varela 1997), have so far not been able to generate a comprehensive theory that can be used to make sense of what cognition is and what its building blocks are. Enactivism can give very detailed and promising descriptions of minimal cognition, like how bacteria make sense of their surroundings (Di Paolo 2005), and it can also give us very detailed differential equations to predict the coupling and decoupling of very simple systems (van Gelder 1997), but it does not have a working general theory of what cognition is.
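To make the contrast concrete, here is roughly what a dynamical description of a "very simple system" looks like in practice. The equations and the coupling constant below are invented for illustration; they merely stand in for the far richer agent–environment dynamics the enactivists study:

```python
# Toy dynamical model: an "agent" variable a and an "environment" variable e,
# continuously coupled through an invented constant k:
#     da/dt = -a + k*e
#     de/dt = -e + k*a
# Integrated here with simple Euler steps.

def simulate(k=0.5, a=1.0, e=0.0, dt=0.01, steps=2000):
    for _ in range(steps):
        da = -a + k * e
        de = -e + k * a
        a, e = a + dt * da, e + dt * de
    return a, e

a, e = simulate()
# For |k| < 1 both variables decay toward zero: the model predicts exactly
# how the coupled pair settles, yet the trajectory by itself says nothing
# about what cognition (let alone wisdom) is.
print(round(a, 4), round(e, 4))
```

This is the argument in miniature: the equations predict the coupling and decoupling, but no theoretical vocabulary about cognition falls out of them.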
If we were one day able to analyze a person who shows wise behavior using the current enactivist models, what we would get is incredibly complex differential equations generating degree upon degree of complexity, which would not make any sense from a theoretical point of view. Maybe, if the equations were well established and we had enough computational power to run the calculations, we would be able to predict when and how this person would show some wise behavior, but we would not be able to understand what wisdom is at all. Enactivism has presented enough arguments to show the impossibility of holding the symbolic approach from an ontological point of view.
Clearly, at the basis of our thoughts and cognitions there are no symbols, but a very complex process of coupling and decoupling between system and environment. However, that does not render the symbolic approach useless. Despite the fact that Newton was proved wrong, first by the theory of relativity and then by quantum mechanics, engineers who design bridges still rely on classical kinematics and dynamics, as these offer a good enough approximation for building bridges. So, I view the symbolic approach simply as an abstraction of what the mind is and how cognition is generated, one that is simple and powerful enough to let us understand cognitive processes and generate meaningful theories that can be tested and, more importantly, could in the end help to improve our cognition.

For example, if I use a symbolic approach to understand wisdom, I might end up generating a symbolic, heuristic model that could be run as an expert system. Despite the fact that we know our brains do not work like expert systems, designing one can help to generate heuristic rules that a fellow human, for example a cognitive psychologist or even a therapist, can understand and make sense of. An enactive model would probably be more precise and ontologically accurate, but also useless outside the specific research project that gave birth to it; at least that is the situation nowadays. Maybe in the future we will get a comprehensive theory of wisdom based on enactivism that can make sense of what wisdom is, but so far this is just a far-off possibility, so symbols are our best option.

Let us see another example, from the symbolic versus neural networks debate. Clearly, neural network-based models are better at face recognition than symbolic systems (Clark 1993).
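To make the expert-system idea concrete, here is a minimal sketch of how human-readable heuristic rules might be encoded. The rules are hypothetical placeholders, loosely inspired by the Baltes-style criteria discussed earlier, not a validated model of wisdom; what matters is that each rule fires with an explanation a psychologist or therapist could read and contest.

```python
# Minimal sketch of a rule-based "wisdom assessor". The rules below are
# hypothetical placeholders loosely inspired by Baltes-style criteria;
# the point is that symbolic heuristics stay human-readable.

RULES = [
    ("acknowledges uncertainty",
     lambda r: r["acknowledges_uncertainty"],
     "Wise responses recognize the limits of what can be known."),
    ("considers context",
     lambda r: r["considers_life_context"],
     "Advice is framed within the person's life circumstances."),
    ("weighs several values",
     lambda r: r["values_considered"] >= 2,
     "More than one value system is taken into account."),
]

def assess(response):
    """Return the fired rules plus the explanation a human expert could read."""
    fired = [(name, why) for name, test, why in RULES if test(response)]
    return {"score": len(fired), "explanations": fired}

# A hypothetical coded description of one piece of advice.
example = {
    "acknowledges_uncertainty": True,
    "considers_life_context": True,
    "values_considered": 1,
}
report = assess(example)
print(report["score"])  # number of heuristics satisfied
for name, why in report["explanations"]:
    print("-", name + ":", why)
```

An enactive model of the same advice would presumably be a system of coupled equations; this one, however crude, is a set of claims about wisdom that can be discussed, refined or rejected, which is the whole argument for the symbolic abstraction.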
However, if I want to create a theory of what faces are, of the basic elements that make up a face and how they relate to each other, I need a symbolic model with elements like "noses," "eyes," "eyebrows," "jaws" and so on. If I just train a neural network to recognize the faces on the FBI's Most Wanted list, it will be better at tracking terrorists and serial killers than a symbolic system, but it will not teach me much about faces, only some statistical regularities that make it easy to differentiate between faces. Moreover, I do not even get systematicity: if I train the neural network with the faces of the members of the Spanish Parliament, the representations and statistical calculations it will make to differentiate Mariano Rajoy from José Luis Rodríguez Zapatero will be very different, as the input used to generate the statistical analysis is also different. I would not get any theory of what faces are by running cluster analysis or by inspecting what each neuron does. However, if I create an expert system to recognize faces, although it may be very bad at recognizing actual people, it will help me to develop a theory of the main types of facial expression and to understand the main differences between, let us say, Caucasian and Asian face types.

The first generation of computer programs that played chess could be used to learn chess and develop some heuristics. Deep Blue plays better than them; indeed, it plays better than most human beings, but Deep Blue is not helpful at all for understanding how humans play chess. When AI moved from the symbolic approach to neural networks, machine learning and other statistical methods, we lost our desire to explain things and just got interested in making things that work. We stopped being scientists and philosophers and became only techno-centric engineers. There is nothing wrong with being a techno-centric engineer, of course, and producing good software to recognize faces, simulate the weather or beat humans at chess is something good to have. But it is also good to generate abstractions that lead to theories that help us to understand a domain, and not just simulate it. Until enactivism is able to produce general theories to analyze higher states of cognition, I believe that the symbolic approach is a much better tool for understanding the components, rules and heuristics behind human emotion or wisdom from the point of view of AI.
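A symbolic face model of the kind just described can be sketched in a few lines. The parts inventory, the spatial relations and the sample description below are all invented placeholders; the point is that the model's "theory" of faces is an explicit structure, every element of which can be inspected, debated and revised, unlike the weight matrix of a trained network.

```python
# Sketch of a symbolic face model: the "theory" is the explicit inventory
# of parts and spatial relations. Parts, relations and the sample
# description are invented examples, not an empirical model.

FACE_PARTS = {"eyes": 2, "eyebrows": 2, "nose": 1, "mouth": 1, "jaw": 1}

RELATIONS = [
    ("eyebrows", "above", "eyes"),
    ("eyes", "above", "nose"),
    ("nose", "above", "mouth"),
    ("mouth", "above", "jaw"),
]

def is_well_formed_face(description):
    """Check a part inventory and its relations against the face theory."""
    has_parts = all(description["parts"].get(part, 0) == count
                    for part, count in FACE_PARTS.items())
    has_relations = all(rel in description["relations"] for rel in RELATIONS)
    return has_parts and has_relations

sketch = {
    "parts": {"eyes": 2, "eyebrows": 2, "nose": 1, "mouth": 1, "jaw": 1},
    "relations": RELATIONS,
}
print(is_well_formed_face(sketch))  # the model says why, not just whether
```

Such a model would lose badly to a neural network at recognizing actual people, but when it rejects a description it can say which part or relation was missing, which is exactly the kind of answer a theory of faces, or by extension a theory of wisdom, needs to give.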
7 How to implement AW?

Now that we have the conceptual preliminaries in place, let us see what an AW program should look like and how it could be implemented. I envision the following steps:

(i) To accept that the goal is not to obtain an artificially wise creature (supposing that this is possible at all) but just computer simulations that can help us both to understand at least some aspects of what wisdom is and to produce tools that can help humans to become wiser.
(ii) To obtain models from contemplative sciences and positive psychology that can help us to understand how collective intelligence, wisdom and compassion work in humans.
(iii) To stick to a symbolic modeling approach. The main reason to do so is that it is, as I argued in the former section, the only modeling system that can give us some understanding of what is going on. Besides, wisdom is clearly related to higher human actions, actions that are language based and therefore symbol based.
(iv) To generate symbol-based computer simulations that aim at understanding specific paths of wisdom. Maybe we model the way koans are processed in human minds, or we consider how a tense family situation is processed when one is completely focused on the here and now.
(v) To test the simulation using both "normal subjects" and expert meditators. As Ekman (2008) states, expert meditators like Matthieu Ricard are able to differentiate between mental and emotional states far better than non-trained subjects.
(vi) To use the results obtained to improve our model of wisdom.
(vii) To return to (iv).

8 Conclusions

According to Matthieu Ricard (quoted in Hall 2010), wisdom in Buddhism is related to the following two factors: (1) to understand the true nature of reality and go beyond appearances and (2) to know how to apply that knowledge in order to raise happiness in all beings and reduce suffering. By developing artificial wisdom, the discipline of artificial intelligence could also help in the pursuit of these two noble goals. This can be viewed as two different missions, if one wishes: on one side, a more naturalistic line of research that seeks to understand what wisdom is by means of simulations, and on the other side, a more pedagogical type of research, designing tools that could help people become wiser. If we are lucky and AW is successful, we may see in the future how these two missions interact and how digital tools help us to revise what wisdom is.

References
Ardelt M (2004) Wisdom as expert knowledge system: a critical review of a contemporary operationalization of an ancient concept. Human Dev 47:257–285
Baltes PB (1993) The aging mind: potential and limits. Gerontologist 33:580–594
Baltes PB (1999) Wisdom: the orchestration of mind and virtue. Book manuscript. Max Planck Institute for Human Development, Berlin
Baltes PB, Staudinger UM (2000) Wisdom: a metaheuristic (pragmatic) to orchestrate mind and virtue toward excellence. Am Psychol 55(1):122–136
Baltes PB, Staudinger UM, Maercker A, Smith J (1995) People nominated as wise: a comparative study of wisdom-related knowledge. Psychol Aging 10:155–166
Brooks RA (1991) Intelligence without representation. Artif Intell 47:139–159
Brooks RA (1999) Cambrian intelligence: the early history of the new AI. MIT Press, Cambridge, MA
Clark A (1989) Microcognition: philosophy, cognitive science, and parallel distributed processing. MIT Press, Cambridge, MA
Clark A (1993) Associative engines. MIT Press, Cambridge, MA
Clayton VP, Birren JE (1980) The development of wisdom across the life-span: a reexamination of an ancient topic. In: Baltes PB, Brim OG Jr (eds) Life-span development and behavior, vol 3. Academic Press, New York, pp 103–135
Di Paolo EA (2003) Organismically-inspired robotics: homeostatic adaptation and natural teleology beyond the closed sensorimotor loop. In: Murase K, Asakura T (eds) Dynamical systems approach to embodiment and sociality. Advanced Knowledge International, Adelaide, pp 19–42
Di Paolo EA (2005) Autopoiesis, adaptivity, teleology, agency. Phenomenol Cogn Sci 4:97–125
Dreyfus HL (1972) What computers can't do: a critique of artificial reason. Harper and Row, New York
Dreyfus HL (1994) What computers still can't do. MIT Press, Cambridge, MA
Dreyfus HL (2007) Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Philos Psychol 20(2):247–268
Ekman P (2008) Emotional awareness, a conversation between the Dalai Lama and Paul Ekman. Times Books, London
Froese T, Ziemke T (2009) Enactive artificial intelligence: investigating the systemic organization of life and mind. Artif Intell 173:466–500
Hall S (2010) Wisdom: from philosophy to neuroscience. Knopf, New York
Hayward JW, Varela F (1992) Gentle bridges: conversations with the Dalai Lama on the sciences of mind. Shambala Sun, China
Hofstadter DR (1979) Gödel, Escher, Bach: an eternal golden braid. Basic Books, New York
Levy S (2011) In the plex. Basic Books, New York
McCarthy J (1979) Ascribing mental qualities to machines. In: Ringle M (ed) Philosophical perspectives in artificial intelligence. Harvester Press, USA
Pinker S (1997) How the mind works. W. W. Norton, New York
Simon HA (1997) Artificial intelligence: an empirical science. Artif Intell 77:95–127
Sloman A (1971) Interactions between philosophy and AI. Artif Intell 2:209–225
Strijbos S (1995) How can systems thinking help us in bridging the gap between science and wisdom? Syst Pract 8:361–376
Surowiecki J (2004) The wisdom of crowds. Random House, New York
Thompson E (2007) Mind in life. Harvard University Press, Cambridge, MA
Turing A (1950) Computing machinery and intelligence. Mind 59:433–460
van Gelder TJ (1990) Compositionality: a connectionist variation on a classical theme. Cogn Sci 14:355–384
van Gelder TJ (1997) The dynamical hypothesis in cognitive science. Behav Brain Sci 21:1–28
Varela FJ (1996) Neurophenomenology: a methodological remedy for the hard problem. J Consc Stud 3:330–350
Varela FJ (1997) Patterns of life: intertwining identity and cognition. Brain Cogn 34:72–87
Varela FJ, Thompson E, Lutz A, Rosch E (1991) The embodied mind: cognitive science and human experience. MIT Press, Cambridge, MA
Wittgenstein L (1999) Philosophical investigations. Prentice Hall, NJ