Science and Engineering Ethics (2001) 7, 177-192
A Cybernetic Theory of Morality and Moral Autonomy

Jean Chambers, State University of New York at Oswego

Keywords: cybernetics, feedback, morality, norm, autonomy, control
ABSTRACT: Human morality may be thought of as a negative feedback control system in which moral rules are reference values, and moral disapproval, blame, and punishment are forms of negative feedback given for violations of the moral rules. In such a system, if moral agents held each other accountable, moral norms would be enforced effectively. However, even a properly functioning social negative feedback system could not explain acts in which individual agents uphold moral rules in the face of contrary social pressure. Dr. Frances Kelsey, who withheld FDA approval for thalidomide against intense social pressure, is an example of the degree of individual moral autonomy possible in a hostile environment. Such extreme moral autonomy is possible only if there is internal, psychological negative feedback, in addition to external, social feedback. Such a cybernetic model of morality and moral autonomy is consistent with certain aspects of classical ethical theories.
INTRODUCTION

Many people are worried about a perceived breakdown of morality in industrialized Western societies – in everyday life, at work, and in the professions. Some of these concerns may be due to an irrational fear of social change, but some simply reflect an admirable desire to protect values which are basic to the good life for human beings. Scientists and engineers, in particular, have an urgent interest in upholding professional standards of honest research and quality work, since their work often directly affects the health and safety of the public. The simple model presented here suggests concrete and practical strategies individual scientists and engineers can use to maintain professional standards as well as personal moral standards.
Address for correspondence: Jean Chambers, Ph.D., Assistant Professor, Philosophy Department, 128 Piez Hall, State University of New York at Oswego, Oswego, NY 13126, USA; [email protected] (email). Paper received, 18 October 2000; revised, 4 February 2001; accepted, 15 March 2001. 1353-2452 © 2001 Opragen Publications, POB 54, Guildford GU1 2YF, UK. http://www.opragen.co.uk
The philosopher John Kekes has recently argued that societies must enforce reasonable moral rules if they are to provide their citizens with the possibility of good lives.1 Kekes argues that good lives depend on the enforcement of reasonable limits which protect primary values important for all people, such as the protection of life, without inflicting intolerant secondary values on people, such as the suppression of homosexuality. If these morally required conventions are violated too frequently, society runs the risk of moral disintegration and deprives its citizens of good lives. However, Kekes’ proposed solution is vague: “The encouragement [of possibilities] and the prohibition [of violations of limits] are done through the legal system, public opinion, custom, and moral education….”1 (p.25) We need a more specific account of how exactly the moral norms might be upheld. Kekes’ appeal to hierarchical institutions such as the legal system and moral education in the public schools also invites the danger that authorities might exercise their power to enforce morality in an immoral way, by punishing violators too severely or by imposing draconian requirements on children. Public opinion and custom are less hierarchical and more ubiquitous, as is the power of informal moral education. Surely it would be preferable to keep the power of moral authority distributed among the people rather than concentrated in a few hands.

The purpose of moral enforcement in general is to bring people’s behavior up to the expected norms. It is assumed here that the moral norms themselves are the best possible ones, which everyone can follow with a minimum of individual sacrifice and a maximum of benefit to all. The question of which moral norms are best is the core question of the entire field of normative ethics and is largely beyond the scope of this article.
However, there are suggestive comparisons between the view of morality defended here and the classical ethical theories of Aristotle, Kant, and Mill. It is also assumed that the process of negotiating and choosing good norms is irreducibly social, so that moral norms are necessarily socially constructed. Nevertheless our evolutionary heritage is also relevant, since it has given us our capacity for both culture in general and morality in particular, as well as our susceptibility to temptations which are opposed to our socially constructed standards. Anyone committed to the dominant interactionist paradigm in biology must assume that moral behavior, like other behavior, may best be explained as the result of some combination of biological and environmental, including social, factors. So let us assume for the sake of argument that we have identified good norms and wish to explore how to best enforce them.

I shall argue that any set of standards, such as moral or professional standards, may be understood as part of a feedback control system resembling familiar control systems found throughout science and engineering. The moral norms are the reference values of the control system, while moral praise and blame provide positive and negative feedback which functions to bring individuals’ behavior into line with the norms. Such a simple model, while attractive, is incomplete, since people of outstanding moral courage have a level of moral autonomy that goes beyond mere responsiveness to social feedback. For example, Dr. Frances Kelsey was the junior medical officer at the Food and Drug Administration in 1960 and 1961 who withheld approval of a drug which had been approved in Europe as an over-the-counter
sleeping pill. In her professional and moral judgment, some of the reported side-effects were potentially dangerous to developing fetuses. She was severely criticized and even ridiculed, only to be vindicated later when the drug, thalidomide, turned out to cause terrible birth defects.2 (p.221-222) So in addition to an account of successful moral norm compliance, we need an account of how moral heroes like Dr. Kelsey succeed in upholding moral norms despite social pressure to the contrary. Two promising functional accounts of human morality as cybernetic feedback control systems have recently been suggested by Raymond Spier3 and James Coleman.4 The idea of morality as a cybernetic feedback control system is theoretically promising because it lends itself to mathematical analysis in terms of game theory, communication theory, and decision theory, as well as cybernetics. Such a model, if correct, would also provide guidance to moral agents about how to successfully hold each other accountable. Reciprocal moral accountability, if practiced universally, might be all that is needed to successfully enforce moral norms. After all, many religions and cultures rely on such reciprocal feedback for norm compliance, and succeed in achieving compliance with psychologically challenging norms, such as the celibacy practiced by the Shakers and members of the Heaven’s Gate cult. However, sometimes such reciprocal feedback is not available, or is being used to uphold bad norms. Consider the situations faced by scientists and engineers in organizations in which professional standards have eroded due to corruption or political influence. 
The extreme moral autonomy evident in the actions of heroic scientists such as Frances Kelsey, the previously mentioned FDA medical officer, and others, may be best explained by exploring the ways in which internal psychological feedback makes it possible for an individual to uphold high moral standards in the absence of very much, or any, social support.
HISTORY OF CYBERNETICS, FEEDBACK, AND FEEDFORWARD CONTROL

According to F.L. Lewis, the development of feedback control theory has both responded to society’s needs for control mechanisms and created the potential for revolutionary changes in society.5 Long before there was any explicitly mathematical control theory, there were ingenious gadgets such as float regulators which were used in water clocks to measure time. The early mathematical analyses of feedback control, culminating in that of James Clerk Maxwell in 1868, made possible the highly accurate regulators used to control factory machinery needed for the Industrial Revolution. After Ludwig von Bertalanffy and others developed general systems theory in the 1920s and 1930s, mathematical control theory was used to reduce distortion in repeater amplifiers which helped to carry telephone voice signals over long distances. Further advances during the world wars enabled engineers to reduce noise in radar signals and accurately aim anti-aircraft guns. More recently, mathematical feedback control theory has been used for space navigation, missile guidance technology, and robotics. The most famous scientist in the history of feedback control theory is undoubtedly Norbert Wiener, who introduced models of stochastic processes into control theory.6
He called his science of communication and control cybernetics, from the Greek kybernetes, which means ‘steersman’. It is capable of describing both organic and mechanical systems which are controlled by feedforward and feedback. To catch a high pop fly, one begins running to the general area where one expects it to fall – this is feedforward control. Then one keeps correcting one’s course to intercept it, based on further observations of its trajectory and one's own position – this is feedback control. Feedforward control is direct control, while feedback control is more subtle, involving measurement of the discrepancies between the actual and the desired outputs. Feedback control can come from outside the person, as in the case of the coach’s instructions to a new ballplayer, or from within, as when a skilled outfielder makes mid-course corrections on the way to catching the ball, based on his own observations. Feedback in general refers to control situations in which some of the output information is returned as input to the system. Everyone has experienced situations in which a microphone is placed too close to its speaker and there is feedback of some of the output of the speaker back into the microphone. The signal is then amplified and sent to the speaker, making its output even louder, which causes the input to the microphone to be greater, which makes the speaker output louder again, and so on, until the familiar situation of sound escalating out of control develops. This phenomenon is called positive feedback, because the output of the speaker adds to the input signal to the microphone. Another way of saying the same thing is that “...the fraction of the output which reenters the object has the same sign as the original input signal.” 6 (p.222) Positive feedback often leads to the loss of control, because it increases the discrepancy between the actual and the desired outputs. 
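The runaway escalation of the microphone-and-speaker example can be made concrete with a toy simulation. This is only an illustrative sketch: the gain and feedback fraction below are invented values, not measurements of any real audio system.

```python
# A toy rendering of the microphone-and-speaker example above. Each
# cycle, a fraction of the speaker's output re-enters the microphone
# with the same sign as the input, so the signal grows without bound.
# The gain and feedback fraction are illustrative values only.

def positive_feedback(signal, gain=2.0, feedback_fraction=0.6, cycles=5):
    """Return the signal level after each amplification cycle."""
    levels = []
    for _ in range(cycles):
        # output feeds back into the input and is amplified again
        signal = gain * (signal + feedback_fraction * signal)
        levels.append(signal)
    return levels

print(positive_feedback(1.0))  # each level larger than the last
```

Because each cycle multiplies the signal by a factor greater than one, the levels escalate without limit, which is exactly the sense in which positive feedback increases the discrepancy between actual and desired outputs.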
Feedback control, on the other hand, usually involves negative feedback, which works to reduce the discrepancy between the actual and the desired output, as in the case of a thermostat. Many engineers and scientists are familiar with mechanical, electrical, and electronic systems which use feedback mechanisms to maintain stability. These include thermostats of all kinds. Consider a very simple regulator thermostat which is designed to stabilize room temperature at a comfortable level. The operator assigns a set point, or reference value, such as 70 degrees. The thermostat periodically measures the air temperature and compares it with 70 degrees. If the air temperature is 70 degrees or above, the thermostat does nothing. Suppose the next measurement is 62 degrees. The thermostat compares the actual value, 62, with the reference value, 70, and generates an error signal which it uses to operate a switch turning on the furnace. If the next measurement after that is at least 70 degrees, the thermostat generates a signal which turns off the furnace.

The form of the desired output distinguishes between two uses of negative feedback. If it is a set reference value, negative feedback is said to solve a regulator problem, as in the case of the thermostat. But if the desired output is a trajectory, which has no set reference value, the control problem is called a tracking problem, as in the case of aiming an anti-aircraft gun to track enemy aircraft. Norbert Wiener developed the theory of cybernetics to solve just this tracking problem.6 The solution consists of repeated corrections to approximate successive reference values. Morality, viewed as a negative feedback control system, might be analogous to a feedback loop designed to
solve either a regulator or a tracking problem, depending on whether the norms themselves are considered to be constant or changing. For the purposes of this article, we will assume constant moral norms; however, since norms do change, the tracking problem remains potentially relevant.

Norbert Wiener purposely did not apply cybernetic concepts to the social sciences:

From the beginning of my interest in cybernetics, I have been well aware that the considerations of control and of communication which I have found applicable in engineering and in physiology were also applicable in sociology and in economics. However, I have deliberately refrained from emphasizing these fields as much as the others, and here are my reasons for this course. Cybernetics is nothing if it is not mathematical, if not in esse, then in posse. I have found mathematical sociology and mathematical economics or econometrics suffering under a misapprehension of what is the proper use of mathematics in the social sciences and what is to be expected from mathematical techniques, and I have deliberately refrained from giving advice that, as I was convinced, would be bound to lead to a flood of superficial and ill-considered work.7 (p.87-88)

But others were at least intrigued by the possibilities of such an approach. At the important Macy cybernetics conferences, sponsored by the Josiah Macy, Jr. Foundation from 1946 to 1953, there were cultural anthropologists who viewed

...negative feedback structures as the stabilization mechanisms of societies. They reported interwoven, multi-loop systems involving elaborate forms of distinctions and rules with respect to kinship, forms of address, hazing, bullying, praise, blame, and even rituals with respect to eating.8 (p.98) (emphasis added)

No records remain of the relevant Macy conferences, so it is not clear how these anthropologists, whoever they were, viewed praise and blame in cybernetic terms.
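Before turning to morality, the regulator loop described in this section can be made concrete with a short simulation of the simple thermostat. This is a minimal sketch under stated assumptions: the heating and cooling rates and the discrete time step are invented purely for illustration.

```python
# A minimal sketch of the regulator thermostat described above. The set
# point (70 degrees) is the reference value; the furnace switch is
# driven by the sign of the error between measured and desired
# temperature. Heating and cooling rates are invented for illustration.

def furnace_should_run(temperature, set_point=70.0):
    """The comparator: turn the furnace on only when the measured
    temperature falls below the reference value."""
    return temperature < set_point

def simulate(steps, temperature=62.0, set_point=70.0,
             heat_rate=2.0, cool_rate=0.5):
    """Run the feedback loop and return the temperature after each step."""
    history = []
    for _ in range(steps):
        if furnace_should_run(temperature, set_point):
            temperature += heat_rate   # furnace on: room warms
        else:
            temperature -= cool_rate   # furnace off: room cools
        history.append(round(temperature, 1))
    return history

print(simulate(12))
```

Starting from 62 degrees, the negative feedback drives the temperature up to the set point and thereafter keeps it oscillating in a narrow band around 70, which is the characteristic behavior of a regulator.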
MORALITY AS A NEGATIVE FEEDBACK CONTROL SYSTEM

Despite Wiener’s warning that cybernetics is nothing if not mathematical, it provides a promising conceptual model of social control, as some of the participants of the Macy conferences apparently realized. Several authors have recently applied similar control models to morality. David Harnden-Warwick gives an account of chimpanzee morality that seems to fit with a negative feedback model, although he does not use the language of negative feedback.9 Raymond Spier’s 1996 editorial in this journal calls for a feedback control model for social behavior quite generally,3 and James Coleman analyzes human morality as a feedback system.4 Chimpanzees, as our closest evolutionary relatives, can provide better insight into the origins of human morality than the mythical, premoral ‘state of nature’ preferred by traditional moral philosophers, in which free, independent individuals join together to
form society by negotiating a social contract among equals or by deferring to some authority. Our ancestors resemble chimpanzees more than they resemble hypothetical individualists. Chimpanzees are inherently interdependent, social creatures who live in groups. Homo sapiens is a naturally social species, too. We depend upon social cooperation for survival in any state of nature. Analyzing Frans de Waal’s groundbreaking work with chimpanzees’ social behavior at the Yerkes Primate Center,10 David Harnden-Warwick argues that chimpanzees actually have morality.9 They exhibit the four essential components of morality – agency, obligation, blame, and empathy, as evidenced respectively by self-awareness, calculated reciprocity, moralistic aggression, and consolation. Harnden-Warwick describes chimpanzee morality in terms consistent with a negative feedback model:

Chimpanzee societies seem to function according to a simple system of obligation and blame that recognizes when individuals fail to meet the ideal of equality in the food distribution context; it is right that individuals should share, and it is wrong that they should not share….Community sanctions operate so as to reward those who meet obligations of just exchange and punish transgressors by exclusion from the social flow of resources.9 (p.39)

This description of chimpanzee behavior fits the classic negative feedback pattern. There is a reference value – “Always share food.” Individuals’ behavior is compared with this reference value, and if it is sufficiently different, an error signal generates social sanctions which work to bring the errant chimpanzee’s behavior back into line. If Harnden-Warwick is correct in his analysis, and if it is consistent with a negative feedback structure, it means that humans probably have evolved tendencies towards giving and receiving moral feedback in light of moral norms.
The work of cognitive ethologists on chimpanzee ‘morality’ suggests that human beings may have some universal proto-moral tendencies in common with our primate cousins. The capacities for kin altruism, reciprocity, and the moral emotions of guilt, shame, resentment, and outrage appear to be part of our primate heritage. So while cultural differences among human groups exist, some of them extreme, there also seem to be universal moral tendencies. Cultural differences among moral systems are undoubtedly learned, and represent humans’ only hope for modifying our primate-typical tendencies. Culture, and cultural differences, are therefore important resources for rational moral norm design. If humans are as social as chimpanzees, Raymond Spier’s hypothesis that human society as a whole functions as a negative feedback control system is more plausible than it might initially appear.

…the words we use to formulate our Ethics provide the set points for the control of human behavior, particularly, but not solely, when we see ourselves as people conducting our business in a manner which may affect other humans….The forms such words take may be laws, rules, guidelines, codes,
regulations, statutes, instructions, injunctions or any other word form which tells us how we should or ought to conduct ourselves. Human societies also have “effector engines” which enable them to control behavior. Armies, police, judges, prison warders, other citizens, parents and teachers are all involved in the regulation of behavior.3 (p.260)

According to Spier, ‘ethics’ is the set point component in the larger control system called society. Society is the system; ‘ethics’ comprises only laws, moral norms, and presumably the rules of etiquette, which together are the set points for society. All of the societal forces and institutions of family, education, and the justice system are simply the effector engines which bring errant human behavior back into line with ‘ethics.’ The sociologist James Coleman, in Foundations of Social Theory, gives a somewhat different account of how norms function as reference values in human society:

Once in existence, [norms] lead, under certain conditions, to actions of individuals (that is, sanctions or threat of sanctions) which affect the utilities and thus the actions of the individuals to whom the sanctions have been or might be applied. Thus norms constitute a social construction which is part of a feedback process, involving either negative feedback, which if effective discourages and dampens certain actions, or positive feedback, which if effective further encourages certain actions.4 (p.244)

For Coleman, the social practice of holding one another morally accountable involves negative feedback, since there is a reference value, an actual value, a measurement of the discrepancy between these two values, and a feeding back of that discrepancy in such a way as to reduce it. When you hold me accountable to a moral norm, you compare my actual behavior with that prescribed by a moral norm, and measure the difference. If my behavior complies with the moral reference value, you do not react in any particular way.
Indeed, you may not even bother doing a conscious comparison until a discrepancy becomes salient. But if you do become aware of my deviating from the norm, you might choose to express your moral disapproval to me, and even try to persuade me to right whatever wrong I have done and comply with the norm in the future. You have provided me with negative feedback.

Spier’s and Coleman’s accounts differ in their drawing of the analogy between social control and the workings of a negative feedback control system. Spier’s list of relevant reference values is weighted more toward formal and large scale societal requirements: “laws, rules, guidelines, codes, regulations, statutes, instructions, injunctions or any other word form which tells us how we should or ought to conduct ourselves,”3 (p.260) while Coleman’s definition of ‘norm’ clearly also includes very informal, even tacit, behavioral regularities. One of Coleman’s examples involves a child who litters, and a passerby who provides negative feedback. Even though littering may be against the law, the negative feedback does not take the form of an arrest and
action by the criminal justice system. Instead, Coleman refers to negative feedback as informal social disapproval, ostracism, and other social signals between individuals. Coleman’s view of human morality is therefore more similar to Harnden-Warwick’s analysis of the workings of chimpanzee morality than to Spier’s model. Spier’s comparators and effector engines are the authorities in society, particularly the police and criminal justice systems, while Coleman’s are ordinary moral agents. For the purposes of understanding the informal workings of morality, Coleman’s model is probably more plausible, since it is more consistent with our evolutionary history. Another advantage is that it allows for the analysis of experiences of individual moral agents who participate in the informal moral control system both as agents and as comparators and generators of negative feedback. Suppose human morality is a negative feedback control system with moral norms such as “Do not steal” as reference values and praise, blame, and punishment as typical kinds of feedback. If everyone participated in giving and receiving appropriate feedback, that would ideally function to reduce the moral error of everyone’s ways, bringing everyone’s behavior closer to that specified by the moral norm. Such a scenario provides a specific solution to the problem of moral enforcement raised by John Kekes.
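The ideal scenario of universal reciprocal feedback can be made concrete with a toy simulation in the spirit of Coleman's account. This is only an illustrative sketch under strong assumptions: each agent's behavior is summarized as a single number, the norm is a reference value of zero, and the feedback gain, round count, and initial deviations are all invented for illustration.

```python
# Toy model of reciprocal moral accountability as negative feedback:
# each agent's behavior is a number, the shared norm is the reference
# value, and each round of mutual accountability feeds back a
# correction proportional to the observed deviation from the norm.
import random

def enforce_norm(behaviors, norm=0.0, feedback_gain=0.3, rounds=20):
    """Each round, every agent receives negative feedback proportional
    to its deviation from the norm, shrinking the error."""
    for _ in range(rounds):
        behaviors = [b - feedback_gain * (b - norm) for b in behaviors]
    return behaviors

random.seed(1)
group = [random.uniform(-5.0, 5.0) for _ in range(6)]
print(enforce_norm(group))  # every deviation shrunk toward the norm
```

Each round multiplies every deviation by a factor less than one, so the group's behavior converges on the reference value, which is the cybernetic rendering of effective norm enforcement.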
COMPARISON WITH CLASSICAL ETHICAL THEORIES

How compatible is a functional view of human morality as a negative feedback control system with the main traditional normative ethical theories of Aristotle, Immanuel Kant, and John Stuart Mill? Elements of Aristotelian virtue ethics, specifically the Doctrine of the Mean, and the essential principle of Kantian deontology, the Categorical Imperative, appear to be strikingly consistent with a cybernetic view of morality. However, Mill’s utilitarianism seems to imply that giving negative feedback is wrong in some cases in which we would normally think it is well-deserved.

According to Aristotle’s famous Doctrine of the Mean, each specific virtue, such as courage, is actually a mean between two extremes, such as foolhardiness and cowardice. A young man attempting to become courageous may err either on the side of excess, that is, foolhardiness, or on the side of deficiency, that is, cowardice. After repeated corrections of his early errors, the young man learns to consistently act ‘in the mean’ of courage. The stable habit of acting courageously just is the virtue of courage. As Aristotle famously asserts, “…we become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts.”11 (p.34 [1103b]) According to cybernetic learning theory, negative feedback is a necessary component of learning. One makes successive attempts to master a skill, using negative feedback repeatedly to reduce the difference between one’s actual performance and the ideal. Thus target practice in many sports involves repeated attempts to aim accurately, whether the projectile is a ball, a bullet, or an arrow. One improves one’s performance by utilizing negative feedback based on observing the amount and direction of one’s errors and using this information to get the projectile closer and closer to the target. This cybernetic learning process is similar in form to Aristotle’s description of learning virtue.
Both involve successive attempts to perfect one’s performance by referring to
standards and either giving oneself negative feedback or receiving it from someone else, thereby gradually improving one’s ability to comply with the reference value, whether that is a virtue or a target for a projectile. Suppose one is aiming at the virtue of scientific honesty, which requires correctness and reliability of one’s data collection, presentation, and interpretation. One’s first attempts to live up to this reference value, perhaps as a student, might not be directly on target. One might fail to keep sufficiently accurate records of one’s observations and be tempted to falsify or even fabricate data, thereby exhibiting the deficiency, scientific dishonesty. Alternatively, one might keep collecting and checking one’s data so carefully and so extensively that one unnecessarily deprives the world of useful information, thereby exhibiting the excess, perfectionism. Scientific honesty would be the mean between these two extremes of scientific dishonesty and perfectionism. Aristotle lists the excesses and defects characteristic of the various virtues as an aid to learning them. If one has a detailed description of cowardice, one can compare one’s own behavior with that description, and if it fits, one can work on increasing the risks one is taking. But if the description of foolhardiness fits, one should work on decreasing the risks one takes and increasing preparation for real dangers. Similarly, if one has an accurate description of what counts as scientific dishonesty, one can compare one’s own behavior with that description, and if it fits, one can work on increasing the accuracy and conscientiousness of one’s data collection, presentation, and interpretation. If, on the other hand, the description of perfectionism fits, one can work on better meeting the needs of others for one’s data and conclusions. 
In either case, the comparison between actual behavior and the descriptions of virtue, excess, and defect may be made by either oneself or others, with the resulting negative feedback enabling one to comply with the reference value of scientific honesty. The virtue of scientific honesty lies in the mean, but achieving it requires more than mere persistence. As Aristotle teaches, one must use correct descriptions of honesty, dishonesty, and perfectionism to generate the negative feedback needed to correct one’s efforts to target honesty long enough to form a stable habit, or virtue, of scientific honesty. The stability of such virtuous habits is a feature of moral autonomy.

Immanuel Kant’s deontological ethical theory is very different from Aristotle’s virtue theory, but it, too, suggests surprising parallels with the idea of morality as a negative feedback system. One formulation of Kant’s famous Categorical Imperative states, “Act only according to that maxim by which you can at the same time will that it should become a universal law.”12 (p.302) By ‘maxim,’ Kant means a rule one follows when one acts, for example, “When doing scientific work, I will always collect, record and report data with appropriate scientific accuracy.” This maxim satisfies Kant’s Categorical Imperative, because one can will that it become a universal law of nature. In Kantian terminology, it is universalizable. If every scientist followed this maxim, the practice of science would continue to function well. But consider the maxim, “I will falsify and fabricate data whenever it is expedient to do so.” This maxim is not universalizable, because if everyone acted on it, science itself would be destroyed.
Kant’s great insight is that what is right is rational and what is wrong is irrational. Defining rationality as consistency, Kant showed that wrong actions create inconsistency, either in principle or in practice. The maxim, or rule, behind a wrong action fails the consistency test of the Categorical Imperative by failing to be universalizable. The maxim of a wrong action, when universalized, creates either a logical contradiction or a contradiction of the will, according to Kant. For example, the maxim, “I will make deceitful promises whenever it is expedient to do so,” is not universalizable, because if everyone made deceitful promises, the practice of promising would collapse. One could not consistently will such a state of affairs. But notice that universalizability is also a feature of workable reference values in an efficient negative feedback control system. Ideally, any agent should be able to give negative feedback to any other agent at any time, using public reference values applicable to all. Kant himself expresses vaguely system-like views of morality. According to Michael Green, Immanuel Kant’s Critique of Judgment includes descriptions of individual animal organisms, human beings, and social morality as feedback systems.13 “Pleasure and pain function for Kant in a similar way in which an electronic comparator does in a closed-loop robotic system.”13 (p.151) In a footnote in that Critique, Kant mentions “…that an analogy exists between the self-organization of nature given in an organism and the self-organization of a people.”13 (p.152) These quotations suggest that Kant has at least a vague notion of morality as a system. According to Green, one can use the modern notion of a negative feedback control system to apply Kant’s Categorical Imperative to a given maxim. Imagine a possible society of people all of whom uphold the maxim in question by means of positive and negative feedback to each other. 
If everyone in such a society can do so, then the maxim passes the universalizability test of the Categorical Imperative. Consider Green’s example, the maxim, “I will deceive myself in order to avoid negative feedback.” This maxim does not pass the test of the Categorical Imperative, because: When self-deception becomes universalized, the very self-directing activity of an agent becomes impossible, and one has only the deceptive appearance of an agent. When this elimination of negative feedback becomes built into an agent as a necessary part of his nature, then he could no longer function as an agent, i.e., as a self-organizing center of activity.13 (p.163) The maxim in favor of self-deception is self-defeating, because without the ability to use negative feedback, we could not act. Our very ability to act requires self-regulation, which requires negative feedback. Self-deception fails the test of the Categorical Imperative because, if universally practiced, it would destroy human action. One could not consistently will such a state of affairs. These suggestive parallels with aspects of the classical theories of Aristotle and Kant suggest that the idea of morality as a negative feedback control system is less farfetched than it may at first appear. An exception is John Stuart Mill’s utilitarianism. According to the standard interpretation of utilitarianism, an action is morally right if and only if it will maximize utility in the future, where utility is defined as pleasure, 186
Science and Engineering Ethics, Volume 7, Issue 2, 2001
goodness, or some other quantity. If a moral agent’s action fails to maximize future utility, it is morally wrong. A notorious problem for utilitarianism is its inability to justify punishment in certain cases. After all, punishment is backward-looking, while utilitarianism is forward-looking. According to utilitarianism, any action of punishing another person is morally right if and only if it would maximize future utility, so it has to be weighed against all other available actions. Is it worthwhile, in terms of future utility, for society to punish crimes of passion if the likelihood of deterrence is low and the cost of incarceration is high? Or would it maximize future utility to let the criminal go free? Consider the action of one agent giving another agent negative feedback. This action would be morally right, according to utilitarianism, if and only if doing so would maximize future utility; otherwise it would be morally wrong. Suppose one notices that another scientist has fabricated some data. According to the standard interpretation of utilitarianism, one is morally required to weigh the action of giving that scientist negative feedback against all the other actions open to one at that time, and to choose, from among the actions available, that action which will maximize utility. Suppose that the choice is a very extreme one in which there are only two mutually exclusive alternatives, whether to give constructive negative feedback to the scientist or to finish one’s own work on a promising cancer treatment. If finishing one’s work on the cancer treatment would in fact maximize future utility, it would be morally wrong, according to utilitarianism, to give constructive negative feedback to the scientist. Utilitarianism’s inability to justify deserved punishment implies a similar inability to justify giving deserved negative feedback. 
The problem for utilitarianism is that our intuitions regarding desert reflect what people have done in the past, but there is no necessary connection between what people deserve and what would maximize future utility. There would undoubtedly be at least some extreme cases in which it would maximize future utility to forgo giving people warranted negative feedback, just as there are some extreme cases in which it would maximize future utility to forgo punishing wrongdoers. Our strong moral intuitions of desert might have their source in our evolved susceptibility to social control by negative feedback. After all, chimpanzees are also susceptible to social control by negative feedback; perhaps such susceptibility is retained in social species because it directly supports survival and reproduction. Our intuitions about reciprocal justice may well be rooted in such evolved reciprocity. If survival and reproduction were synonymous with utility, then utilitarians could argue that the practice of moral reciprocity in fact maximizes utility in the long run, but survival and reproduction are not the same as utility, on most philosophical definitions of utility. Maximizing one’s reproductive success does not necessarily lead to the happiest, most fulfilled life, because it involves having as many offspring as will live to reproduce, and has little to do with the quality of one’s life. Given the stress associated with raising large numbers of children, there is no reason to suppose that maximizing one’s reproductive success would also maximize one’s happiness, pleasure, or goodness, the traditional measures of utility. It is controversial whether large families maximize utility overall.
It is theoretically interesting that a cybernetic theory of morality as a negative feedback control system appears to be compatible both with our evolutionary heritage and with some elements of traditional moral theories, while underscoring how sharply utilitarianism flouts our basic moral intuitions. Perhaps some of the conceptual difficulties in reconciling the recommendations of these theories could be resolved by placing the theories in correct relation to the proper functioning of the moral system, but that ambitious project is beyond the scope of this work.
MORAL AGENCY WITHIN THE MORAL SYSTEM

What does the fact that negative moral feedback is necessary for the efficient functioning of the moral system mean for the average moral agent? To the extent that moral feedback is social, each moral agent must be able to play three different roles: moral agent, giver of moral feedback, and receiver of moral feedback. Each role is important, because each contributes essentially to the functioning of the overall negative feedback system.

As a moral agent, one is responsible for exercising feedforward control by sincerely attempting to make one’s plans and actions conform with the reference values, which are the moral norms. Simply checking the likely consequences of one’s plans and actions for others is an important first step. Thinking of one’s planned action in several social contexts, such as whether it would be a good example for children, or whether it would make one’s friends proud of one, or whether one would admire someone who did the same, can provide some imaginative internal feedback prior to one’s action. In a professional context, one should consider whether the plan is right both according to accepted moral norms and according to one’s professional code of ethics, regardless of political or economic considerations.

Of course, not all norms are moral norms. Clearly there are norms of etiquette, such as those governing which fork to use first, which have no moral force. A sensitive person would not impose the manners of one class on members of another, because of the universal moral norm of reciprocity. While drawing the line between moral and nonmoral norms may be difficult at times, there are clear cases to guide us in most situations.

As a giver of feedback, one is responsible for calling other agents’ attention to moral wrongdoing. This is difficult but essential, and requires several skills. First of all, one must be able to distinguish between moral and nonmoral norms.
One must be able to perform the comparator function, that is, one must compare other people’s behavior to the appropriate moral norms, making allowances for age and other factors. Snap judgments are inadequate in many cases, since most behavior is responsive to multiple factors. At the very minimum, it is important to find out why the person believes that what she or he is doing is right (or at least acceptable), as is usually the case, before judging whether what she or he did was wrong. Secondly, one must be able to provide proportionate negative feedback. The punishment must fit the crime. Murder deserves more than a dirty look, while rudeness deserves less than road rage. Finally, one must be able to give moral feedback in a way that is likely to be useful. Most people who drive have indulged in giving other drivers impolite feedback about their driving, but
moral feedback should be as constructive as possible. We have primate-typical tendencies to make faces, use body language, and give dirty looks to people who behave badly. Other natural forms of social punishment are ostracism, hurtful gossip, and insults. We should try to infuse our moral feedback with kindness so that it will be welcomed instead of rejected. Public humiliation is a last resort, to be used for only the most immoral behavior, and only after all other attempts at communication have failed; the best approach for most situations is to have a private, tactful conversation tailored to the maturity of the recipient.

In one’s role as recipient of negative moral feedback, one is responsible for paying attention to signs of disapproval such as negative looks, social exclusion, and insults, so as to learn what one can from them. Attending to such signals requires hypothesizing what might be causing the negative moral feedback, determining whether to use the feedback, checking over one’s past behavior, and struggling to become more virtuous. If one is told that her or his actions raise moral issues, the best response is to listen respectfully and thank the person for the feedback, then decide later whether it is useful feedback. If the values behind the feedback are admirable, perhaps it is useful, in which case one should change one’s values and behavior accordingly. A certain moral humility is necessary to continue learning virtue throughout one’s life.

The giving and receiving of moral feedback are affected by status differences. Lower status people are understandably reluctant to give negative feedback to those above them in the status hierarchy. So higher status comes to be associated with greater power to get away with immorality. Moral leadership at the top of the status hierarchy is thus crucially important, because it can provide downward moral negative feedback to those in the middle of the hierarchy when upward feedback is lacking.
A control feedback model of the moral system reveals that moral agents play several distinct roles – acting, giving feedback, and receiving and using feedback. It is not enough to do one’s best to be moral. One must actively work on improving oneself and providing useful information for others to improve themselves.
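The loop described in this section — public reference values, a comparator applied to observed behavior, and negative feedback proportionate to the deviation — can be sketched in code. The sketch below is purely illustrative: the model in this paper is conceptual, and every name here (`Norm`, the 1–10 severity scale, the sample responses) is a hypothetical choice made for the example, not part of the theory.

```python
# Illustrative sketch of the social moral system as a negative feedback
# control loop: norms serve as reference values, a comparator detects
# deviations, and feedback is scaled to the severity of the violation.
from dataclasses import dataclass

@dataclass
class Norm:
    """A moral reference value; severity keeps feedback proportionate."""
    name: str
    severity: int  # hypothetical scale: 1 = minor (rudeness) ... 10 = grave (murder)

def comparator(behavior: set[str], norms: list[Norm]) -> list[Norm]:
    """Compare observed behavior against the reference values and
    return the norms that were violated."""
    return [n for n in norms if n.name in behavior]

def proportionate_feedback(violations: list[Norm]) -> list[str]:
    """Negative feedback scaled to the deviation: 'the punishment must
    fit the crime.' Responses here are placeholders for the paper's
    examples (tactful conversation, blame, punishment)."""
    responses = []
    for v in violations:
        if v.severity <= 3:
            responses.append(f"{v.name}: private, tactful conversation")
        elif v.severity <= 7:
            responses.append(f"{v.name}: formal blame and social disapproval")
        else:
            responses.append(f"{v.name}: punishment through institutions")
    return responses

norms = [Norm("rudeness", 2), Norm("data fabrication", 6), Norm("murder", 10)]
observed = {"data fabrication"}
print(proportionate_feedback(comparator(observed, norms)))
```

The design point the sketch makes is simply that the comparator and the feedback generator are separable functions, which is why the paper can assign them to distinct social roles.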
A CYBERNETIC THEORY OF MORAL AUTONOMY

Often there are few or no witnesses to one’s actions and little or no possibility of negative feedback for one’s transgressions. Many people report that they would commit various crimes, such as theft, if they knew they would never be caught and punished, especially if the amount of money was suitably large. But there are also many people who would never commit a crime under any circumstances. These are people who continue to behave morally even in the absence of local external feedback. A moral agent who behaves in accordance with moral norms in the absence of significant local external feedback is morally autonomous in a quite straightforward, literal sense. She or he is acting morally without local social input or support.

Moral learning begins in childhood and ideally culminates in the morally mature adult with virtuous habits. But too often people enter adulthood confused about morality and too easily compromised by corrupting influences. Adolescence is the
period of development characterized by emerging cognitive and sexual maturity. Ideally it should be the period in which adults welcome young adults into moral maturity as well. Moral maturity involves adopting the best moral norms. If the norms of one’s society happen to be good norms, this adoption process should not be overly stressful. Unfortunately, in many societies the existing moral norms are incoherent, oppressive, sexist, and psychologically crippling. Thus adolescence is more painful than it needs to be. How can a girl whose genitals have been mutilated make a smooth transition into adult womanhood? How can a boy who is taught that blind obedience to authority is morally right feel happy about becoming an adult?

Most morally mature adults can either act morally without any feedback, using feedforward control, or can monitor their own plans and give themselves moral feedback of an internal, psychological kind. The emotion of guilt may have evolved as a kind of internal feedback signaling a moral norm violation. Of course, some early childhood conditioning can cause people to have too much guilt, or too little. The most sophisticated moral agents typically do a lot of moral self-monitoring, analyzing their actions for subtle hints of norm violations and their motivations for slight inclinations toward norm violation, which they immediately strive to correct. Perhaps what we have labeled a conscience is an internal cybernetic feedback mechanism which we use to guide our own behavior. How could an individual learn virtues such as scientific honesty or courage unless she or he were willing to look at how well she or he was doing compared to the norm? We are not crudely self-regulating like the simplest servomechanisms; rather we are capable of very subtle self-regulation, whether it is best described as feedforward control, external feedback, internal feedback, or some combination of these.
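The distinction just drawn between feedforward control (checking a plan against internalized norms before acting) and internal feedback (guilt signaling a violation after acting) can also be put in schematic form. Again the sketch is only a conceptual illustration; the set of forbidden acts and all function names are hypothetical examples, not claims about how conscience actually works.

```python
# Illustrative sketch: a morally autonomous agent needs no external
# (social) feedback, because it can apply the same internalized norms
# before acting (feedforward control) and after acting (internal feedback).
from typing import Optional

# Hypothetical internalized norms, stated as forbidden act types.
FORBIDDEN = {"fabricate data", "deceive", "steal"}

def feedforward_check(plan: list[str]) -> list[str]:
    """Feedforward control: screen a planned sequence of actions against
    the internalized norms, dropping steps that would violate them,
    before any action is taken."""
    return [step for step in plan if step not in FORBIDDEN]

def internal_feedback(action_taken: str) -> Optional[str]:
    """Internal feedback: after acting, self-monitoring compares the
    action to the same norms; a violation triggers an internal signal
    (guilt) rather than waiting for external blame."""
    return "guilt" if action_taken in FORBIDDEN else None

plan = ["run experiment", "fabricate data", "publish results"]
print(feedforward_check(plan))      # the forbidden step is filtered out
print(internal_feedback("deceive")) # self-administered negative feedback
```

Note that both functions consult the same reference values; what distinguishes moral autonomy, on this model, is that the comparator has moved inside the agent.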
When a moral agent can develop and carry out morally exemplary plans in any situation, she or he is fully morally autonomous, in the sense that she or he can function at a high moral level in the absence of local and immediate moral feedback, or in the presence of misguided or opposed moral feedback.

Dr. Frances Kelsey of the Food and Drug Administration had previously earned her Ph.D. and M.D. degrees and taught college-level pharmacology for three years.14 When she arrived in Washington, D.C. in 1960, she was hired as a medical officer at the FDA to evaluate new drug applications. Her first assignment was to evaluate thalidomide for use as a sleeping pill called Kevadon.14 Kelsey was concerned about fetal safety. She recalled later:

…at this time there was growing concern regarding the exposure of the fetus to drugs and other substances to which the mother was exposed during pregnancy…. Furthermore, the harmful effects of German measles during pregnancy had been recognized…. The recognition of peripheral neuritis developing particularly after long-term use of thalidomide raised in our mind the question as to what effect the drug might have on the fetus who might be exposed to it for up to nine months.15 (p.13)

In her scientific judgment, the research done by the company applying for approval of thalidomide was inadequate to rule out this threat to fetal safety.
Using this scientific evaluation and her professional judgment that she, as an officer of the FDA, was ethically responsible for protecting the safety of all Americans, Dr. Kelsey refused to approve the drug and requested further studies instead.14 Clearly Dr. Kelsey was judging according to scientific, professional, and moral norms she had learned in past environments. In those past environments, she was probably given moral support for her principles by her parents, teachers, and colleagues. But in her new environment at the FDA, she was faced with strong contrary signals from the company seeking approval for thalidomide. The company initially complied with her request for further studies, but then complained to her superiors, who apparently backed her up. She successfully continued to resist the company’s pressure.14 If her superiors did indeed agree with her standards and her judgment, then their feedback to her opposed that coming from the thalidomide manufacturer. So her courage was based partly on her own internal negative feedback, or moral autonomy, and partly on some local moral support. Finally, in November of 1961, Dr. Widukind Lenz of Germany revealed that a rash of birth defects in which limbs failed to form properly was linked to thalidomide use by pregnant women.14 The FDA and the public realized that her moral strength had saved many babies from deformity.

Such highly moral and courageous behavior illustrates the importance of independent feedforward control and self-administered negative feedback, as well as local moral support. If Dr. Kelsey had relied on the local moral feedback from the people of the company that was trying to get the drug approved, she would have caved in to the company’s social pressure, with potentially disastrous consequences. She had the moral autonomy to stand up to them because she was sure of her scientific and moral principles and was able to act in accordance with them in spite of the risk to her reputation and her career.
In addition, she apparently had the support of her boss. Ironically, as she later recalled, “They [her bosses] gave it [the thalidomide application] to me because they thought it would be an easy one to start on. As it turned out, it wasn’t all that easy.”15 (p.13)
CONCLUSION

A functional analysis of morality as a cybernetic control system is both plausible and consistent with our evolutionary heritage, our moral emotions and intuitions, traditional moral theories, our socially constructed moral norms, and our normal practices of praising and blaming. It shows how people can enforce the basic rules that make good lives possible by supporting one another in correct moral judgments and by holding one another accountable for moral errors through the use of moral disapproval, blame, and punishment. While this analysis cannot determine which moral norms we ought to adopt as reference values, it may explain how such a control system, if it also includes provision for feedforward control and internal, psychological feedback, could support moral excellence at the high level of moral autonomy required by the professional ethics of scientists and engineers.
Acknowledgments: I wish to thank Dr. Henry Vandenburg of the Sociology Department of the State University of New York at Oswego, Dr. Stephanie Bird of Science and Engineering Ethics, and two anonymous reviewers for their helpful comments on earlier versions of this paper.
REFERENCES

1. Kekes, J. (2000) The enforcement of morality, American Philosophical Quarterly 37 (1): 23-35.
2. Pritchard, M.S. (1998) Professional responsibility: Focusing on the exemplary, Science and Engineering Ethics 4 (2): 215-233.
3. Spier, R.E. (1996) Editorial – Ethics as a control system component, Science and Engineering Ethics 2 (3): 259-262.
4. Coleman, J. (1990) Foundations of Social Theory, Harvard University Press, Cambridge, MA, USA.
5. Lewis, F.L. (1992) Applied Optimal Control and Estimation, Prentice-Hall, Englewood Cliffs, NJ, USA.
6. Rosenblueth, A., Wiener, N., Bigelow, J. (1943) Behavior, purpose, and teleology, Philosophy of Science 10: 18-24. Reprinted in Buckley, W. (1968) Modern Systems Research for the Behavioral Scientist, Aldine Publishing Co., Chicago, IL, USA.
7. Wiener, N. (1964) God and Golem: A Comment on Certain Points where Cybernetics Impinges on Religion, MIT Press, Cambridge, MA, USA.
8. Richardson, G.P. (1991) Feedback Thought in Social Science and Systems Theory, University of Pennsylvania Press, Philadelphia, PA, USA.
9. Harnden-Warwick, D. (1997) Psychological realism, morality, and chimpanzees, Zygon 32 (1): 29-40.
10. De Waal, F. (1982) Chimpanzee Politics, Johns Hopkins University Press, Baltimore, MD, USA.
11. Aristotle. (1985) Nicomachean Ethics, trans. Terence Irwin, Hackett Publishing Co., Indianapolis, IN, USA.
12. Kant, Immanuel. (1959) The Foundations of the Metaphysic of Morals, trans. Lewis White Beck, Prentice-Hall, Upper Saddle River, NJ, USA.
13. Green, M.K. (1992) Kant and moral self-deception, Kant-Studien 83 (2): 149-169.
14. James, M. (undated; accessed 14 August 2000) Frances Kelsey: Invalidating thalidomide for prenatal use. Canadian Government World Wide Web Archives. http://collections.ic.gc.ca/heirloom_series/volume6/218-219.htm
15. Burkholz, H. (1997) Giving thalidomide a second chance, FDA Consumer 31 (6): 12-14.