Monday, September 30, 2019
Moral Psychology Essay
The articles in this special section express a common theme: the use of information technology in society is creating a distinctive set of ethical issues, one that requires society to make new moral choices and that carries special implications for its members. Technology itself is not the only, nor necessarily the most responsible, cause of these issues. All ethical questions arise initially out of human agency. Technology, because of its capability to augment the mental and physical powers of human beings, does stand in the role of a co-conspirator. The lure of power-enhancing capabilities makes technology an inducer of sorts, a necessary but not sufficient underpinning of many of the ethical issues we face today.

An ethical issue is said to arise whenever one party, in pursuit of its goals, engages in behavior that materially affects the ability of another party to pursue its goals. When the effect is helpful (good, right, just), we say the behavior is praiseworthy or exemplary. When, however, the effect is harmful (bad, wrong, unjust), the behavior is unethical. This purposeful theory of ethics is reflected in the issues discussed in these articles. For example, email and being online are applications of information technology whose lure is based on their ability to expand the scope, range, speed, and ease of interpersonal and corporate communications. Useful as they are, the schemes and the manifold issues addressed leave one question unanswered: What moral guidance can be provided to the agents whose behavior creates these issues? And this question leads to others: How should the many knowledge workers, systems analysts, programmers, hardware designers, authors, executives, and so forth, who set in motion the actions that bring these issues to the fore, guide their own behavior? Knowing that their technology-based actions will intercede in the course of human affairs, how should they direct them?

The crucial point occurs when a moral agent, one that by definition has choices, decides to change the state of information or information technology in a human system. Changes in hardware, software, information content, information flow, knowledge-based jobs, and the rules and regulations affecting information are among the many things agents do that affect others. I call these crucial juncture points moments-of-truth. If those of us who make decisions in any of these areas are to behave ethically, we must be able to identify the significant moments-of-truth in which we participate and be able to reflect on the effects of our actions. We must use our moral imagination to guide our choices so that we can contribute positively toward making the kind of ethical world in which we want to live and want to bequeath to future generations.

How can we do this? The ACM Code of Ethics [1], as well as the schemes and other articles in this special section, provide initial grist for the mill. More fundamental, however, is our conscience, aided by our understanding of and expertise in information technology. If we have an inkling that our behavior as information professionals might in some way harm others, we should examine our decisions more carefully and from an ethical point of view.

Getting the Morally Relevant Facts

The facts of an ethical situation can be summarized by four factors. The first is to clearly identify the moral agent. Whose actions will bring about the technology-induced change?
The frameworks and discussions presented here will be helpful because they point to a variety of possible forms of agency. The next factor is the set of alternative courses of action available to the agent. These are the real-world acts that will have an effect on the human system under consideration. Acts have consequences, hence the third factor: a delineation of the results that are expected to occur if each act is taken. Finally, it is essential to identify the stakeholders who will be affected by the consequences of the acts. A stakeholder is any individual, group, organization, or institution that can affect, as well as be affected by, an agent's actions. In a word, stakeholders have an interest in what an agent does [3]. These four factors (agent, acts, results, and stakeholders) are the basic facts from which an ethical analysis proceeds.

Applying Ethical Theories

Renowned medical ethicist William F. May refers to the method of ethical reflection as corrective vision. "Ethics supplies a type of corrective lens," he observes, "and relies heavily on the distinction between what is and what ought to be" [4]. The four factors above serve to establish what is. For what ought to be, we must turn to ethical theories. These theories are the prisms, the optometrist's collection of lenses, through which we can observe reality and see the choices to make as we attempt to direct reality toward our ethical ideals. There are many theories, perhaps infinitely many, that we might apply; the optics of ethics is very large indeed. We can be comforted in this effort, however, by the realization that the evolution of ethical thinking has produced four major themes. These are meta-lenses through which to look at an ethical situation.

One theory emphasizes an agent's duty. It seeks to create a good society by having people do the right things. As Immanuel Kant emphasized, there are prohibitions against taking some acts and obligations to take others. According to one principle evolving out of this theory, we have a prima facie or a priori duty to respect the autonomy of others; any acts an agent might take that would invade a stakeholder's privacy or right to choose for themselves should be avoided. More specifically, it is reasonable to assume that members have a prima facie duty to adhere to the provisions of the ACM Code of Ethics. But, and this is a significant point, we may not always be obliged to do so. Subsequent theorists in this deontological vein, W. D. Ross in particular [5], have held that while these duties are compelling, they are not definitive. When two or more duties come into conflict, the agent must make a reasoned choice. For example, the advantages obtained from using email may be deemed more important than the exposure to loss of privacy it brings about. These moral losses, however, should be made explicit in making a moral choice. The same principle applies, as we will see, among the theories themselves.

The second great tradition is the pursuit of happiness. Applying this theory requires that we assess the consequences of the agent's actions and determine how much pleasure or pain, good or bad, happiness or unhappiness, benefit or cost they inflict on stakeholders.
The guiding principle, which originates with Bentham and Mill, is that an agent should choose the act resulting in the greatest good for the greatest number. The good society is reached, according to this theory, by doing good for others. However, since what is good for the collective at large may not be good for a given individual (or may violate a basic duty or right), advice emanating from this consequentialist tradition may conflict with advice deriving from other theories.

A third great tradition is the pursuit of virtue. This theory focuses on improving the character or traits of the agent. The ancient Greeks averred that a moral person should take acts that enable and enhance the agent's courage, prudence, temperance, and justice. Their predecessors focused on accumulating individual power; "might is right" formed the basis of their concept of virtue. One of their successors, St. Thomas Aquinas, drew on the Pauline tradition to add the more spiritual virtues of faith, hope, and charity to the list. And in the industrial age, industry, honesty, and trustworthiness were added because they were necessary for commercial relationships. All of these virtue-oriented guides have the effect of creating a good society by having each agent be a good person.

Finally, there is the tradition of the pursuit of justice. Justice requires that every stakeholder in the system should enjoy, so far as possible, an equal opportunity to develop his or her knowledge, skills, and talents, and to reach his or her potential. This comes from fair dealing and right action and is usually based on rules that society has made, rules that should be the same for all and applied equally. The rules are based on criteria such as merit, need, work, or other agreed-upon standards. The social contract theories to which several of the authors refer have emerged as part of this tradition. The good society, according to theories of justice, is achieved by dealing fairly, both in the allocation of privileges, duties, and goods, and in the meting out of punishments.

When facing a moment-of-truth, one is well advised to view the situation through each of these ethical lenses. Each provides insight into the moral complexity of the issue being examined. Frequently, however, the guidance deriving from one of these theories will conflict with that of one or more of the others. This requires a moral judgment, one that shows how one theory or principle trumps another. The reasons behind the choice should be grounded in at least one moral theory and justified accordingly. The pitting of facts against theories is a necessary, and the most important, aspect of deciding an ethical issue. There are also four additional considerations to take into account: Who should decide? Who should benefit? How should the decision be made? And how can the issue be prevented from arising in the future?

Who Should Decide?

Presumably, if you are facing a moment-of-truth you are also engaged in a decision process. Should you go it alone? Often not. Before an agent acts, he or she should take into account the answers to two questions:

1) Which other stakeholders ought to participate in the making of this decision because of their knowledge, their values, or their interests? The voices of future generations should always be considered in this determination, as well as the voices of contemporaries.
2) Which other stakeholders must take part in the decision and its implementation because of their institutional jobs, their responsibilities, or the resources they control? As debates over the basis of a "just war" have concluded, a decision that does not carry legitimacy or a reasonable probability of success is unlikely to lead to a satisfactorily moral outcome.

Who Should Benefit from the Decision?

Many stakeholders may be affected by a decision. Some of these outcomes should have been considered during the application of ethical theories to the situation at hand. Nevertheless, before enacting a choice one should be assured that the benefits of the decision flow to morally justifiable parties and that no undue harm is done.

How Should the Decision Be Made and Carried Out?

From a stakeholder's point of view, a decision cannot be separated from the way it is made and delivered. Whenever possible, important moral decisions should be made as the result of due process. Beyond any legal requirements, the processes by which decisions are made should be fair, and they should follow established procedures when applicable. It is essential that the parties who are potentially harmed by decisions, as well as those who benefit, recognize the legitimacy of the decision-making process. This, however, is not enough. Decisions should also be carried out in a humane, moral way. During the trumping process just described, some ethical principles or dictates are relegated to a secondary position, but they do not go away. A decision should be framed and fulfilled in a manner that maximizes the accomplishment of all of the ethical principles identified. All decisions should be carried out with due respect, in the sense that they should preserve the dignity of all stakeholders involved to the extent possible.

How Can the Issue Be Prevented from Arising in the Future?

Every decision becomes a precedent for the future. A decision that resolves an acute and pressing moral issue today may not look so good with the passage of time. It may create worse problems than the ones it solves. Or our moral reflection may reveal flaws in our institutions that can be, and perhaps should be, changed so that the ethical issue at hand does not emerge again, at least not with the same intensity or severity. Thus, procedures and processes should be put in place that eliminate the root causes of the issue or handle it more effectively in the future. The essential question: in making this ethical decision, what sort of social transcript do we want to write?

The last four considerations have a common thread: to be ethical, a decision-maker must think beyond just the facts and theories pertinent to the current issue. One must reach beyond the present, bring in additional voices, ensure that ethical procedures are employed, adopt a humane style of conduct, and look to the future.

Moving Ahead

The articles in this issue form a gritty as well as cerebral basis for getting on with the task of creating a good society in our information age. The ethics of being online, of using tools such as email, and of infusing information technology into our lives, in areas ranging from business process reengineering to installing large-scale systems, are arguably among the most important ethical issues of our time. As good citizens of this information age, we must be able to identify the crucial moments-of-truth in which our behavior as information professionals shapes the direction our society will take.
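What follows is not part of the original article; it is a minimal sketch, written in Python purely for illustration, of how an information professional might keep the elements discussed above (the four factors, the four lenses, and the four further considerations) together as a personal checklist. Every name in it, such as MomentOfTruth, LENSES, FOLLOW_UP, review, and the example values, is hypothetical and is not drawn from any standard or code of practice.

# Illustrative sketch only; all names here are hypothetical.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class MomentOfTruth:
    """The morally relevant facts of a decision: the four factors."""
    agent: str                          # who will act
    actions: List[str]                  # alternative courses of action open to the agent
    consequences: Dict[str, List[str]]  # expected results of each action
    stakeholders: List[str]             # parties who affect, or are affected by, the agent


# The four ethical lenses, phrased as questions to ask of each alternative.
LENSES = {
    "duty": "Does this act violate a prima facie obligation owed to any stakeholder?",
    "consequences": "Does this act produce the greatest good for the greatest number?",
    "virtue": "Does this act reflect the character of a good person?",
    "justice": "Does this act allocate benefits and burdens fairly, under rules applied equally?",
}

# The four further considerations discussed above.
FOLLOW_UP = [
    "Who should take part in making this decision?",
    "Who should benefit from it?",
    "Will it be made and carried out through fair, humane procedures?",
    "What would keep the issue from arising again?",
]


def review(moment: MomentOfTruth) -> None:
    """Print a reflection checklist for each alternative course of action."""
    for act in moment.actions:
        print(f"Action: {act}")
        print(f"  Expected consequences: {moment.consequences.get(act, [])}")
        print(f"  Stakeholders: {moment.stakeholders}")
        for lens, question in LENSES.items():
            print(f"  [{lens}] {question}")
    for question in FOLLOW_UP:
        print(f"Also ask: {question}")


if __name__ == "__main__":
    review(MomentOfTruth(
        agent="systems analyst",
        actions=["monitor employee email", "do not monitor"],
        consequences={
            "monitor employee email": ["less misuse of resources", "loss of employee privacy"],
            "do not monitor": ["privacy preserved", "misuse may go undetected"],
        },
        stakeholders=["employees", "management", "customers"],
    ))

No such checklist can make the moral judgment itself; its only value is that writing the facts and questions down makes explicit which principles are being trumped by which, rather than leaving that trade-off tacit.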
By understanding the facts of each case, drawing on ethical traditions for guidance, and doing so with a concern for the broader implications of our actions, we can create the kind of ethical society we want. This is the challenge of our times [2].

References

1. Anderson, R. E., Johnson, D. G., Gotterbarn, D., and Perrolle, J. Using the new ACM code of ethics in decision-making. Commun. ACM 36, 2 (Feb. 1993), 98-107.
2. Mason, R. O., Mason, F. M., and Culnan, M. J. Ethics of Information Management. Sage, Thousand Oaks, Calif., 1995.
3. Mason, R. O. and Mitroff, I. Challenging Strategic Planning Assumptions. Wiley, New York, 1981.
4. May, W. F. The Physician's Covenant. Westminster Press, Philadelphia, 1983.
5. Ross, W. D. Moral Duties. Macmillan, London, 1969.

Richard O. Mason is Carr P. Collins Professor of Management Information Sciences at the Edwin L. Cox School of Business, Southern Methodist University, Dallas, Tex. Parts of this article are based on material originally developed for Mason, R., Mason, F., and Culnan, M. Ethics of Information Management. Sage, Thousand Oaks, Calif., 1995.