One factor behind the rise of computer ethics is the lingering suspicion that computer professionals may be unprepared to deal effectively with the ethical issues that arise in their workplace. Over the years, this suspicion has been reinforced by mostly anecdotal research that seems to show that computer professionals simply do not recognize when ethical issues are present. Perhaps the earliest work of this kind was done by Donn Parker in the late 1970s at SRI International.
In 1977, Parker invited highly trained professionals from various fields to evaluate the ethical content of 47 simple hypothetical cases that he had created based in part on his expert knowledge of computer abuse. Workshop participants focused on each action or non-action of each person who played a role in these one-page scenarios. For each act that was performed or not performed, their task was to decide whether the behavior was unethical, was not unethical, or simply raised no ethics issue at all. Parker found a surprising amount of residual disagreement among these professionals even after an exhaustive analysis and discussion of all the issues each case presented.
More surprisingly, a significant minority of professionals held to their belief that no ethics issue was present even in cases of apparent computer abuse. For example, in Scenario 3.1, a company representative routinely receives copies of the computerized arrest records for new company employees. These records are provided as a favor by a police file clerk who happens to have access to various local and federal databases containing criminal justice information. Nine of the 33 individuals who analyzed this case thought disclosure of arrest histories raised no ethics issues at all. Parker's research does not identify the professions represented by those who failed to detect ethics issues, but most of the participants in this early study were computer professionals. This left casual readers of Parker's Ethical Conflicts in Computer Science and Technology free to identify computer professionals as the ones who lacked ethical sensitivity. If some of them could not even recognize when ethical issues were present, it is hard to imagine how they could ever hope to deal responsibly with them. According to Parker, the problem may have been fostered by computer education and training programs that encouraged, or at least failed to condemn, certain types of unethical professional conduct.
This perception of professional inadequacy is part of a largely hidden political agenda that has contributed to the development of various curricula in computer ethics. In recent years, the tacit perception that those preparing for careers in computing may need remedial moral education seems to have influenced some accreditation boards. As a result, they have been willing to mandate more and more ethical content in computer science and computer engineering programs. They may also be responding to the increased media attention given to instances of computer abuse, fraud and crime. Others demand more ethical content because they believe that catastrophic failures of computer programs are directly attributable to immoral behavior.
The growth of interest is gratifying, especially considering that, in 1976, I found it hard to convince anyone that "computer ethics" was anything other than an oxymoron. No doubt Norbert Wiener would be pleased to see his work bearing late fruit. At the same time, I am greatly disturbed when courses in social impact and computer ethics become a tool for indoctrination in appropriate standards of professional conduct. Donald Gotterbarn, for example, argues that one of the six goals of computer ethics is the "socialization" of students into "professional norms." The fact that these norms are often eminently reasonable, even recommended thoughtfully to us by our professional organizations, does not make indoctrination any less repugnant. The goal cannot be simply to penalize or stigmatize departures from professional norms. Consider an analogy. Suppose a course in Human Sexual Relationships has for its goal the socialization of college students into "high standards" of sexual conduct, and that this goal is enforced by censuring or discrediting anyone who violates these standards. Most people would be quick to recognize that this curriculum is more political than academic, and that such an approach would tend to create a classroom environment where bias could overwhelm inquiry.
We stand today on the threshold of a time when well-intended political motives threaten to reshape computer ethics into some form of moral education. Unfortunately, it is an easy transition from the correct belief that we ought to teach future computer scientists and engineers the meaning of responsible conduct, to the mistaken belief that we ought to train them to behave like responsible professionals. When Terrell Bynum says, for example, that he hopes the study of computer ethics will develop "good judgment" in students, he is not advocating socialization. By "good judgment" he means to refer to the reasoned and principled process by which reflective moral judgments are rendered. From this correct position, it is a tempting and subtle transition to the mistaken position that computer ethics should cause students to develop good judgments, meaning that their positions on particular moral issues conform to the norms of the profession. This self-deceiving mistake occurs because there is an undetected shift in emphasis from the process to the products of moral deliberation.
My point is that a perceived need for moral education does not and cannot provide an adequate rationale for the study of computer ethics. Rather, it must exist as a field worthy of study in its own right and not because at the moment it can provide useful means to certain socially noble ends. To exist and to endure as a separate field, there must be a unique domain for computer ethics distinct from the domain for moral education, distinct even from the domains of other kinds of professional and applied ethics. Like James Moor, I believe computers are special technology and raise special ethical issues, hence that computer ethics deserves special status.
My remaining remarks will suggest a rationale for computer ethics based on arguments and examples showing that at least one of the following is true: (1) that certain ethical issues are so transformed by the use of computers that they deserve to be studied on their own, in their radically altered form, or (2) that the use of computing technology creates, and will continue to create, novel ethical issues that require special study.
I shall refer to the first as the "weaker view" and the second as the "stronger view." Although the weaker view provides sufficient rationale, most of my attention will be focused on establishing the stronger view. This is similar to the position I took in 1980 and 1985, except that I no longer believe that problems merely aggravated by computer technology deserve special status.
Parker, D. Ethical Conflicts in Computer Science and Technology. SRI International, Menlo Park, California.
There was a follow-up study some years later that remedied some of the problems discovered in the original methodology. See Parker, D., Swope, S., and Baker, B., Ethical Conflicts in Information and Computer Science, Technology, and Business. QED Information Sciences, Inc., Wellesley, Massachusetts, 1990.
Parker, D. Crime By Computer. Charles Scribner's Sons, 1976.
Gotterbarn, D. The use and abuse of computer ethics. In Teaching Computer Ethics, Bynum, T., Maner, W., and Fodor, J., Eds. Research Center on Computing and Society, New Haven, Connecticut, 1991, p. 74.
I coined the term "computer ethics" in 1976 to describe a specific set of moral problems either created, aggravated or transformed by the introduction of computer technology. By the fall of 1977, I was ready to create a curriculum for computer ethics and, shortly thereafter, began to teach one of the first university courses entirely devoted to applied computer ethics. By 1978, I had become a willing promoter of computer ethics at various national conferences. Two years later, Terrell Bynum helped me publish a curriculum development kit we called the "Starter Kit in Computer Ethics." We found we could not interest the academic establishment in computer ethics, either philosophers or computer scientists, but we managed to survive as an underground movement within the American Association of Philosophy Teachers.
Wiener, N. Some moral and technical consequences of automation. Science 131 (1960), pp. 1355-1358.
Gotterbarn, D. A "capstone" course in computer ethics. In Teaching Computer Ethics, Bynum, T., Maner, W., and Fodor, J., Eds. Research Center on Computing and Society, New Haven, Connecticut, 1991, p. 42.
Bynum, T. Computer ethics in the computer science curriculum. In Teaching Computer Ethics, Bynum, T., Maner, W., and Fodor, J., Eds. Research Center on Computing and Society, New Haven, Connecticut, 1991, p. 24.
Moor, J. What is computer ethics? Metaphilosophy 16, 4 (1985), p. 266. The article also appears in Teaching Computer Ethics, Bynum, T., Maner, W., and Fodor, J., Eds. Research Center on Computing and Society, New Haven, Connecticut, 1991.
Maner, W. Starter Kit in Computer Ethics. Helvetica Press and the National Information and Resource Center for the Teaching of Philosophy, 1980.
Pecorino, P. and Maner, W. The philosopher as teacher: A proposal for a course on computer ethics. Metaphilosophy 16, 4 (1985), pp. 327-337.