Computer and Information Ethics

First published Tue Aug 14, 2001; substantive revision Mon Oct 26, 2015

In most countries of the world, the “information revolution” has altered many aspects of life significantly: commerce, employment, medicine, security, transportation, entertainment, and on and on. Consequently, information and communication technology (ICT) has affected – in both good ways and bad ways – community life, family life, human relationships, education, careers, freedom, and democracy (to name just a few examples). “Computer and information ethics”, in the present essay, is understood as that branch of applied ethics which studies and analyzes such social and ethical impacts of ICT.

The more specific term “computer ethics” has been used, in the past, in several different ways. For example, it has been used to refer to applications of traditional Western ethical theories, such as utilitarianism, Kantianism, or virtue ethics, to ethical cases that significantly involve computers and computer networks. “Computer ethics” also has been used to refer to a kind of professional ethics in which computer professionals apply codes of ethics and standards of good practice within their profession. In addition, names such as “cyberethics” and “Internet ethics” have been used to refer to computer ethics issues associated with the Internet.

During the past several decades, the robust and rapidly growing field of computer and information ethics has generated university courses, research professorships, research centers, conferences, workshops, professional organizations, curriculum materials, books and journals.

1. Founding Computer and Information Ethics

In the mid-1940s, innovative developments in science and philosophy led to the creation of a new branch of ethics that would later be called “computer ethics” or “information ethics”. The founder of this new philosophical field was the American scholar Norbert Wiener, a professor of mathematics and engineering at MIT. During the Second World War, together with colleagues in America and Great Britain, Wiener helped to develop electronic computers and other new and powerful information technologies. While engaged in this war effort, Wiener and colleagues created a new branch of applied science that Wiener named “cybernetics” (from the Greek word for the pilot of a ship). Even while the War was raging, Wiener foresaw enormous social and ethical implications of cybernetics combined with electronic computers. He predicted that, after the War, the world would undergo “a second industrial revolution” – an “automatic age” with “enormous potential for good and for evil” that would generate a staggering number of new ethical challenges and opportunities.

When the War ended, Wiener wrote the book Cybernetics (1948) in which he described his new branch of applied science and identified some social and ethical implications of electronic computers. Two years later he published The Human Use of Human Beings (1950), a book in which he explored a number of ethical issues that computer and information technology would likely generate. The issues that he identified in those two books, plus his later book God and Golem, Inc. (1963), included topics that are still important today: computers and security, computers and unemployment, responsibilities of computer professionals, computers for persons with disabilities, information networks and globalization, virtual communities, teleworking, merging of human bodies with machines, robot ethics, artificial intelligence, computers and religion, and a number of other subjects. (See Bynum 2000, 2004, 2005, 2008a, 2008b.)

Although he coined the name “cybernetics” for his new science, Wiener apparently did not see himself as also creating a new branch of ethics. As a result, he did not coin a name like “computer ethics” or “information ethics”. These terms came into use decades later. (See the discussion below.) In spite of this, Wiener’s three relevant books (1948, 1950, 1963) do lay down a powerful foundation, and do use an effective methodology, for today’s field of computer and information ethics. His thinking, however, was far ahead of other scholars; and, at the time, many people considered him to be an eccentric scientist who was engaging in flights of fantasy about ethics. Apparently, no one – not even Wiener himself – recognized the profound importance of his ethics achievements; and nearly two decades would pass before some of the social and ethical impacts of information technology, which Wiener had predicted in the late 1940s, would become obvious to other scholars and to the general public.

In The Human Use of Human Beings, Wiener explored some likely effects of information technology upon key human values like life, health, happiness, abilities, knowledge, freedom, security, and opportunities. The metaphysical ideas and analytical methods that he employed were so powerful and wide-ranging that they could be used effectively for identifying, analyzing and resolving social and ethical problems associated with all kinds of information technology, including, for example, computers and computer networks; radio, television and telephones; news media and journalism; even books and libraries. Because of the breadth of Wiener’s concerns and the applicability of his ideas and methods to every kind of information technology, the term “information ethics” is an apt name for the new field of ethics that he founded. As a result, the term “computer ethics”, as it is typically used today, names only a subfield of Wiener’s much broader concerns.

In laying down a foundation for information ethics, Wiener developed a cybernetic view of human nature and society, which led him to an ethically suggestive account of the purpose of a human life. Based upon this, he adopted “great principles of justice”, which he believed all societies ought to follow. These powerful ethical concepts enabled Wiener to analyze information ethics issues of all kinds.

1.1 A cybernetic view of human nature

Wiener’s cybernetic understanding of human nature stressed the physical structure of the human body and the remarkable potential for learning and creativity that human physiology makes possible. While explaining human intellectual potential, he regularly compared the human body to the physiology of less intelligent creatures like insects:

Cybernetics takes the view that the structure of the machine or of the organism is an index of the performance that may be expected from it. The fact that the mechanical rigidity of the insect is such as to limit its intelligence while the mechanical fluidity of the human being provides for his almost indefinite intellectual expansion is highly relevant to the point of view of this book. … man’s advantage over the rest of nature is that he has the physiological and hence the intellectual equipment to adapt himself to radical changes in his environment. The human species is strong only insofar as it takes advantage of the innate, adaptive, learning faculties that its physiological structure makes possible. (Wiener 1954, pp. 57–58, italics in the original)

Given the physiology of human beings, it is possible for them to take in a wide diversity of information from the external world, access information about conditions and events within their own bodies, and process all that information in ways that constitute reasoning, calculating, wondering, deliberating, deciding and many other intellectual activities. Wiener concluded that the purpose of a human life is to flourish as the kind of information-processing organisms that humans naturally are:

I wish to show that the human individual, capable of vast learning and study, which may occupy almost half of his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium – and are indeed the key to man’s most noble flights – because variety and possibility belong to the very structure of the human organism. (Wiener 1954, pp. 51–52)

1.2 Wiener’s underlying metaphysics

Wiener’s account of human nature presupposed a metaphysical view of the universe that considers the world and all the entities within it, including humans, to be combinations of matter-energy and information. Everything in the world is a mixture of both of these, and thinking, according to Wiener, is actually a kind of information processing. Consequently, the brain

does not secrete thought “as the liver does bile”, as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. (Wiener 1948, p. 155)

According to Wiener’s metaphysical view, everything in the universe comes into existence, persists, and then disappears because of the continuous mixing and mingling of information and matter-energy. Living organisms, including human beings, are actually patterns of information that persist through an ongoing exchange of matter-energy. Thus, he says of human beings,

We are but whirlpools in a river of ever-flowing water. We are not stuff that abides, but patterns that perpetuate themselves. (Wiener 1954, p. 96)

The individuality of the body is that of a flame … of a form rather than of a bit of substance. (Wiener 1954, p. 102)

Using the language of today’s “information age” (see, for example, Lloyd 2006 and Vedral 2010) we would say that, according to Wiener, human beings are “information objects”; and their intellectual capacities, as well as their personal identities, are dependent upon persisting patterns of information and information processing within the body, rather than on specific bits of matter-energy.

1.3 Justice and human flourishing

According to Wiener, for human beings to flourish they must be free to engage in creative and flexible actions and thereby maximize their full potential as intelligent, decision-making beings in charge of their own lives. This is the purpose of a human life. Because people have various levels and kinds of talent and possibility, however, one person’s achievements will be different from those of others. It is possible, nevertheless, to lead a good human life – to flourish – in an indefinitely large number of ways; for example, as a diplomat, scientist, teacher, nurse, doctor, soldier, housewife, midwife, musician, tradesman, artisan, and so on.

This understanding of the purpose of a human life led Wiener to adopt what he called “great principles of justice” upon which society should be built. He believed that adherence to those principles by a society would maximize a person’s ability to flourish through variety and flexibility of human action. Although Wiener stated his “great principles”, he did not assign names to them. For purposes of easy reference, let us call them “The Principle of Freedom”, “The Principle of Equality” and “The Principle of Benevolence”. Using Wiener’s own words yields the following list of “great principles” (1954, pp. 105–106):

The Principle of Freedom
Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”

The Principle of Equality
Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”

The Principle of Benevolence
Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”

Given Wiener’s cybernetic account of human nature and society, it follows that people are fundamentally social beings, and that they can reach their full potential only when they are part of a community of similar beings. Society, therefore, is essential to a good human life. Despotic societies, however, actually stifle human freedom; and indeed they violate all three of the “great principles of justice”. For this reason, Wiener explicitly adopted a fourth principle of justice to assure that the first three would not be violated. Let us call this additional principle “The Principle of Minimum Infringement of Freedom”:

The Principle of Minimum Infringement of Freedom
“What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom” (1954, p. 106).

1.4 A refutation of ethical relativism

If one grants Wiener’s account of a good society and of human nature, it follows that a wide diversity of cultures – with different customs, languages, religions, values and practices – could provide a context in which humans can flourish. Sometimes ethical relativists use the existence of different cultures as proof that there is not – and could not be – an underlying ethical foundation for societies all around the globe. In response to such relativism, Wiener could argue that, given his understanding of human nature and the purpose of a human life, we can embrace and welcome a rich variety of cultures and practices while still advocating adherence to “the great principles of justice”. Those principles offer a cross-cultural foundation for ethics, even though they leave room for immense cultural diversity. The one restriction that Wiener would require in any society is that it must provide a context where humans can realize their full potential as sophisticated information-processing agents, making decisions and choices, and thereby taking responsibility for their own lives. Wiener believed that this is possible only where significant freedom, equality and human compassion prevail.

1.5 Methodology in information ethics

Because Wiener did not think of himself as creating a new branch of ethics, he did not provide metaphilosophical comments about what he was doing while analyzing an information ethics issue or case. Instead, he plunged directly into his analyses. Consequently, if we want to know about Wiener’s method of analysis, we need to observe what he does, rather than look for any metaphilosophical commentary upon his own procedures.

When observing Wiener’s way of analyzing information ethics issues and trying to resolve them, we find – for example, in The Human Use of Human Beings – that he tries to assimilate new cases by applying already existing, ethically acceptable laws, rules, and practices. In any given society, there is a network of existing practices, laws, rules and principles that govern human behavior within that society. These “policies” – to borrow a helpful word from Moor (1985) – constitute a “received policy cluster” (see Bynum and Schubert 1997); and in a reasonably just society, they can serve as a good starting point for developing an answer to any information ethics question. Wiener’s methodology is to combine the “received policy cluster” of one’s society with his account of human nature, his “great principles of justice”, and critical skill in clarifying vague or ambiguous language. In this way, he achieved a very effective method for analyzing information ethics issues. Borrowing from Moor’s later, and very apt, description of computer ethics methodology (Moor 1985), we can describe Wiener’s methodology as follows:

  1. Identify an ethical question or case regarding the integration of information technology into society. Typically this focuses upon technology-generated possibilities that could affect (or are already affecting) life, health, security, happiness, freedom, knowledge, opportunities, or other key human values.
  2. Clarify any ambiguous or vague ideas or principles that may apply to the case or the issue in question.
  3. If possible, apply already existing, ethically acceptable principles, laws, rules, and practices (the “received policy cluster”) that govern human behavior in the given society.
  4. If ethically acceptable precedents, traditions and policies are insufficient to settle the question or deal with the case, use the purpose of a human life plus the great principles of justice to find a solution that fits as well as possible into the ethical traditions of the given society.

In an essentially just society – that is, in a society where the “received policy cluster” is reasonably just – this method of analyzing and resolving information ethics issues will likely result in ethically good solutions that can be assimilated into the society.
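Although neither Wiener nor Moor ever expressed this method in computational form, its stepwise structure can be made vivid with a small schematic sketch. The sketch below is purely illustrative: every name in it is a hypothetical placeholder for interpretive and evaluative work that only a human analyst can perform, and none of the names come from Wiener’s or Moor’s texts.

    # A purely illustrative schema of the four-step method described above.
    # Each function is a hypothetical stand-in for human ethical judgment,
    # not a computable operation.

    def clarify_concepts(issue):
        """Step 2: sharpen any vague or ambiguous ideas or principles."""
        return issue  # in reality, conceptual analysis by the analyst

    def find_applicable_policy(issue, policy_cluster):
        """Step 3: look for an existing, ethically acceptable law, rule, or
        practice (the 'received policy cluster') that settles the issue."""
        return next((p for p in policy_cluster if p["covers"](issue)), None)

    def apply_great_principles(issue):
        """Step 4: fall back on the purpose of a human life plus the great
        principles of justice (freedom, equality, benevolence, and minimum
        infringement of freedom)."""
        return "resolution of " + repr(issue) + " guided by the great principles"

    def analyze_case(issue, policy_cluster):
        # Step 1 is the framing of `issue` itself: a question about the
        # integration of information technology into society.
        issue = clarify_concepts(issue)
        policy = find_applicable_policy(issue, policy_cluster)
        if policy is not None:
            return policy["resolve"](issue)
        return apply_great_principles(issue)

    # Hypothetical usage: an empty policy cluster forces step 4.
    print(analyze_case("teleworking and fair employment", []))

The only point of the schema is to display the method’s ordering: received policies are consulted first, and the deeper principles are invoked only when precedent runs out.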

Note that this way of doing information ethics does not require the expertise of a trained philosopher (although such expertise might prove to be helpful in many situations). Any adult who functions successfully in a reasonably just society is likely to be familiar with the existing customs, practices, rules and laws that govern a person’s behavior in that society and enable one to tell whether a proposed action or policy would be accepted as ethical. So those who must cope with the introduction of new information technology – whether they are computer professionals, business people, workers, teachers, parents, public-policy makers, or others – can and should engage in information ethics by helping to integrate new information technology into society in an ethically acceptable way. Information ethics, understood in this very broad sense, is too important to be left only to information professionals or to philosophers.

Wiener’s information ethics interests, ideas and methods were very broad, covering not only topics in the specific field of “computer ethics”, as we would call it today, but also issues in related areas that, today, are called “agent ethics” (see, for example, Floridi 2013b), “Internet ethics” (Cavalier 2005), and “nanotechnology ethics” (Weckert 2002). The purview of Wiener’s ideas and methods is even broad enough to encompass subfields like journalism ethics, library ethics, and the ethics of bioengineering.

Even in the late 1940s, Wiener made it clear that, on his view, the integration into society of the newly invented computing and information technology would lead to the remaking of society – to “the second industrial revolution” – “the automatic age”. It would affect every walk of life, and would be a multi-faceted, on-going process requiring decades of effort. In Wiener’s own words, the new information technology had placed human beings “in the presence of another social potentiality of unheard-of importance for good and for evil.” (1948, p. 27) However, because he did not think of himself as creating a new branch of ethics, Wiener did not coin names, such as “computer ethics” or “information ethics”, to describe what he was doing. These terms – beginning with “computer ethics” – came into common use years later, starting in the mid-1970s with the work of Walter Maner (see Maner 1980).

Today, the “information age” that Wiener predicted more than half a century ago has come into existence; and the metaphysical and scientific foundation for information ethics that he laid down continues to provide insight and effective guidance for understanding and resolving ethical challenges engendered by information technologies of all kinds.

2. Defining Computer Ethics

In 1976, nearly three decades after the publication of Wiener’s book Cybernetics, Walter Maner noticed that the ethical questions and problems considered in his Medical Ethics course at Old Dominion University often became more complicated or significantly altered when computers got involved. Sometimes the addition of computers, it seemed to Maner, actually generated wholly new ethics problems that would not have existed if computers had not been invented. He concluded that there should be a new branch of applied ethics similar to already existing fields like medical ethics and business ethics. After considering the name “information ethics”, he decided instead to call the proposed new field “computer ethics”.[1] (At that time, Maner did not know about the computer ethics works of Norbert Wiener.) He defined the proposed new field as one that studies ethical problems “aggravated, transformed or created by computer technology”. He developed an experimental computer ethics course designed primarily for students in university-level computer science programs. His course was a success, and students at his university wanted him to teach it regularly. He complied with their wishes and also created, in 1978, a “starter kit” on teaching computer ethics, which he prepared for dissemination to attendees of workshops that he ran and speeches that he gave at philosophy conferences and computing science conferences in America. In 1980, Helvetia Press and the National Information and Resource Center on Teaching Philosophy published Maner’s computer ethics “starter kit” as a monograph (Maner 1980). It contained curriculum materials and pedagogical advice for university teachers. It also included a rationale for offering such a course in a university, suggested course descriptions for university catalogs, a list of course objectives, teaching tips, and discussions of topics like privacy and confidentiality, computer crime, computer decisions, technological dependence and professional codes of ethics. During the early 1980s, Maner’s Starter Kit was widely disseminated by Helvetia Press to colleges and universities in America and elsewhere. Meanwhile Maner continued to conduct workshops and teach courses in computer ethics. As a result, a number of scholars, especially philosophers and computer scientists, were introduced to computer ethics because of Maner’s trailblazing efforts.

2.1 The “uniqueness debate”

While Maner was developing his new computer ethics course in the mid-to-late 1970s, a colleague of his in the Philosophy Department at Old Dominion University, Deborah Johnson, became interested in his proposed new field. She was especially interested in Maner’s view that computers generate wholly new ethical problems, for she did not believe that this was true. As a result, Maner and Johnson began discussing ethics cases that allegedly involved new problems brought about by computers. In these discussions, Johnson granted that computers did indeed transform old ethics problems in interesting and important ways – that is, “give them a new twist” – but she did not agree that computers generated ethically unique problems that had never been seen before. The resulting Maner-Johnson discussion initiated a fruitful series of comments and publications on the nature and uniqueness of computer ethics – a series of scholarly exchanges that started with Maner and Johnson and later spread to other scholars. The following passage, from Maner’s ETHICOMP95 keynote address, drew a number of other people into the discussion:

I have tried to show that there are issues and problems that are unique to computer ethics. For all of these issues, there was an essential involvement of computing technology. Except for this technology, these issues would not have arisen, or would not have arisen in their highly altered form. The failure to find satisfactory non-computer analogies testifies to the uniqueness of these issues. The lack of an adequate analogy, in turn, has interesting moral consequences. Normally, when we confront unfamiliar ethical problems, we use analogies to build conceptual bridges to similar situations we have encountered in the past. Then we try to transfer moral intuitions across the bridge, from the analog case to our current situation. Lack of an effective analogy forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us. (Maner 1996, p. 152)

Over the decade that followed the publication of this provocative passage, the extended “uniqueness debate” led to a number of useful contributions to computer and information ethics. (For some example publications, see Johnson 1985, 1994, 1999, 2001; Maner 1980, 1996, 1999; Gorniak-Kocikowska 1996; Tavani 2002, 2005; Himma 2003; Floridi and Sanders 2004; Mather 2005; and Bynum 2006, 2007.)

2.2 An agenda-setting textbook

By the early 1980s, Johnson had joined the staff of Rensselaer Polytechnic Institute and had secured a grant to prepare a set of teaching materials – pedagogical modules concerning computer ethics – that turned out to be very successful. She incorporated them into a textbook, Computer Ethics, which was published in 1985 (Johnson 1985). On page 1, she noted that computers “pose new versions of standard moral problems and moral dilemmas, exacerbating the old problems, and forcing us to apply ordinary moral norms in uncharted realms.” She did not grant Maner’s claim, however, that computers create wholly new ethical problems. Instead, she described computer ethics issues as old ethical problems that are “given a new twist” by computer technology.

Johnson’s book Computer Ethics was the first major textbook in the field, and it quickly became the primary text used in computer ethics courses offered at universities in English-speaking countries. For more than a decade, her textbook set the computer ethics research agenda on topics such as ownership of software and intellectual property, computing and privacy, responsibilities of computer professionals, and fair distribution of technology and human power. In later editions (1994, 2001, 2009), Johnson added new ethical topics like “hacking” into people’s computers without their permission, computer technology for persons with disabilities, and ethics on the Internet.

Also in later editions of Computer Ethics, Johnson continued the “uniqueness-debate” discussion, noting for example that new information technologies provide new ways to “instrument” human actions. Because of this, she agreed with Maner that new specific ethics questions had been generated by computer technology – for example, “Should ownership of software be protected by law?” or “Do huge databases of personal information threaten privacy?” – but she argued that such questions are merely “new species of old moral issues”, such as protection of human privacy or ownership of intellectual property. They are not, she insisted, wholly new ethics problems requiring additions to traditional ethical theories, as Maner had claimed (Maner 1996).

2.3 An influential computer ethics theory

The year 1985 was a “watershed year” in the history of computer ethics, not only because of the appearance of Johnson’s agenda-setting textbook, but also because James Moor’s classic paper, “What Is Computer Ethics?” was published in a special computer-ethics issue of the journal Metaphilosophy. There Moor provided an account of the nature of computer ethics that was broader and more ambitious than the definitions of Maner or Johnson. He went beyond descriptions and examples of computer ethics problems by offering an explanation of why computing technology raises so many ethical questions compared to other kinds of technology. Moor’s explanation of the revolutionary power of computer technology was that computers are “logically malleable”:

Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs and connecting logical operations … . Because logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity. (Moor, 1985, 269)

The logical malleability of computer technology, said Moor, makes it possible for people to do a vast number of things that they were not able to do before. Since no one could do them before, the question may never have arisen as to whether one ought to do them. In addition, because they could not be done before, perhaps no laws or standards of good practice or specific ethical rules had ever been established to govern them. Moor called such situations “policy vacuums”, and some of those vacuums might generate “conceptual muddles”:

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, that is, formulate policies to guide our actions … . One difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis that provides a coherent conceptual framework within which to formulate a policy for action. (Moor, 1985, 266)

In the late 1980s, Moor’s “policy vacuum” explanation of the need for computer ethics and his account of the revolutionary “logical malleability” of computer technology quickly became very influential among a growing number of computer ethics scholars. He added further ideas in the 1990s, including the important notion of core human values: According to Moor, some human values – such as life, health, happiness, security, resources, opportunities, and knowledge – are so important to the continued survival of any community that essentially all communities do value them. Indeed, if a community did not value the “core values”, it soon would cease to exist. Moor used “core values” to examine computer ethics topics like privacy and security (Moor 1997), and to add an account of justice, which he called “just consequentialism” (Moor, 1999), a theory that combines “core values” and consequentialism with Bernard Gert’s deontological notion of “moral impartiality” using “the blindfold of justice” (Gert, 1998).

Moor’s approach to computer ethics is a practical theory that provides a broad perspective on the nature of the “information revolution”. By using the notions of “logical malleability”, “policy vacuums”, “conceptual muddles”, “core values” and “just consequentialism”, he provides the following problem-solving method:

  1. Identify a policy vacuum generated by computing technology.
  2. Eliminate any conceptual muddles.
  3. Use the core values and the ethical resources of just consequentialism to revise existing – but inadequate – policies, or else to create new policies that justly eliminate the vacuum and resolve the original ethical issue.

The third step is accomplished by combining deontology and consequentialism – which traditionally have been considered incompatible rival ethics theories – to achieve the following practical results:

If the blindfold of justice is applied to [suggested] computing policies, some policies will be regarded as unjust by all rational, impartial people, some policies will be regarded as just by all rational, impartial people, and some will be in dispute. This approach is good enough to provide just constraints on consequentialism. We first require that all computing policies pass the impartiality test. Clearly, our computing policies should not be among those that every rational, impartial person would regard as unjust. Then we can further select policies by looking at their beneficial consequences. We are not ethically required to select policies with the best possible outcomes, but we can assess the merits of the various policies using consequentialist considerations and we may select very good ones from those that are just. (Moor, 1999, 68)
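Note that the passage just quoted gives Moor’s theory a filter-then-select structure: first screen out any policy that all rational, impartial people would regard as unjust, then choose among the surviving policies on consequentialist grounds. The following sketch is merely an illustration of that structure, not part of Moor’s own presentation; the example policies, the stubbed impartiality judgment, and the numeric benefit scores are all invented.

    # Illustrative filter-then-select structure of Moor's "just
    # consequentialism". All data and judgments here are hypothetical.

    def unjust_to_all_impartial_observers(policy):
        # The "blindfold of justice": would every rational, impartial person
        # regard this policy as unjust? (A human judgment, stubbed here.)
        return policy.get("clearly_unjust", False)

    def select_policy(candidates):
        # First require that policies pass the impartiality test.
        just_enough = [p for p in candidates
                       if not unjust_to_all_impartial_observers(p)]
        # Then assess the survivors by their beneficial consequences. Moor
        # does not demand the single best outcome, only a very good policy
        # chosen from among the just ones; `max` is used here for brevity.
        return max(just_enough, key=lambda p: p.get("expected_benefit", 0))

    policies = [
        {"name": "no privacy safeguards", "clearly_unjust": True},
        {"name": "opt-in data collection", "expected_benefit": 8},
        {"name": "opt-out data collection", "expected_benefit": 6},
    ]
    print(select_policy(policies)["name"])  # -> opt-in data collection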

2.4 Computing and human values

Beginning with the computer ethics works of Norbert Wiener (1948, 1950, 1963), a common thread has run through much of the history of computer ethics; namely, concern for protecting and advancing central human values, such as life, health, security, happiness, freedom, knowledge, resources, power and opportunity. Thus, most of the specific issues that Wiener dealt with are cases of defending or advancing such values. For example, by working to prevent massive unemployment caused by robotic factories, Wiener tried to preserve security, resources and opportunities for factory workers. Similarly, by arguing against the use of decision-making war-game machines, Wiener tried to diminish threats to security and peace.

This “human-values approach” to computer ethics has been very fruitful. It has served, for example, as an organizing theme for major computer-ethics conferences, such as the 1991 National Conference on Computing and Values at Southern Connecticut State University (see the section below on “exponential growth”), which was devoted to the impacts of computing upon security, property, privacy, knowledge, freedom and opportunities. In the late 1990s, a similar approach to computer ethics, called “value-sensitive computer design”, emerged based upon the insight that potential computer-ethics problems can be avoided, while new technology is under development, by anticipating possible harm to human values and designing new technology from the very beginning in ways that prevent such harm. (See, for example, Brey, 2001, 2012; Friedman, 1997; Friedman and Nissenbaum, 1996; Introna, 2005a; Introna and Nissenbaum, 2000; Flanagan, et al., 2008.)

2.5 Professional ethics and computer ethics

In the early 1990s, a different emphasis within computer ethics was advocated by Donald Gotterbarn. He believed that computer ethics should be seen as a professional ethics devoted to the development and advancement of standards of good practice and codes of conduct for computing professionals. Thus, in 1991, in the article “Computer Ethics: Responsibility Regained”, Gotterbarn said:

There is little attention paid to the domain of professional ethics – the values that guide the day-to-day activities of computing professionals in their role as professionals. By computing professional I mean anyone involved in the design and development of computer artifacts. … The ethical decisions made during the development of these artifacts have a direct relationship to many of the issues discussed under the broader concept of computer ethics. (Gotterbarn, 1991)

Throughout the 1990s, with this aspect of computer ethics in mind, Gotterbarn worked with other professional-ethics advocates (for example, Keith Miller, Dianne Martin, Chuck Huff and Simon Rogerson) in a variety of projects to advance professional responsibility among computer practitioners. Even before 1991, Gotterbarn had been part of a committee of the ACM (Association for Computing Machinery) to create the third version of that organization’s “Code of Ethics and Professional Conduct” (adopted by the ACM in 1992, see Anderson, et al., 1993). Later, Gotterbarn and colleagues in the ACM and the Computer Society of the IEEE (Institute of Electrical and Electronics Engineers) developed licensing standards for software engineers. In addition, Gotterbarn headed a joint taskforce of the IEEE and ACM to create the “Software Engineering Code of Ethics and Professional Practice” (adopted by those organizations in 1999; see Gotterbarn, Miller and Rogerson, 1997).

In the late 1990s, Gotterbarn created the Software Engineering Ethics Research Institute (SEERI) at East Tennessee State University; and in the early 2000s, together with Simon Rogerson, he developed a computer program called SoDIS (Software Development Impact Statements) to assist individuals, companies and organizations in the preparation of ethical “stakeholder analyses” for determining likely ethical impacts of software development projects (Gotterbarn and Rogerson, 2005). These and many other projects focused attention upon professional responsibility and advanced the professionalization and ethical maturation of computing practitioners. (See the bibliography below for works by R. Anderson, D. Gotterbarn, C. Huff, C. D. Martin, K. Miller, and S. Rogerson.)

3. Globalization

In 1995, in her ETHICOMP95 presentation “The Computer Revolution and the Problem of Global Ethics”, Krystyna Górniak-Kocikowska made a startling prediction (see Górniak, 1996). She argued that computer ethics eventually will evolve into a global ethic applicable in every culture on earth. According to this “Górniak hypothesis”, regional ethical theories like Europe’s Benthamite and Kantian systems, as well as the diverse ethical systems embedded in other cultures of the world, all derive from “local” histories and customs and are unlikely to be applicable world-wide. Computer and information ethics, on the other hand, Górniak argued, has the potential to provide a global ethic suitable for the Information Age:

  • a new ethical theory is likely to emerge from computer ethics in response to the computer revolution. The newly emerging field of information ethics, therefore, is much more important than even its founders and advocates believe. (p. 177)
  • The very nature of the Computer Revolution indicates that the ethic of the future will have a global character. It will be global in a spatial sense, since it will encompass the entire globe. It will also be global in the sense that it will address the totality of human actions and relations. (p. 179)
  • Computers do not know borders. Computer networks … have a truly global character. Hence, when we are talking about computer ethics, we are talking about the emerging global ethic. (p. 186)
  • the rules of computer ethics, no matter how well thought through, will be ineffective unless respected by the vast majority of or maybe even all computer users. … In other words, computer ethics will become universal, it will be a global ethic. (p. 187)

The provocative “Górniak hypothesis” was a significant contribution to the ongoing “uniqueness debate”, and it reinforced Maner’s claim – which he made at the same ETHICOMP95 conference in his keynote address – that information technology “forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us.” (Maner 1996, p. 152) Górniak did not speculate about the globally relevant concepts and principles that would evolve from information ethics. She merely predicted that such a theory would emerge over time because of the global nature of the Internet and the resulting ethics conversation among all the cultures of the world.

Górniak may well be right. Computer ethics today appears to be evolving into a broader and even more important field, which might reasonably be called “global information ethics”. Global networks, especially the Internet, are connecting people all over the earth. For the first time in history, efforts to develop mutually agreed standards of conduct, and efforts to advance and defend human values, are being made in a truly global context. So, for the first time in the history of the earth, ethics and values will be debated and transformed in a context that is not limited to a particular geographic region, or constrained by a specific religion or culture. This could be one of the most important social developments in history (Bynum 2006; Floridi 2014). Consider just a few of the global issues:

3.1 Global laws

If computer users in the United States, for example, wish to protect their freedom of speech on the Internet, whose laws apply? Two hundred or more countries are interconnected by the Internet, so the United States Constitution (with its First Amendment protection of freedom of speech) is just a “local law” on the Internet – it does not apply to the rest of the world. How can issues like freedom of speech, control of “pornography”, protection of intellectual property, invasions of privacy, and many others be governed by law when so many countries are involved? (Lessig 2004) If a citizen in a European country, for example, has Internet dealings with someone in a far-away land, and the government of that country considers those dealings to be illegal, can the European be tried by courts in the far-away country?

3.2 Global cyberbusiness

In recent years, there has been a rapid expansion of global “cyberbusiness”. Nations with appropriate technological infrastructure already in place have enjoyed resulting economic benefits, while the rest of the world has lagged behind. What will be the political and economic fallout from this inequality? In addition, will accepted business practices in one part of the world be perceived as “cheating” or “fraud” in other parts of the world? Will a few wealthy nations widen the already big gap between the rich and the poor? Will political and even military confrontations emerge?

3.3 Global education

If inexpensive access to a global information net is provided to rich and poor alike – to poverty-stricken people in ghettos, to poor nations in the “underdeveloped world”, etc. – for the first time in history, nearly everyone on earth will have access to daily news from a free press; to texts, documents and art works from great libraries and museums of the world; to political, religious and social practices of peoples everywhere. What will be the impact of this sudden and profound “global education” upon political dictatorships, isolated communities, coherent cultures, religious practices, etc.? As great universities of the world begin to offer degrees and knowledge modules via the Internet, will “lesser” universities be damaged or even forced out of business?

3.4 Information rich and information poor

The gap between rich and poor nations, and even between rich and poor citizens in industrialized countries, is already disturbingly wide. As educational opportunities, business and employment opportunities, medical services and many other necessities of life move more and more into cyberspace, will gaps between the rich and the poor become even worse?

4. A Metaphysical Foundation for Computer Ethics

Important recent developments, which began after 1995, appear to be confirming Górniak’s hypothesis – in particular, the metaphysical information ethics theory of Luciano Floridi (see, for example, Floridi, 1999, 2005a, 2008, 2013b) and the “Flourishing Ethics” theory of the present author which combines ideas from Aristotle, Wiener, Moor and Floridi (see Bynum, 2006).

Floridi, in developing his information ethics theory (henceforth FIE)[2], argued that the purview of computer ethics – indeed of ethics in general – should be widened to include much more than simply human beings, their actions, intentions and characters. He developed FIE as another “macroethics” (his term) which is similar to utilitarianism, deontologism, contractualism, and virtue ethics, because it is intended to be applicable to all ethical situations. On the other hand, FIE is different from these more traditional Western theories because it is not intended to replace them, but rather to supplement them with further ethical considerations that go beyond the traditional theories, and that can be overridden, sometimes, by traditional ethical considerations. (Floridi, 2006)

The name “information ethics” is appropriate to Floridi’s theory, because it treats everything that exists as “informational” objects or processes:

[All] entities will be described as clusters of data, that is, as informational objects. More precisely, [any existing entity] will be a discrete, self-contained, encapsulated package containing

  1. the appropriate data structures, which constitute the nature of the entity in question, that is, the state of the object, its unique identity and its attributes; and
  2. a collection of operations, functions, or procedures, which are activated by various interactions or stimuli (that is, messages received from other objects or changes within itself) and correspondingly define how the object behaves or reacts to them.

At this level of abstraction, informational systems as such, rather than just living systems in general, are raised to the role of agents and patients of any action, with environmental processes, changes and interactions equally described informationally. (Floridi 2006a, 9–10)
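Floridi’s wording here deliberately borrows the object-oriented level of abstraction from computing, on which an entity is an encapsulated bundle of state (data structures) plus operations triggered by incoming messages. The toy sketch below merely illustrates that analogy; the class, attribute, and message names are invented for the example, and the “perturb” operation simply gestures at the damage to characteristic data structures that Floridi, as explained next, calls “entropy”.

    # A toy illustration of an entity as a "discrete, self-contained,
    # encapsulated package" of (1) data structures and (2) operations
    # activated by messages. All names here are invented for the example.

    class InformationalObject:
        def __init__(self, identity, attributes):
            # (1) Data structures: the state of the object, its unique
            # identity, and its attributes.
            self.identity = identity
            self.attributes = dict(attributes)

        def receive(self, message, value=None):
            # (2) Operations activated by interactions or stimuli, defining
            # how the object behaves or reacts to incoming messages.
            if message == "perturb" and value is not None:
                # Damage to characteristic data structures -- the kind of
                # degradation Floridi describes as "entropy".
                current = self.attributes.get("integrity", 1.0)
                self.attributes["integrity"] = max(0.0, current - value)
            return self.attributes

    oak = InformationalObject("oak-tree-17", {"integrity": 1.0, "age": 80})
    oak.receive("perturb", 0.3)
    print(oak.attributes)  # {'integrity': 0.7, 'age': 80}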

Since everything that exists, according to FIE, is an informational object or process, he calls the totality of all that exists – the universe considered as a whole – “the infosphere”. Objects and processes in the infosphere can be significantly damaged or destroyed by altering their characteristic data structures. Such damage or destruction Floridi calls “entropy”, and it results in partial “impoverishment of the infosphere”. Entropy in this sense is an evil that should be avoided or minimized, and Floridi offers four “fundamental principles”:

  1. Entropy ought not to be caused in the infosphere (null law).
  2. Entropy ought to be prevented in the infosphere.
  3. Entropy ought to be removed from the infosphere.
  4. The flourishing of informational entities as well as the whole infosphere ought to be promoted by preserving, cultivating and enriching their properties.

FIE is based upon the idea that everything in the infosphere has at least a minimum worth that should be ethically respected, even if that worth can be overridden by other considerations:

[FIE] suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy … . [FIE] holds that being/information has an intrinsic worthiness. It substantiates this position by recognizing that any informational entity has a Spinozian right to persist in its own status, and a Constructionist right to flourish, i.e., to improve and enrich its existence and essence. (Floridi 2006a, p. 11)

By construing every existing entity in the universe as “informational”, with at least a minimal moral worth, FIE can supplement traditional ethical theories and go beyond them by shifting the focus of one’s ethical attention away from the actions, characters, and values of human agents toward the “evil” (harm, dissolution, destruction) – “entropy” – suffered by objects and processes in the infosphere. With this approach, every existing entity – humans, other animals, plants, organizations, even non-living artifacts, electronic objects in cyberspace, pieces of intellectual property – can be interpreted as potential agents that affect other entities, and as potential patients that are affected by other entities. In this way, Floridi treats FIE as a “patient-based” non-anthropocentric ethical theory to be used in addition to the traditional “agent-based” anthropocentric ethical theories like utilitarianism, deontologism and virtue theory.

FIE, with its emphasis on “preserving and enhancing the infosphere”, enables Floridi to provide, among other things, an insightful and practical ethical theory of robot behavior and the behavior of other “artificial agents” like softbots and cyborgs. (See, for example, Floridi and Sanders, 2004.) FIE is an important component of a more ambitious project covering the entire new field of the “Philosophy of Information” (his term). (See Floridi 2011.)

5. Exponential Growth

The paragraphs above describe key contributions to “the history of ideas” in information and computer ethics, but the history of a discipline includes much more. The birth and development of a new academic field require cooperation among a “critical mass” of scholars, plus the creation of university courses, research centers, conferences, academic journals, and more. In this regard, the year 1985 was pivotal for information and computer ethics. The publication of Johnson’s textbook, Computer Ethics, plus a special issue of the journal Metaphilosophy (October 1985) – including especially Moor’s article “What Is Computer Ethics?” – provided excellent curriculum materials and a conceptual foundation for the field. In addition, Maner’s earlier trailblazing efforts, and those of other people who had been inspired by Maner, had generated a “ready-made audience” of enthusiastic computer science and philosophy scholars. The stage was set for exponential growth. (The formidable foundation for computer and information ethics, which Wiener had laid down in the late 1940s and early 1950s, was so far ahead of its time that social and ethical thinkers then did not follow his lead and help to create a vibrant and growing field of computer and information ethics even earlier than the 1980s.)

In the United States, rapid growth occurred in information and computer ethics beginning in the mid-1980s. In 1987 the Research Center on Computing & Society was founded at Southern Connecticut State University. Shortly thereafter, the Director (the present author) joined with Walter Maner to organize “the National Conference on Computing and Values” (NCCV), funded by America’s National Science Foundation, to bring together computer scientists, philosophers, public policy makers, lawyers, journalists, sociologists, psychologists, business people, and others. The goal was to examine and push forward some of the major sub-areas of information and computer ethics; namely, computer security, computers and privacy, ownership of intellectual property, computing for persons with disabilities, and the teaching of computer ethics. More than a dozen scholars from several different disciplines joined with Bynum and Maner to plan NCCV, which occurred in August 1991 at Southern Connecticut State University. Four hundred people from thirty-two American states and seven other countries attended; and the conference generated a wealth of new computer ethics materials – monographs, video programs and an extensive bibliography – which were disseminated to hundreds of colleges and universities during the following two years.

In that same decade, professional ethics advocates, such as Donald Gotterbarn, Keith Miller and Dianne Martin – and professional organizations, such as Computer Professionals for Social Responsibility, the Electronic Frontier Foundation, and the Special Interest Group on Computing and Society (SIGCAS) of the ACM – spearheaded projects focused upon professional responsibility for computer practitioners. Information and computer ethics became a required component of undergraduate computer science programs that were nationally accredited by the Computing Sciences Accreditation Board. In addition, the annual “Computers, Freedom and Privacy” conferences began in 1991 (see www.cfp.org), and the ACM adopted a new version of its Code of Ethics and Professional Conduct in 1992.

In 1995, rapid growth of information and computer ethics spread to Europe when the present author joined with Simon Rogerson of De Montfort University in England to create the Centre for Computing and Social Responsibility and to organize the first computer ethics conference in Europe, ETHICOMP95. That conference included attendees from fourteen different countries, mostly in Europe, and it became a key factor in generating a “critical mass” of computer ethics scholars in Europe. After 1995, every 18 months, another ETHICOMP conference occurred, moving from country to country in Europe and beyond – Spain, the Netherlands, Italy, Poland, Portugal, Greece, Sweden, Japan, China, Argentina, Denmark, France. In addition, in 1999, with assistance from Bynum and Rogerson, the Australian scholars John Weckert and Christopher Simpson created the Australian Institute of Computer Ethics and organized AICEC99 (Melbourne, Australia), which was the first international computer ethics conference south of the equator. A number of AICE conferences have occurred since then (see http://auscomputerethics.com).

A central figure in the rapid growth of information and computer ethics in Europe was Simon Rogerson. In addition to creating the Centre for Computing and Social Responsibility at De Montfort University and co-heading the influential ETHICOMP conferences, he also (1) added computer ethics to De Montfort University’s curriculum, (2) created a graduate program with advanced computer ethics degrees, including PhDs, and (3) co-founded and co-edited (with Ben Fairweather) two computer ethics journals – The Journal of Information, Communication and Ethics in Society in 2003 (see the section “Other Internet Resources” below), and the electronic journal The ETHICOMP Journal in 2004 (see Other Internet Resources below). Rogerson also served on the Information Technology Committee of the British Parliament, and he participated in several computer ethics projects with agencies of the European Union.

Other important computer ethics developments in Europe in the late 1990s and early 2000s included, for example, (1) Luciano Floridi’s creation of the Information Ethics Research Group at Oxford University in the mid-1990s; (2) Jeroen van den Hoven’s founding, in 1997, of the CEPE (Computer Ethics: Philosophical Enquiry) series of conferences, which occurred alternately in Europe and America; (3) van den Hoven’s creation of the journal Ethics and Information Technology in 1999; (4) Rafael Capurro’s creation of the International Center for Information Ethics in 1999; (5) Capurro’s creation of the journal International Review of Information Ethics in 2004; and (6) Bernd Carsten Stahl’s creation of The International Journal of Technology and Human Interaction in 2005.

In summary, since 1985 computer ethics developments have proliferated exponentially with new conferences and conference series, new organizations, new research centers, new journals, textbooks, web sites, university courses, university degree programs, and distinguished professorships. Additional “sub-fields” and topics in information and computer ethics continually emerge as information technology itself grows and proliferates. Recent new topics include on-line ethics, “agent” ethics (robots, softbots), cyborg ethics (part human, part machine), the “open source movement”, electronic government, global information ethics, information technology and genetics, computing for developing countries, computing and terrorism, ethics and nanotechnology, to name only a few examples. (For specific publications and examples, see the list of selected resources below.)

Compared to many other scholarly disciplines, the field of computer ethics is very young. It has existed only since the late 1940s when Norbert Wiener created it. During the next few decades, it grew very little because Wiener’s insights were so far ahead of everyone else’s. Beginning in 1985, however, information and computer ethics has grown exponentially, first in America, then in Europe, and then globally.

Bibliography

  • Adam, A. (2000), “Gender and Computer Ethics,” Computers and Society, 30(4): 17–24.
  • Adam, A. and J. Ofori-Amanfo (2000), “Does Gender Matter in Computer Ethics?” Ethics and Information Technology, 2(1): 37–47.
  • Anderson, R., D. Johnson, D. Gotterbarn, and J. Perrolle (1993), “Using the New ACM Code of Ethics in Decision Making,” Communications of the ACM, 36: 98–107.
  • Bohman, James (2008), “The Transformation of the Public Sphere: Political Authority, Communicative Freedom, and Internet Publics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 66–92.
  • Brennan, G. and P. Pettit (2008), “Esteem, Identifiability, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 175–94.
  • Brey, P. (2001), “Disclosive Computer Ethics,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, Sudbury, MA: Jones and Bartlett.
  • ––– (2006a), “Evaluating the Social and Cultural Implications of the Internet,” Computers and Society, 36(3): 41–44.
  • ––– (2006b), “Social and Ethical Dimensions of Computer-Mediated Education,” Journal of Information, Communication & Ethics in Society, 4(2): 91–102.
  • ––– (2008), “Do We Have Moral Duties Toward Information Objects,” Ethics and Information Technology, 10(2–3): 109–114.
  • ––– (2012), “Anticipatory Ethics for Emerging Technologies,” Nanoethics, 6(1): 1–13.
  • Brey, P., A. Briggle, and E. Spence (eds.) (2012), The Good Life in a Technological Age, New York, NY: Routledge.
  • Bynum, T. (1982), “A Discipline in its Infancy,” The Dallas Morning News, January 12, 1982, D/1, D/6.
  • ––– (1999), “The Development of Computer Ethics as a Philosophical Field of Study,” The Australian Journal of Professional and Applied Ethics, 1(1): 1–29.
  • ––– (2000), “The Foundation of Computer Ethics,” Computers and Society, 30(2): 6–13.
  • ––– (2004), “Ethical Challenges to Citizens of the ‘Automatic Age’: Norbert Wiener on the Information Society,” Journal of Information, Communication and Ethics in Society, 2(2): 65–74.
  • ––– (2005), “Norbert Wiener’s Vision: the Impact of the ‘Automatic Age’ on our Moral Lives,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany, NY: SUNY Press, 11–25.
  • ––– (2006), “Flourishing Ethics,” Ethics and Information Technology, 8(4): 157–173.
  • ––– (2008a), “Milestones in the History of Information and Computer Ethics,” in K. Himma and H. Tavani (eds.), The Handbook of Information and Computer Ethics, New York: John Wiley, 25–48.
  • ––– (2008b), “Norbert Wiener and the Rise of Information Ethics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge, UK: Cambridge University Press, 8–25.
  • ––– (2008c), “A Copernican Revolution in Ethics?,” in G. Crnkovic and S. Stuart (eds.), Computation, Information, Cognition: The Nexus and the Liminal, Cambridge, UK: Cambridge Scholars Publishing, 302–329.
  • ––– (2010a), “Historical Roots of Information Ethics,” in L. Floridi (ed.), Handbook of Information and Computer Ethics, Cambridge: Cambridge University Press, 20–38.
  • ––– (2010b), “Philosophy in the Information Age,” in P. Allo (ed.), Luciano Floridi and the Philosophy of Information, Cambridge, UK: Cambridge University Press, 420–442.
  • Bynum, T. and P. Schubert (1997), “How to do Computer Ethics – A Case Study: The Electronic Mall Bodensee,” in J. van den Hoven (ed.), Computer Ethics – Philosophical Enquiry, Rotterdam: Erasmus University Press, 85–95.
  • Capurro, R. (2004), “The German Debate on the Information Society,” The Journal of Information, Communication and Ethics in Society, 2 (Supplement): 17–18.
  • ––– (2006), “Towards an Ontological Foundation for Information Ethics,” Ethics and Information Technology, 8(4): 175–186.
  • ––– (2007a), “Information Ethics for and from Africa,” International Review of Information Ethics, 7: 3–13.
  • ––– (2007b), “Intercultural Information Ethics,” in R. Capurro, J. Frühbauer and T. Hausmanninger (eds.), Localizing the Internet: Ethical Issues in Intercultural Perspective (ICIE Series, Volume 4), Munich: Fink, 21–38.
  • Capurro, R. and J. Britz (2010), “In Search of a Code of Global Information Ethics: The Road Travelled and New Horizons,” Ethical Space, 7(2/3): 28–36.
  • Capurro, R. and M. Nagenborg (eds.) (2009), Ethics and Robotics, Heidelberg: Akademische Verlagsgesellschaft, IOS Press.
  • Cavalier, R. (ed.) (2005), The Impact of the Internet on Our Moral Lives, Albany, NY: SUNY Press.
  • Cocking, D. (2008), “Plural Selves and Relational Identity: Intimacy and Privacy Online,” In J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 123–41.
  • de Laat, P. (2010), “How Can Contributions to Open-Source Communities be Trusted?,” Ethics and Information Technology, 12(4): 327–341.
  • ––– (2012), “Coercion or Empowerment? Moderation of Content in Wikipedia as Essentially Contested Bureaucratic Rules,” Ethics and Information Technology, 14(2): 123–135.
  • Edgar, S. (1997), Morality and Machines: Perspectives on Computer Ethics, Sudbury, MA: Jones and Bartlett.
  • Elgesem, D. (1995), “Data Privacy and Legal Argumentation,” Communication and Cognition, 28(1): 91–114.
  • ––– (1996), “Privacy, Respect for Persons, and Risk,” in C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication, Albany: SUNY Press, 45–66.
  • ––– (2002), “What is Special about the Ethical Problems in Internet Research?” Ethics and Information Technology, 4(3): 195–203.
  • ––– (2008), “Information Technology Research Ethics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 354–75.
  • Ess, C. (1996), “The Political Computer: Democracy, CMC, and Habermas,” in C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication, Albany: SUNY Press, 197–230.
  • ––– (ed.) (2001a), Culture, Technology, Communication: Towards an Intercultural Global Village, Albany: SUNY Press.
  • ––– (2001b), “What’s Culture got to do with it? Cultural Collisions in the Electronic Global Village,” in C. Ess (ed.), Culture, Technology, Communication: Towards an Intercultural Global Village, Albany: SUNY Press, 1–50.
  • ––– (2004), “Computer-Mediated Communication and Human-Computer Interaction,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell, 76–91.
  • ––– (2005), “Moral Imperatives for Life in an Intercultural Global Village,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 161–193.
  • ––– (2008), “Culture and Global Networks: Hope for a Global Ethics?” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 195–225.
  • ––– (2013), “Global? Media Ethics: Issues, Challenges, Requirements, Resolutions” in S. Ward (ed.), Global Media Ethics: Problems and Perspectives, Oxford: Wiley-Blackwell, 253–271.
  • Fairweather, B. (1998), “No PAPA: Why Incomplete Codes of Ethics are Worse than None at all,” in G. Collste (ed.), Ethics and Information Technology, New Delhi: New Academic Publishers.
  • ––– (2011), “Even Greener IT: Bringing Green Theory and Green IT Together,” Journal of Information, Communication and Ethics in Society, 9(2): 68–82.
  • Flanagan, M., D. Howe, and H. Nissenbaum (2008), “Embodying Value in Technology: Theory and Practice,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 322–53.
  • Flanagan, M. and H. Nissenbaum (2014), Values at Play in Digital Games, Cambridge, MA: MIT Press.
  • Floridi, L. (1999), “Information Ethics: On the Theoretical Foundations of Computer Ethics,” Ethics and Information Technology, 1(1): 37–56.
  • ––– (ed.) (2004), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell.
  • ––– (2005b), “Internet Ethics: The Constructionist Values of Homo Poieticus,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 195–214.
  • ––– (2006a), “Information Ethics: Its Nature and Scope,” Computers and Society, 36(3): 21–36.
  • ––– (2006b), “Information Technologies and the Tragedy of the Good Will,” Ethics and Information Technology, 8(4): 253–262.
  • ––– (2008), “Information Ethics: Its Nature and Scope,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 40–65.
  • ––– (ed.) (2010), Handbook of Information and Computer Ethics, Cambridge: Cambridge University Press.
  • ––– (2011), The Philosophy of Information, Oxford: Oxford University Press.
  • ––– (2013a), “Distributed Morality in an Information Society,” Science and Engineering Ethics, 19(3): 727–743.
  • ––– (2013b), The Ethics of Information, Oxford: Oxford University Press.
  • ––– (2014), The Fourth Revolution: How the Infosphere is Reshaping Human Reality, Oxford: Oxford University Press.
  • Floridi, L. and J. Sanders (2004), “The Foundationalist Debate in Computer Ethics,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, 2nd edition, Sudbury, MA: Jones and Bartlett, 81–95.
  • Forester, T. and P. Morrison (1990), Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, Cambridge, MA: MIT Press.
  • Fried, C. (1984), “Privacy,” in F. Schoeman (ed.), Philosophical Dimensions of Privacy, Cambridge: Cambridge University Press.
  • Friedman, B. (ed.) (1997), Human Values and the Design of Computer Technology, Cambridge: Cambridge University Press.
  • Friedman, B. and H. Nissenbaum (1996), “Bias in Computer Systems,” ACM Transactions on Information Systems, 14(3): 330–347.
  • Gerdes, A. (2013), “Ethical Issues in Human Robot Interaction,” in H. Nykänen, O. Riis, and J. Zelle (eds.), Theoretical and Applied Ethics, Aalborg, Denmark: Aalborg University Press, 125–143.
  • Gert, B. (1999), “Common Morality and Computing,” Ethics and Information Technology, 1(1): 57–64.
  • Goldman, A. (2008), “The Social Epistemology of Blogging,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 111–22.
  • Gordon, W. (2008), “Moral Philosophy, Information Technology, and Copyright: The Grokster Case,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 270–300.
  • Gorniak-Kocikowska, K. (1996), “The Computer Revolution and the Problem of Global Ethics,” in T. Bynum and S. Rogerson (eds.), Global Information Ethics, Guildford, UK: Opragen Publications, 177–90.
  • ––– (2005), “From Computer Ethics to the Ethics of the Global ICT Society,” in T. Bynum, G. Collste, and S. Rogerson (eds.), Proceedings of ETHICOMP2005 (CD-ROM), Center for Computing and Social Responsibility, Linköpings University. Also in Library Hi Tech, 25(1): 47–57.
  • ––– (2007), “ICT, Globalization and the Pursuit of Happiness: The Problem of Change,” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • ––– (2008), “ICT and the Tension between Old and New: The Human Factor,” Journal of Information, Communication and Ethics in Society, 6(1): 4–27.
  • Gotterbarn, D. (1991), “Computer Ethics: Responsibility Regained,” National Forum: The Phi Kappa Phi Journal, 71: 26–31.
  • ––– (2001), “Informatics and Professional Responsibility,” Science and Engineering Ethics, 7(2): 221–30.
  • ––– (2002), “Reducing Software Failures: Addressing the Ethical Risks of the Software Development Life Cycle,” Australian Journal of Information Systems, 9(2): 155–65.
  • ––– (2008), “Once More unto the Breach: Professional Responsibility and Computer Ethics,” Science and Engineering Ethics, 14(1): 235–239.
  • ––– (2009), “The Public is the Priority: Making Decisions Using the SE Code of Ethics,” IEEE Computer, June: 42–49.
  • Gotterbarn, D., K. Miller, and S. Rogerson (1997), “Software Engineering Code of Ethics,” Communications of the ACM, 40(11): 110–118.
  • Gotterbarn, D. and K. Miller (2004), “Computer Ethics in the Undergraduate Curriculum: Case Studies and the Joint Software Engineer’s Code,” Journal of Computing Sciences in Colleges, 20(2): 156–167.
  • Gotterbarn, D. and S. Rogerson (2005), “Responsible Risk Analysis for Software Development: Creating the Software Development Impact Statement,” Communications of the Association for Information Systems, 15(40): 730–50.
  • Grodzinsky, F. (1997), “Computer Access for Students with Disabilities,” SIGCSE Bulletin, 29(1): 292–295; [Available online].
  • ––– (1999), “The Practitioner from Within: Revisiting the Virtues,” Computers and Society, 29(2): 9–15.
  • Grodzinsky, F., A. Gumbus and S. Lilley (2010), “Ethical Implications of Internet Monitoring: A Comparative Study,” Information Systems Frontiers, 12(4): 433–441.
  • Grodzinsky, F., K. Miller and M. Wolf (2003), “Ethical Issues in Open Source Software,” Journal of Information, Communication and Ethics in Society, 1(4): 193–205.
  • ––– (2008), “The Ethics of Designing Artificial Agents,” Ethics and Information Technology, 10(2–3): 115–121.
  • ––– (2011), “Developing Artificial Agents Worthy of Trust,” Ethics and Information Technology, 13(1): 17–27.
  • Grodzinsky, F. and H. Tavani (2002), “Ethical Reflections on Cyberstalking,” Computers and Society, 32(1): 22–32.
  • ––– (2004), “Verizon vs. the RIAA: Implications for Privacy and Democracy,” in J. Herkert (ed.), Proceedings of ISTAS 2004: The International Symposium on Technology and Society, Los Alamitos, CA: IEEE Computer Society Press.
  • ––– (2010), “Applying the Contextual Integrity Model of Privacy to Personal Blogs in the Blogosphere,” International Journal of Internet Research Ethics, 3(1): 38–47.
  • Grodzinsky, F. and M. Wolf (2008), “Ethical Issues in Free and Open Source Software,” in K. Himma and H. Tavani (eds.), The Handbook of Information and Computer Ethics, Hoboken, NJ: Wiley, 245–272.
  • Himma, K. (2003), “The Relationship Between the Uniqueness of Computer Ethics and its Independence as a Discipline in Applied Ethics,” Ethics and Information Technology, 5(4): 225–237.
  • ––– (2004), “The Moral Significance of the Interest in Information: Reflections on a Fundamental Right to Information,” Journal of Information, Communication, and Ethics in Society, 2(4): 191–202.
  • ––– (2004), “There’s Something about Mary: The Moral Value of Things qua Information Objects,” Ethics and Information Technology, 6(3): 145–159.
  • ––– (2006), “Hacking as Politically Motivated Civil Disobedience: Is Hacktivism Morally Justified?” in K. Himma (ed.), Readings in Internet Security: Hacking, Counterhacking, and Society, Sudbury, MA: Jones and Bartlett.
  • ––– (2007), “Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to be a Moral Agent?” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Himma, K. and H. Tavani (eds.) (2008), The Handbook of Information and Computer Ethics, Hoboken, NJ: Wiley.
  • Hongladarom, S. (2011), “Personal Identity and the Self in the Online and Offline Worlds,” Minds and Machines, 21(4): 533–548.
  • ––– (2013), “Ubiquitous Computing, Empathy and the Self,” AI and Society, 28(2): 227–236.
  • Huff, C. and T. Finholt (eds.) (1994), Social Issues in Computing: Putting Computers in Their Place, New York: McGraw-Hill.
  • Huff, C. and D. Martin (1995), “Computing Consequences: A Framework for Teaching Ethical Computing,” Communications of the ACM, 38(12): 75–84.
  • Huff, C. (2002), “Gender, Software Design, and Occupational Equity,” SIGCSE Bulletin: Inroads, 34: 112–115.
  • ––– (2004), “Unintentional Power in the Design of Computing Systems,” in T. Bynum and S. Rogerson (eds.), Computer Ethics and Professional Responsibility, Oxford: Blackwell.
  • Huff, C., D. Johnson, and K. Miller (2003), “Virtual Harms and Real Responsibility,” Technology and Society Magazine (IEEE), 22(2): 12–19.
  • Huff, C. and L. Barnard (2009), “Good Computing: Life Stories of Moral Exemplars in the Computing Profession,” IEEE Technology and Society, 28(3): 47–54.
  • Introna, L. (1997), “Privacy and the Computer: Why We Need Privacy in the Information Society,” Metaphilosophy, 28(3): 259–275.
  • ––– (2002), “On the (Im)Possibility of Ethics in a Mediated World,” Information and Organization, 12(2): 71–84.
  • ––– (2005a), “Disclosive Ethics and Information Technology: Disclosing Facial Recognition Systems,” Ethics and Information Technology, 7(2): 75–86.
  • ––– (2005b) “Phenomenological Approaches to Ethics and Information Technology,” The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2005/entries/ethics-it-phenomenology/>.
  • Introna, L. and H. Nissenbaum (2000), “Shaping the Web: Why the Politics of Search Engines Matters,” The Information Society, 16(3): 1–17.
  • Introna, L. and N. Pouloudi (2001), “Privacy in the Information Age: Stakeholders, Interests and Values,” in J. Sheth (ed.), Internet Marketing, Fort Worth, TX: Harcourt College Publishers, 373–388.
  • Johnson, D. (1985), Computer Ethics, First Edition, Englewood Cliffs, NJ: Prentice-Hall; Second Edition, Englewood Cliffs, NJ: Prentice-Hall, 1994; Third Edition Upper Saddle River, NJ: Prentice-Hall, 2001; Fourth Edition (with Keith Miller), New York: Pearson, 2009.
  • ––– (1997a), “Ethics Online,” Communications of the ACM, 40(1): 60–65.
  • ––– (1997b), “Is the Global Information Infrastructure a Democratic Technology?” Computers and Society, 27(4): 20–26.
  • ––– (2004), “Computer Ethics,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell, 65–75.
  • ––– (2011), “Software Agents, Anticipatory Ethics, and Accountability,” in G. Marchant, B. Allenby, and J. Herkert (eds.), The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight (The International Library of Ethics, Law and Technology: Volume 7), Heidelberg: Springer, 61–76.
  • Johnson, D. and H. Nissenbaum (eds.) (1995), Computing, Ethics & Social Values, Englewood Cliffs, NJ: Prentice Hall.
  • Johnson, D. and T. Powers (2008), “Computers as Surrogate Agents,” in J. van den Hoven and J. Weckert, (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 251–69.
  • Kocikowski, A. (1996), “Geography and Computer Ethics: An Eastern European Perspective,” in T. Bynum and S. Rogerson (eds.), Science and Engineering Ethics (Special Issue: Global Information Ethics), 2(2): 201–10.
  • Lane, J., V. Stodden, S. Bender, and H. Nissenbaum (eds.) (2014), Privacy, Big Data and the Public Good, Cambridge: Cambridge University Press.
  • Lessig, L. (2004), “The Laws of Cyberspace,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, Sudbury, MA: Jones and Bartlett, Second Edition, 134–144.
  • Lloyd, S. (2006), Programming the Universe, New York: Alfred A. Knopf Publishers.
  • Maner, W. (1980), Starter Kit in Computer Ethics, Hyde Park, NY: Helvetia Press and the National Information and Resource Center for Teaching Philosophy.
  • ––– (1996), “Unique Ethical Problems in Information Technology,” in T. Bynum and S. Rogerson (eds.), Science and Engineering Ethics (Special Issue: Global Information Ethics), 2(2): 137–154.
  • Martin, C. and D. Martin (1990), “Professional Codes of Conduct and Computer Ethics Education,” Social Science Computer Review, 8(1): 96–108.
  • Martin, C., C. Huff, D. Gotterbarn, K. Miller, et al. (1996), “A Framework for Implementing and Teaching the Social and Ethical Impact of Computing,” Education and Information Technologies, 1(2): 101–122.
  • Martin, C., C. Huff, D. Gotterbarn, and K. Miller (1996), “Implementing a Tenth Strand in the Computer Science Curriculum” (Second Report of the Impact CS Steering Committee), Communications of the ACM, 39(12): 75–84.
  • Marx, G. (2001), “Identity and Anonymity: Some Conceptual Distinctions and Issues for Research,” in J. Caplan and J. Torpey (eds.), Documenting Individual Identity, Princeton: Princeton University Press.
  • Mather, K. (2005), “The Theoretical Foundation of Computer Ethics: Stewardship of the Information Environment,” in Contemporary Issues in Governance (Proceedings of GovNet Annual Conference, Melbourne, Australia, 28–30 November, 2005), Melbourne: Monash University.
  • Matthews, S. (2008), “Identity and Information Technology,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 142–60.
  • Miller, K. (2005), “Web standards: Why So Many Stray from the Narrow Path,” Science and Engineering Ethics, 11(3): 477–479.
  • Miller, K. and D. Larson (2005a), “Agile Methods and Computer Ethics: Raising the Level of Discourse about Technological Choices,” IEEE Technology and Society, 24(4): 36–43.
  • ––– (2005b), “Angels and Artifacts: Moral Agents in the Age of Computers and Networks,” Journal of Information, Communication & Ethics in Society, 3(3): 151–157.
  • Miller, S. (2008), “Collective Responsibility and Information and Communication Technology,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 226–50.
  • Moor, J. (1979), “Are there Decisions Computers Should Never Make?” Nature and System, 1: 217–29.
  • ––– (1985), “What Is Computer Ethics?” Metaphilosophy, 16(4): 266–75.
  • ––– (1996), “Reason, Relativity and Responsibility in Computer Ethics,” Computers and Society, 28(1) (1998): 14–21; originally a keynote address at ETHICOMP96 in Madrid, Spain, 1996.
  • ––– (1997), “Towards a Theory of Privacy in the Information Age,” Computers and Society, 27(3): 27–32.
  • ––– (1999), “Just Consequentialism and Computing,” Ethics and Information Technology, 1(1): 65–69.
  • ––– (2001), “The Future of Computer Ethics: You Ain’t Seen Nothin’ Yet,” Ethics and Information Technology, 3(2): 89–91.
  • ––– (2005), “Should We Let Computers Get under Our Skin?” in R. Cavalier, The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 121–138.
  • ––– (2006), “The Nature, Importance, and Difficulty of Machine Ethics,” IEEE Intelligent Systems, 21(4): 18–21.
  • ––– (2007), “Taking the Intentional Stance Toward Robot Ethics,” American Philosophical Association Newsletters, 6(2): 111–119.
  • ––– (2008), “Why We Need Better Ethics for Emerging Technologies,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 26–39.
  • Murata, K. and Y. Orito (2010), “Japanese Risk Society: Trying to Create Complete Security and Safety Using Information and Communication Technology,” Computers and Society, ACM SIGCAS 40(3): 38–49.
  • Murata, K., Y. Orito and Y. Fukuta (2014), “Social Attitudes of Young People in Japan Towards Online Privacy,” Journal of Law, Information and Science, 23(1): 137–157.
  • Nissenbaum, H. (1995), “Should I Copy My Neighbor’s Software?” in D. Johnson and H. Nissenbaum (eds), Computers, Ethics, and Social Responsibility, Englewood Cliffs, NJ: Prentice Hall.
  • ––– (1997), “Can We Protect Privacy in Public?” in Proceedings of Computer Ethics – Philosophical Enquiry 97 (CEPE97), Rotterdam: Erasmus University Press, 191–204; reprinted as Nissenbaum 1998a.
  • ––– (1998a), “Protecting Privacy in an Information Age: The Problem of Privacy in Public,” Law and Philosophy, 17: 559–596.
  • ––– (1998b), “Values in the Design of Computer Systems,” Computers and Society, 1998: 38–39.
  • ––– (1999), “The Meaning of Anonymity in an Information Age,” The Information Society, 15: 141–144.
  • ––– (2005a), “Hackers and the Contested Ontology of Cyberspace,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 139–160.
  • ––– (2005b), “Where Computer Security Meets National Security,” Ethics and Information Technology, 7(2): 61–73.
  • ––– (2011), “A Contextual Approach to Privacy Online,” Daedalus, 140(4): 32–48.
  • Ocholla, D., J. Britz, R. Capurro, and C. Bester (eds.) (2013), Information Ethics in Africa: Cross-Cutting Themes, Pretoria, South Africa: African Center of Excellence for Information Ethics.
  • Orito, Y. (2011), “The Counter-Control Revolution: Silent Control of Individuals Through Dataveillance Systems,” Journal of Information, Communication and Ethics in Society, 9(1): 5–19.
  • Parker, D. (1968), “Rules of Ethics in Information Processing,” Communications of the ACM, 11: 198–201.
  • ––– (1979), Ethical Conflicts in Computer Science and Technology. Arlington, VA: AFIPS Press.
  • Parker, D., S. Swope and B. Baker (1990), Ethical Conflicts in Information & Computer Science, Technology & Business, Wellesley, MA: QED Information Sciences.
  • Pecorino, P. and W. Maner (1985), “A Proposal for a Course on Computer Ethics,” Metaphilosophy, 16(4): 327–337.
  • Pettit, P. (2008), “Trust, Reliance, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 161–74.
  • Powers, T. M. (2006), “Prospects for a Kantian Machine,” IEEE Intelligent Systems, 21(4): 46–51; reprinted in M. Anderson and S. Anderson (eds.), Machine Ethics, Cambridge, UK: Cambridge University Press, 2011.
  • ––– (2009), “Machines and Moral Reasoning,” Philosophy Now, 72: 15–16.
  • ––– (2011), “Incremental Machine Ethics,” IEEE Robotics and Automation, 18(1): 51–58.
  • ––– (2013), “On the Moral Agency of Computers,” Topoi: An International Review of Philosophy, 32(2): 227–236.
  • Rogerson, S. (1995), “Cyberspace: The Ethical Frontier,” The Times Higher Education Supplement (The London Times), No. 1179, June 9, 1995, iv.
  • ––– (1996), “The Ethics of Computing: The First and Second Generations,” The UK Business Ethics Network News, 6: 1–4.
  • ––– (1998), “Computer and Information Ethics,” in R. Chadwick (ed.), Encyclopedia of Applied Ethics, San Diego, CA: Academic Press, 563–570.
  • ––– (1998), “The Ethics of Software Project Management,” in G. Collste (ed.), Ethics and Information Technology, New Delhi: New Academic Publishers, 137–154.
  • ––– (2002), “The Ethical Attitudes of Information Systems Professionals: Outcomes of an Initial Survey,” Telematics and Informatics, 19: 21–36.
  • ––– (2004), “The Ethics of Software Development Project Management,” in T. Bynum and S. Rogerson (eds.), Computer Ethics and Professional Responsibility, Oxford: Blackwell, 119–128.
  • Sojka, J. (1996), “Business Ethics and Computer Ethics: The View from Poland,” in T. Bynum and S. Rogerson (eds.), Global Information Ethics, Guildford, UK: Opragen Publications (a special issue of Science and Engineering Ethics), 191–200.
  • Søraker, J. (2012), “How Shall I Compare Thee? Comparing the Prudential Value of Actual and Virtual Friendship,” Ethics and Information Technology, 14(3): 209–219.
  • Spafford, E., K. Heaphy, and D. Ferbrache (eds.) (1989), Computer Viruses: Dealing with Electronic Vandalism and Programmed Threats, Arlington, VA: ADAPSO (now ITAA).
  • Spafford, E. (1992), “Are Computer Hacker Break-Ins Ethical?” Journal of Systems and Software, 17: 41–47.
  • Spinello, R. (1997), Case Studies in Information and Computer Ethics, Upper Saddle River, NJ: Prentice-Hall.
  • ––– (2000), CyberEthics: Morality and Law in Cyberspace, Sudbury, MA: Jones and Bartlett; Fifth Edition, 2014.
  • Spinello, R. and H. Tavani (2001a), “The Internet, Ethical Values, and Conceptual Frameworks: An Introduction to Cyberethics,” Computers and Society, 31(2): 5–7.
  • ––– (eds.) (2001b), Readings in CyberEthics, Sudbury, MA: Jones and Bartlett; Second Edition, 2004.
  • ––– (eds.) (2005), Intellectual Property Rights in a Networked World: Theory and Practice, Hershey, PA: Idea Group/Information Science Publishing.
  • Stahl, B. (2004a), “Information, Ethics and Computers: The Problem of Autonomous Moral Agents,” Minds and Machines, 14: 67–83.
  • ––– (2004b), Responsible Management of Information Systems, Hershey, PA: Idea Group/Information Science Publishing.
  • ––– (2005), “The Ethical Problem of Framing E-Government in Terms of E-Commerce,” Electronic Journal of E-Government, 3(2): 77–86.
  • ––– (2006), “Responsible Computers? A Case for Ascribing Quasi-responsibility to Computers Independent of Personhood or Agency,” Ethics and Information Technology, 8(4): 205–213.
  • ––– (2011), “IT for a Better Future: How to Integrate Ethics, Politics and Innovation,” Journal of Information, Communication and Ethics in Society, 9(3): 140–156.
  • ––– (2013), “Virtual Suicide and Other Ethical Issues of Emerging Information Technologies,” Futures, 50: 35–43.
  • ––– (2014), “Participatory Design as Ethical Practice – Concepts, Reality and Conditions,” Journal of Information, Communication and Ethics in Society, 12(1): 10–13.
  • Stahl, B., R. Heersmink, P. Goujon, C. Flick, J. van den Hoven, K. Wakunuma, V. Ikonen, and M. Rader (2010), “Identifying the Ethics of Emerging Information and Communication Technologies,” International Journal of Technoethics, 1(4): 20–38.
  • Sullins, J. (2006), “When Is a Robot a Moral Agent?,” International Review of Information Ethics, 6(1): 23–30.
  • ––– (2010), “Robo Warfare: Can Robots Be More Ethical than Humans on the Battlefield?,” Ethics and Information Technology, 12(3): 263–275.
  • ––– (2013), “Roboethics and Telerobot Weapons Systems,” in D. Michelfelder, N. McCarthy and D. Goldberg (eds.), Philosophy and Engineering: Reflections on Practice, Principles and Process, Dordrecht: Springer, 229–237.
  • Sunstein, C. (2008), “Democracy and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 93–110.
  • Taddeo, M. (2012), “Information Warfare: A Philosophical Perspective,” Philosophy and Technology, 25(1): 105–120.
  • Tavani, H. (ed.) (1996), Computing, Ethics, and Social Responsibility: A Bibliography, Palo Alto, CA: Computer Professionals for Social Responsibility Press.
  • ––– (1999a), “Privacy and the Internet,” Proceedings of the Fourth Annual Ethics and Technology Conference, Chestnut Hill, MA: Boston College Press, 114–25.
  • ––– (1999b), “Privacy On-Line,” Computers and Society, 29(4): 11–19.
  • ––– (2002), “The Uniqueness Debate in Computer Ethics: What Exactly is at Issue and Why Does it Matter?” Ethics and Information Technology, 4(1): 37–54.
  • ––– (2004), Ethics and Technology: Ethical Issues in an Age of Information and Communication Technology, Hoboken, NJ: Wiley; Second Edition, 2007; Third Edition, 2011; Fourth Edition, 2013.
  • ––– (2005), “The Impact of the Internet on our Moral Condition: Do We Need a New Framework of Ethics?” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 215–237.
  • ––– (2006), Ethics, Computing, and Genomics, Sudbury, MA: Jones and Bartlett.
  • Tavani, H. and J. Moor (2001), “Privacy Protection, Control of Information, and Privacy-Enhancing Technologies,” Computers and Society, 31(1): 6–11.
  • Turilli, M. and L. Floridi (2009), “The Ethics of Information Transparency,” Ethics and Information Technology, 11(2): 105–112.
  • Turilli, M., A. Vaccaro and M. Taddeo (2010), “The Case of Online Trust,” Knowledge, Technology and Policy, 23(3/4): 333–345.
  • Turkle, S. (1984), The Second Self: Computers and the Human Spirit, New York: Simon & Schuster.
  • ––– (2011), Alone Together: Why We Expect More from Technology and Less from Each Other, New York: Basic Books.
  • Turner, A.J. (1991), “Summary of the ACM/IEEE-CS Joint Curriculum Task Force Report: Computing Curricula, 1991,” Communications of the ACM, 34(6): 69–84.
  • Turner, E. (2006), “Teaching Gender-Inclusive Computer Ethics,” in I. Trauth (ed.), Encyclopedia of Gender and Information Technology: Exploring the Contributions, Challenges, Issues and Experiences of Women in Information Technology, Hershey, PA: Idea Group/Information Science Publishing, 1142–1147.
  • van den Hoven, J. (1997a), “Computer Ethics and Moral Methodology,” Metaphilosophy, 28(3): 234–48.
  • ––– (1997b), “Privacy and the Varieties of Informational Wrongdoing,” Computers and Society, 27(3): 33–37.
  • ––– (1998), “Ethics, Social Epistemics, Electronic Communication and Scientific Research,” European Review, 7(3): 341–349.
  • ––– (2008a), “Information Technology, Privacy, and the Protection of Personal Data,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 301–321.
  • van den Hoven, J. and E. Rooksby (2008), “Distributive Justice and the Value of Information: A (Broadly) Rawlsian Approach,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 376–96.
  • van den Hoven, J. and J. Weckert (eds.) (2008), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press.
  • Vedral, V. (2010), Decoding Reality, Oxford: Oxford University Press.
  • Volkman, R. (2003), “Privacy as Life, Liberty, Property,” Ethics and Information Technology, 5(4): 199–210.
  • ––– (2005), “Dynamic Traditions: Why Globalization Does Not Mean Homogenization,” in Proceedings of ETHICOMP2005 (CD-ROM), Center for Computing and Social Responsibility, Linköpings University.
  • ––– (2007), “The Good Computer Professional Does Not Cheat at Cards,” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Weckert, J. (2002), “Lilliputian Computer Ethics,” Metaphilosophy, 33(3): 366–375.
  • ––– (2005), “Trust in Cyberspace,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 95–117.
  • ––– (2007), “Giving and Taking Offence in a Global Context,” International Journal of Technology and Human Interaction, 25–35.
  • Weckert, J. and D. Adeney (1997), Computer and Information Ethics, Westport, CT: Greenwood Press.
  • Weizenbaum, J. (1976), Computer Power and Human Reason: From Judgment to Calculation, San Francisco, CA: Freeman.
  • Westin, A. (1967), Privacy and Freedom, New York: Atheneum.
  • Wiener, N. (1948), Cybernetics: or Control and Communication in the Animal and the Machine, New York: Technology Press/John Wiley & Sons.
  • ––– (1950), The Human Use of Human Beings: Cybernetics and Society, Boston: Houghton Mifflin; Second Edition Revised, New York, NY: Doubleday Anchor 1954.
  • ––– (1964), God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion, Cambridge, MA: MIT Press.
  • Wolf, M., K. Miller and F. Grodzinsky (2011), “On the Meaning of Free Software,” Ethics and Information Technology, 11(4): 279–286.

Copyright © 2015 by
Terrell Bynum
