
16.7: Assignment: Computer Ethics


Assignment Introduction

You will learn the following:

  • What a Code of Ethics is
  • Giving Credit where Credit is Due
  • Privacy on the Internet
  • E-mail Uses and Abuses
  • The School Acceptable Use Policy as compared with other AUPs
  • What Copyright is and how it affects the use of:
      • Written work on the Internet
      • Computer Software
      • Pictures on the Internet
      • Music on the Internet
      • Movies on the Internet

Assignment Directions

After going through the information in Tasks 1-6, each student will:

  • Create a page on the Ethics Blog
  • Choose a good title for the page
  • Summarize what they learned from Tasks 1-6. The summary should clearly identify each of the six main areas (Ethics, Copyright, Plagiarism, Privacy, Email, and Acceptable Use).

After completing the blog entry, each student will comment on (provide feedback to) the pages of at least two other students. All comments must be at least two complete sentences.

Task 1: Ethics

You will need a notebook to take notes in. Visit at least three of the following sites about ethics and make notes on what a Code of Ethics is and how it affects your computer use at school and at your place of employment. In your notes, document which site each piece of information came from. Use the grading rubric to guide your note taking.

  • What are Ethics? by Duke Mosman – USOE
  • Business Ethics Pledge
  • Computer Ethics and Copyright Issues? Right vs. Wrong
  • NYU Ethics Pledge
  • Computer Ethics: Basic Concepts and Historical Overview
  • Youth Ethics
  • Ethics in Computing
  • A Framework for Ethical Thinking
  • The Napster Cantata
  • Computer & Information Literacy

Task 2: Copyright

Visit at least three of the following sites about copyright and continue taking notes on what copyright is and how it affects what you can and cannot copy and use from the Internet. In your notes, document where you got the information.

  • Copyright Tutorial
  • Cyberbee Copyright
  • Copyright Questions and Answers
  • Copyright Basics
  • Digital Copyright Fight
  • Taking the Mystery out of Copyright
  • 10 Big Myths about copyright explained
  • DVD Copyright Battle MSNBC

Task 3: Plagiarism

Continue taking notes as you visit at least three of the following sites on what plagiarism is and how to cite resources (giving credit where credit is due). In your notes, document where you got the information.

  • Plagiarism Definition
  • What Is Plagiarism
  • What Is Plagiarism Kid’s Health
  • How to recognize and avoid plagiarism
  • Summarizing, Paraphrasing, and Quoting
  • Avoiding Plagiarism
  • Plagiarism: Why it’s wrong
  • Plagiarism: What It is and How to Recognize and Avoid It
  • Plagiarism: expulsion for a guilty verdict

Task 4: Privacy

Fourth, visit at least three of the following sites and add to your previous notes information about privacy on the Internet (what belongs to you, your rights, and cookies).

  • Privacy on the Internet, What can others learn about you?
  • Be Careful Online: Not Everyone Is a True ‘Friend’
  • Internet Privacy and Email Security
  • How Internet Cookies Work
  • Phishing Arrests
  • On Guard Online
  • Virginia tries to Ensure Students’ Safety in Cyberspace
  • Privacy and the Internet: Traveling in Cyberspace Safely
  • Identity theft gets personal
  • Privacy Rights
  • Ethical Issues of Internet Privacy

Task 5: Email and Abuses

Now visit at least three of the following sites and continue taking notes on e-mail uses and abuses, proper usage, viruses, etc.

  • Email etiquette
  • Professional Email Writing
  • Keep Your E-mails Professional – Personal & Home Business Ones
  • http://www.emailreplies.com/
  • http://www.iwillfollow.com/email.htm
  • Sending & Responding To Group Emails – What Not To Do – Email Etiquette
  • Professional Online Newsletters and Emails Exactly How To Create Your Own

Task 6: Acceptable Use Policies

Visit at least three of these sites and finish taking notes on Acceptable Use Policies (AUPs): what our school policy is and how it compares to other policies.

  • LFCC Computer Use Policy
  • Acceptable Use Policy
  • http://www.enterasys.com/solutions/secure-networks/acceptable_use/
  • http://bizsecurity.about.com/od/acceptableusagepolicies/a/creatingaup.htm
  • Business AUPs
  • Nine Employees Fired for Internet Porn
  • Introduction to Computer Applications and Concepts, Ethics and Security Assignment. Authored by: Melissa Stange. Provided by: Lord Fairfax Community College. Located at: http://www.lfcc.edu/. License: CC BY: Attribution

Assignments

Assignment 1: User Interviews

Imagine that you are hired as a consultant by a non-profit app development company. They ask you to consider the following problem statement:

Participating in a protest or a social movement allows an individual to exercise their right to free speech, but carries risks with respect to their personal and legal safety. Online technologies enable both greater cooperation and communication between activists as well as elevated risks of surveillance by law enforcement and other malicious actors. We are considering developing a mobile application that would allow users to safely organize and participate in protests and social movements. However, we would like to evaluate the limitations of existing solutions, perceptions of relevant stakeholders, and considerations for implementing the requested features before going ahead with the application development.

As part of your assignment, you will research existing solutions and conduct at least two interviews to help you answer the following questions:

  • What technological solutions (e.g., apps, websites, services, social networks, etc.) are currently used to organize and participate in protests?
  • What issues exist with the current solutions?
  • What are the concerns and needs of the prospective users of the application?
  • What features should the proposed application support?
  • Which factors should the company consider when implementing those features?

You will be paired with a partner to complete this assignment. Please read the rest of this document carefully as it breaks down this assignment into concrete steps and describes the deliverables that you will need to submit by the due date.

Note that although you would usually conduct interviews with the target users of an application or a service that you are developing, that is not a requirement in this assignment due to the sensitive nature of the topic. You are, however, still welcome to reach out and interview individuals who have experience in organizing or participating in protests if you wish to gain a greater understanding of their needs and concerns, as long as they are comfortable participating in such a discussion.

Also note that this assignment does not assume any experience or knowledge of app development. However, you should still be able to reason about design and implementation factors at a higher level, such as the implications of processing user data on a server vs. processing it locally on a device, or the implications of using end-to-end encryption vs. using encryption-in-transit vs. not using any encryption.

Step 1: Research the Problem Statement

As a first step, you should familiarize yourself with existing ways in which individuals use their devices, social networks, services, and other online technologies to organize and participate in protests and social movements.

Consider the following questions to guide your research process:

  • How do activists currently organize and coordinate protests and social movements?
  • Which existing solutions do activists use to facilitate this process?
  • Are these solutions predominantly technical or non-technical in nature, or is it a combination of both?
  • What tasks do these solutions aim to achieve (e.g., communication with other activists, safety reports, real-time status updates, documentation of any unlawful activities, etc.)?
  • What technical, social, and legal issues exist with the current solutions?
  • Can those issues be mitigated using a different design approach or are they inherent to the problem at hand?

When researching the answers to these questions, you should focus specifically on the issues surrounding online surveillance and privacy. You may consult any resources that you find useful including news articles, blog posts, reports, white papers, and academic papers. You will include your sources and a summary of your findings in the final report, described towards the end of the document.

Step 2: Conduct Interviews

Once you have done your research around the topic, you are ready to start preparing for the interviews. Before you conduct the interviews, you should complete the following tasks:

Recruit interview participants

You are welcome, but not required, to recruit participants who have experience in organizing or participating in protests. You may instead recruit participants with an interest in this topic or with experience of privacy and surveillance issues in other contexts.

As part of this assignment, you need to interview at least two different individuals, who can be your friends, other students, or anyone else interested in discussing this topic.

Consider the ethics and the logistics of the interviews

Decide on a location for the interview that is comfortable both for you and the interviewee. You can conduct the interview either in-person or online using a video or an audio call.

After the interviews, you will need to summarize the attitudes, concerns, and suggestions of your participants for the final report. For this reason, decide in advance if you are going to take notes, make a recording of the interview, or both. Make sure to ask for consent before making any voice recordings.

If you decide to record the interviews, there are several free automated transcription tools and services that can save you time compared to manual transcription:

  • Zoom audio transcription (for interviews performed on Zoom)
  • YouTube (by uploading the recording privately and downloading the transcript file)

Prepare an interview guide

Before conducting the interviews, you will need to decide on the questions that you will ask the participants. These questions should be open to encourage conversation and non-leading so as not to bias the participants. Consider both the problem statement as well as the insights that you gained from researching the problem when developing the interview guide.

This document from the Department of Sociology at Harvard University includes useful strategies, tips, and guidelines on conducting a successful interview and selecting questions for your interview guide.

Once you have selected your participants, considered the logistics, and developed the guide, you are ready to conduct the interviews. Make sure that you practice going over the interview guide before speaking to the participants and have determined how you are going to take notes. Although there is no required minimum duration for the interviews, aim to spend at least 30 minutes discussing the proposed application.

Conducting the interviews and analyzing the notes and transcripts takes longer than you might expect, so don’t delay conducting the interviews. We recommend that you finish your interviews at least a week before the final report deadline.

Step 3: Summarize Findings

Once you have conducted your research and performed the interviews, you are ready to prepare a report describing your findings. Your report should include the following:

  • Names and a description of contributions for each team member.
  • A summary of findings from your independent research in Step 1.
  • Themes that emerged from at least two of your interviews in Step 2 and whether those themes support or contradict the findings from your independent research.
  • Recommendations to the company on the features that the app should include and any considerations when implementing those features.
  • Sources and references used when writing up your report.

The expected length of your report is about 1000 words. Please submit your report double-spaced as a PDF. When preparing your report, don’t forget to address the questions listed in the beginning of this document. Each group of students needs to submit only one report.

In addition to the report, please include the following with your submission:

  • The interview guide.
  • Any notes or recordings from the interviews (at least two) that you used in preparing your report.

Grading Rubric

Total number of points: 100.

  • There is evidence that students conducted at least two interviews. (10 per interview | 20)
  • There is evidence that students considered the recruitment process, logistics, and ethics of the interviews. (5)
  • The report includes names and a description of contributions for each team member. (2)
  • The summary of findings considers issues of online surveillance and privacy. (5)
  • Themes that emerged from at least two of your interviews; and (7.5 per interview | 15)
  • Whether those themes support or contradict the findings from your independent research. (5)
  • Recommendations to the company on the features that the app should include; and (5)
  • Considerations when implementing those features. (5)
  • The report includes sources and references used when writing up your report. (3)
  • The report is well-written, coherent, presented well, and adheres to the word limit. (5)
  • The submission includes the interview guide. (5)
  • The submission includes notes or recordings from the interviews, as applicable, that were used in preparing the report. (5)

*These questions are just examples and other non-trivial considerations would also be awarded points, up to a maximum of 20 for this part of the assignment.

Embedding Ethics in Computer Science

This site provides access to curricular materials created by the Embedded Ethics team at Stanford for undergraduate computer science courses. The materials are designed to expose students to ethical issues that are relevant to the technical content of CS courses, to provide students structured opportunities to engage in ethical reflection through lectures, problem sets, and assignments, and to build the ethical character and habits of young computer scientists.


Modules (Assignments + Lectures)

Banking on Security

Course: Intro to Systems

This assignment is about assembly, reverse engineering, security, privacy and trust. An earlier version of the assignment by Randal Bryant & David O'Hallaron (CMU), [accessible here](http://csapp.cs.cmu.edu/public/labs.html), used the framing story that students were defusing a ‘bomb’.

Bits, Bytes, and Overflows

The assignment is the first in an introduction to systems course. It covers bits, bytes and overflow, continuing students’ introduction to bitwise and arithmetic operations. Following the saturating arithmetic problem, we added a case study analysis about the Ariane-5 rocket launch failure. This provided students with a vivid illustration of the potential consequences of overflows as well as an opportunity to reflect on their responsibilities as engineers. The starter code is the full project provided to students.
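For readers who have not seen saturating arithmetic before, here is a minimal sketch of the idea in Python (the course assignment itself is in C; the 8-bit width and function name here are illustrative assumptions, not the assignment's actual interface):

```python
def sat_add_i8(a: int, b: int) -> int:
    """Add two signed 8-bit values, clamping to the representable range
    instead of wrapping around on overflow."""
    INT8_MIN, INT8_MAX = -128, 127
    total = a + b
    if total > INT8_MAX:      # positive overflow: clamp to the maximum
        return INT8_MAX
    if total < INT8_MIN:      # negative overflow: clamp to the minimum
        return INT8_MIN
    return total

# Wrapping arithmetic would turn 127 + 1 into -128; saturating
# arithmetic pins it at 127, often the safer failure mode in control code.
assert sat_add_i8(127, 1) == 127
assert sat_add_i8(-128, -1) == -128
```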

Climate Change & Calculating Risk

Course: Probability

This assignment uses the tools of probability theory to introduce students to _risk weighted expected utility_ models of decision making. The risk weighted expected utility framework is then used to understand decision-making under uncertainty in the context of climate change. Which of the IPCC’s forecasts should we use? Do we owe it to future people to adopt a conservative risk profile when making decisions on their behalf? The assignment also introduces normative principles for allocating responsibility for addressing climate change. Students apply these formal tools and frameworks to understanding the ethical dimensions of climate change.
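As a rough sketch of the formal machinery involved, the following implements one standard formulation of risk-weighted expected utility (following Buchak): outcomes are ranked from worst to best, and a risk function reweights the probability of doing at least that well. The assignment's exact formulation may differ.

```python
def risk_weighted_eu(outcomes, utility, risk):
    """Risk-weighted expected utility of a gamble.

    outcomes: list of (value, probability) pairs
    utility:  maps a value to a utility
    risk:     monotone r: [0,1] -> [0,1] with r(0)=0 and r(1)=1;
              a convex r models risk aversion
    """
    ranked = sorted(outcomes, key=lambda vp: utility(vp[0]))
    reu = utility(ranked[0][0])  # guaranteed at least the worst utility
    for i in range(1, len(ranked)):
        p_at_least = sum(p for _, p in ranked[i:])  # P(doing this well or better)
        gain = utility(ranked[i][0]) - utility(ranked[i - 1][0])
        reu += risk(p_at_least) * gain
    return reu

# A convex risk function (r(p) = p**2) discounts unlikely gains more
# heavily than plain expected utility does.
gamble = [(0, 0.5), (100, 0.5)]
print(risk_weighted_eu(gamble, lambda x: x, lambda p: p))       # 50.0 (plain EU)
print(risk_weighted_eu(gamble, lambda x: x, lambda p: p ** 2))  # 25.0 (risk-averse)
```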

Fairness, Representation, and Machine Learning

This assignment builds on introductory knowledge of machine learning techniques, namely the naïve Bayes algorithm and logistic regression, to introduce concepts and definitions of algorithmic fairness. Students analyze sources of bias in algorithmic systems, then learn formal definitions of algorithmic fairness such as independence, separation, and fairness through awareness or unawareness. They are also introduced to notions of fairness that complicate the formal paradigms, including intersectionality and subgroup analysis, representation, and justice beyond distribution.
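Two of these formal criteria are compact enough to state in code. A minimal sketch, assuming binary predictions and labels as NumPy arrays and every group represented in both label classes:

```python
import numpy as np

def independence_gap(y_pred, group):
    """Independence (demographic parity): positive-prediction rates
    should match across groups. Returns the largest absolute gap."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def separation_gap(y_true, y_pred, group):
    """Separation (equalized odds): true- and false-positive rates
    should match across groups, conditioning on the true label."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)  # worst gap over FPR and TPR
```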

Lab: Therac-25 Case Study

This lab, the last of the course, asks students to discuss the case of Therac-25, a medical device that delivered lethal radiation due to a race condition.
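For readers unfamiliar with the failure mode, here is a deliberately broken toy sketch of an unsynchronized read-modify-write race in Python. It is not a model of the Therac-25's actual control software, only of the general pattern:

```python
import threading, time

counter = 0  # shared state touched by two threads without a lock

def unsafe_increment(times):
    global counter
    for _ in range(times):
        current = counter        # read
        time.sleep(0)            # yield to the scheduler, inviting an interleaving
        counter = current + 1    # write back a possibly stale value

threads = [threading.Thread(target=unsafe_increment, args=(10_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

# Expected 20000; with the race, updates are lost nondeterministically.
print(counter)
```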

Responsible Disclosure & Partiality

This assignment is about void * and generics. We added a case study about responsible disclosure and partiality. Students read a summary of researcher Dan Kaminsky’s discovery of a DNS vulnerability and answer questions about his decisions regarding disclosure of vulnerabilities as well as their own thoughts on partiality. The starter code is the full project provided to students.

Responsible Documentation

When functions have assumptions, limitations or flaws, it is vital that the documentation makes those clear. Without documentation, developers don’t have the information they need to make good decisions when writing their programs. We added a documentation component to this C string assignment. Students write a manual page for the skan_token function they have implemented, learning responsible documentation practice as they go. The starter code is the full project provided to students.
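The assignment is in C, but the spirit of the exercise translates directly. The sketch below is a hypothetical Python analogue of a tokenizer whose documentation spells out its assumptions and failure modes; all details are illustrative, not the actual skan_token specification:

```python
def scan_token(text: str, delimiters: str, max_len: int) -> str:
    """Return the first token in `text`, where a token is a maximal run of
    characters not in `delimiters`.

    Assumptions and limitations (the point of responsible documentation):
    - Tokens longer than `max_len` are TRUNCATED to `max_len` characters;
      the caller cannot distinguish a truncated token from a short one.
    - Returns the empty string when `text` contains only delimiters.
    - Comparison is by exact character; no Unicode normalization is done.
    """
    start = 0
    while start < len(text) and text[start] in delimiters:
        start += 1                      # skip leading delimiters
    end = start
    while end < len(text) and text[end] not in delimiters:
        end += 1                        # consume the token
    return text[start:min(end, start + max_len)]
```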

Design Discovery and Needfinding

Course: Intro to HCI

The lecture covers topics associated with power relations, the use of language, standpoint and inclusion as they arise in the context of design discovery.

Values in Design

The lecture presents the concept of values in design. It introduces the distinction between intended and collateral values; discusses the importance of assumptions in the value-encoding process; and presents three strategies to address value conflicts that arise as a result of design decisions.

Assignments

Concept Video

This assignment asks students to consider what values are encoded in their product and the decisions they make in the design process; whether there are conflicting values; and how they address existing value conflicts.

Ethics in Advanced Technology

Course: AI Principles

After successfully creating a component of a self-driving car – a (virtual) sensor system that tracks other surrounding cars based on noisy sensor readings – students are prompted to reflect on ethical issues related to the creation, deployment, and policy governance of advanced technologies like self-driving cars. Students encounter classic concerns in the ethics of technology such as surveillance, ethics dumping, and dual-use technologies, and apply these concepts to the case of self-driving cars.

Foundations: Code of Ethics

In Problem 3 of this assignment, “Ethical Issue Spotting,” students explore the ethics of four different real-world scenarios using the ethics guidelines produced by a machine learning research venue, the NeurIPS conference. Students write a potential negative social impacts statement for each scenario, determining if the algorithm violates one of the sixteen guidelines listed in the NeurIPS Ethical Guidelines. In doing so, they practice spotting potential ethical concerns in real-world applications of AI and begin taking on the role of a responsible AI practitioner.

Heuristic Evaluation

This assignment asks students to evaluate their peers’ projects through a series of heuristics and to respond to others’ evaluations of their projects. By incorporating ethics questions into this evaluation, we prompt them to consider ethical aspects as part of a product’s design features, to be evaluated alongside other design aspects.

Medium-Fi Prototype

Modeling Sea Level Rise

This assignment is about Markov Decision Processes (MDPs). In Problem 5, we use the MDP the students have created to model how a coastal city government’s mitigation choices will affect its ability to adapt to rising sea levels over the course of multiple decades. At each timestep, the government may choose to invest in infrastructure or save its surplus budget. But the amount that the sea will rise is uncertain: each choice is a risk. Students model the city’s decision-making under two different time horizons, 40 or 100 years, and with different discount factors for the well-being of future people. In both cases, they see that choosing a longer time horizon or a smaller discount factor will lead to more investment now. Students are then introduced to five ethical positions on the comparative value of current and future generations’ well-being. They evaluate their modeling choices in light of their choice of ethical position.
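To make the mechanism concrete, here is a compressed value-iteration sketch over a toy invest-or-save MDP; the states, rewards, and transition structure are invented for illustration and are not the assignment's actual model:

```python
# Toy MDP: states are infrastructure levels 0..4; "invest" costs budget now
# but raises the level, reducing expected future flood losses.
STATES = range(5)
ACTIONS = ("save", "invest")

def reward(s, a):
    flood_loss = -10.0 * (4 - s)            # weaker infrastructure, larger loss
    cost = -5.0 if a == "invest" else 0.0   # investing costs budget today
    return flood_loss + cost

def next_state(s, a):
    return min(s + 1, 4) if a == "invest" else s

def optimal_policy(gamma, iters=500):
    """Value iteration, then greedy policy extraction."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: max(reward(s, a) + gamma * V[next_state(s, a)] for a in ACTIONS)
             for s in STATES}
    return {s: max(ACTIONS, key=lambda a: reward(s, a) + gamma * V[next_state(s, a)])
            for s in STATES}

# A steep discount (gamma=0.2) favors saving; weighting future generations
# nearly equally (gamma=0.95) favors investing until infrastructure maxes out.
print(optimal_policy(0.2))
print(optimal_policy(0.95))
```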

Needfinding

With this assignment, students will reflect on the group of users their project is intended to serve; their reasons for selecting these users; the notion of an “extreme user”; and why those users’ perspectives are valuable for the design process. It also asks them to reflect on what accommodations they make for their interviewees.

POV and Experience Prototypes

With this assignment, students are prompted to reflect on how proposed solutions to the problems they identify may exclude members of certain communities.

Residency Hours Scheduling

In this assignment, students explore constraint satisfaction problems (CSP) and use backtracking search to solve them. Many uses of constraint satisfaction in real-world scenarios involve assignment of resources to entities, like assigning packages to different trucks to optimize delivery. However, when the agents are people, the issue of fair division arises. In this question, students will consider the ethics of what constraints to remove in a CSP when the CSP is unsatisfiable.
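A minimal sketch of backtracking search on a toy scheduling CSP of this flavor; the shift structure, the two-shift fairness cap, and all names are illustrative assumptions:

```python
def backtrack(assignment, variables, domains, consistent):
    """Generic backtracking search: extend `assignment` one variable at a
    time, pruning any value that violates `consistent`."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment) and backtrack(assignment, variables, domains, consistent):
            return assignment
        del assignment[var]   # undo and try the next value
    return None

# Residents must cover three shifts; a "fairness" constraint caps any
# resident at two shifts. Deciding which constraint to relax when the CSP
# is unsatisfiable is the ethical choice the assignment asks students to weigh.
shifts = ["mon", "tue", "wed"]
domains = {s: ["alice", "bob"] for s in shifts}

def consistent(assignment):
    workers = list(assignment.values())
    return all(workers.count(r) <= 2 for r in set(workers))

print(backtrack({}, shifts, domains, consistent))
```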

Sentiment Classification and Maximum Group Loss

Although each of the problems in the problem set builds on the ones before it, the ethics assignment itself begins with Problem 4: Toxicity Classification and Maximum Group Loss. Toxicity classifiers are designed to assist in moderating online forums by predicting whether an online comment is toxic or not, so that comments predicted to be toxic can be flagged for humans to review. Unfortunately, such models have been observed to be biased: non-toxic comments mentioning demographic identities often get misclassified as toxic (e.g., “I am a [demographic identity]”). These biases arise because toxic comments often mention and attack demographic identities, and as a result, models learn to _spuriously correlate_ toxicity with the mention of these identities. Therefore, some groups are more likely to have comments incorrectly flagged for review: their group-level loss is higher than that of other groups.
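The quantity at the center of the problem, the maximum per-group loss, fits in a few lines. A sketch, assuming per-example losses and group labels are already given:

```python
import numpy as np

def max_group_loss(losses, groups):
    """Worst-case average loss over demographic groups.

    losses: per-example loss values, shape (n,)
    groups: group label per example, shape (n,)

    Minimizing this (group DRO) instead of the plain average prevents a
    model from trading a small overall loss for a large loss on one group.
    """
    return max(losses[groups == g].mean() for g in np.unique(groups))

losses = np.array([0.1, 0.2, 0.9, 1.1])
groups = np.array(["a", "a", "b", "b"])
print(losses.mean())                    # 0.575: the average hides the disparity
print(max_group_loss(losses, groups))   # 1.0: group "b" is much worse off
```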


Artificial Intelligence: Principles and Techniques

Artificial intelligence (AI) has had a huge impact in many areas, including medical diagnosis, speech recognition, robotics, web search, advertising, and scheduling. This course focuses on the foundational concepts that drive these applications. In short, AI is the mathematics of making good decisions given incomplete information (hence the need for probability) and limited computation (hence the need for algorithms). Specific topics include search, constraint satisfaction, game playing, Markov decision processes, graphical models, machine learning, and logic.

Introduction to Computer Organization & Systems

Introduction to the fundamental concepts of computer systems. Explores how computer systems execute programs and manipulate data, working from the C programming language down to the microprocessor.

Introduction to Human-Computer Interaction

Introduces fundamental methods and principles for designing, implementing, and evaluating user interfaces. Topics: user-centered design, rapid prototyping, experimentation, direct manipulation, cognitive principles, visual design, social software, software tools. Learn by doing: work with a team on a quarter-long design project, supported by lectures, readings, and studios.

Probability for Computer Scientists

Introduction to topics in probability including counting and combinatorics, random variables, conditional probability, independence, distributions, expectation, point estimation, and limit theorems. Applications of probability in computer science including machine learning and the use of probability in the analysis of algorithms.

Programming Methodology

Introduction to the engineering of computer applications emphasizing modern software engineering principles: object-oriented design, decomposition, encapsulation, abstraction, and testing. Emphasis is on good programming style and the built-in facilities of respective languages. No prior programming experience required.

Programming Abstractions

Abstraction and its relation to programming. Software engineering principles of data abstraction and modularity. Object-oriented programming, fundamental data structures (such as stacks, queues, sets) and data-directed design. Recursion and recursive data structures (linked lists, trees, graphs). Introduction to time and space complexity analysis. Uses the programming language C++ covering its basic facilities.

Reinforcement Learning

To realize the dreams and impact of AI requires autonomous systems that learn to make good decisions. Reinforcement learning is one powerful paradigm for doing so, and it is relevant to an enormous range of tasks, including robotics, game playing, consumer modeling and healthcare. This class will provide a solid introduction to the field of reinforcement learning and students will learn about the core challenges and approaches, including generalization and exploration. Through a combination of lectures, and written and coding assignments, students will become well versed in key ideas and techniques for RL. Assignments will include the basics of reinforcement learning as well as deep reinforcement learning — an extremely promising new area that combines deep learning techniques with reinforcement learning.

Operating Systems Principles

This class introduces the basic facilities provided by modern operating systems. The course divides into three major sections. The first part of the course discusses concurrency: how to manage multiple tasks that execute at the same time and share resources. Topics in this section include processes and threads, context switching, synchronization, scheduling, and deadlock. The second part of the course addresses the problem of memory management; it will cover topics such as linking, dynamic memory allocation, dynamic address translation, virtual memory, and demand paging. The third major part of the course concerns file systems, including topics such as storage devices, disk management and scheduling, directories, protection, and crash recovery. After these three major topics, the class will conclude with a few smaller topics such as virtual machines.

Design and Analysis of Algorithms

Worst and average case analysis. Recurrences and asymptotics. Efficient algorithms for sorting, searching, and selection. Data structures: binary search trees, heaps, hash tables. Algorithm design techniques: divide-and-conquer, dynamic programming, greedy algorithms, amortized analysis, randomization. Algorithms for fundamental graph problems: minimum-cost spanning tree, connected components, topological sort, and shortest paths. Possible additional topics: network flow, string searching.

Design for Behavior Change

Over the last decade, tech companies have invested in shaping user behavior, sometimes for altruistic reasons like helping people change bad habits into good ones, and sometimes for financial reasons such as increasing engagement. In this project-based, hands-on course, students explore the design of systems, information, and interfaces for human use. We will model the flow of interactions, data, and context, and craft a design that is useful, appropriate, and robust. Students will design and prototype utility apps or games as a response to the challenges presented. We will also examine the ethical consequences of design decisions and explore current issues arising from unintended consequences. Prerequisite: CS147 or equivalent.

COS 350: Ethics of Computing

The course aims to help students think critically about the ethical and social aspects of computing technology and develop the skills needed to make ethical decisions when building and deploying technology. Activities will include readings, technical work, and case studies of contemporary debates.

Course staff

Instructor: Arvind Narayanan (arvindn)

TAs: Aatmik Gupta (aatmikg; Precept 4), Sayash Kapoor (sayashk; Precept 2), Kaiqu Liang (kl2741), Varun Rao (vn1332; Precept 3), Madelyne Xiao (mx3521; Precept 1)

  • Part 1: basics - Introduction, ethical foundations, political economy of the tech industry.
  • Part 2: AI ethics - Fairness and machine learning, AI safety / AGI / alignment, AI and labor, AI and climate.
  • Part 3: various other topics in computing ethics - Social media and platform power, information security, privacy, ethics in design, digital colonialism.
  • Part 4: practical topics - Research ethics, professional ethics, law and policy.
Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. MIT Press, 2024. Available online.

Meeting times

Lecture: Tuesdays and Thursdays, 11:00 am – 12:20 pm, McCosh 28

For precepts and office hours, see the calendar below.

Graded work

  • Assignments / problem sets: 70%
  • Take-home final examination: 30%
  • Class participation: used to bump up a grade near a grade boundary
  • No midterm.

Course policies

Contact policy

Use Ed Discussion for nearly all course-related questions you have. You may use private posts at your discretion. In general, all questions about coursework, course logistics, etc. should be asked publicly (you can post a question anonymously if you prefer).

For assignment-related questions, go to TA office hours. If the TA is not able to resolve it, you may escalate to the instructor. Note that we won't offer debugging help. You are welcome to go to instructor office hours for questions about the lecture or any other discussion about course-related topics.

Regrade policy

We will return graded work together with the grading rubric that was used. If you believe a question has been misgraded, you have 3 days to request a regrade on Gradescope with a few lines of explanation of how the question was misgraded contrary to the rubric. Note that a regrade request will trigger a regrade of the entire assignment, not just the problem in question, and your grade may increase or decrease as a result.

Late policy

We will allow you to take 6 late days in total on assignments over the term. You can use up to 3 late days per assignment. Late hours round up to a full day (i.e., if you submit 4 hours late, it will cost you 1 late day). Once you run out of late days, each additional day late will incur a fixed 10% deduction on the given assignment (e.g., 10 points off a 100-point assignment). We will not accept assignments more than 3 days after the due date.

Extenuating circumstances

Our late and drop policies are meant to cover a variety of “life happens” circumstances, including but not limited to the following: students who may have to travel (including for sports), observe religious holidays, experience routine illnesses, etc. If an extenuating circumstance arises, you must use up your late days and dropped components first before requesting an assignment extension. Extensions due to extenuating circumstances require an email to the instructors from a university official, typically a Dean or a Director of Study (a note from McCosh does not suffice).

Unless otherwise stated, you are welcome to use AI tools such as code generators and text generators, both for the assignments and for the take-home final. Of course, you are responsible for the accuracy of the work that you submit.

You must submit a writeup describing which AI tools you used and how. If you didn’t use any tools, say so. Use this as an opportunity to reflect on what worked, what didn’t, and how to use the tools more effectively.

In our experience, there is currently a significant difference in quality between GPT-4 and free tools. If no one in your group has access, let us know in advance and we will reimburse a subscription to ChatGPT Plus.

Collaboration policy

Additional Resources


How a new program at Stanford is embedding ethics into computer science

Shortly after Kathleen Creel started her position at Stanford as the inaugural Embedded EthiCS fellow some two years ago, a colleague sent her a 1989 newspaper clipping about the launch of Stanford’s first computer ethics course to show her how the university has long been committed to what Creel was tasked with: helping Stanford students understand the moral and ethical dimensions of technology.


Kathleen Creel is training the next generation of entrepreneurs and engineers to identify and work through various ethical and moral problems they will encounter in their careers. (Image credit: Courtesy Kathleen Creel)

While much has changed since the article was first published in the San Jose Mercury News, many of the issues that reporter Tom Philp discussed with renowned Stanford computer scientist Terry Winograd in the article remain relevant.

Describing some of the topics Stanford students were going to deliberate in Winograd’s course, during a period Philp described as “rapidly changing,” he wrote: “Should students freely share copyrighted software? Should they be concerned if their work has military applications? Should they submit a project on deadline if they are concerned that potential bugs could ruin peoples’ work?”

Three decades later, Winograd’s course on computer ethics has evolved, but now it is joined by a host of other efforts to expand ethics curricula at Stanford. Indeed, one of the main themes of the university’s Long Range Vision is embedding ethics across research and education. In 2020, the university launched the Ethics, Society, and Technology (EST) Hub, whose goal is to help ensure that technological advances born at Stanford address the full range of ethical and societal implications.

That same year, the EST Hub, in collaboration with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the McCoy Family Center for Ethics in Society, and the Computer Science Department, created the Embedded EthiCS program, which embeds ethics modules into core computer science courses. Creel is Embedded EthiCS’ first fellow.

Stanford University, situated in the heart of Silicon Valley and intertwined with the influence and impact inspired by technological innovations in the region and beyond, is a vital place for future engineers and technologists to think through their societal responsibilities, Creel said.

“I think teaching ethics specifically at Stanford is very important because many Stanford students go on to be very influential in the world of tech,” said Creel, whose own research explores the moral, political, and epistemic implications of how machine learning is used in the world.

“If we can make any difference in the culture of tech, Stanford is a good place to be doing it,” she said.

Establishing an ethical mindset

Creel is both a computer scientist and a philosopher. After double-majoring in both fields at Williams College in Massachusetts, she worked as a software engineer at MIT Lincoln Laboratory on a large-scale satellite project. There, she found herself asking profound, philosophical questions about the dependence on technology in high-stakes situations, particularly when it comes to how AI-based systems have evolved to inform people’s decision-making. How do people know they can trust these tools, she wondered, and what information do they need in order to believe that they can be a reliable addition to, or substitution for, human judgment?

Creel decided to confront these questions head-on in graduate school, and in 2020, she earned her PhD in the history and philosophy of science at the University of Pittsburgh.

During her time at Stanford, Creel has collaborated with faculty and lecturers across Stanford’s Computer Science department to identify various opportunities for students to think through the social consequences of technology – even if it’s just one or five minutes at a time.

Rather than have ethics be its own standalone seminar or dedicated class topic that is often presented at either the beginning or end of a course, the Embedded EthiCS program aims to intersperse ethics throughout the quarter by integrating it into core course assignments, class discussions, and lectures.

“The objective is to weave ethics into the curriculum organically so that it feels like a natural part of their practice,” said Creel. Creel has worked with professors on nine computer science courses, including CS106A: Programming Methodology; CS106B: Programming Abstractions; CS107: Computer Organization and Systems; CS109: Introduction to Probability for Computer Scientists; CS221: Artificial Intelligence: Principles and Techniques; CS161: Design and Analysis of Algorithms; and CS47B: Design for Behavior Change.

During her fellowship, Creel gave engaging lectures about specific ethical issues and worked with professors to develop new coursework that demonstrates how the choices students will make as engineers carry broader implications for society.

One of the instructors Creel worked with was Nick Troccoli, a lecturer in the Computer Science Department. Troccoli teaches CS 107: Computer Organization & Systems, the third course in Stanford’s introductory programming sequence, which focuses mostly on how computer systems execute programs. Although some initially wondered how ethics would fit into such a technical curriculum, Creel and Troccoli, along with course assistant Brynne Hurst, found clear hooks for ethics discussions in assignments, lectures, and labs throughout the course.

For example, they refreshed a classic assignment about how to figure out a program’s behavior without seeing its code (“reverse engineering”). Students were asked to imagine they were security researchers hired by a bank to discover how a data breach had occurred, and how the hacked information could be combined with other publicly-available information to discover bank customers’ secrets.

Creel talked about how anonymized datasets can be reverse engineered to reveal identifying information and why that is a problem. She introduced the students to different models of privacy, including differential privacy, a technique that can make privacy in a database more robust by minimizing identifiable information.
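To give a flavor of the technique, here is a minimal sketch of the Laplace mechanism, the textbook way to answer a counting query with differential privacy; the epsilon value and the query below are illustrative choices, not details from the course module:

```python
import numpy as np

def private_count(values, predicate, epsilon):
    """Answer "how many records satisfy predicate?" with epsilon-DP.

    A counting query changes by at most 1 when one person's record is
    added or removed (sensitivity 1), so Laplace noise with scale
    1/epsilon masks any individual's presence in the dataset.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 29, 41, 52, 38, 45]
# Smaller epsilon -> more noise -> stronger privacy, less accuracy.
print(private_count(ages, lambda a: a > 40, epsilon=0.5))
```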

Students were then tasked to provide recommendations to further anonymize or obfuscate data to avoid breaches.

“Katie helped students understand what potential scenarios may arise as a result of programming and how ethics can be a tool to allow you to better understand those kinds of issues,” Troccoli said.

Another instructor Creel worked with was Assistant Professor Aviad Rubinstein, who teaches CS161: Design and Analysis of Algorithms.

Creel and Rubinstein, joined by research assistant Ananya Karthik and course assistant Golrokh Emami, came up with an assignment where students were asked to create an algorithm that would help a popular distributor decide the locations of their warehouses and determine which customers received one-day versus two-day delivery.

Students worked through the many variables that determine warehouse location, such as optimizing cost against existing customer demand and driver route efficiency. If the algorithm prioritized only these features, closer examination would reveal that historically redlined Black American neighborhoods would be excluded from receiving one-day delivery.

Students were then asked to develop another algorithm that would address the delivery issue while also optimizing even coverage and cost.

The goal of the exercise was to show students that as engineers, they are also decision-makers whose choices carry real-world consequences that can affect equity and inclusion in communities across the country. Students were asked to also share what those concepts mean to them.

“The hope is to show them this is a problem they might genuinely face and that they might use algorithms to solve, and that ethics will guide them in making this choice,” Creel said. “Using the tools that we’ve taught them in the ethics curriculum, they will now be able to understand that choosing an algorithm is indeed a moral choice that they are making, not only a technical one.”

Developing moral courage

Some students have shared with Creel how they themselves have been subject to algorithmic biases.

For example, when the pandemic shuttered high schools across the country, some school districts turned to online proctoring services to help them deliver exams remotely. These services automate the supervision of students and their surroundings while they take a test.

However, these AI-driven services have come under criticism, particularly around issues concerning privacy and racial bias. For example, the scanning software sometimes fails to detect students with darker skin, Creel said.

Sometimes, there are just glitches in the computer system and the AI will flag a student even though no offense has taken place. But because of the proprietary nature of the technology, how the algorithm came to its decision is not always entirely apparent.

“Students really understand how if these services were more transparent, they could have pointed to something that could prove why an automated flag that may have gone up was wrong,” said Creel.

Overall, Creel said, students have been eager to develop the skillset to help them discuss and deliberate on the ethical dilemmas they could encounter in their professional careers.

“I think they are very aware that they, as young engineers, could be in a situation where someone above them asks them to do something that they don’t think is right,” she added. “They want tools to figure out what is right, and I think they also want help building the moral courage to figure out how to say no and to interact in an environment where they may not have a lot of power. For many of them, it feels very important and existential.”

Creel is now transitioning from her role at Stanford to Northeastern University where she will hold a joint appointment as an assistant professor of philosophy and computer science.


Embedding Ethics in Computer Science Curriculum


Paul Karoff

SEAS Communications

Harvard initiative seen as a national model

Barbara Grosz has a fantasy that every time a computer scientist logs on to write an algorithm or build a system, a message will flash across the screen that asks, “Have you thought about the ethical implications of what you’re doing?”

Until that day arrives, Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is working to instill in the next generation of computer scientists a mindset that considers the societal impact of their work, and the ethical reasoning and communications skills to do so.

“Ethics permeates the design of almost every computer system or algorithm that’s going out in the world,” Grosz said. “We want to educate our students to think not only about what systems they could build, but whether they should build those systems and how they should design those systems.”

At a time when computer science departments around the country are grappling with how to turn out graduates who understand ethics as well as algorithms, Harvard is taking a novel approach.

In 2015, Grosz designed a new course called “Intelligent Systems: Design and Ethical Challenges.” An expert in artificial intelligence and a pioneer in natural language processing, Grosz turned to colleagues from Harvard’s philosophy department to co-teach the course. They interspersed into the course’s technical content a series of real-life ethical conundrums and the relevant philosophical theories necessary to evaluate them. This forced students to confront questions that, unlike most computer science problems, have no obvious correct answer.

Students responded. The course quickly attracted a following and by the second year 140 people were competing for 30 spots. There was a demand for more such courses, not only on the part of students, but by Grosz’s computer science faculty colleagues as well.

“The faculty thought this was interesting and important, but they didn’t have expertise in ethics to teach it themselves,” she said.

Barbara Grosz (from left), Jeffrey Behrends, and Alison Simmons hope Harvard’s approach to turning out graduates who understand ethics as well as algorithms becomes a national model.

In response, Grosz and collaborator Alison Simmons, the Samuel H. Wolcott Professor of Philosophy, developed a model that draws on the expertise of the philosophy department and integrates it into a growing list of more than a dozen computer science courses, from introductory programming to graduate-level theory.

Under the initiative, dubbed Embedded EthiCS, philosophy graduate students are paired with computer science faculty members. Together, they review the course material and decide on an ethically rich topic that will naturally arise from the content. A graduate student identifies readings and develops a case study, activities, and assignments that will reinforce the material. The computer science and philosophy instructors teach side by side when the Embedded EthiCS material is brought to the classroom.

Grosz and her philosophy colleagues are at the center of a movement that they hope will spread to computer science programs around the country. Harvard’s “distributed pedagogy” approach is different from many university programs that treat ethics by adding a stand-alone course that is, more often than not, just an elective for computer science majors.

“Standalone courses can be great, but they can send the message that ethics is something that you think about after you’ve done your ‘real’ computer science work,” Simmons said. “We want to send the message that ethical reasoning is part of what you do as a computer scientist.”

Embedding ethics across the curriculum helps computer science students see how ethical issues can arise from many contexts, issues ranging from the way social networks facilitate the spread of false information to censorship to machine-learning techniques that empower statistical inferences in employment and in the criminal justice system.

Courses in artificial intelligence and machine learning are obvious areas for ethical discussions, but Embedded EthiCS also has built modules for less-obvious pairings, such as applied algebra.

“We really want to get students habituated to thinking: How might an ethical issue arise in this context or that context?” Simmons said.


Curriculum at a glance

A sampling of classes from the Embedded EthiCS pilot program and the issues they address

  • Great Ideas in Computer Science: The ethics of electronic privacy
  • Introduction to Computer Science II: Morally responsible software engineering
  • Networks: Facebook, fake news, and ethics of censorship
  • Programming Languages: Verifiably ethical software systems
  • Design of Useful and Usable Interactive Systems: Inclusive design and equality of opportunity
  • Introduction to AI: Machines and moral decision making
  • Autonomous Robot Systems: Robots and work

David Parkes, George F. Colony Professor of Computer Science, teaches a wide-ranging undergraduate class on topics in algorithmic economics. “Without this initiative, I would have struggled to craft the right ethical questions related to rules for matching markets, or choosing objectives for recommender systems,” he said. “It has been an eye-opening experience to get students to think carefully about ethical issues.”

Grosz acknowledged that it can be a challenge for computer science faculty and their students to wrap their heads around often opaque ethical quandaries.

“Computer scientists are used to there being ways to prove problem set answers correct or algorithms efficient,” she said. “To wind up in a situation where different values lead to there being trade-offs and ways to support different ‘right conclusions’ is a challenging mind shift. But getting these normative issues into the computer system designer’s mind is crucial for society right now.”

Jeffrey Behrends, currently a fellow-in-residence at Harvard’s Edmond J. Safra Center for Ethics, has co-taught the design and ethics course with Grosz. Behrends said the experience revealed greater harmony between the two fields than one might expect.

“Once students who are unfamiliar with philosophy are introduced to it, they realize that it’s not some arcane enterprise that’s wholly independent from other ways of thinking about the world,” he said. “A lot of students who are attracted to computer science are also attracted to some of the methodologies of philosophy, because we emphasize rigorous thinking. We emphasize a methodology for solving problems that doesn’t look too dissimilar from some of the methodologies in solving problems in computer science.”

The Embedded EthiCS model has attracted interest from universities — and companies — around the country. Recently, experts from more than 20 institutions gathered at Harvard for a workshop on the challenges and best practices for integrating ethics into computer science curricula. Mary Gray, a senior researcher at Microsoft Research (and a fellow at Harvard’s Berkman Klein Center for Internet and Society), who helped convene the gathering, said that in addition to impeccable technical chops, employers increasingly are looking for people who understand the need to create technology that is accessible and socially responsible.

“Our challenge in industry is to help researchers and practitioners not see ethics as a box that has to be checked at the end, but rather to think about these things from the very beginning of a project,” Gray said.

Those concerns recently inspired the Association for Computing Machinery (ACM), the world’s largest scientific and educational computing society, to update its code of ethics for the first time since 1992.

In hope of spreading the Embedded EthiCS concept widely across the computer science landscape, Grosz and colleagues have authored a paper to be published in the journal Communications of the ACM and launched a website to serve as an open-source repository of their most successful course modules.

They envision a culture shift that leads to a new generation of ethically minded computer science practitioners.

“In our dream world, success will lead to better-informed policymakers and new corporate models of organization that build ethics into all stages of design and corporate leadership,” Behrends said.


The experiment has also led to interesting conversations beyond the realm of computer science.

“We’ve been doing this in the context of technology, but embedding ethics in this way is important for every scientific discipline that is putting things out in the world,” Grosz said. “To do that, we will need to grow a generation of philosophers who will think about ways in which they can take philosophical ethics and normative thinking, and bring it to all of science and technology.”

Carefully designed course modules

At the heart of the Embedded EthiCS program are carefully designed, course-specific modules, collaboratively developed by faculty along with computer science and philosophy graduate student teaching fellows.

A module that Kate Vredenburgh, a philosophy Ph.D. student, created for a course taught by Professor Finale Doshi-Velez asks students to grapple with questions of how machine-learning models can be discriminatory, and how that discrimination can be reduced. An introductory lecture sets out a philosophical framework of what discrimination is, including the concepts of disparate treatment and impact. Students learn how eliminating discrimination in machine learning requires more than simply reducing bias in the technical sense. Even setting a socially good task may not be enough to reduce discrimination, since machine learning relies on predictively useful correlations and those correlations sometimes result in increased inequality between groups.

The module illuminates the ramifications and potential limitations of using a disparate impact definition to identify discrimination. It also introduces technical computer science work on discrimination — statistical fairness criteria. An in-class exercise focuses on a case in which an algorithm that predicts the success of job applicants to sales positions at a major retailer results in fewer African-Americans being recommended for positions than white applicants.

An out-of-class assignment asks students to draw on this grounding to address a concrete ethical problem faced by working computer scientists (that is, software engineers working for the Department of Labor). The assignment gives students an opportunity to apply the material to a real-world problem of the sort they might face in their careers, and asks them to articulate and defend their approach to solving the problem.
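
To make the phrase “statistical fairness criteria” concrete, here is a minimal sketch of one such criterion, the disparate impact ratio (an illustration only, not the module’s actual materials; the function names are invented, and the 0.8 cutoff follows the “four-fifths rule” from U.S. employment guidelines):

```python
def selection_rate(decisions, groups, label):
    """Fraction of applicants in one group who received a positive decision.

    decisions: list of 0/1 outcomes (1 = recommended for the position)
    groups:    list of group labels, aligned with `decisions`
    """
    outcomes = [d for d, g in zip(decisions, groups) if g == label]
    return sum(outcomes) / len(outcomes)


def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of selection rates between a protected and a reference group.

    Values well below 1.0 (e.g., under 0.8, the "four-fifths rule") are a
    common red flag for disparate impact.
    """
    return (selection_rate(decisions, groups, protected)
            / selection_rate(decisions, groups, reference))


# Example: 1 of 4 applicants from group "A" is recommended vs. 3 of 4 from "B".
decisions = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact_ratio(decisions, groups, "A", "B"))  # ~0.33
```

Simple checks like this are the technical counterpart to the disparate impact concept in the module’s philosophical framework; the module’s point is that satisfying any one such criterion does not by itself settle whether a system discriminates.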


A new program at Stanford is embedding ethics into computer science

Shortly after Kathleen Creel started her position at Stanford as the inaugural Embedded EthiCS fellow some two years ago, a colleague sent her a 1989 newspaper clipping about the launch of Stanford’s first computer ethics course to show her how the university has long been committed to what Creel was tasked with: helping Stanford students understand the moral and ethical dimensions of technology.

While much has changed since the article was first published in the San Jose Mercury News, many of the issues that reporter Tom Philp discussed with renowned Stanford computer scientist Terry Winograd in the article remain relevant.

Describing some of the topics Stanford students were going to deliberate in Winograd’s course, during a period Philp described as “rapidly changing,” he wrote: “Should students freely share copyrighted software? Should they be concerned if their work has military applications? Should they submit a project on deadline if they are concerned that potential bugs could ruin people’s work?”

Three decades later, Winograd’s course on computer ethics has evolved, but now it is joined by a host of other efforts to expand ethics curricula at Stanford. Indeed, one of the main themes of the university’s Long Range Vision is embedding ethics across research and education. In 2020, the university launched the Ethics, Society, and Technology (EST) Hub, whose goal is to help ensure that technological advances born at Stanford address the full range of ethical and societal implications.

That same year, the EST Hub, in collaboration with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the McCoy Family Center for Ethics in Society, and the Computer Science Department, created the Embedded EthiCS program, which embeds ethics modules into core computer science courses. Creel is Embedded EthiCS’ first fellow.

Stanford University, situated in the heart of Silicon Valley and intertwined with the influence and impact of technological innovations in the region and beyond, is a vital place for future engineers and technologists to think through their societal responsibilities, Creel said.

“I think teaching ethics specifically at Stanford is very important because many Stanford students go on to be very influential in the world of tech,” said Creel, whose own research explores the moral, political, and epistemic implications of how machine learning is used in the world.

“If we can make any difference in the culture of tech, Stanford is a good place to be doing it,” she said.

Establishing an ethical mindset

Creel is both a computer scientist and a philosopher. After double-majoring in both fields at Williams College in Massachusetts, she worked as a software engineer at MIT Lincoln Laboratory on a large-scale satellite project. There, she found herself asking profound philosophical questions about the dependence on technology in high-stakes situations, particularly as AI-based systems have evolved to inform people’s decision-making. She wondered: how do people know they can trust these tools, and what information do they need in order to believe that such a system can be a reliable addition to, or substitute for, human judgment?

Creel decided to confront these questions head-on in graduate school, and in 2020, she earned her PhD in history and philosophy of science at the University of Pittsburgh.

During her time at Stanford, Creel has collaborated with faculty and lecturers across Stanford’s Computer Science Department to identify opportunities for students to think through the social consequences of technology, even if just for a few minutes at a time.

Rather than have ethics be its own standalone seminar or dedicated class topic that is often presented at either the beginning or end of a course, the Embedded EthiCS program aims to intersperse ethics throughout the quarter by integrating it into core course assignments, class discussions, and lectures.

“The objective is to weave ethics into the curriculum organically so that it feels like a natural part of their practice,” said Creel. Creel has worked with professors on nine computer science courses, including CS106A: Programming Methodology; CS106B: Programming Abstractions; CS107: Computer Organization and Systems; CS109: Introduction to Probability for Computer Scientists; CS221: Artificial Intelligence: Principles and Techniques; CS161: Design and Analysis of Algorithms; and CS47B: Design for Behavior Change.

During her fellowship, Creel gave engaging lectures about specific ethical issues and worked with professors to develop new coursework that demonstrates how the choices students will make as engineers carry broader implications for society.

One of the instructors Creel worked with was Nick Troccoli, a lecturer in the Computer Science Department. Troccoli teaches CS 107: Computer Organization & Systems, the third course in Stanford’s introductory programming sequence, which focuses mostly on how computer systems execute programs. Although some initially wondered how ethics would fit into such a technical curriculum, Creel and Troccoli, along with course assistant Brynne Hurst, found clear hooks for ethics discussions in assignments, lectures, and labs throughout the course.

For example, they refreshed a classic assignment about how to figure out a program’s behavior without seeing its code (“reverse engineering”). Students were asked to imagine they were security researchers hired by a bank to discover how a data breach had occurred, and how the hacked information could be combined with other publicly available information to discover bank customers’ secrets.

Creel talked about how anonymized datasets can be reverse engineered to reveal identifying information and why that is a problem. She introduced the students to different models of privacy, including differential privacy, a technique that strengthens privacy in a database by adding calibrated statistical noise, so that query results reveal little about any individual record.

Students were then tasked with providing recommendations to further anonymize or obfuscate the data to avoid breaches.
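
To give a flavor of how differential privacy works, here is a minimal sketch of its classic building block, the Laplace mechanism (a hypothetical illustration, not the course’s assignment code):

```python
import numpy as np

def private_count(records, predicate, epsilon):
    """Differentially private counting query via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise drawn from Laplace(0, 1/epsilon)
    suffices. Smaller epsilon means stronger privacy but noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: estimate how many customers are over 65 without exposing anyone.
customers = [{"age": 70}, {"age": 34}, {"age": 68}, {"age": 51}]
print(private_count(customers, lambda c: c["age"] > 65, epsilon=0.5))
```

Because every released number carries calibrated noise, no single record can be confidently inferred from the output, which is exactly the property the reverse-engineering exercise shows naive anonymization lacks.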

“Katie helped students understand what potential scenarios may arise as a result of programming and how ethics can be a tool to allow you to better understand those kinds of issues,” Troccoli said.

Another instructor Creel worked with was Assistant Professor Aviad Rubinstein, who teaches CS161: Design and Analysis of Algorithms.

Creel and Rubinstein, joined by research assistant Ananya Karthik and course assistant Golrokh Emami, came up with an assignment in which students were asked to create an algorithm that would help a popular distributor decide the locations of its warehouses and determine which customers received one- versus two-day delivery.

Students worked through the many variables that determine warehouse location, such as balancing cost against existing customer demand and driver route efficiency. If the algorithm prioritized only these features, closer examination would reveal that historically redlined Black American neighborhoods would be excluded from receiving one-day delivery.

Students were then asked to develop another algorithm that would address the delivery issue while also optimizing even coverage and cost.
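
A toy version of such an optimization might look like the sketch below (hypothetical code, not the course assignment; the candidate sites, customer coordinates, and `radius` cutoff are invented). The point is that whatever the objective function rewards is exactly what a seemingly neutral algorithm delivers:

```python
import itertools
import math

def coverage(sites, customers, radius):
    """Fraction of customers within the one-day-delivery radius of a site."""
    served = sum(1 for c in customers
                 if any(math.dist(c, s) <= radius for s in sites))
    return served / len(customers)

def choose_warehouses(candidates, customers, k, radius):
    """Brute force: pick the k sites that maximize raw coverage.

    Re-weighting the objective (e.g., counting historically underserved
    neighborhoods more heavily) changes which customers get fast delivery.
    """
    return max(itertools.combinations(candidates, k),
               key=lambda sites: coverage(sites, customers, radius))

# Example with invented (x, y) coordinates.
candidates = [(0, 0), (5, 5), (10, 0)]
customers = [(1, 1), (4, 4), (9, 1), (6, 5)]
print(choose_warehouses(candidates, customers, k=2, radius=3.0))
```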

The goal of the exercise was to show students that as engineers, they are also decision-makers whose choices carry real-world consequences that can affect equity and inclusion in communities across the country. Students were also asked to share what those concepts mean to them.

“The hope is to show them this is a problem they might genuinely face and that they might use algorithms to solve, and that ethics will guide them in making this choice,” Creel said. “Using the tools that we’ve taught them in the ethics curriculum, they will now be able to understand that choosing an algorithm is indeed a moral choice that they are making, not only a technical one.”

Developing moral courage

Some students have shared with Creel how they themselves have been subject to algorithmic biases.

For example, when the pandemic shuttered high schools across the country, some school districts turned to online proctoring services to help them deliver exams remotely. These services automate the supervision of students and their surroundings while they take a test.

However, these AI-driven services have come under criticism, particularly around issues concerning privacy and racial bias. For example, the scanning software sometimes fails to detect students with darker skin, Creel said.

Sometimes there are simply glitches in the computer system, and the AI will flag a student even though no offense has taken place. But because of the proprietary nature of the technology, how the algorithm came to its decision is not always apparent.

“Students really understand how if these services were more transparent, they could have pointed to something that could prove why an automated flag that may have gone up was wrong,” said Creel.

Overall, Creel said, students have been eager to develop the skillset to help them discuss and deliberate on the ethical dilemmas they could encounter in their professional careers.

“I think they are very aware that they, as young engineers, could be in a situation where someone above them asks them to do something that they don’t think is right,” she added. “They want tools to figure out what is right, and I think they also want help building the moral courage to figure out how to say no and to interact in an environment where they may not have a lot of power. For many of them, it feels very important and existential.”

Creel is now transitioning from her role at Stanford to Northeastern University where she will hold a joint appointment as an assistant professor of philosophy and computer science.


Computer Ethics

Administrative.

Winter 2023
Time: Wednesdays and Fridays, 3:30 to 4:20 p.m.
Location: NAN 181
Instructor: Jared Moore
Email: [email protected]
Office hours: by appointment (please do ask to chat!)

Please do not hesitate to write to the instructor about any accommodations or questions related to readings or course material.

Description

Be it social-media platforms, robots, or big data systems, the code Allen School students write—the decisions they make—influences the world in which it operates. This is a survey course about those influences and ways to think about them. We recognize that “the devil is in the implementation details.”

The course is divided into two parts: In the first part, we survey historical and local issues in tech, particularly those concerning data. We then engage with critical perspectives from disciplines such as machine ethics and science and technology studies as a framework for students to articulate their own beliefs concerning these systems. In the second part, we apply these perspectives to urgent issues in applied technologies; see the schedule for the topics we plan to consider this quarter.

Throughout, students hone their critical reading and discussion skills, preparing them for a life-long practice of grappling with the—often unanticipated—consequences of innovation.

We approach topics such as: AI ethics, social good, utopianism, governance, inclusion, facial recognition, classification, privacy, automation, platforms, speculative design, identity, fairness, power and control, activism, and subversive technologies.

We aim to have you feel this course experience is an essential part of your Allen School education despite being (or because it is!) very different from most CSE courses.

By the end of this course students will:

  • Gain awareness of issues arising from the use of computers in contemporary sociotechnical systems
  • Articulate technological harms to individuals and groups in the language of critical perspectives
  • Appreciate how historical, cultural, economic, and political factors contribute to how technologies are built and designed
  • View themselves as both subjects and creators of sociotechnical systems
  • Understand and articulate complex arguments pertaining to values in technology
  • Recognize the diversity of stakeholders and views when considering a technology
  • Amplify voices and values not traditionally considered in technological development (e.g., in design processes)
  • Re-imagine and speculate alternative histories and futures for using and coexisting with computers

(may change up to a week in advance)

Introduction: A Brief History

Wed, Jan 04 Groundwork

Read "Our Numbered Days: The Evolution of the Area Code" [url] by Megan Garber, 2014 (5 pages).

Are you a “425” or a “206”? Through an exploration of phone area codes, Megan Garber shows us how cultures can be built up around artifacts spawned by engineering decisions. The piece traces the history nicely, while also discussing the hype cycle and pushback from the community, as well as how such choices continue to resonate long after they are made.

Read "Why the Luddites Matter" [url] by Z.M.L., 2018 (5 pages).

“That which makes the Luddites so strange, so radical, and so dangerous is not that they wanted everyone to go back to living in caves (they didn’t want that), but that they thought that those who would be impacted by a new technology deserved a voice in how it was being deployed.”

Read "Organizational Frictions and Increasing Returns to Automation: Lessons from AT&T in the Twentieth Century" [pdf] by James Feigenbaum et al., 2021 (56 pages).

Check out this interesting recent paper on why Bell Labs may have moved to digit dialing!

Before Class:

  • Preview the course syllabus, particularly the assignments.

Feeling motivated? Here are a few relevant responses to today’s themes:

Check out "Beyond Thoughtfulness" [pdf] by Anonymous, 2022.

Check out "The Anti-Digit Dialing League" [pdf] [url] by John Wilcock (1 page).

Check out "About CPSR" [url] by Douglas Schuler, 2008.

Who's behind the keyboard?

Fri, Jan 06 Groundwork

Read chapter one from "Artificial Unintelligence: How Computers Misunderstand the World" [pdf] [url] by Meredith Broussard, 2018.

In her introductory chapter, computer scientist and data journalist Broussard lays out both her love of and skepticism for computing technology. You might find a bit of yourself in her.

Read "Be Careful What You Code For" [url] by danah boyd, 2016 (2 pages).

danah boyd, a researcher at Microsoft and at Data & Society, highlights just how few guardrails there are for developers, from the consequences of algorithmic bias to the implications of crazy metaphors. She offers a call to action, solutions, and ample evidence for considering the implications of code. We highly recommend chasing down some of the links provided.

Read "How to read a book" [pdf] by Paul N. Edwards, 2000 (8 pages).

Here, an adroit scholar walks through some tips and tricks for reading more effectively. He hits the major points and includes some bonus tips, like where to best organize your reading notes. This is an invaluable resource as our course’s weekly reading load begins to increase. Skim now, but revisit throughout the course.

Read "Discussion Leading" [pdf] [url] by John Rickford et al., 2007.

This resource from the Stanford Teaching Commons offers an in-depth analysis of how to have better discussions. Their recommendations, from setting an agenda to asking questions and increasing discussant engagement, are all part of how to create a better climate for discourse. For leading discussions in this class and beyond, it’s worth a read.

  • What do you want to get out of our class discussions?
  • Do you feel able to change outcomes of how tech affects society?
  • Preview the course project, part zero.

Check out "Tackling Climate Change with Machine Learning" [pdf] [url] by David Rolnick et al., 2019 (97 pages).

Also check out their website.

Check out "Green AI" [pdf] [url] by Roy Schwartz et al., 2019.

Check out "Open letter to Jeff Bezos and the Amazon Board of Directors" [url] by Amazon Employees for Climate Justice, 2019.

Deconstructing a Data System

Wed, Jan 11 Data

Read "At Amazon's New Checkout-Free Store, Shopping Feels Like Shoplifting" [url] by Jake Bullinger, 2018 (2 pages).

Jake Bullinger describes the experience had by some of the first shoppers of the Checkout-Free Amazon Go store and considers its economic implications. As you’re reading the article, look for possible tensions, critiques, or questions which it raises. Also think about the ways in which data is used by this store.

Read "In Amazon Go, no one thinks I'm stealing" [url] by Ashlee Clark Thompson (2 pages).

Ashlee Clark Thompson reflects on her experience of shopping in the Amazon Go store: “Amazon Go isn’t going to fix implicit bias or remove the years of conditioning under which I’ve operated. But in the Amazon Go store, everyone is just a shopper, an opportunity for the retail giant to test technology, learn about our habits and make some money.”

Read chapter four from "Artificial Unintelligence: How Computers Misunderstand the World" [pdf] [url] by Meredith Broussard, 2018.

While focusing specifically on data journalism, Broussard ably explains the ends to which data can be put in explaining the world. In particular, read it to understand the different means people use to “challenge false claims about technology.”

Read "Inside Amazon Go, a Store of the Future" [url] by Nick Wingfield, 2018 (1 page).

A high-level description of the store; it touches on themes like convenience, how this tech could affect jobs, and the vagueness of plans surrounding the system at the time. Look at the photos if you don’t visit the Amazon Go store.

  • Today’s a light reading day, but day four isn’t, so we recommend you get started on that.
  • How do the readings differ in their views of the Amazon Go store? Did the Amazonians consider each of these perspectives? Should they have? How might you classify them?
  • Have you visited an Amazon Go store? If so, how did you feel? If not, how do you think you would feel? (In non-stay-at-home times, I’d encourage you to visit one, time-permitting.)

Check out "The Loneliest Grocery" [url] by Joshua McNichols.

Check out ""Good" isn't good enough" [pdf] by Ben Green, 2019 (4 pages).

This paper, by a postdoc at the AI Now Institute and formerly of MIT, summarizes many of the themes we touch on throughout the quarter. It synthesizes many of the arguments we cover and applies them as a call for action to data scientists in particular. The author’s arguments are equally relevant to computer scientists.

Conceptions of Data

Fri, Jan 13 Data

Read "Chapter 1: Conceptualising Data" [pdf] by Rob Kitchin, 2014 (25 pages).

The introduction to the book describes ways of thinking about what data is (ontologies) and goes on to discuss ethical and political considerations of data. It postulates the framework of “data assemblages” and examines how our ways of thinking about data shape how data themselves are conceived and produced.

Read "On Being a Data Skeptic" [pdf] by Cathy O'Neil, 2014 (26 pages).

This rapid-fire, well-articulated article is about the advantages and perils of data science, with ample advice and examples to advocate for why those who use data ought to be this special kind of “skeptical”.

Use the following questions as a resource to guide your reading. Then respond to a few of them, or write your own response on theme.

  • What is the difference between data, information, and knowledge? How are they related?
  • How can wider or economic concerns “frame” data? That is, in what sense do data act and how? (Answers might include: as an economic resource, as a form of power or knowledge, etc.) Explain why.
  • How have politics or economics influenced how some data have been defined or created?
  • The creation of data
  • The access to data
  • The standards of data, such as metrics or units
  • The means of data collection, such as sensors or know-how
  • From “On being a data skeptic” explain “measuring the distortion.”
  • What is the relationship between models and proxies? Why are proxies used? Give an example.
  • Why might O’Neil have singled out “nerds” and “business people” separately? What do the differences in her comments indicate about how those groups view problems differently? Do you agree?
  • Submit a clarifying question which you’d like to discuss in class.
  • Finish the course project, part zero by tonight.

Check out "The era of blind faith in big data must end" [url] by Cathy O'Neil.

Check out "How ImageNet Roulette, a Viral Art Project That Exposed Facial Recognition's Biases, Is Changing Minds About AI" [url] by Naomi Rea, 2019.

Check out "An Investigation of the Facts Behind Columbia’s U.S. News Ranking" [url] by Michael Thaddeus, 2022.

Find the creep of proxies in university rankings in Thaddeus’s own words: “Almost any numerical standard, no matter how closely related to academic merit, becomes a malignant force as soon as universities know that it is the standard. A proxy for merit, rather than merit itself, becomes the goal.”

"Data is the new oil": data politics

Wed, Jan 18 Data

Skim "The world's most valuable resource is no longer oil, but data" [pdf] [url] , 2017 (1 page).

A short article that introduces the metaphor that “data is the new oil” which reflects the widely held view that data is now “the world’s most valuable resource”.

Read "Do artifacts have politics?" [pdf] [url] by Langdon Winner, 1980 (15 pages).

In this widely cited essay, Langdon Winner makes the case that technologies embody social relations. He argues that we should develop a language for considering technology that focuses not only on it as a tool, or on its use, but also on the meaning of its design and the social arrangements it facilitates. Winner asks: “what, after all, does modern technology make possible or necessary in political life?” Consider this while you read the piece.

Read "Anatomy of an AI System" [url] by Kate Crawford et al., 2018 (14 pages).

Kate Crawford and Vladan Joler consider the production of the Amazon Echo Dot with an astounding breadth, mapping the human labor, data, and material resources required to build it. Kate Crawford is a co-founder of the AI Now Institute at NYU, which is breaking ground on many questions relevant to the social implications of AI.

Read "How ICE Picks Its Targets in the Surveillance Age" [pdf] [url] by McKenzie Funk, 2019.

Consider reading this harrowing, and physically proximal, telling of the real-life implications of some of these data systems.

  • Pick one aspect of “Anatomy of an AI System” and discuss it with someone outside of class. In a couple of sentences, what did you talk about?
  • Did any aspect of the Amazon Echo AI system surprise or interest you? Which aspect?
  • What conclusions can we draw from the tomato picking example in “Do Artifacts Have Politics?”

Check out "Google Will Not Renew Pentagon Contract That Upset Employees" [url] by Daisuke Wakabayashi et al., 2018.

Check out "The Societal Implications of Nanotechnology" [url] by Langdon Winner, 2003 (5 pages).

Winner’s testimony before Congress

Check out "An Open Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System" [pdf] [url] by Chelsea Barabas et al., 2017 (8 pages).

Operationalization and Classification

Fri, Jan 20 Data

Read introduction (pg. 1 - 16; 31 - 32; 17 pages total) from "Sorting things out: classification and its consequences" [pdf] by Geoffrey C. Bowker et al., 1999.

Sorting Things Out is a classic text on classification and standardization. The introduction discusses the importance of attending to the ubiquity of classification and to the processes that generate, standardize, and enforce classifications. It also looks at how classification has caused harm and how the processes which create standards can at times yield an inferior solution. The authors take an expansive view of classification, so be prepared to think about the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM) or VHS vs. Betamax.

Read "Do algorithms reveal sexual orientation or just expose our stereotypes?" [url] by Blaise Aguera y Arcas et al., 2018 (5 pages).

This essay sets out to debunk a scientific study which claimed to have built a “sexual orientation detector” using machine learning. “Do algorithms reveal sexual orientation or just expose our stereotypes?” presents a thorough analysis of the offending paper and shows that one move to debunk “junk science” is to validate the study’s results against some other baseline. In this case, the authors use Amazon’s Mechanical Turk. As you’re reading, think about what one can actually learn from a face.

Read chapter one (pg. 46 - 50 from "Infrastructure" on) from "Sorting things out: classification and its consequences" [pdf] by Geoffrey C. Bowker et al., 1999 (377 pages).

Read "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images." [url] by Yilun Wang et al., 2017.

This is the article critiqued by “Do algorithms…”

  • “Do algorithms…” claims to focus on the underlying “science.” Why do you think the authors did so? Why was this distinction important?
  • What strategies did “Do algorithms…” use to make its argument?
  • The authors of “Do algorithms…” conclude that the paper they examined was misguided. Drawing on the discussion of classification in “Sorting Things Out,” think of another misguided method (any method, scientific or otherwise, and not necessarily one related to the “Do Algorithms” paper). Now, describe that misguided method which you have identified.

Check out "Drawing a Line" [url] by Tableau Employee Ethics Alliance, 2019.

Check out "Engaging the ethics of data science in practice" [pdf] [url] by Solon Barocas et al., 2017 (3 pages).

Check out "Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI" [pdf] [url] by Philip E. Agre, 1997 (28 pages).

Agre, an AI researcher in the 1990s, convincingly walks the line between a critical perspective and that of a practitioner, evoking why practitioners may bristle at critique.

Moral Machines

Wed, Jan 25 Critical Perspectives

Read "Whose Life Should Your Car Save?" [pdf] [url] by Azim Shariff et al., 2016 (1 page).

This is a high-level introduction to the Moral Machine project that also discusses how politicians and businesses view ethical dilemmas, and suggests that a tragedy-of-the-commons scenario could undermine our common expectations for machine behavior.

Read "The dark side of the ‘Moral Machine’ and the fallacy of computational ethical decision-making for autonomous vehicles" [pdf] by Hubert Etienne, 2021 (22 pages).

Consider this take on autonomous vehicles and the moral machine experiment in particular. Appreciate the way that Etienne unpacks the assumptions implicit in various aspects of AVs and the statements which people make about them.

Read "Why the moral machine is a monster" [pdf] by Abby Everett Jaques, 2019 (10 pages).

This paper presents a fierce-yet-thoughtful critique of the Moral Machine experiment. It also highlights the importance of analyzing the structural implications of the problem: “In a slogan, the problem is that an algorithm isn’t a person, it’s a policy. And you don’t get policy right by just assuming that an answer that might be fine in an individual case will generalize. You have to look at the overall structure you create when you aggregate the individual transactions….The right question is what kind of world will I be creating if this is the rule. What will the patterns of advantage and disadvantage be if this preference is encoded in our autonomous vehicles. (pg 5-6)”

Read "The Trolley Problem" [pdf] [url] by Judith Jarvis Thomson, 1985 (21 pages).

Judith Jarvis Thomson marches you through 21 pages of analyzing various hypothetical scenarios in order to expose the difficulty of using abstractions, and to show how a normative or ethical tenet may seem straightforward on its face but be undermined once one looks at the details. The piece makes it hard to believe that there is an exhaustive set of rules to encode which actions are permissible, let alone moral.

Read "Ethical Machines (published)" [pdf] by Irving John Good, 1982 (5 pages).

I.J. Good discusses problems with certain considerations one should make when thinking about whether and how a machine could exhibit “ethical” behavior.

Read "Ethical Machines (unpublished)" [pdf] [url] by Irving John Good, 1980 (16 pages).

The longer, unpublished version better parses the implications of the arguments, in part simply because its length allows more depth. Notable in particular are the final consideration of a “synergistic relation with the boss” and the Shakespeare reference.

Read chapter 7 from "Braintrust: What neuroscience tells us about morality" [pdf] by Patricia S. Churchland, 2018.

Here Churchland discusses human ethical reasoning as a form of the “brain’s continuous decision-making.” Motivated by neuroscience, she makes the case that “the moral understanding that underlies specific rules is more like a skill than like a concrete proposition.” This has interesting implications for how we approach learning about ethics as well as for how we go about creating ethical systems.

  • Judge a few scenarios on the MIT Moral Machines website.
  • Do you agree with the arguments of one of the readings more than those of the other? Why or why not? Cite or paraphrase those arguments to support your answer.
  • What are the trade-offs of looking at individual ethics cases (such as in the “Trolley Problem” or “Moral Machines”) as compared to the bigger pictures (the kind of assumptions Etienne mentions) or a “structural analysis” (as described by Jaques)?

Check out "Immoral Code" [url] by Stop Killer Robots, 2021.

Check out this response to the implications of the moral machine experiment—should we have robots which kill?

Check out "I Quit My Job to Protest My Company’s Work on Building Killer Robots" [url] by Liz O'Sullivan, 2019.

Check out "Wielding Rocks and Knives, Arizonans Attack Self-Driving Cars" [url] by Simon Romero, 2018.

Check out "Tech Billionaires Think SimCity Is Real Life" [url] by Nicole M. Aschoff, 2019 (4 pages).

About Alphabet’s Sidewalk Labs and their attempt to create a “smart city” in Toronto. This is quite related to conversations about autonomous vehicles, creating moral machines, etc. Also check out one community response to the project.

Data Feminism

Fri, Jan 27 Critical Perspectives

Read "Introduction: Why Data Science Needs Feminism" [pdf] [url] by Catherine D'Ignazio et al., 2020.

Through a historical examination of women in technology, D’Ignazio and Klein, both leading scholars in the field, introduce us to feminism and its role in shaping technologies. This is just a taste and the whole book is worth a read.

Read "Gender Equality Paradox Monkey Business: Or, How to Tell Spurious Causal Stories about Nation-Level Achievement by Women in STEM" [url] , 2020 (5 pages).

A widely known study argued that countries with more gender equity in society have fewer women studying STEM, but this article accompanies a peer-reviewed publication casting doubt on the study’s analysis – a scholarly back-and-forth also playing out in the blogosphere.

Read "Patriarchy, technology, and conceptions of skill" [pdf] by Judy Wajcman, 1991 (16 pages).

When considering the future of work, one question that’s often raised is how technology negatively impacts the amount of “skill” required to complete a task, aka “deskilling”. In “Patriarchy, Technology, and Conceptions of Skill”, Judy Wajcman questions the underlying assumption that skill is entirely technically derived. Instead, she considers how men’s historical control over technology in the workplace has extensively influenced the ideological and material conceptions of skill, thus concluding in part that “definitions of skill, then, can have more to do with ideological and social constructions than with technical competencies which are possessed by men and not by women”.

Read "Technically Female: Women, Machines, and Hyperemployment" [url] by Helen Hester, 2016 (10 pages).

This essay surveys a history of “electronic secretaries” to frame relevant questions about today’s tech, such as: Why are AI assistants so often feminized? What does it mean for technology to “do gender,” and in service of which “imagined technology user”? We can also turn that question around and ask who “does technology,” and how labor gets redistributed with the introduction of new software and AI assistants. Ultimately, Hester asks us to confront questions concerning lived experiences of gender and how it is programmed, productive versus reproductive labor, and the (dis)advantages of automation.

Read "Testosterone rex: unmaking the myths of our gendered minds" [url] by Cordelia Fine, 2017.

If you were looking to throw the book at someone who continues to insist that sex differences are sufficient to explain gender differences, this would be that book. Take her word for it: “Every influence is modest, made up of countless small instances of its kind. That’s why everything—a doll packaged in pink, a sexist joke, a male-only expert panel—can seem trivial, of intangible effect. But that’s exactly why calling out even seemingly minor points of sexism matters. It all adds up, and if no one sweats the small stuff, the big stuff will never change.” And it’s funny, too: “If we stop believing that boys and men are emotional cripples and fly-by-night Casanovas who just want sex, and start believing that they’re full, complete human beings who have emotional and relational needs, imagine what might happen.”

  • Why is feminism relevant to data science?
  • What considerations does data feminism require us to make?
  • “Gender Equality Paradox” is an example of data feminism at work. How so?
  • Finish the course project, part one by tonight.

Check out "Reflecting on one very, very strange year at Uber" [url] by Susan Fowler, 2017 (4 pages).

This blog post, by the author of “What Have We Done?”, contributed to the resignation of Uber’s CEO, Travis Kalanick.

Check out "Google Walkout: Employees Stage Protest Over Handling of Sexual Harassment" [url] by Daisuke Wakabayashi et al., 2018.

Latent Identity and Privacy

Wed, Feb 01 Critical Perspectives

Read "It's Not Privacy, and It's Not Fair" [pdf] by Cynthia Dwork et al., 2013 (6 pages).

This law review paper is the missing link between the concepts of control and privacy as represented by the (optional) Deleuze piece and the Barocas piece, respectively.

Read "Think You're Discreet Online? Think Again" [pdf] [url] by Zeynep Tufekci, 2019 (2 pages).

How ought we make sense of questions such as privacy, classification, tracking, and surveillance in the era of big data and computational inference? Zeynep Tufekci asks us to consider these questions by looking at examples of the collective implications of a “privacy-compromised world”.

Read "Big data's end run around procedural privacy protections" [pdf] [url] by Solon Barocas et al., 2014 (2 pages).

Solon Barocas and Helen Nissenbaum, both well-known AI ethics scholars, consider “why the increasingly common practice of vacuuming up innocuous bits of data may not be quite so innocent: who knows what inferences might be drawn on the basis of which bits?”

Read "We Need to Take Back Our Privacy" [pdf] by Zeynep Tufekci, 2022.

Through the lens of reproductive rights, we again see Tufekci turn to fundamental questions of privacy and surveillance—rights historically challenged by the introduction of new technologies.

Watch "Deleuze the Societies of Control" [url] .

This video highlights some significant passages in “Postscript” and explains what’s going on by connecting it back to contemporary questions of control. Only the first 10 minutes actually cover the essay; the next 12 or so offer commentary, posing relevant questions and extrapolating “Postscript”’s ideas into the future.

Read "Postscript on the Societies of Control" [pdf] [url] by Gilles Deleuze, 1992 (4 pages).

“[J]ust as the corporation replaces the factory, perpetual training tends to replace the school, and continuous control to replace the examination. Which is the surest way of delivering the school over to the corporation.” Deleuze considers the technologies of power, and what it means to be in a “control state”. One wonders what he would have to say about this virtual world.

Identify a new system or one we’ve discussed in class that makes decisions which affect people’s lives in some meaningful way. Describe it and then answer the following questions:

  • Does this system rely on data collection to make these decisions?
  • Where does this information come from?
  • What’s the consent model?
  • What questions related to individual privacy does it raise?

Check out "The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy" [url] by Jemio et al..

Check out "Pregnancy Tracking with Garmin" [url] , 2021.

Check out "Why Hong Kongers Are Toppling Lampposts" [url] by Sidney Fussell, 2019.

Check out "neveragain.tech" [url] by Leigh Honeywell, 2016.

Check out "Cegłowski Senate Testimony" [url] by Maciej Cegłowski, 2019 (10 pages).

Check out "How the Federal Government Buys Our Cell Phone Location Data" [url] by Bennett Cyphers.

Fri, Feb 03 Critical Perspectives

Read chapter one (just "STORIES CHANGED BY AN ALGORITHM," 12 pages), the interlude (4 pages), and chapter two from "WHICH ALGORITHMS ARE HIGH STAKES" onward (20 pages; 36 pages total) from "Voices in the Code" [pdf] by David Robinson, 2022.

Robinson delivers a poignant and approachable introduction to the question of how to govern AI and other algorithmic decision systems by both surveying the field and opening a case study on kidney transplants.

Read the introduction from "Automating Inequality" [pdf] by Virginia Eubanks, 2017 (13 pages).

In the introduction to her acclaimed book, Eubanks captures the personal consequences of poorly designed decision systems, particularly on poor and working class people.

Read "Fairness and Machine Learning" [url] by Solon Barocas et al., 2019.

Scholars Barocas, Narayanan, and Hardt collaborate on one of the most readable and insightful texts to address questions of fairness in AI, ML, and related decision systems.

Choose one of the four governance strategies Robinson lists in his second chapter. Identify a technology or system (one not mentioned in the text) that you think could benefit from this kind of governance. Why?

Platform or Publisher?

Wed, Feb 08 Misinformation and Platforms

Read chapter one (pg. 1 - 14) from "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" [pdf] by Tarleton Gillespie, 2018 (288 pages).

Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. Beginning with an example of content moderation on Facebook, he makes the case that content moderation is an essential element of these social media companies and that the act of providing content to users comes with many value-laden decisions–taking up myths of openness, free speech, neutrality, and more.

Read "The Great Delusion Behind Twitter" [url] by Ezra Klein, 2022.

Here the journalist Ezra Klein attempts to boil down the issue of regulating internet speech, specifically as it pertains to Twitter—is it a town square? What metaphor applies?

Read introduction (9 pages) from "Kill All Normies" [pdf] [url] by Angela Nagle, 2017.

This does a better job of describing some of the thorny “grey” areas and mechanisms for radicalization that we will otherwise discuss on the platform days, and is an illustrative antidote to the clearer-cut “malicious” content mentioned in the other pieces.

Read "Split Screen: How Different Are Americans’ Facebook Feeds? – The Markup" [url] by Sam Morris et al..

Use this page, designed by the data journalism publication The Markup, to get a sense of different filter bubbles on Facebook. Try out a couple of different options to see what different groups of people are seeing right now.

Read "Why Facebook Can't Fix Itself" [url] by Andrew Marantz, 2020 (5 pages).

Here we get a close look into the implementation of content moderation strategies at Facebook. Pay attention to how Gillespie’s arguments apply here.

Read "How We Analyzed the Cost of Trump’s and Biden’s Campaign Ads on Facebook" [url] by Jeremy B Merrill, 2020.

This article details how one media outlet, The Markup, in conjunction with data from an NYU research project, attempts to measure the influence of social media companies and their moderation.

Read "Facebook Seeks Shutdown of NYU Research Project Into Political Ad Targeting" [pdf] [url] by Jeff Horwitz, 2020.

Here find more details about Facebook’s efforts to push back against the very research described above.

Read "Inside Nextdoor's "Karen problem"" [url] by Makena Kelly, 2020.

Notice how content moderation can have racially disparate impacts.

Read "The Moderators" [url] by Adrian Chen, et al..

Watch this 20 minute documentary on content moderators, but beware of graphic content.

Read "Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy" [pdf] [url] by Nancy Fraser, 1990 (56 pages).

In a couple of paragraphs, address the following: What are platforms? Why does this term matter so much? Who are their stakeholders?

Check out "Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads" [url] by The New York Times, 2019.

Check out "“So You Won't Take Down Lies?”: AOC Blasts Mark Zuckerberg in Testy House Hearing" [url] by Alison Durkee.

Also see how she solicited the public for questions to ask the CEO.

Check out "A Reckoning at Facebook" [url] by Nicholas Thompson, 2018.

Check out ""I Have Blood On My Hands": A Whistleblower Says Facebook Ignored Global Political Manipulation" [url] by Craig Silverman et al., 2020 (6 pages).

This article quotes the internal Facebook memo mentioned in the Marantz piece.

Check out "When War Struck, Ukraine Turned to Telegram" [url] by Matt Burgess, 2022.

Content Moderation Algorithms and Free Speech

Fri, Feb 10 Misinformation and Platforms

Read "It's the (Democracy-Poisoning) Golden Age of Free Speech" [pdf] [url] by Zeynep Tufekci, 2018 (4 pages).

Zeynep Tufekci considers how the power to censor functions on our oversaturated social networks, and the role of misinformation and the attention economy in this. The article provides striking clarity to issues we collectively face on these platforms.

Read chapter 17 from "Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World" [pdf] [url], 2021.

In this chapter of his book on the rise of neural networks, Metz, a veteran journalist of Silicon Valley, concisely describes the claims which some make about those tools. Will AI “solve” content moderation for us? Read on.

Read chapter four (pg. 74 - 110; 36 pages total) from "Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media" [pdf] by Tarleton Gillespie, 2018.

Read the whole chapter, but, if you’re short on time, focus on the “automatic detection” section (pg. 97 to 110). Tarleton Gillespie, a social media scholar, establishes the groundwork for understanding social media platforms. In chapter 4, “Three Imperfect Solutions to the Problem of Scale”, Gillespie considers several models for content moderation, one of which is algorithmic “automatic detection” techniques.

Read "Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve" [url] by Mike Masnick, 2022.

Will Elon Musk solve free speech? Read on.

Read "The Problem of Free Speech in an Age of Disinformation" [url] by Emily Bazelon, 2020 (14 pages).

Read this article for an up-to-date account of free speech and its history in the United States. Notice the similarities between analog and social media. It questions how different governmental approaches to speech may be making us more or less free.

Read "How Facebook Hides How Terrible It Is With Hate Speech" [url] by Noah Giansiracusa.

Cut through the claims of AI content moderation and listen to Facebook’s own assessment that they ‘may action as little as 3-5% of hate … on Facebook.’

Read "The Risk of Racial Bias in Hate Speech Detection" [pdf] [url] by Maarten Sap et al., 2019 (9 pages).

From the 2016 Russian misinformation campaign , to the 2021 booting of the U.S. president, to the late 2021 whistle-blower revelations regarding mental health and hate speech, a maelstrom surrounds Facebook and other popular social media platforms (e.g. see the optional reading, “How Facebook Hides How Terrible…”).

What do you think ought to be done about content and its moderation on Facebook? Try to be specific in exactly what problem you are addressing and the trade-offs involved in your proposal.

Your answer might include:

  • De-platforming
  • More automated moderation
  • Hiring more moderators
  • Policy changes (e.g., changing publisher liability)
  • or something else entirely

Check out "Jigsaw" [url] .

Examine this as a “moonshot”-type project and how it attempts to reimagine people’s internet experiences using machine-learning systems (such as Perspective). At the same time, consider it in light of Gillespie’s comments on Jigsaw on page 109 of “Custodians.”

Check out "Fighting Neo-Nazis and the Future of Free Expression" [url] , 2017.

History Forward and Backward

Wed, Feb 15 Course Project Related Discussions

Read "The Rape Kit's Secret History" [pdf] [url] by Kennedy, Pagan, 2020.

[Feel free to skip if this is triggering or otherwise too upsetting to you.] Here is a nearly lost history of the effort behind getting a technology widely adopted. It is not a computing example, but it is an excellent example of writing about the context behind something now common in our society; the challenges are often not technical, and the leaders often go unrecognized. The whole thing is fascinating, but it is long, so if you need to skim, try to focus on the politics needed for this technology to succeed.

Read page 5 onwards from "The Triple Revolution (original)" [pdf] [url], 1964 (16 pages).

This famous memo to President Lyndon B. Johnson was drafted by the Ad Hoc Committee on the Triple Revolution, composed of notable social activists, scientists, and technologists, among others. It warns of revolutions in social justice, automation, and weapons development, and that if urgent social and economic changes are not made, “the nation will be thrown into unprecedented economic and social disorder.” Consider what happens when our Utopian projects are societal.

Read "Is this time different? The opportunities and challenges of artificial intelligence" [pdf] by Jason Furman, 2016 (17 pages).

Keep the historical and argument sections of your course project in mind as you do the readings. Then, try to answer these questions or others of your own.

  • Who were the protagonists trying to convince?
  • What actions did they take to succeed?
  • What was the purpose of each article? What were some techniques used in the writing to achieve this purpose?
  • How is writing about the future different from writing about the past?
  • In the history of technology, who gets to have their story told? Who doesn’t?

Check out "San Francisco police linked a woman to a crime using DNA from her rape exam, D.A. Boudin says" [url] by Cassidy, Megan, 2022.

Check out "Somali Workers in Minnesota Force Amazon to Negotiate" [url] by Karen Weise, 2018.

Check out "Tech Won't Build It" [pdf] [url] , 2018 (28 pages).

Does it work?

Fri, Feb 17 Automating Humanity

Read "ChatGPT Is Dumber Than You Think" [url] by Ian Bogost, 2022 (5 pages).

Bogost playfully unpacks the implications of large language models such as GPT-3 from OpenAI. Do they work as our enthusiasm for them might project?

Read "AI's Jurassic Park moment" [url] by Gary Marcus, 2022 (2 pages).

Marcus, in his very abbreviated style, lays out why current large language models built on deep learning fail and why we might need another approach.

Read "Can machines learn how to behave?" [url] by Blaise Agüera y Arcas, 2022 (21 pages).

Agüera y Arcas, whom we’ve seen elsewhere in class, makes the argument that there is no clear line separating what AI models can understand from what people do. The issue, in his view, is only that we need to teach those models what is and is not OK.

Read "Language Models Understand Us, Poorly" [pdf] [url] by Jared Moore, 2022 (5 pages).

Jared lays out a couple of views on language understanding. You can also review the slides.

Read "Language Models as Agent Models" [url] by Jacob Andreas, 2022 (10 pages).

Andreas makes the case that language models can rightfully be thought of as models of humans, of agents. The philosophical and historical context is a bit lacking here. Take cse490a1, the philosophy of AI, to really get into the details.

Read "Semantics derived automatically from language corpora contain human-like biases" [url] by Aylin Caliskan et al., 2017 (4 pages).

This is the classic paper which set off the debate about bias in language models. You can also check out Caliskan’s similar 2022 paper finding bias in image models.
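
For the curious, the core measurement behind the Caliskan et al. paper is easy to sketch (a simplified illustration assuming word vectors are already available as NumPy arrays; the paper’s full WEAT statistic adds paired target-word sets and a permutation test on top of this):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attrs_a, attrs_b):
    """Per-word association score from the Word Embedding Association Test:
    how much closer a word sits to attribute set A (e.g., career terms)
    than to attribute set B (e.g., family terms), by mean cosine similarity.
    """
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

# Toy 2-D "embeddings" just to show the mechanics.
career = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]
family = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]
name = np.array([0.8, 0.2])
print(association(name, career, family))  # positive: closer to career terms
```

A systematic gap in these scores between, say, male and female names is the kind of human-like bias the paper found embeddings absorb from their training corpora.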

What kind of issues are there in using large language models? Why might they be or not be a very good idea? In a sense: do they work?

  • Finish the course project, part two by tonight.

Check out "Temporary policy: ChatGPT is banned" [url] by Makyen, 2022.

Check out "‘Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos" [url] by Olivia Snow, 2022.

Check out "A recent tweet" [url] by Steven Piantadosi, 2022.

Does it matter?

Wed, Feb 22 Automating Humanity

Read "Reclaiming conversation: the power of talk in a digital age" [pdf] by Sherry Turkle, 2015 (21 pages).

As technologists we set out to do the good work of automation. We formalize, experiment, and implement, increasing the space of what computers can do. But, as Turkle asks, is this what we want? Do we want to be replacing each other? She says, “It seemed that we all had a stake in outsourcing the thing we do best—understanding each other, taking care of each other.” Consider what happens when our Utopian projects are personal.

Read "One Day, AI Will Seem as Human as Anyone. What Then?" [pdf] [url] by Joanna Bryson, 2022 (6 pages).

Bryson investigates what it is that people value and whether we should be enlarging the tent to fit in AI. She says, “our values are the way that we hold our societies together.”

Read "Robots should be slaves" [pdf] by Joanna Bryson, 2009 (10 pages).

This is a more academic version of the argument Bryson makes in “One Day…”

Read "Computer power and human reason: From judgment to calculation." [url] by Joseph Weizenbaum, 1976.

Joseph Weizenbaum, writing as a professor at MIT in the 1960s, responds to the thoughtlessness present in programmers of the time, especially in their attempts to replace human communication. Check out a recent JavaScript implementation of his Eliza chatbot.

Read "The Most Human Human" [url] by Brian Christian, 2011.

A fun read that gives you entry to Christian’s mindset and to philosophical and computer science ideas as he was preparing to try to come off as a human in a sort of Turing test.

Automated therapists have been proposed ever since Weizenbaum released his Eliza program in the 1960s. Do you think this would be a good idea? Why or why not?

Don’t just assume that we will solve all of the ways that language models fail to be like people. (“Soon enough, language models will be indistinguishable from people. People are therapists, so why shouldn’t language models be therapists, too?” is not an interesting argument.)

Check out "Voice assistants could ‘hinder children’s social and cognitive development’" [url] by Amelia Hill, 2022.

Check out "We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened 👇" [url] , 2023.

Techno-Utopianism

Fri, Feb 24 The Society of Tech

Read "Three Expensive Milliseconds" [pdf] [url] by Paul Krugman, 2014 (1 page).

While reading this article, consider: who is today’s infrastructure for? What are some metrics being optimized for which have led to some (perhaps) unexpected consequences? How are these metrics and systems shaping our world?

Read chapter two from "Geek Heresy" [pdf] by Kentaro Toyama, 2015 (21 pages).

The classically trained computer scientist Toyama, in an excerpt from his book about realizing how Utopian tech fails the most marginalized, describes the role of tech as amplifying—whether for good or for bad.

Read "Survival of the Richest" [url] by Douglas Rushkoff, 2019 (3 pages).

From Douglas Rushkoff, a noted media theorist, read of “Apocalypto – the intolerance for presentism leads us to fantasize a grand finale. “Preppers” stock their underground shelters while the mainstream ponders a zombie apocalypse, all yearning for a simpler life devoid of pings, by any means necessary. Leading scientists – even outspoken atheists – prove they are not immune to the same apocalyptic religiosity in their depictions of “the singularity” and “emergence”, through which human evolution will surrender to that of pure information.” This idea is investigated more deeply in his book “Present Shock”.

Listen to "Speed" [url] by RadioLab, 2013.

An enlivening radio show which goes over many of the same concerns about speed as the readings.

Read "Tech Leaders Justify Project To Create Army Of AI-Controlled Bulletproof Grizzly Bears As Inevitable Part Of Progress" [url] , 2022.

The title explains it all

Read "The Microsoft Provocateur" [pdf] [url] by Ken Auletta, 1997 (14 pages).

This piece shows us the thinking of tech folks at a pivotal time when the nature of the internet was not yet decided. Skim the sections which are just about Myhrvold’s life—we mean to focus on founders and their utopian visions.

Read "Origin Stories of Tech Companies if Their Founders Had Been Women" [url] by Ginny Hogan, 2019 (1 page).

Oftentimes in computer science, we consider only the upshot, only the Utopian worlds we might create.

  • For one of the technologies mentioned in the readings or another of your choosing, comment on the future worlds we imagine for it (e.g. a Utopia) and complications to that future which might arise in practice (e.g. a Dystopia).
  • What are other utopias you have heard described when talking about emerging technologies? Do you find those utopias compelling? Why or why not?

Check out "Why Joi Ito needs to resign" [url] by Arwa Mboya.

Check out "“What Have We Done?”: Silicon Valley Engineers Fear They've Created a Monster" [url] by Susan Fowler, 2018 (2 pages).

Harder, Faster, Better, Stronger?

Wed, Mar 01 The Society of Tech

Read chapter 1 (pg. 13 - 36; 23 pages total) from "Pressed for time: The acceleration of life in digital capitalism" [pdf] by Judy Wajcman, 2015.

Wajcman investigates the demand for speed and efficiency in our current society, particularly as encouraged by technologies (like smartphones) and their creators (like Silicon Valley corporations).

Read chapter two from "The Atlas of AI" [pdf] by Kate Crawford, 2021 (35 pages).

Crawford, in her tour of the complex social and political systems which make up what we call AI, homes in on the labor practices which undergird modern trends.

Read "The Automation Charade" [url] by Astra Taylor, 2018 (5 pages).

Astra Taylor urges: “We shouldn’t simply sit back, awestruck, awaiting the arrival of an artificially intelligent workforce. We must also reckon with the ideology of automation, and its attendant myth of human obsolescence.”

Read "Communist Commentary on "The Triple Revolution"" [pdf] by Richard Loring, 1964 (10 pages).

This essay was published contemporaneously with “The Triple Revolution” and is largely favorable toward the reforms demanded. It also touches on the utility of utopianism in futurism, while considering labor issues in a distinctly Marxist, but still American, manner. In particular, the authors summarize, and take issue with, the Triple Revolution as saying “it is useless to fight the path progress is taking and they should therefore re-direct the aims of their fight to seek a better future in a world in which labor and its role will no longer be a basic factor.” The scan we have is a bit difficult to read, but we have been unable to find another.

Recall that in “The Triple Revolution,” the authors observe:

There is no question that cybernation does increase the potential for the provision of funds to neglected public sectors. Nor is there any question that cybernation would make possible the abolition of poverty at home and abroad. But the industrial system does not possess any adequate mechanisms to permit these potentials to become realities. The industrial system was designed to produce an ever-increasing quantity of goods as efficiently as possible, and it was assumed that the distribution of the power to purchase these goods would occur almost automatically. The continuance of the income-through-jobs link as the only major mechanism for distributing effective demand—for granting the right to consume—now acts as the main brake on the almost unlimited capacity of a cybernated productive system.

Consider the impact of our current regulatory and political framework on such cybernetic systems. (E.g. In the U.S., there’s a lower tax rate on income from investments—as from ‘unicorn’ start-ups. Or: The U.S. government, like some others, invests a lot of money into basic research programs as in computer science.)

  • In what ways do these social frameworks have an impact on what technology gets built?
  • In what ways do the technologies which get built shape our world?
  • The readings cover externalities which arise in this interaction between social and technical systems. What’s an example of one of these? (And how does the term “accelerationism” relate?)

Check out "The Code: Silicon Valley and the Remaking of America" [pdf] by Margaret O'Mara, 2019.

Check out "Humane: A New Agenda for Tech (44 min. watch)" [url] , 2019.

From the Center for Human Technology . Also examine their website

Check out "Are we having an ethical crisis in computing?" [pdf] [url] by Moshe Y. Vardi, 2018 (1 page).

Experiences of Injustice in Computing

Fri, Mar 03 Computing and Racial Equity

Read "Critical Race Theory for HCI" [pdf] [url] by Ihudiya Finda Ogbonnaya-Ogburu et al., 2020 (1–16 page).

This article calls on human-computer interaction research, and computing research more generally, to explicitly attend to race, namely through critical race theory. Through this theory, analysis of the field, and storytelling, the authors show that despite (some) efforts in computing, racism persists and demands redress. Pay particular attention to the stories.

Read "Roles for computing in social change" [pdf] [url] by Rediet Abebe et al., 2020 (9 pages).

This reading offers concrete suggestions specific to computing research. It advances four roles such research can play in discussions around fairness, bias, and accountability.

Read introduction through page 16 from "Race after technology: Abolitionist tools for the new jim code" [pdf] by Ruha Benjamin, 2019.

Benjamin, in a recent book, offers a “race conscious orientation to emerging technology not only as a mode of critique but as a prerequisite for designing technology differently.” Affiliated with Princeton’s Center for Information Technology Policy, she brings a fresh perspective to many of the foundations of computing.

Read introduction (16 pages) from "Stuck in the shallow end: education, race, and computing" [pdf] by Jane Margolis, 2008.

This introductory chapter details an early 2000s study which attempted to figure out why so few black and latinx students were enrolling in computer science courses. It “shows how segregation and inequality along racial lines operate on a daily basis in our schools, despite our best intentions” (16).

Listen in particular to minutes 15 through 30 of "Episode 12: Confronting Our Reality: Racial Representation and Systemic Transformation with Dr. Timnit Gebru" [url] by Dylan Doyle-Burke et al.

Gebru, who received her Ph.D. from Stanford and then worked for Google before very publicly not working for Google, discusses the necessity of focusing on your values (hers being racial justice) in conjunction with your work.

Read "Combating Anti-Blackness in the AI Community" [pdf] [url] by Devin Guillory, 2020.

This reading is more about a computing research community than about the external impact of our technology, but each influences the other.

Answer at least two of the following.

On “Critical Race Theory for HCI”:

  • What is interest convergence?
  • Why do you think the authors include information about their background? What does this achieve?
  • Why do the authors contend that theory is an important focus?
  • What does “true recognition of the pervasiveness of racism” (pg. 9) require?
  • Do you have any stories along the lines of those shared? Would you be interested in sharing any with the group?

Choose one of the roles from “Roles for Computing in Social Change.”

  • How might your work as a computer scientist engage with your chosen role?
  • What social problem would you like to work on?

Check out "ShutDownSTEM Resources" [url] .

Check out "Liberatory Technology Zine" [url] , 2022.

Check out "USENIX Statement on Racism and Black, African-American, and African Diaspora Inclusion" [url] , 2020.

Wed, Mar 08 Participating in the Society of Tech

Read "Digital Pregnancy Test Deconstruction (Twitter Thread)" [url] by Foone Turing, 2020.

This and the next reading offer informal Twitter threads with some surprising twists on how a technology actually works and why it may or may not be useful. Is this misleading people or providing an ethically valuable service or both? If you had chosen this for your project, what would you have investigated?

Read "Response to Digital Pregnancy Test Deconstruction (Also on Twitter)" [url] by Naomi Wu, 2020.

(See above.)

  • Why did you choose these two? How would you categorize them? Describe the response taken. Might you act similarly? Why or why not?
  • Briefly reflect on how you may or may not have reacted differently to the short readings for today now that you have mostly completed this class.

Fri, Mar 10 Participating in the Society of Tech

Read chapter 12 from "Artificial Unintelligence: How Computers Misunderstand the World" [pdf] [url] by Meredith Broussard, 2018.

Having read Broussard’s commentaries on the first day and as an introduction to the data unit, we now finish with her conclusion: a renewed plea for computing technology to serve the people who made it—humans.

Read foreword and introduction (pg. xi - xxvi; pg. 1 - 5; 23 pages total) from "Hope in the dark: Untold histories, wild possibilities" [pdf] by Rebecca Solnit, 2016.

Solnit, a writer and activist, reflects on our desire for social, cultural, or political change given the appearance that we have not arrived there (considering issues from global warming to human rights abuse). Originally responding to the war in Iraq, she explores how news cycles and our personal narratives frame these issues and makes the case for hope nonetheless: “tiny and temporary victories.”

Read from 'SAN DIEGO' onward in "The Code: Silicon Valley and the Remaking of America" [pdf] by Margaret O'Mara, 2019.

From the UW historian O’Mara, hear the story of how two UW CSE alumni have navigated their careers as computer scientists.

  • What’s an idea from this course that every UW CSE student ought to understand?
  • Finish the course project, part three by tonight.

COMPUTERS, ETHICS, SOCIETY and HUMAN VALUES: Written Assignments

Each student is required to submit the following assignments according to the schedule for the semester. Check on the COURSE CALENDAR for the due dates. Follow the INSTRUCTIONS for PREPARING and SUBMITTING WRITTEN ASSIGNMENTS.

  • Do not violate academic integrity! Do not plagiarize! Students must include citations, references, and quotations. Students must type or keyboard their papers and essays.
  • No more than two typographical, grammatical or syntactical errors per page.
  • Late papers and essays will NOT be accepted.

Assignments are intended to provide for an assessment of the learner's achievement and progress. Assignments and parts of assignments are intended to assess the learner's motivation, reading comprehension, critical thinking skills and appreciation of philosophy. Assignments for modules 1 to 12 may be revised and resubmitted any number of times up to one week before the end of the semester. Assignments for modules 13 to 15 may be revised and resubmitted up to the last scheduled class. Click on the module title below for the assignment for that module.

MODULE 0 ORIENTATION ASSIGNMENTS: ORIENTATION WEEK

For this first module you are expected to COMPLETE ALL TEN STEPS TO START OFF.

=============================================================================

Written Assignment for MODULE 1 INTRODUCTION TO THE COURSE

1. Reading Comprehension

From your readings on the history of computers, information networks, and the internet, take several of what were reported to be significant inventions, developments, or applications and comment on why you agree or disagree with the judgment that they are significant. Copy and paste the passages making the reports and offer the citations as to the source of those passages. CAUTION: Do not select passages from the online textbook of Dr. Philip Pecorino! They must be from the materials supplied through the links.

2. Critical Thinking  

Here at the beginning of the semester, make an attempt to describe what things humans consider of great and fundamental importance (Values) that appear to relate to the technological advances and applications involving computers, information networks, and the internet. Be very clear in indicating the values.

***************************************************

I suggest that you create your assignments using your word-processing program and spell checker, then copy and paste your text answers into the message window of an email and send it to the instructor, with the text inside the email message window. DO NOT send attachments.

Written Assignment for MODULE 2 COMPUTERS and ETHICS

1. Reading Comprehension

What is unique about computers as far as ethical issues are concerned? How do policy vacuums come about?

From your readings copy and paste the passages that address or answer this question and offer the citations as to the source of those passages. CAUTION: Do not select passages from the online textbook of Dr. Philip Pecorino! They must be from the materials supplied through the links.

 2. Critical Thinking

How should the ethical problems presented by Computer Technology, Information Technology, Information Networks and the Internet be approached?

Written Assignment for MODULE 3 ETHICS

1. Reading Comprehension

From your readings copy and paste the passages that address or answer this question and offer the citations as to the source of those passages. 

List at least three things that are wrong with or problems with each of these theories:

  • UTILITARIANISM
  • CATEGORICAL IMPERATIVE: Kant
  • NATURAL LAW
  • THEORY OF JUSTICE AS FAIRNESS (MAXIMIN): Rawls
  • WILL TO POWER: Existentialist Theory of Nietzsche
  • CARING: One of several Feminist Theories of Ethics
  • NORMATIVE ETHICAL RELATIVISM: DISPROVEN and NOT a THEORY but a critique of all ethics as being culturally based, holding that no universal ethics is possible.

2. Critical Thinking

Which of the theories listed above do you think is the theory with the most acceptable disadvantages for you? At this time in your life, what are the values you hold highest? Which ethical principle is most consonant with or supportive of the values you hold highest? Which is the most acceptable theory to serve as the basis for a moral order in a society in which you would want to live? Please indicate your familiarity with the readings in your answer. NOTE: NORMATIVE ETHICAL RELATIVISM is DISPROVEN and NOT a THEORY but a critique of all ethics as being culturally based, holding that no universal ethics is possible.

As you approach the various situations posing moral dilemmas or problems involving Computer Technology, Information Technology, Information Networks and the Internet, what principles will you attempt to apply first to arrive at or agree with a position on the matter? Whatever conclusions are reached as to what may be morally correct, the analysis of the case and the reasoning used to support the position will need to be critically reviewed as alternatives are considered. Over time some humans may develop into better moral beings capable of better reasoning and they may deepen in their moral convictions. Is it possible that you might be one of them?

Written Assignment for MODULE 4 Law: Free Speech and Censorship

1. Reading Comprehension

Select from your readings the most important statement in favor of or arguments for freedom on the Internet and the most important statement or arguments against such freedom or for limiting such freedom. State why you think each passage is the best you have found and why it is better than some of the others.

From your readings copy and paste the passages that address or answer this question and offer the citations as to the source of those passages. CAUTION: Do not select passages from the online textbook of Dr. Philip Pecorino! They must be from the materials supplied through the links.

2. Critical Thinking

State your own position on Freedom of Speech on the Internet and offer a defense for it. Using the Ethical Dialectical Process of Thinking, state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle to decide what you think is the morally correct thing to do. You must state those principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so. In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

Use this template or form to make certain that you include each part of the process (parts a to e).

Label your parts with the letters a to e to make very clear that you have done each part.

Dialectical thinking: the 5 parts

  • a. Take a position on this question or issue. Be as exact as you can be. Be precise in your use of language.
  • b. Provide the reasons (ethical principles and human values) why you think this position is better defended by reason and evidence than are the alternative positions. The position is defended using reasoning in support of the judgment (the conclusion of the argument). You state the reasons (ethical principles and human values) why the position you take makes sense and has evidence and reasons to support it other than your feelings or personal preference or your opinion or what you were brought up to believe or what just about everyone you know thinks or believes. Philosophers have offered such reasons (ethical principles and human values) and evidence for the positions they have taken and you should consider them, and if you agree you can and should state them in support of your own position.
  • c. State the reasons why you found the other positions flawed or less defensible than the one you are defending.
  • d. State the criticisms of your position.
  • e. Respond to the criticisms (rebuttal): how do you defend your position in light of those criticisms?

VIDEO on Dialectical Process http://www.youtube.com/user/PhilipPecorino#play/uploads/21/zziTWJPbYyU

Written Assignment for MODULE 5 Intellectual Property

1. Reading Comprehension

Select from your readings the most important statement in favor of or arguments for treating software as Intellectual Property and the most important statement or arguments against such a position that either argue against the IP right altogether or want it limited. State why you think each passage is the best you have found and why it is better than some of the others.

2. Critical Thinking

State your own position on Intellectual Property Rights and offer a defense for it. Using the Ethical Dialectical Process of Thinking, state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle and human value to decide what you think is the morally correct thing to do. You must state those ethical principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so. In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

Written Assignment for MODULE 6 Privacy

Select from your readings the most important statement in favor of or arguments for maintaining as much privacy as is possible for individuals and groups, and the most important statement or arguments against such a position that either argue against the right to privacy or argue for the right of governments and corporations to monitor people. State why you think each passage is the best you have found and why it is better than some of the others.

State your own position on Privacy in the Age of the Internet and offer a defense for it. Using the Ethical Dialectical Process of Thinking, state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle and human value to decide what you think is the morally correct thing to do. You must state those ethical principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so. In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

Written Assignment for MODULE 7 Secrecy and Security

Select from your readings the most important statement in favor of or arguments for permitting individuals, groups, and corporations to use encryption programs that cannot be broken by governments, and the most important statement or arguments against such a position that either argue against the right to have such software or argue for the right of governments and corporations to limit what may be used to only such encryption software as the government can circumvent. State why you think each passage is the best you have found and why it is better than some of the others. You may also state your own position and offer a defense for it.

State your own position on the Need for Secrecy and offer a defense for it. Using the Ethical Dialectical Process of Thinking, state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle and human value to decide what you think is the morally correct thing to do. You must state those ethical principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so. In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

Written Assignment for MODULE 8 Mid Course Evaluations and Changes

1. NO NEW READINGS!!!

2. NO WRITTEN ASSIGNMENT

Just respond in the Discussion Board to the lead questions concerning the class and how it is going.

Written Assignment for MODULE 9 Crime and Misbehavior

1. Reading Comprehension

What is hacking, and what are its various forms? Is it always illegal?

Are any of the following activities immoral?   Defend your position using ethical principles and reasoning.

State your own position on The Morality of Internet Misbehaviors and offer a defense for it. Using the Ethical Dialectical Process of Thinking, state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle and human value to decide what you think is the morally correct thing to do. You must state those ethical principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so. In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

Written Assignment for MODULE 10 Information Technology  Accountability

What is the problem of many hands?

How are the designers, programmers, manufacturers, and distributors of software accountable in any manner for their software? Discuss both the legal notion of being accountable as well as the moral notion of being accountable or blameworthy.

State your own position on The Morality of Accountability and offer a defense for it. Using the Ethical Dialectical Process of Thinking, state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle and human value to decide what you think is the morally correct thing to do. You must state those ethical principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so. In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

Written Assignment for MODULE 11  Computing and Information Technology as Professions and Professional Codes

What are the professional responsibilities for those who make computers, software, information networks, and set up and maintain the Internet and internet sites? 

How is moral responsibility to be determined for those who make computers, software, information networks, and set up and maintain the Internet and internet sites?  What role do the various codes of the various professions play in answering this?  Are the codes the basis for responsibility or are they a reflection of it?

State your own position on The Morality of Professional Responsibility and offer a defense for it. Using the Ethical Dialectical Process of Thinking, state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle and human value to decide what you think is the morally correct thing to do. You must state those ethical principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so. In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

Written Assignment for MODULE 12 Social Change

Select from your readings the most important statements concerning the actual and potential impact of the computer and information technologies on society.  

State why you think each passage is the best you have found and why it is better than some of the others.  You may also state your own position and offer a defense for it.

Written Assignment for MODULE 13 Political Change

Select from your readings the most important statements concerning the actual and potential impact of the computer and information technologies on democracy.  Be sure to include materials on how the technologies both support the democratic process and how they might threaten it as well including the creation of the Digital Divide.  

Written Assignment for MODULE 14 Artificial Intelligence: Computers and Being Human

Select from your readings the most important statements concerning the actual and potential impact of artificial intelligence on the conceptions that humans have of themselves and of what makes a human, human.  

State what you think of the creation of artificial intelligence and of entities that would make use of such intelligence. Take a position on the moral legitimacy or justification for the creation of such entities and defend it using ethical principles. State your own position and offer a defense for it. Using the Ethical Dialectical Process of Thinking, state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle to decide what you think is the morally correct thing to do. You must state those principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so. In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

Written Assignment for MODULE 15:    CULMINATING ACTIVITY

Four (4) parts

PART I. ABOUT THE SUBJECT MATTER: FINAL EXERCISE. Answer each of the questions below and submit your answers.

1. Which of the issues covered this semester has been the most important and why so?

2. Which of the issues covered this semester has meant anything to you personally and why so?

3. Why aren't more people aware of these troublesome questions, issues, or problems?

4. Now that you have been educated as to these issues in Computers and Ethics, in what way will they have any consequence in your life?

5. Are you in a better position now to think about and handle these issues using ethical principles?

==================================================

PART II.  ABOUT THE COURSE: FINAL EXERCISE 

1: What did you like best about this course? 

2: What specific things do you think could be improved in the structure or design of the course and learning activities? 

3: How would you improve the quality and participation in course discussions/interactions?  

4:   What changes would you suggest be made to the pacing or sequence of the content and activities for this course? (e.g., were the due dates doable for you? Were the course materials sequenced well?)  

5:  What changes would you suggest be made to the quantity of work required for this course?  

6:  How could the course be improved in terms of my interaction, participation, and management of the course?  

7:  What other suggestions, comments, or recommendations would you have for the instructor?

8:  What advice do you offer to students who would be just entering the class at the very start of the semester?

PART III.  COURSE OBJECTIVES

This course has the objectives listed below. On a scale of 0 to 5, with 5 as the highest level, how well do you think that you have achieved these objectives?

a. Identify some of the basic content in the field of Computers, Information Systems, Ethics, Society and Human Values:

  • vocabulary
  • concepts
  • theories

b. Identify traditional and current issues related to Computers, Information Systems, Ethics, Society and Human Values.

c. Communicate your awareness of and understanding of philosophical issues.

d. Demonstrate familiarity with the main issues in the discourse related to Computers, Information Systems, Ethics, Society and Human Values and be able to state what major schools of thought there are that have contributed to the ongoing discussion of these issues.

e. Develop skills of critical analysis, applying ethical principles to situations, and dialectical thinking.

f. Analyze and respond to the comments of other students regarding philosophical issues.

Score 0 to 5: a.____ b.____ c.____ d.____ e.____ f.____

PART IV. DIALECTICAL ETHICAL REASONING

Select which of your submitted work represents your estimate of your best effort at ethical dialectical reasoning and submit it again to me.

Ethical Dialectical Thinking http://www.qcc.cuny.edu/SocialSciences/ppecorino/SS770/outline-Ethical-Dialectical-Thinking.html

Steps for presenting an ethical argument or a defense of a moral judgment using ethical dialectic:

1. Position on the moral issues or dilemma made clear

2. Position defended using

a. Reasoning in support of the judgment (conclusion of the moral argument)
b. Ethical principles employed in the argument
c. Values used to select the ethical principles used in defense of the conclusion (judgment)

3. Consideration of alternative positions and the rejection of those alternatives in favor of the judgment made for the reasons given which employ ethical principles and values.

State your own position and offer a defense for it. Using the Ethical Dialectical Process of Thinking state what your ethical position would be on the moral question or dilemma or situation and why. You are to take a position and defend it. You should use some ethical principle to decide what you think is the morally correct thing to do. You must state those principles and explain how they have been applied to the situation. You should indicate that you have rejected alternative positions to your own and the reasons why you have done so.  In so doing you need to enunciate clearly the values and ethical principle(s) you are using to both reject the alternative positions and to defend or support your own.

*************************************************** Research in Philosophy on the Internet. ******************

Free tutorial on doing research in Philosophy on the Internet.

http://www.humbul.ac.uk/vts/philosophy/index.htm

As for the search engine, you would enter:

hacking + ethics

privacy + ethics

Intellectual Property  + ethics

==============================================

You may want to print this part of this document out.

INSTRUCTIONS for PREPARING and SUBMITTING WRITTEN ASSIGNMENTS

Composing your assignment

Normally, you should compose your response using your word processor or on paper. This will give you the opportunity to revise, proofread, and spell check. When you have completed your assignment document, be sure to spell check it.

Make sure to read the directions for each assignment carefully for details, due dates, and anything else that may be specific to the assignment.

Format For Submitting Written Assignments 

EMAIL: [email protected]. Do not send attachments!!! Copy and paste your text from the word processor directly into the message window of the email. In the subject line put:

first name, last name, PHI 301, assignment #

Evaluations

The evaluation for your assignment will appear directly in your document or with your document when returned to you by email or returned directly to you by the instructor.  Evaluations are private and can only be read by the student and professor.

OBSERVE THE DUE DATES!!   Check on due dates:

All written assignments may be revised and resubmitted. At least one assignment must be submitted in draft form and then, after receiving the instructor's comments and suggestions, it is to be revised and resubmitted for formal assessment. Students may resubmit their revised assignments no more than three times before the final day of class.

In all cases the written work must show evidence of the author's awareness of the materials made available in the online textbook and through the related Internet links found in the Online Textbook that is part of the course. Proper citations and accreditation are to be made evident in the body of the work. The learners are required to provide evidence of research and scholarship and to AVOID Plagiarism!

Criteria for evaluation of the written assignments are given in the Course Information document titled "How you will be evaluated." Other students will not view student written assignments anywhere within the course. Students may send drafts of their work to their classmates and discuss them through the use of email. They may discuss the assignment itself within the course in the Student Café.

Check on due dates:   Calendar

============================================================



From computer ethics and the ethics of AI towards an ethics of digital ecosystems

  • Open access
  • Published: 31 July 2021
  • Volume 2, pages 65–77 (2022)


  • Bernd Carsten Stahl, ORCID: orcid.org/0000-0002-4058-4456


Ethical, social and human rights aspects of computing technologies have been discussed since the inception of these technologies. In the 1980s, this led to the development of a discourse often referred to as computer ethics. More recently, since the middle of the 2010s, a highly visible discourse on the ethics of artificial intelligence (AI) has developed. This paper discusses the relationship between these two discourses and compares their scopes, the topics and issues they cover, their theoretical basis and reference disciplines, the solutions and mitigations options they propose and their societal impact. The paper argues that an understanding of the similarities and differences of the discourses can benefit the respective discourses individually. More importantly, by reviewing them, one can draw conclusions about relevant features of the next discourse, the one we can reasonably expect to follow after the ethics of AI. The paper suggests that instead of focusing on a technical artefact such as computers or AI, one should focus on the fact that ethical and related issues arise in the context of socio-technical systems. Drawing on the metaphor of ecosystems which is widely applied to digital technologies, it suggests preparing for a discussion of the ethics of digital ecosystems. Such a discussion can build on and benefit from a more detailed understanding of its predecessors in computer ethics and the ethics of AI.


1 Introduction

The development, deployment and use of digital technologies has long been recognised as having ethical implications. Based on initial reflections of those implications by seminal scholars such as Wiener [ 122 ], [ 123 ] and Weizenbaum [ 121 ], a stream of research and reflection on ethics and computers emerged. The academic field arising from this work, typically called computer ethics, was and remains a thriving but nevertheless relatively small field that managed to establish a body of knowledge, dedicated conferences, journals and research groups.

While computer ethics continues to be a topic of discussion, the dynamics of the ethical reflection of digital technology changed dramatically from approximately the middle of the 2010s when the concept of artificial intelligence (AI) (re-)gained international prominence. The assumption that AI was in the process of fundamentally changing many societal and business processes with manifest implications for most individuals, organisations and societies led to a plethora of research and policy initiatives aimed at understanding ethical issues of AI and finding ways of addressing them.

The assumption underlying this paper is that one can reasonably and transparently distinguish between the discourses on computer ethics and the one focusing on the ethics of AI. If this is the case, then it would be advantageous to participants in both discourses to better understand the differences and similarities between these two discourses. This paper, therefore, asks the research question: how and to what extent do the discourses of computer ethics and the ethics of AI differ from one another?

The paper is furthermore motivated by a second assumption, which is that ethical reflection of digital technologies will continue to develop and that there will be future discourses, based on novel technologies and their applications that go beyond both computer ethics and the ethics of AI. If this turns out to be true, then an understanding of the commonalities and persistent features of computer ethics and the ethics of AI may well provide insights into likely ethical concerns that can be expected to arise in the next generation of digital technologies and their applications. The second question that the paper seeks to answer is, therefore: what can be deduced about a general ethics of digital technologies by investigating computer ethics and the ethics of AI?

These are important questions for several reasons. Answering them facilitates or improves mutual awareness and understanding of computer ethics and ethics of AI. Such an understanding can help both discourses identify current gaps in existing ideas. For computer ethics scholars, this may be an avenue to contribute their work to the broader societal discourse on AI. For scholars involved in the ethics of AI debate, it may help to avoid repetition of settled discussions. But even more importantly, by comparing computer ethics and the ethics of AI, the paper can think beyond current discussions. A key contribution of the paper is the argument that an analysis of computer ethics and the ethics of AI allows for the identification of those aspects of the discourse that remain constant and are independent from specific technologies. The paper suggests that a weakness of both computer ethics and the ethics of AI is their focus on a particular technology or artefact, i.e. computers or AI. It argues that a better understanding of ethical issues can be achieved by taking seriously the systems nature of digital technologies. One stream of research that has not been prominent in the ethics-related debate is that of digital (innovation) ecosystems. By moving away from an artefact and looking at the ethics of digital ecosystems, it may be possible to proactively engage with novel and emerging technologies while the exact terminology to describe them is still being developed. This would allow for paying attention early to the ethical aspects of such technologies.

The paper proceeds as follows. The next section summarises the discourses on computer ethics and on the ethics of AI with a view to identifying both changing and constant aspects between these two. This includes a justification of the approach and a more detailed description of aspects and components of the discourses to be compared. This provides the basis for the description and critical comparison of the two discourses. The identification of overlaps and continuity provides the starting point for a discussion of a future-proof digital ethics.

2 Computer ethics and the ethics of AI

The argument of the paper rests on the assumption that one can reasonably distinguish between computer ethics and the ethics of AI. This assumption is somewhat problematic. A plausible reading is that the ethics of AI is simply a part or an extension of computer ethics. This paper therefore does not propose any categorical difference between computer ethics and the ethics of AI but simply suggests that it is an empirical phenomenon that these two discourses differ to some degree.

One argument that supports a distinction between computer ethics and the ethics of AI is the level of attention they receive. While many of the topics of interest to computer ethics, such as privacy, data protection or intellectual property, have raised societal and, thus, political interests, this has never led to the inclusion of computer ethics terminology into a public policy discourse. This is very different for the ethics of AI, which is not just a thriving topic of academic debate, but which is explicitly dealt with by numerous policy proposals [ 104 ]. A related aspect of the distinction refers to the participants in the discourse. Where computer ethics is to a large extent an academic topic, the ethics of AI draws much more on contributions from industry, media and policy.

This may suffice as a justification for the chosen approach. The validity of these observations is discussed in more detail below. Figure 1 aims to represent the logic of the research described in this paper.

Fig. 1: Representation of the research logic of the paper

The two blue ellipses on the left represent the currently existing discourses on computer ethics and the ethics of AI. The differences and similarities between these two are explored later in this section. From the insights thus generated, the paper will progress to the question of what can be learned about these existing discourses that can prepare the future discussion of the ethics of emerging digital technologies.

2.1 Methodology

The methodological basis of this paper is that of a literature review, more specifically of a comparison of two bodies of literature. Literature reviews are a key ingredient across all academic disciplines [ 42 ] and form at least part of most publications. There are numerous approaches to reviewing various bodies of literature that serve different purposes [ 115 ]. Rowe [ 106 ] suggests a typology for literature reviews along four different dimensions (goal with respect to theory, breadth, systematicity, argumentative strategy).

A central challenge for this paper is that the distinction between computer ethics and the ethics of AI is not clear-cut, but rather a matter of degree and emphasis. This is exacerbated by the fact that the terminology is ambiguous. So far, I have talked about computer ethics and the ethics of AI. Neither of these terms is used consistently. While the term computer ethics is well established, it is closely linked with others, such as ethics of ICT [ 105 ], information technology ethics [ 110 ] or cyberethics [ 111 ]. Computer ethics is closely related to information ethics to the point where there are several publications that include both terms in the title [ 56 ] and [ 120 ]. The link between computer ethics and information ethics is discussed in more detail under the scope of the topic below.

Just like there are different terms that overlap with computer ethics, there are related terms describing ethics of AI, such as responsible AI [ 15 , 38 , 45 , 118 ] or AI for good [ 17 , 69 ]. In addition, the term ethics is used inconsistently. It sometimes refers to ethics as a philosophical discipline with references to ethical theories. However, it often covers ad hoc concerns about particular situations or developments that are perceived as morally problematic. Many such issues could be equally well described as social concerns. Many of them also have a legal aspect, in particular where they pertain to established bodies of law, notably human rights law. The use of the term 'ethics' in this paper, therefore, is a shorthand for all these uses in the discourse.

The comparison of the discourses on computer ethics and ethics of AI, thus, requires criteria that allow one to determine the content of the two discourses. An important starting point for the delimitation of the computer ethics discourse is the fact that there are several published accounts that review and classify this discourse. These notably include work undertaken by Terry Bynum [ 27 , 28 , 29 ] but also other reflective accounts of the field [ 117 ]. There are several seminal publications that deserve to be mentioned as defining the discourse of computer ethics. Jim Moor [ 93 ] notably asked the question "what is computer ethics?", and Deborah Johnson [ 73 ] provided the answer in the first textbook on the topic, a work that was also initially published in 1985. The description of computer ethics in this paper takes its point of departure from these defining publications. It also takes into account other sources, which include a number of edited volumes, work published in relevant conferences (notably Computer Ethics Philosophical Enquiry (CEPE), Computers and Philosophy (CAP) and ETHICOMP) but also published accounts of ethics of computing in adjacent fields, such as information systems or computing [ 113 ].

The debate on the ethics of AI is probably more difficult to delineate than the one on computer ethics. However, there are some foundational texts and review articles that can help with the task. Müller's recent overview in the Stanford Encyclopedia [ 97 ] provides a good entry point. There are several review and overview papers, in particular of ethical principles [ 54 , 72 ]. In addition, there is a quickly growing literature covering several recent monographs [ 41 , 45 ] and several new journals, including the new Springer journal AI and Ethics [ 84 ]. These documents can serve as the starting point to delineate the discourse, which also covers many publications from neighbouring disciplines as well as policy and general media contributions. It should be clear that these criteria do not constitute a clear delineation and there will be many contributions that could count under both headings and some that may fit neither. However, despite the fuzziness of the demarcation line, this paper submits that a distinction between these discourses is possible to the point where it allows a meaningful comparison.

In order for such a comparison to be interesting, it requires a clarification of which aspects one can expect to differ, which is the subject of the following section.

2.2 Differences between computer ethics and the ethics of AI

This section starts with an overview of the aspects that are expected to differ between the two discourses and then discusses each of these in more detail. The obvious starting point for a comparison of the discourses on computer ethics and the ethics of AI is the scope of the discourse, in particular the technologies covered by it. This leads to the topics that are covered and the issues that are of defining interest to the discourse. The next area is the theoretical basis that informs the discourse and the reference disciplines that it draws from. Computer ethics and the ethics of AI may differ on the solutions to these issues and the mitigation strategies they propose. Finally, there is the question of the broader importance and impact of the discourses.

Figure 2 represents the different aspects of the two discourses that will now be analysed in more detail.

Fig. 2: Characteristics of the discourse

2.2.1 Scope: technology and its features

The question of the exact scope of both discourses has been the subject of reflection within the discourse itself and has varied over time. The early roots of computer ethics as represented by Wiener's [ 122 ] work was inspired by the initial developments of digital computing and informed by his experience of contributing to these during the Second World War. Wiener observed characteristics of these devices, such as an increased measure of autonomy and independence from humans which he saw as problematic. Similarly, Weizenbaum's [ 121 ] experience of natural language processing (an area that forms part of AI) led him to voice concerns about the potential social uses of technology (such as the ELIZA conversational system).

By the time the term "computer ethics" was coined in the 1980s, mainframe computers were already well established in businesses and organisations, and initial indications of personal computer use could be detected. The Apple II was launched in 1977, the BBC Micro and the IBM 5150 came to market in 1981, paving the way for wide-spread adoption of PCs and home computers. At this time, it was reasonably clear what constituted a computer and the discourse, therefore, spent little time on definitions of underlying technology and instead focused on the ethically problematic characteristics of the technology.

The initial clarity of the debate faded away because of technical developments. Further miniaturisation of computer chips, progress in networking, the development of the smartphone as well as the arrival of new applications such as social media and electronic commerce radically changed the landscape. At some point in the 2000s, so many consumer devices had integrated computing devices and capabilities that the description of something as a computer was no longer useful. This may explain the changing emphasis from the term "computer ethics" to "information ethics", which can be seen, for example, by the change of the title of Terry Bynum's [ 29 ] entry in the Stanford Encyclopedia of Philosophy which started out in 2001 as "Computer Ethics: Basic Concepts and Historical Overview" and was changed in 2008 to "Computer and Information Ethics". The difference between computer ethics and information ethics goes deeper than the question of technology and we return to it below, but Bynum's changed title is indicative of the problem of delimiting the scope of computer ethics in the light of rapid development of computing technologies.

The challenges of delimiting computer ethics are mirrored by the challenge of defining the scope of the ethics of AI. The concept of AI was coined in 1956 [ 88 ] in a funding proposal that was based on the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". It set out to explore "how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." These ambitions remain largely intact for current AI research, but they do not explain why ethics of AI became a pervasive discourse from the mid-2010s.

The history of AI (cf. [ 19 ]) includes a history of philosophical and ethical questions [ 31 ]. AI is a field of research, generally accepted to be a sub-field of computer science that developed several themes and bodies of theory, which point to different concepts of AI. Shneiderman [ 107 ] suggests a simple distinction between two goals of AI that is helpful to understand the conceptual challenge faced by the ethics of AI discourse. The two goals that Shneiderman sees for AI are: first, emulation to understand human abilities and then improve on them and, second, the application of technical methods to develop products and services. This distinction of goals aligns well with the well-established distinction between narrow and strong or general AI. Narrow AI aims to fulfil specifically described goals. In recent years, it has been hugely successful in the rapidly developing sub-field of machine learning [ 10 ], based on the implementation of deep learning through artificial neural networks and related technologies [ 114 ]. Narrow AI, in particular as realised in machine learning using neural networks to analyse and learn from large datasets, has roots going back decades. However, it is widely believed that these well-known technologies came to the fore because of advances in computing power, development of algorithms and the availability of large datasets [ 21 , 65 ].

In addition to this narrow AI aimed at solving practical problems, there is the long-standing aim to develop technologies with human-like abilities. These systems would be able to transfer learning across domains and are sometimes called artificial general intelligence [ 41 ]. Artificial general intelligence forms part of the earliest attempts to model intelligent behaviour through symbolic representations of reality [ 94 ], sometimes referred to as good old-fashioned AI or GOFAI [ 55 ]. It remains contested whether artificial general intelligence is achievable and, even if so, whether it could be done using current technological principles (i.e. digital computers and Turing machines) [ 56 ].

There are attempts to interpret the difference between narrow and general AI as a difference in temporal horizon, with narrow AI focusing on short-term goals, whereas general AI is seen as a long-term endeavour [ 13 , 32 ]. Notwithstanding the validity of this interpretation, the inclusion of narrow and general AI in the discussion means that its technical scope is large. It includes well-understood current technologies of machine learning with ethically relevant properties (e.g. need for large datasets, opacity of neural networks) as well as less determined future technologies that would display human-like properties. This breadth of the technical scope has important consequences for possible issues arising from the technology, as will be discussed below.

2.2.2 Topics and issues

The topics and issues discussed by both discourses cover all aspects of life where computers or AI have consequences for individuals and groups. It is, therefore, beyond the scope of this paper to provide a comprehensive overview of all topics discussed. Instead, this section indicates some key topics with the aim of showing which of them have changed over time and which have remained stable.

In the introduction to the 1985 special issue on computer ethics of the journal Metaphilosophy, the editor [ 46 ] stated that the central issue of computer ethics would be the replacement of humans by computers, in particular in tasks requiring judgment. It was clear at the time, however, that other issues were relevant as well, notably invasions of privacy, computer crime and topics related to the way computer professionals deal with clients and society, including ownership of programmes, responsibility for computer errors and the structure of professional codes of ethics. This structure is retained in the 2001 version of Bynum's [ 29 ] encyclopaedia entry, which lists the following issues: computers in the workplace, computer crime, privacy and anonymity, intellectual property, professional responsibility, globalisation and the metaethics of computer ethics. Picking up the discussion of the ethics of computing in the neighbouring discipline of information systems, Mason [ 87 ] proposed the acronym PAPA to point to key issues: privacy, accuracy, property and accessibility.

A more recent survey of the computing-oriented literature suggests that the topics discussed remain largely stable [ 113 ]. It may, therefore, not be surprising that there is much continuity from computer ethics in the ethics of AI debate. One way to look at this discussion is to distinguish between issues directly related to narrow AI, broader socio-technical concerns and longer-term questions. Current machine learning approaches require large datasets for training and validation and they are opaque, i.e. it is difficult to understand how input gets translated into output. This combination leads to concerns about privacy and data protection [ 26 , 47 ] as well as the widely discussed and interrelated questions of lack of transparency [ 3 , 109 ], accountability, bias [ 34 ] and discrimination [ 96 ]. In addition, current machine learning systems raise questions of reliability, security [ 7 , 10 , 25 ] and safety [ 45 ].
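
As one hedged illustration of how such bias and discrimination concerns are operationalised, the sketch below (Python, with invented data) computes a simple demographic parity check, i.e. whether a model's rate of favourable decisions differs across groups. This is only one of many fairness metrics discussed in the literature, and real audits use far larger samples.

```python
# A hypothetical, minimal illustration of one common bias check:
# demographic parity, i.e. comparing a model's positive-decision rate
# across groups. The data and the groups are invented for this sketch.
group = ["A", "A", "A", "B", "B", "B", "B"]   # protected attribute
decision = [1, 1, 0, 1, 0, 0, 0]              # model outputs (1 = favourable)

def selection_rate(g):
    # Fraction of favourable decisions received by members of group g.
    rows = [d for grp, d in zip(group, decision) if grp == g]
    return sum(rows) / len(rows)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```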

The impact of AI-enabled socio-technical systems on society and communities is covered well in the discourse. AI is a key enabler of the digital transformation of organisations and society, which may have significant impacts of ethical relevance. These include economic concerns, notably questions of employment [ 77 , 124 ] and labour relationships, including worker surveillance [ 97 ], as well as concerns about justice and distribution [ 96 ]. Digital transformation can affect political power constellations [ 98 ] and can both support and weaken citizen participation. Possible consequences of the use of AI include changes to the nature of warfare [ 103 ] and environmental impacts [ 99 ]. Concerns are also raised about how machines may enhance or limit human agency [ 18 , 40 ].

Two concepts that figure prominently in the AI ethics discourse are trust and trustworthiness. The AI HLEG [ 6 ] structured its findings and recommendations in a way that seems to suggest that ethics is considered in order to strengthen the trustworthiness of AI technologies, which then engenders trust and, thus, acceptance and use. This functional use of ethics is philosophically highly problematic but seems to be driven by a policy agenda that sees the desirability of AI as an axiom and ethics as a means to achieve targets for uptake.

Finally, there is some debate about the long-term issues related to artificial general intelligence. Due to the open question of whether current types of technology can achieve this [ 108 ], it is contested how much attention should be given to questions such as the singularity [ 80 ], superintelligence [ 22 ], etc. These questions do not figure prominently in current policy-oriented discussions, but they continue to attract interest in the scientific community and beyond.

The topics and issues discussed in computer ethics and the ethics of AI show a high level of consistency. Many of the discussions of computer ethics are continued or echoed in the ethics of AI. This includes questions of privacy and data protection and of security, but also wider societal consequences of technical developments. At the same time, some topics are less visible, have morphed, or have moved into different discourses. The computer ethics discourse, for example, had a strong stream of discussion of the ownership of data and computer code, with a heavy emphasis on the communal nature of intellectual property. This discussion has changed deeply: some aspects appear to be settled practice, such as ownership of content, which is now administered through structures based on business models that emerged in response to the competing views on intellectual property. Netflix, iTunes, etc. employ a distribution service and subscription model that appears to satisfy consumers, producers and intermediaries. Other aspects of ownership remain highly contested, such as the right to benefit from the secondary use of process data, which underpins what Zuboff [ 126 ] calls surveillance capitalism.

2.2.3 Theoretical basis and reference disciplines

While there is a high level of continuity in terms of issues and topics, the theoretical positions vary greatly between computer ethics and the ethics of AI. This may have to do with the reference disciplines [ 11 , 14 , 78 ], i.e. the academic disciplines in which the contributors to the discourses were originally trained or from which they adopt theoretical positions they apply to computing and AI [ 85 ].

Both computer ethics and the ethics of AI are highly interdisciplinary and draw from a range of reference disciplines. In both cases there is a strong influence of philosophy, which is not surprising given that ethics is a discipline of philosophy. Similarly, there is a strong presence of contributors from technical disciplines. While the computer ethics discourse draws on contributions from computer scientists, the ethics of AI has attracted attention from more specialised communities that work on AI, notably at present the machine learning community. The most prominent manifestation of this is the FAT/FAccT community that focuses on fairness, accountability and transparency ( https://facctconference.org/ ). There are also contributions from other academic fields, such as technology law and the social sciences, including science and technology studies. Some fields, such as information systems, are less visible than one could expect them to be in the current discourse [ 112 ].

While details of the disciplinary nature of the contributions to both discourses are difficult to assess, there are notable changes in the use of foundational concepts. In computer ethics, there is a strong emphasis on well-established ethical theories, notably duty-based theories [ 75 , 76 ], theories focusing on consequences of actions [ 16 , 89 ] as well as theories focusing on individual character and virtue [ 9 , 83 ]. Ethical theorising has of course not been confined to these, and there are examples of other ethical theories applied to computing, such as the ethics of care [ 4 , 60 ] or discourse ethics [ 91 ]. In addition, there have been proposals for ethical approaches uniquely suited to computing technologies, such as disclosive ethics [ 23 , 70 ].

The ethics of AI discourse also uses a rich array of ethical theories [ 82 ], but it displays an undeniable focus on principle-based ethical guidelines [ 72 ]. This approach is dominant in biomedical ethics [ 37 ], and its adoption by the ethics of AI discourse may be explained by the well-established biomedical ethics procedures, which promise practical ways of dealing with ethical issues, as well as by an increasing interest of the biomedical ethics community in computing and AI technologies. However, it should be noted that this reliance on principlism [ 39 ] is contested within the biomedical community [ 79 ] and has been questioned in the AI field [ 67 , 92 ], but it remains dominant at present.

A further significant difference between computer ethics and the ethics of AI is that the latter has a much stronger emphasis on the law. One aspect of this legal emphasis is the recognition that many of the issues discussed in the ethics of AI are well-established issues of human rights, e.g. privacy or the avoidance of discrimination and physical harm. There are, thus, numerous vocal contributors to the discourse who emphasise human rights as a source of normativity in the ethics of AI as well as a way to address issues [ 2 , 43 , 81 , 96 , 102 ]. This legal emphasis translates into a focus on legislation and regulation as a way of dealing with these issues, as discussed in the next section.

2.2.4 Solutions and mitigation

One can similarly observe some consistency and continuity, but also some discontinuity, with regard to proposals for addressing these issues. This is clearly a complex set of questions that depends on the issue in question and on the individual, group or organisation that is to deal with it. While it is, thus, not possible to provide a comprehensive overview of the different ways in which the issues can be resolved or mitigated, it is possible to highlight some differences between the two discourses [ 120 ].

One proposal that figured heavily in the computer ethics discourse but is less visible in the ethics of AI is that of professionalism [ 8 , 30 , 74 ]. While it was and remains contested whether and to what degree computer experts are, should be or would want to be professionals, the idea of institutionalising professionalism as a way to deal with ethical issues has driven the development of organisations that portray themselves as professional bodies for computing [ 24 , 62 ]. The uncertain status of computing as a profession is reflected in the status of AI, which can probably at best be regarded as a sub-profession.

Both discourses underline the importance of knowledge, learning and education as conditions for successfully navigating ethical questions [ 20 ]. Both ask what help can be provided to people working in the design and development of technology and aim to develop suitable methodologies [ 68 ]. This is the basis of various "by design" approaches [ 33 , 64 , 86 ] that build on the principles of value-sensitive design [ 58 , 85 ]. Specific methodologies for incorporating ethical considerations into organisational practice can be found both in the computer ethics debate [ 63 , 66 ] and in the ethics of AI discourse [ 7 , 45 , 48 ].

One area where the ethics of AI debate appears to be much more visible and impactful than computer ethics is that of legislation and regulation. This does not imply that the ethics of AI has a greater fundamental affinity to legislation; rather, it is based on the empirical observation that ethical (and other) issues of AI are perceived to be in need of legislation due to their potential impact (see next section). Rodrigues [ 104 ] provides an overview of recent legislative agendas. The most prominent example is probably the European Commission's proposed Regulation for AI [ 50 ], which would bring in sweeping changes to the AI field, mostly based on earlier ethical discussion. In addition to proposed legislation in various jurisdictions, there are proposals for the creation of new regulatory bodies [ 44 , 51 ] and international structures to govern AI [ 71 , 119 ]. It is probably not surprising that some actors in the AI field actively campaign against legislation; industry associations such as the Partnership on AI, as well as company codes of conduct, can be seen as ways of heading off legislation.

Computer ethics, on the other hand, also touched on and influenced legislative processes concerning topics in its field of interest, notably data protection and intellectual property. However, the attention paid to AI by legislators is much higher than it ever was to computers in general.

2.2.5 Importance and impact

One reason for the high prevalence of legislation and regulation with regard to AI is the apparent importance and impact of the technology. AI is generally described as having an unprecedented impact on most aspects of life, which calls for ethical attention. Notwithstanding the accuracy of this narrative, it is broadly accepted across academia, policy and broader societal discourse. It is also the mostly unquestioned driver for the engagement with ethics. Questions about the nature of AI, its characteristics, and its likely and certain consequences are dealt with under the implicit assumption that they must be dealt with due to the importance of the technology.

The computer ethics debate does not share this unquestioned assumption of the importance of its subject matter. In fact, it was a recurrent theme of computer ethics to ask whether the field was needed at all [ 57 , 116 ]. This is, of course, a reasonable question to ask. There are a number of fields of applied ethics, e.g. medical ethics, business ethics or environmental ethics, but there are few, if any, that focus on a particular artefact, such as a computer. So why would computer ethics be called for? Moor [ 93 ] famously proposed that it is logical malleability, the fact that a computer's eventual uses are not necessarily foreseen by its designer, that sets computers apart from other artefacts, such as cars or airplanes. This remains a strong argument that also applies to current AI. With the growing spread of computers, first in organisations and then through personal and mobile computing, which facilitated everyday applications including electronic commerce and social media, computer ethics could point to the undeniable impact of computing technology, which paved the way for the now ubiquitous reference to the impact of AI.

3 Towards an ethics of digital ecosystems

So far, this article has suggested that computer ethics and the ethics of AI can be read as two related but distinct discourses, and it has endeavoured to elucidate the differences and similarities between the two. While this should have convinced the reader that such a distinction is possible and helpful in understanding both discourses, it is also clear that other interpretations are possible. The ethics of AI can be seen as a continuation of the computer ethics discourse that has attracted new participants and led to a shift of topics, positions and impact. Both interpretations allow for a critical analysis of the two discourses with a view to identifying their shared strengths and weaknesses, and for an exploration of what can be learned from them to prepare for the next discourse that can be expected to arise.

This question is motivated by the assumption that the ethics of AI discourse is not the final step in the discussion. AI is many things, but it is also currently a hype and an academic fashion. This is not to deny its importance but to recognise that academia, like policy and general discussion, follows the technology hype cycle [ 52 ] and that attention to technologies, management models and research approaches has characteristics of fashion cycles [ 1 , 12 ]. It is, therefore, reasonable to expect that the current focus on AI will peak and be replaced by another topic of debate. The purpose of this section is to discuss what may emerge from and follow the ethics of AI discourse and how this next stage of the debate can best profit from insights generated by the computer ethics and ethics of AI discourses.

The term "computer ethics" lost some of its appeal when computing technologies became pervasive and integrated into many other devices. When a computer is in every phone, car and even most washing machines and refrigerators, then the term "computer ethics" becomes too fuzzy to be useful. A similar fate is likely to befall AI, or may already have done so. On the one hand, "AI" as a term is already too broad, as it covers everything from specific machine learning techniques to fictional artificial general intelligence. On the other hand, it is too narrow, given that it excludes many of the current and emerging technologies that anchor part of its future impact, such as quantum computing, neuromorphic technologies, the Internet of Things, edge computing, etc. And we can of course expect new technologies and terminology to emerge to add to this complexity.

One weakness that both computer ethics and the ethics of AI share is their apparent focus on a particular piece of technology. Ethical, social, human rights and other issues never arise from a technology per se, however, but result from the use of technologies by humans in societal, organisational and other settings. This is not to suggest that technologies are value neutral, but that the affordances they possess [ 59 , 100 ] can play out differently in different environments.

To prepare for the next wave of the ethics of technology discussion that will succeed the ethics of AI, it may, therefore, be advisable to take a slightly different perspective, one that reduces the focus on particular technologies. One family of such perspectives is based on systems theory [ 99 ]. A number of such theories have been applied to computing technologies, such as complex adaptive systems [ 90 ] or soft systems [ 35 , 36 ].

A possible use of the systems concept to understand the way technology and social environments interact is that of an ecosystem. The metaphor of ecosystems to describe AI and its broader social and ethical consequences has already been employed widely by scholars [ 53 ] as well as policymakers. The European Commission, for example, in the White Paper [ 49 ] that prepared the proposed Regulation [ 50 ], framed European AI policy in terms of an ecosystem of excellence and an ecosystem of trust, with the latter representing ethical, social and legal concerns. The OECD [ 101 ] similarly proposes the development of a digital ecosystem for AI. The World Economic Forum [ 125 ] underlines the logic of this terminology when it emphasises the importance of a system-wide approach if responses to the ethics of AI are to be successful.

From a scholarly perspective, it is interesting to observe that a body of research has developed since the mid-1990s that uses the concept of an ecosystem to describe how technologies are used in the economic system [ 5 , 61 , 95 ]. This discourse is of interest to this paper because it has developed a rich set of theoretical positions, substantive insights and methodologies that can be used to understand specific socio-technical systems. At the same time, there has been very little emphasis in this discourse on the ethical and normative aspects of these ecosystems. This paper does not offer the space to pursue this argument in more detail, but it can suggest that combining these different perspectives and looking at the ethics of digital (innovation) ecosystems can provide a helpful new perspective.

The benefit of using such a digital ecosystems-based approach is that it moves away from a particular technology and opens the view to the way in which technical developments interact with social developments. This broadens the view to encompass application areas, social structures and societal environments as well as technical affordances. Actual ethical concerns are affected by all of these different factors and the dynamics of their relationships.

The proposal arising from this insight is, thus, that, to prepare the next wave of the ethics and technology discussion, the focus should not be on predicting the next big technology but rather on exploring how ethical issues arise in socio-technical (innovation) ecosystems. This is a perspective that can be employed right now and used to better understand the ethics of AI or of computing more generally. It invites detailed empirical observations of the social realities of the development, deployment and use of current and past technology. It is similarly open to sophisticated ethical and theoretical investigations. This understanding can then be the baseline for exploring the consequences of technological and social change. Making use of this perspective for the current ethics of AI debate would have the great benefit that the question of adequately defining AI loses its urgency. The key question then becomes how socio-technical innovation ecosystems develop, which is a question that is open to the inclusion of other types of technology, from quantum computing to well-established computational and other technological artefacts.

Taking this perspective, which might be called the "ethics of digital ecosystems", moves beyond individual technologies and allows keeping track of and continuing established ethical discussions. An ethical analysis of digital ecosystems will need to delineate the systems in question, which is required to determine the capabilities of these ecosystems. The capabilities, in turn, are what gives rise to possible social applications and the resulting benefits and concerns. Whatever the next technological hype will be, it is a safe bet that it will continue at least some trends from the past and that the corresponding ethical debates will remain valid. For example, it is plausible that future digital technologies will make use of, analyse and produce personal data, hence continuing the need for considerations of privacy and data protection. The security, safety and reliability of any future socio-technical system are similarly a good bet in terms of future relevance.

The focus on the broader innovation ecosystem furthermore means that many of the currently discussed topics can be better framed as relevant topics of discussion. Questions of political participation, economic justice or human autonomy are much more easily understood as aspects of socio-technical systems than as intrinsically linked to a particular technology. The change of perspective towards digital ecosystems can, thus, strengthen the plausibility and relevance of some of the current topics of debate.

The same can be said for the discussion of possible mitigations. By focusing on digital innovation ecosystems, the breadth of possible mitigation strategies automatically increases. In computer ethics and the ethics of AI, the focus is on technical artefacts, and there is a temptation to link both ethical issues and responses to those issues to the artefacts themselves. This is where approaches such as value-sensitive design or ethics by design derive their legitimacy. The move away from the artefact focus towards the socio-technical ecosystem does not invalidate such approaches, but it clearly shows that the broader context needs to be included, thus opening up the discussion to regulation, legislation, democratic participation and societal debate as means of shaping innovation ecosystems.

The move beyond the ethics of AI towards an ethics of digital innovation ecosystems will further broaden the disciplines and stakeholder groups involved in the discussion. Those groups who have undertaken research on computer ethics will remain important, as will the additional groups that have developed or moved to exploring the ethics of AI. However, the move towards digital innovation ecosystems makes it clear that additional perspectives will be required to get a full understanding of potential problems and solutions. Innovation ecosystem research is done in fields like business studies and information systems, which have much to contribute but have traditionally had limited visibility in computer ethics and the ethics of AI. Such a broadening of the disciplines and fields involved suggests that the range of theoretical perspectives is also likely to increase. Traditional theories of philosophical ethics will doubtlessly remain relevant, and the focus on mid-level principles that the ethics of AI has promoted is similarly likely to remain important for guiding ethical reflection. However, a broader range of theories is likely to be applied, including systems theories, theories from business and organisational studies, and the literature on innovation ecosystems.

4 Conclusion

This paper started from the intuition that there is a noticeable difference between the discourses on computer ethics and the ethics of AI. It explored this difference with a view to examining how understanding it can help us prepare for the inevitable next discourse, which will follow the current discussion of the ethics of AI. The analysis of the two discourses has outlined that there are notable differences in terms of the scope of the discussion, topics and issues, theoretical basis and reference disciplines, solutions and mitigations, and expected impacts. It is, thus, legitimate to draw a dividing line between the two discourses. However, it has also become clear that there is much continuity and overlap, and that to a significant degree, the ethics of AI discourse is a continuation and extension of the computer ethics discourse. This part of the analysis presented in the paper should help participants in both discourses to see similarities and discontinuities more clearly and to appreciate where research has already been done that can benefit the respective other discourse.

The exact insights to be gained from the review of the two discourses clearly depend on the prior knowledge of the observer. Individuals who are intimately familiar with both discourses may be aware of all the various angles. However, participants in the computer ethics discourse who have not followed the ethics of AI debate can find insights with regard to current topics and issues, e.g. the broader socio-economic debates that surround AI. They can similarly benefit from an understanding of how biomedical principlism is being applied to AI, which may offer avenues of impact, solutions and mitigations that computer ethics tended to struggle with. Similarly, a new entrant to the ethics of AI debate may benefit from an appreciation of computer ethics by realising that many of the topics have a decades-long history and that there are numerous ethical positions and mitigation structures that are well established and do not need to be reinvented.

Following from these insights, the paper then moved to the question of what the next discourse is likely to be. This part of the paper is driven by the recognition that the emphasis on a particular technology or family of technologies, be this computers or AI, is not particularly helpful. Technologies unfold their ethical benefits and problems when deployed and used in the context of socio-technical systems. It is less the affordances of a technology per se than the way in which those affordances evolve in practical contexts that is of interest to ethical reflection. There are numerous ways in which these socio-technical systems can be described, and this paper has proposed that the concept of innovation ecosystems may offer one suitable approach.

The outcome of the paper is, thus, the suggestion to start to prepare the discourse of the ethics of digital innovation ecosystems. This will again be a somewhat different discourse from the ones on computer ethics and the ethics of AI, but can also count as a continuation of the former two. The shift of the topic from computing or AI gives this discourse the flexibility to accommodate existing and emerging technologies from quantum computing to IoT without requiring a major shift of the debate. Maybe more importantly, it will require a more focused attention to the social side of innovation ecosystems, which means that aspects like application area and the local and cultural context of use will figure prominently.

By calling for this shift of the debate, the paper provides the basis for such a shift and can help steer current debates in this direction. This is particularly necessary with regard to the ethics of AI, which may otherwise be locked into mitigation strategies, ranging from legislation and regulation to standardisation and organisational practice, that focus on the concept of AI and may misdirect efforts away from the areas of greatest need.

This shift of the debate and the attention to the ethics of innovation ecosystems will not be a panacea. The need to delimit the subject of debate will remain, which means that the exact content and membership of an innovation ecosystem that raises ethical questions will continue to require clarification. Systems-based approaches raise questions of individual agency and the locus of ethics, which the dominant ethical theories may find difficult to answer. The innovation ecosystems construct is also just an umbrella term underneath which there will be many specific innovation ecosystems, which means that attention to the empirical realisation of such systems will need to grow.

Despite the fact that this shift of the debate will require significant additional efforts, it is still worth considering. The currently ubiquitous discussion of the ethics of AI will continue for the foreseeable future. At the same time it is already visibly reaching its limitations, for example by including numerous ethical issues that are not unique to AI. In order for the discussion to remain specific and allow the flexibility to react to future developments, it will need to reconsider its underpinnings. This paper suggests that this can be achieved by refocusing its scope and explicitly embracing digital innovation ecosystems as the subject of ethical reflection. Doing so will ensure that many of the lessons that have been learned over years and decades of working on the ethics of computing and AI will remain present and relevant, and that there is a well-established starting point from which we can engage with the next generations of digital technologies to ensure that their creation and use benefit humanity.

Abrahamson, E.: Management fashion. Acad. Manag. Rev. 21 (1), 254–285 (1996)

Access Now.: Human Rights in the Age of Artificial Intelligence. Access Now. https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf (2018)

Access Now Policy Team.: The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems. Access No. https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf (2018)

Adam, A.: Computer ethics in a different voice. Inf. Organ. 11 (4), 235–261 (2001)

Adner, R.: Match your innovation strategy to your innovation ecosystem. Harv. Bus. Rev. 84 (4), 98–107 (2006)

AI HLEG.: Ethics Guidelines for Trustworthy AI. European Commission - Directorate-General for Communication. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (2019)

AIEI Group.: From Principles to Practice—An Interdisciplinary framework to operationalise AI ethics (p. 56). VDE / Bertelsmann Stiftung. https://www.ai-ethics-impact.org/resource/blob/1961130/c6db9894ee73aefa489d6249f5ee2b9f/aieig---report---download-hb-data.pdf (2020)

Albrecht, B., Christensen, K., Dasigi, V., Huggins, J., Paul, J.: The Pledge of the computing professional: recognizing and promoting ethics in the computing professions. SIGCAS Comput. Soc. 42 (1), 6–8 (2012). https://doi.org/10.1145/2422512.2422513

Aristotle.: The Nicomachean Ethics. Filiquarian Publishing, LLC (2007)

Babuta, A., Oswald, M., & Janjeva, A.: Artificial Intelligence and UK National Security—Policy Considerations [Occasional Paper]. Royal United Services Institute for Defence and Security Studies. https://rusi.org/sites/default/files/ai_national_security_final_web_version.pdf (2020)

Baskerville, R.L., Myers, M.D.: Information systems as a reference discipline. MIS Q. 26 (1), 1–14 (2002)

Baskerville, R. L., & Myers, M. D.: Fashion waves in information systems research and practice. MIS Quarterly, 647–662 (2009)

Baum, S.D.: Reconciliation between factions focused on near-term and long-term artificial intelligence. AI & Soc. 33 (4), 565–572 (2018). https://doi.org/10.1007/s00146-017-0734-3

Benbasat, I., Weber, R.: Research commentary: Rethinking "diversity" in information systems research. Inf. Syst. Res. 7 (4), 389 (1996)

Benjamins, R.: A choices framework for the responsible use of AI. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00012-5

Bentham, J.: An Introduction to the Principles of Morals and Legislation. Dover Publications Inc (1789)

Berendt, B.: AI for the Common Good?! Pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics 10 (1), 44–65 (2019). https://doi.org/10.1515/pjbr-2019-0004

Boddington, P.: AI and moral thinking: How can we live well with machines to enhance our moral agency? AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00017-0

Boden, M. A.: Artificial Intelligence: A Very Short Introduction (Reprint edition). OUP Oxford (2018)

Borenstein, J., Howard, A.: Emerging challenges in AI and the need for AI ethics education. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00002-7

Borges, A.F.S., Laurindo, F.J.B., Spínola, M.M., Gonçalves, R.F., Mattos, C.A.: The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions Int. J. Inf. Manage. (2020). https://doi.org/10.1016/j.ijinfomgt.2020.102225

Bostrom, N.: Superintelligence: Paths, Dangers, Strategies (Reprint edition). OUP Oxford (2016)

Brey, P.: Values in technology and disclosive computer ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 41–58). Cambridge University Press (2010)

Brinkman, B., Flick, C., Gotterbarn, D., Miller, K., Vazansky, K., Wolf, M.J.: Listening to Professional Voices: Draft 2 of the ACM Code of Ethics and Professional Conduct. Commun. ACM 60 (5), 105–111 (2017). https://doi.org/10.1145/3072528

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Héigeartaigh, S. Ó., Beard, S., Belfield, H., Farquhar, S., Amodei, D.: The malicious use of artificial intelligence: forecasting, prevention, and mitigation. http://arxiv.org/abs/1802.07228 (2018)

Buttarelli, G.: Choose Humanity: Putting Dignity back into Digital [Opening Speech]. 40th Edition of the International Conference of Data Protection Commissioners, Brussels. https://www.privacyconference2018.org/system/files/2018-10/Choose%20Humanity%20speech_0.pdf (2018)

Bynum, T.W.: Computer ethics: Its birth and its future. Ethics Inf. Technol. 3 (2), 109–112 (2001). https://doi.org/10.1023/A:1011893925319

Bynum, T. W.: The historical roots of information and computer ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 20–38). Cambridge University Press (2010)

Bynum, T. W.: Computer and Information Ethics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2015/entries/ethics-computer (2018)

Bynum, T. W., & Rogerson, S.: Computer ethics and professional responsibility: introductory text and readings. WileyBlackwell (2003)

Capurro, R.: The Age of Artificial Intelligences: A Personal Reflection. International Review of Information Ethics, 28. https://informationethics.ca/index.php/irie/article/view/388 (2020)

Cave, S., ÓhÉigeartaigh, S.S.: Bridging near- and long-term concerns about AI. Nature Machine Intelligence 1 (1), 5–6 (2019). https://doi.org/10.1038/s42256-018-0003-2

Cavoukian, A.: Privacy by design: The 7 foundational principles. Information and privacy commissioner of Ontario, Canada. http://dataprotection.industries/wp-content/uploads/2017/10/privacy-by-design.pdf (2009)

CDEI.: Interim report: Review into bias in algorithmic decision-making. Centre for Data Ethics and Innovation. https://www.gov.uk/government/publications/interim-reports-from-the-centre-for-data-ethics-and-innovation/interim-report-review-into-bias-in-algorithmic-decision-making (2019)

Checkland, P., Poulter, J.: Learning for action: A short definitive account of soft systems methodology and its use for practitioner, teachers, and students. Wiley (2006)

Checkland, P., & Poulter, J.: Soft systems methodology. In Systems approaches to managing change: A practical guide (pp. 191–242). Springer (2010)

Childress, J.F., Beauchamp, T.L.: Principles of biomedical ethics. Oxford University Press (1979)

Clarke, R.: Principles and Business Processes for Responsible AI. Comput. Law Secur. Rev. 35 (4), 410–422 (2019)

Clouser, K.D., Gert, B.: A Critique of Principlism. J. Med. Philos. 15 (2), 219–236 (1990). https://doi.org/10.1093/jmp/15.2.219

Coeckelbergh, M.: Technology, Narrative and Performance in the Social Theatre. In D. Kreps (Ed.), Understanding Digital Events: Bergson, Whitehead, and the Experience of the Digital (1 edition, pp. 13–27). Routledge (2019)

Coeckelbergh, M.: AI Ethics. The MIT Press (2020)

Cooper, H. M.: Synthesizing research: A guide for literature reviews. Sage (1998)

Council of Europe.: Unboxing artificial intelligence: 10 steps to protect human rights. https://www.coe.int/en/web/commissioner/view/-/asset_publisher/ugj3i6qSEkhZ/content/unboxing-artificial-intelligence-10-steps-to-protect-human-rights (2019)

Council of Europe.: CAHAI - Ad hoc Committee on Artificial Intelligence. Artificial Intelligence. https://www.coe.int/en/web/artificial-intelligence/cahai (2020)

Dignum, V.: Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way (1st ed. 2019 edition). Springer (2019)

Editor.: Editor’s Introduction. Metaphilosophy, 16(4), 263–265 (1985)

EDPS.: EDPS Opinion on the European Commission’s White Paper on Artificial Intelligence – A European approach to excellence and trust (Opinion 4/2020) (Opinion No. 4/2020). EDPS. https://edps.europa.eu/sites/edp/files/publication/20-06-19_opinion_ai_white_paper_en.pdf (2020)

Eitel-Porter, R.: Beyond the promise: Implementing ethical AI. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00011-6

European Commission.: White Paper on Artificial Intelligence: A European approach to excellence and trust (White Paper COM(2020) 65 final). https://ec.europa.eu/info/files/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (2020)

European Commission.: Proposal for a Regulation on a European approach for Artificial Intelligence (COM(2021) 206 final). European Commission. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence (2021)

European Parliament.: DRAFT REPORT with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)). European Parliament, Committee on Legal Affairs. https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/JURI/PR/2020/05-12/1203395EN.pdf (2020)

Fenn, J., & Lehong, H.: Hype Cycle for Emerging Technologies. Gartner. http://www.gartner.com/technology/research/hype-cycles/index.jsp (2011)

Findlay, M., & Seah, J.: An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections. 2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G), 192–197 (2020). https://doi.org/10.1109/AI4G50087.2020.9311069

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M.: Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI. https://dash.harvard.edu/handle/1/42160420 (2020)

Floridi, L.: Information ethics: On the philosophical foundation of computer ethics. Ethics Inf. Technol. 1 (1), 33–52 (1999)

Floridi, L. (ed.): The Cambridge Handbook of Information and Computer Ethics. Cambridge University Press (2010)

Floridi, L., Sanders, J.W.: Mapping the foundationalist debate in computer ethics. Ethics Inf. Technol. 4 (1), 1–9 (2002)

Friedman, B., Kahn, P., & Borning, A.: Value Sensitive Design and Information Systems. In P. Zhang & D. Galletta (eds.), Human-Computer Interaction in Management Information Systems: Foundations. M.E Sharpe, Inc (2006)

Gibson, J. J.: The theory of affordances. In R. E. Shaw & J. D. Bransford (Eds.), Perceiving, acting and knowing (pp. 67–82). Lawrence Erlbaum Associates (1977)

Gilligan, C.: In a Different Voice: Psychological Theory and Women’s Development (Reissue). Harvard University Press (1990)

Gomes, L. A. de V., Facin, A. L. F., Salerno, M. S., & Ikenami, R. K.: Unpacking the innovation ecosystem construct: Evolution, gaps and trends. Technological Forecasting and Social Change, 136, 30–48 (2018). https://doi.org/10.1016/j.techfore.2016.11.009

Gotterbarn, D., Miller, K., Rogerson, S.: Computer society and ACM approve software engineering code of ethics. Computer 32 (10), 84–88 (1999)

Gotterbarn, D., & Rogerson, S.: Responsible risk analysis for software development: Creating the software development impact statement. Communications of AIS, 15 , 730–750 (2005). https://doi.org/10.17705/1CAIS.01540

Gürses, S., Troncoso, C., & Diaz, C.: Engineering Privacy by Design. Conference on Computers, Privacy & Data Protection (CPDP) (2011)

Hall, W., & Pesenti, J.: Growing the artificial intelligence industry in the UK. Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial Strategy. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/652097/Growing_the_artificial_intelligence_industry_in_the_UK.pdf (2017)

Harris, I., Jennings, R.C., Pullinger, D., Rogerson, S., Duquenoy, P.: Ethical assessment of new technologies: A meta-methodology. J. Inf. Commun. Ethics Soc. 9 (1), 49–64 (2011). https://doi.org/10.1108/14779961111123223

Hickok, M.: Lessons learned from AI ethics principles for future actions. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00008-1

Huff, C., Martin, C.D.: Computing consequences: A framework for teaching ethical computing. Commun. ACM 38 (12), 75–84 (1995)

International Telecommunication Union.: AI for Good Global Summit Report 2017. International Telecommunication Union. https://www.itu.int/en/ITU-T/AI/Documents/Report/AI_for_Good_Global_Summit_Report_2017.pdf (2017)

Introna, L.D.: Disclosive Ethics and Information Technology: Disclosing Facial Recognition Systems. Ethics Inf. Technol. 7 (2), 75–86 (2005)

Jelinek, T., Wallach, W., Kerimi, D.: Policy brief: The creation of a G20 coordinating committee for the governance of artificial intelligence. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00019-y

Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nature Machine Intelligence 1 (9), 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2

Johnson, D. G.: Computer Ethics (3rd ed.). Prentice Hall (2001)

Johnson, D.G.: Computing ethics Computer experts: Guns-for-hire or professionals? Commun. ACM 51 (10), 24–26 (2008)

Kant, I.: Kritik der praktischen Vernunft. Reclam, Ditzingen (1788)

Kant, I.: Grundlegung zur Metaphysik der Sitten. Reclam, Ditzingen (1797)

Kaplan, A., Haenlein, M.: Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 62 (1), 15–25 (2019)

Keen, P.: MIS research: Reference disciplines and a cumulative tradition. Proceedings of the First International Conference on Information Systems (1980)

Klitzman, R.: The Ethics Police?: The Struggle to Make Human Research Safe (1 edition). OUP USA (2015)

Kurzweil, R.: The Singularity is Near. Gerald Duckworth & Co Ltd (2006)

Latonero, M.: Governing artificial intelligence: Upholding human rights & dignity. Data & Society. https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf (2018)

Lauer, D.: You cannot have AI ethics without ethics. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00013-4

MacIntyre, A. C.: After virtue: A study in moral theory. University of Notre Dame Press (2007)

MacIntyre, J., Medsker, L., Moriarty, R.: Past the tipping point? AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00016-1

Manders-Huits, N., & van den Hoven, J.: The Need for a Value-Sensitive Design of Communication Infrastructures. In P. Sollie & M. Düwell (Eds.), Evaluating New Technologies: Methodological Problems for the Ethical Assessment of Technology Developments (pp. 51–62). Springer (2009)

Martin, C.D., Makoundou, T.T.: Taking the high road ethics by design in AI. ACM Inroads 8 (4), 35–37 (2017)

Mason, R.O.: Four ethical issues of the information age. MIS Q. 10 (1), 5–12 (1986)

McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E.: A proposal for the Dartmouth summer research project on artificial intelligence, august 31, 1955. AI Mag. 27 (4), 12–12 (2006)

Mill, J. S.: Utilitarianism (2nd Revised edition). Hackett Publishing Co, Inc (1861)

Miller, J. H., & Page, S. E.: Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press (2007)

Mingers, J., Walsham, G.: Towards ethical information systems: The contribution of discourse ethics. MIS Q. 34 (4), 833–854 (2010)

Mittelstadt, B.: Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, (2019) https://doi.org/10.1038/s42256-019-0114-4

Moor, J.H.: What is computer ethics. Metaphilosophy 16 (4), 266–275 (1985)

Moor, J.H., Bynum, T.W.: Introduction to cyberphilosophy. Metaphilosophy 33 (1/2), 4–10 (2002)

Moore, J.F.: Predators and prey: A new ecology of competition. Harv. Bus. Rev. 71 (3), 75–86 (1993)

Muller, C.: The Impact of Artificial Intelligence on Human Rights, Democracy and the Rule of Law (CAHAI (2020)06-fin). Council of Europe, Ad Hoc Committee on Artificial Intelligence (CAHAI) (2020). https://rm.coe.int/cahai-2020-06-fin-c-muller-the-impact-of-ai-on-human-rights-democracy-/16809ed6da

Müller, V. C.: Ethics of Artificial Intelligence and Robotics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020). Metaphysics Research Lab, Stanford University (2020) https://plato.stanford.edu/archives/fall2020/entries/ethics-ai/

Nemitz, P.: Constitutional democracy and technology in the age of artificial intelligence. Phil. Trans. R. Soc. A 376 (2133), 20180089 (2018). https://doi.org/10.1098/rsta.2018.0089

Nishant, R., Kennedy, M., Corbett, J.: Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. Int. J. Inf. Manage. 53 , 102104 (2020). https://doi.org/10.1016/j.ijinfomgt.2020.102104

Norman, D.A.: Affordance, conventions, and design. Interactions 6 (3), 38–43 (1999). https://doi.org/10.1145/301153.301168

OECD.: Recommendation of the Council on Artificial Intelligence [OECD Legal Instruments]. OECD (2019). https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

Raso, F. A., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Kim, L.: Artificial Intelligence & Human Rights: Opportunities & Risks (SSRN Scholarly Paper ID 3259344). Social Science Research Network (2018). https://papers.ssrn.com/abstract=3259344

Richards, L., Brockmann, K., & Boulanini, V.: Responsible Artificial Intelligence Research and Innovation for International Peace and Security. Stockholm International Peace Research Institute (2020). https://reliefweb.int/sites/reliefweb.int/files/resources/sipri_report_responsible_artificial_intelligence_research_and_innovation_for_international_peace_and_security_2011.pdf

Rodrigues, R.: Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology 4 , 100005 (2020). https://doi.org/10.1016/j.jrt.2020.100005

Rogerson, S.: Ethics and ICT. In R. D. Galliers & W. Currie (Eds.), The Oxford Handbook of Management Information Systems: Critical Perspectives and New Directions (pp. 601–622). OUP Oxford (2011)

Rowe, F.: What literature review is not: Diversity, boundaries and recommendations. European Journal of Information Systems, 23(3), 241–255 (2014). https://doi.org/10.1057/ejis.2014.7

Shneiderman, B.: Design Lessons From AI’s Two Grand Goals: Human Emulation and Useful Applications. IEEE Transactions on Technology and Society 1 (2), 73–82 (2020). https://doi.org/10.1109/TTS.2020.2992669

Smith, B. C.: The Promise of Artificial Intelligence: Reckoning and Judgment. The MIT Press (2019)

Spiegelhalter, D.: Should We Trust Algorithms? Harvard Data Science Review (2020). https://doi.org/10.1162/99608f92.cb91a35a

Spinello, R. A.: Case Studies in Information Technology Ethics (2nd edition). Pearson (2002)

Spinello, R. A., & Tavani, H. T.: Readings in CyberEthics. Jones and Bartlett Publishers, Inc (2001)

Stahl, B. C., & Markus, M. L.: Let’s claim the authority to speak out on the ethics of smart information systems. MIS Quarterly, 45(1), 33–36 (2021). https://doi.org/10.25300/MISQ/2021/15434.1.6

Stahl, B. C., Timmermans, J., & Mittelstadt, B. D.: The Ethics of Computing: A Survey of the Computing-Oriented Literature. ACM Comput. Surv. 48(4), 55:1–55:38 (2016). https://doi.org/10.1145/2871196

Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., & Kraus, S.: Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel. Stanford University, Stanford, CA. https://ai100.stanford.edu/2016-report (2016)

Tate, M., Furtmueller, E., Evermann, J., & Bandara, W.: Introduction to the Special Issue: The Literature Review in Information Systems. Communications of the Association for Information Systems, 37(1) (2015). http://aisel.aisnet.org/cais/vol37/iss1/5

Tavani, H.: The foundationalist debate in computer ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 251–270). Cambridge University Press (2010)

Tavani, H.T.: The uniqueness debate in computer ethics: What exactly is at issue, and why does it matter? Ethics and Inf. Technol. 4 (1), 37–54 (2002)

Tigard, D.W.: Responsible AI and moral responsibility: A common appreciation. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00009-0

Wallach, W., Marchant, G.: Toward the Agile and Comprehensive International Governance of AI and Robotics [point of view]. Proc. IEEE 107 (3), 505–508 (2019). https://doi.org/10.1109/JPROC.2019.2899422

Weckert, J., & Adeney, D. (Eds.). Computer and Information Ethics. Greenwood Press (1997)

Weizenbaum, J.: Computer Power and Human Reason: From Judgement to Calculation (New edition). W. H. Freeman & Co Ltd (1977)

Wiener, N.: The human use of human beings. Doubleday (1954)

Wiener, N.: God and Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion. MIT Press (1964)

Willcocks, L.: Robo-Apocalypse cancelled? Reframing the automation and future of work debate. J. Inf. Technol. 35 (4), 286–302 (2020). https://doi.org/10.1177/0268396220925830

World Economic Forum.: Responsible Use of Technology [White paper]. WEB (2019). http://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology.pdf

Zuboff, S.: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books (2019)


This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3) and Grant Agreement No. 786641 (SHERPA).

Author information

Authors and affiliations

Centre for Computing and Social Responsibility, De Montfort University, The Gateway, Leicester, LE19BH, UK

Bernd Carsten Stahl


Corresponding author

Correspondence to Bernd Carsten Stahl .

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Stahl, B.C. From computer ethics and the ethics of AI towards an ethics of digital ecosystems. AI Ethics 2 , 65–77 (2022). https://doi.org/10.1007/s43681-021-00080-1

Received : 03 May 2021

Accepted : 01 July 2021

Published : 31 July 2021

Issue Date : February 2022

DOI : https://doi.org/10.1007/s43681-021-00080-1


Keywords

  • Computer ethics
  • Ethics of AI
  • Artificial intelligence
  • Digital ethics

Computer Ethics

What does the word 'ethics' mean? The dictionary defines ethics as the moral principles that govern the behavior of a group or individual. But not everyone in society chooses to live an absolutely moral life. Ethics are, in effect, the unwritten code of conduct that every individual should follow, and such codes are recognized as correct by the members of a particular profession. Similarly, for computer users, computer ethics is a set of principles that regulates the use of computers. Computer ethics addresses issues related to the misuse of computers and how it can be prevented. It primarily imposes the ethical use of computing resources, including methods to prevent the unauthorized distribution of digital content. The core issues surrounding computer ethics concern the use of the internet, internet privacy, copyrighted content, software and related services, and user interaction with websites. The Internet has changed our lifestyle and become a part of our life. It allows us to communicate with people in other parts of the world, collect information on any topic, take part in social networking, and much more. But at the same time, some people are always trying to cheat or harm others.

Advantages of using the internet:

  • The Internet lets us communicate with a person in any part of the world.
  • We can easily collect information on any topic from the World Wide Web.
  • Various types of business are carried out over the Internet, referred to as e-commerce: everything from booking railway, flight, and movie tickets to purchasing any kind of merchandise is possible via the Internet.
  • The Internet enables social networking, that is, it lets us share our information, emotions, and feelings with our friends and relatives.

Disadvantages of using the internet:

  • Some people try to obtain personal information (such as bank details, addresses, and contact details) over the Internet and use it for unethical ends.
  • Malware and viruses gain access to networks quickly and ultimately harm the personal computers (PCs) connected to them.
  • Some people run deceitful businesses over the Internet, and ordinary people very often become their victims.
  • People use the internet for cyberbullying, trolling, and similar abuse.

Ten commandments of computer ethics:

The commandments of computer ethics are as follows:

Commandment 1: Do not use the computer to harm other people’s data.

Commandment 2: Do not use a computer to cause interference in other people’s work.

Commandment 3: Do not spy on another person’s personal data.

Commandment 4: Do not use technology to steal personal information.

Commandment 5: Do not spread misinformation using computer technology.

Commandment 6: Do not use or copy software for which you have not paid.

Commandment 7: Do not use someone else’s computer resources unless you are authorized to use them.

Commandment 8: It is wrong to claim ownership of a work that is the output of someone else’s intellect.

Commandment 9: Before developing software, think about the social impact that software can have.

Commandment 10: When using computers for communication, always be respectful of fellow users.

Internet Security

The internet is an insecure channel for exchanging information because it carries a high risk of fraud and phishing. Internet security is a branch of computer security specifically concerned with use of the internet, covering browser security and network security. Its objective is to establish measures against attacks over the web. Insufficient internet security is dangerous: the consequences range from a computer system getting infected with viruses and worms to the collapse of an e-commerce business. Several methods have been devised to protect the transfer of data over the internet, such as safeguarding information privacy and staying alert against cyber attacks.
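
One concrete safeguard for data in transit is TLS, the encryption behind every https:// address, which verifies the server’s certificate before any data is exchanged. A minimal Python sketch of that check, using only the standard library (the hostname is just an example):

```python
import socket
import ssl

def check_tls(hostname: str, port: int = 443) -> str:
    """Open a TLS connection and report the negotiated protocol version.

    Raises ssl.SSLCertVerificationError if the server certificate is untrusted.
    """
    context = ssl.create_default_context()  # verifies the certificate against system CAs
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version()

print(check_tls("www.python.org"))  # e.g. 'TLSv1.3'
```

If the certificate does not match the hostname or is not signed by a trusted authority, the connection fails instead of silently exposing your data.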

Information Privacy: Information privacy is the protection of personal information, referring to the personal data stored on a computer system. It is an important aspect of information sharing and is also known as data privacy or online privacy. Internet privacy involves the right to personal privacy and deals with the storing and displaying of personal information on the internet. In any exchange of personal information over the internet, there is always a risk to the safety of that information. Internet privacy is a particular cause for concern when making online purchases, visiting social networking sites, participating in online games, or posting in forums. Privacy issues can arise from information in a wide range of sources, such as:

  • Healthcare records
  • Financial institution transactions
  • Biological traits
  • Residence records
  • Location-based services

The risks to internet privacy can be serious. For example, if a password is revealed during a data transfer over the internet, the victim’s identity may be used deceitfully.

Some important terms:

  • Spyware: An application that obtains data without the user’s consent.
  • Malware: An application used to illegally harm online and offline computer users.
  • Virus: A small program embedded within a legitimate program and designed to harm your system.
  • Worm: A self-replicating program that spreads across networks by exploiting the poor security of infected computers.
  • Trojan horse: A program that allows hackers to gain remote access to a target system.

General steps to protect our system from risks:

To minimize internet privacy violation risks, the following measures need to be taken:

  • Always use preventive software applications, such as anti-virus and anti-malware tools.
  • Avoid exposing personal data on websites with low security levels.
  • Avoid shopping on unreliable websites.
  • Always use strong passwords consisting of letters, numerals, and special characters (a small generator sketch follows this list).
  • Always keep your operating system updated.
  • Always keep the firewall turned on.
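
As a small illustration of the strong-password advice above, the following sketch uses Python’s standard secrets module to generate a random password containing all the recommended character classes (the length and the special-character set are arbitrary choices, not a standard):

```python
import secrets
import string

SPECIALS = "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    """Generate a random password containing letters, digits, and special characters."""
    alphabet = string.ascii_letters + string.digits + SPECIALS
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until the candidate contains every required character class.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in SPECIALS for c in password)):
            return password

print(generate_password())
```

The secrets module is used rather than random because it draws from a cryptographically secure source, which matters for anything security-related.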

Unethical computing practices:

Now we discuss some unethical computing practices:

1. Cyberbullying: When people bully other people through electronic communication (such as the web or a telephone), it is referred to as cyberbullying. Cyberbullying can be carried out by friends, classmates, relatives, or complete strangers. Common forms include sending harmful e-mails to a person, creating fake websites to mock or harm someone, distributing false information about a person, and posting or circulating fake images of a person.

In most cyberbullying cases, the bullies do not reveal their identities. Cyberbullying can affect the targeted person emotionally or mentally: even when the circulated information is false, the person may become depressed, and it may disrupt their day-to-day life. In the case of students or kids, it may affect their studies or cause them to lose self-esteem.

How to protect yourself from cyberbullying:

  • Do not respond to cyberbullying messages.
  • Never open e-mails received from unknown senders.
  • Keep your passwords secret.
  • Be careful about what you post on social sites.

2. Phishing: Phishing is an internet attack used to steal user data. The attacker sends the user an e-mail that misleads him or her into believing it comes from a trusted organization. The e-mail directs the user to a website controlled by the attacker, where the user is asked for personal information such as passwords and credit card details. This is how the attacker steals the user’s personal information.

How to protect yourself from phishing:

  • Never open a link or attachment in an e-mail sent by an unknown person (a sketch of simple link checks follows this list).
  • Never share personal information requested over e-mail by an unknown person.
  • Always keep the firewall of the computer system turned on.
  • Always check your bank statements regularly to ensure that no unauthorized transactions have been made; if you find any, report them to your bank immediately.
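
To make the link advice concrete, here is a hedged sketch of the kind of heuristics a mail filter might apply to a URL before you click it. The rules and the allow-list domain are illustrative assumptions; real phishing detection is far more involved:

```python
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-bank.com"}  # hypothetical allow-list, for illustration only

def looks_suspicious(url: str) -> bool:
    """Flag links showing common phishing tell-tales. Heuristic only."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # Raw IP addresses instead of a domain name are a classic phishing sign.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True
    # An '@' in a URL can hide the real destination that follows it.
    if "@" in url:
        return True
    # Lookalike hosts such as 'example-bank.com.evil.net' embed a trusted name.
    if any(host != d and d in host for d in TRUSTED_DOMAINS):
        return True
    # Unencrypted links pretending to be a trusted site.
    if parsed.scheme != "https" and host in TRUSTED_DOMAINS:
        return True
    return False

print(looks_suspicious("http://192.168.0.1/login"))          # True
print(looks_suspicious("https://example-bank.com/account"))  # False
```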

3. Hacking: Hacking is an unethical activity in which a highly skilled technical person (commonly known as a hacker) enters another person’s computer without permission and steals important data, projects, or applications, or sometimes destroys information on the system.

How to protect yourself from hacking:

  • Never connect your system to free Wi-Fi or other untrusted networks.
  • Before installing any application on your system, always check its permissions and authenticity.

4. Spamming: Spamming is an unethical activity in which bulk unwanted e-mail is sent to you from a strange or unknown source. Bulk e-mail can fill up a mail server, an attack known as mail bombing. Spam mail is also commonly used to deliver viruses, worms, trojan horses, malware, spyware, and other attacks to the user.

How to protect yourself from spam:

  • To prevent spam, install filtering or blocking software (a minimal sketch follows this list).
  • If you find suspicious mail in your mailbox, delete it immediately without opening it.
  • Always keep your software updated.
  • Never open a link sent by an unknown person.
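
The “filtering or blocking software” mentioned in the first point can be as simple as counting spam indicators in a message. A minimal sketch with a made-up phrase list (real filters use statistical or machine-learned models rather than fixed phrases):

```python
SPAM_PHRASES = {"free money", "click here now", "you have won", "urgent transfer"}

def is_probable_spam(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag a message once it contains at least `threshold` known spam phrases."""
    text = f"{subject} {body}".lower()
    hits = sum(1 for phrase in SPAM_PHRASES if phrase in text)
    return hits >= threshold

print(is_probable_spam("You have WON!", "Click here now for free money"))  # True
print(is_probable_spam("Meeting notes", "Agenda attached for tomorrow"))   # False
```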

5. Plagiarism: Plagiarism is stealing or copying someone else’s intellectual work (an idea, a literary work, an academic work, and so on) and presenting it as your own without giving credit to the creator or citing the source of the information.

How to protect yourself from plagiarism:

  • When writing, always write in your own words.
  • Always run a plagiarism checker before publishing your work (a sketch of the underlying idea follows this list).
  • If you use someone else’s work, always credit the original author with an in-text citation.
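
Plagiarism checkers work by measuring how much two texts overlap. A rough sketch of one common idea, Jaccard similarity over word n-grams (a deliberate simplification of what commercial checkers do):

```python
def ngrams(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles occurring in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Overlap of shingle sets: 0.0 = no shared phrasing, 1.0 = identical."""
    sa, sb = ngrams(a, n), ngrams(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "plagiarism is presenting someone else's intellectual work as your own"
suspect = "plagiarism is presenting someone else's intellectual work without any credit"
print(f"similarity: {jaccard_similarity(original, suspect):.2f}")
```

A high score does not prove plagiarism on its own (quotations and common phrases overlap too); it only tells a reviewer where to look.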

Sample Questions

Question 1. What are the disadvantages of the internet?

Some people try to obtain personal information (such as bank details, addresses, and contact details) over the Internet and use it for unethical ends. Malware and viruses gain access to networks quickly and harm the personal computers (PCs) connected to them. Some people run deceitful businesses over the Internet, and ordinary people very often become their victims. People also use the internet for cyberbullying, trolling, and similar abuse.

Question 2. What are the benefits of the internet?

The Internet lets us communicate with a person in any part of the world. We can easily collect information on any topic from the World Wide Web. Various types of business are carried out over the Internet, referred to as e-commerce: everything from booking railway, flight, and movie tickets to purchasing any kind of merchandise is possible via the Internet. The Internet also enables social networking, that is, it lets us share our information, emotions, and feelings with our friends and relatives.

Question 3. List some common computing ethics.

Do not use the computer to harm other people’s data. Do not spy on another person’s personal data. Do not use a computer to interfere with other people’s work. Do not use technology to steal personal information. Do not spread misinformation using computer technology. Do not claim ownership of a work that is the output of someone else’s intellect. When using computers for communication, be respectful of fellow users.

Question 4. List some unethical computing practices.  

Plagiarism, cyberbullying, unauthorized hacking, spamming, phishing, and software piracy.

Question 5. What is cybercrime?

Cybercrime is criminal activity carried out with the help of computers and the Internet. It includes downloading illegal data, fraudulent online bank transactions, and similar offenses.

Question 6. Name the organization that has established the guidelines for copyright law.

Question 7. What do we call an application that obtains data without the user’s consent?

Spyware (see the list of terms above).


Importance Of Computer Ethics

Ethics are the moral values that stop you from doing anything illegal and anything that harms or damages anyone else’s work or interests. Ethics are instilled in us from childhood. Computer ethics are simply the application of those personal morals and values when we use the computer for various purposes.

They comprise all the rules we apply so as not to misuse information that is not ours to use, or to access data that we do not own. With issues like cybercrime, plagiarism, hacking, and password theft, it has become important to understand computer ethics in order to create a safe computer-based environment.

Computer ethics essentially protect individuals online from predation: they prevent breaches of privacy, identity theft, interference with work, and unlawful use of proprietary software, among other harms. Computer ethics govern the behavior of users online and date back to 1992, when the Ten Commandments of Computer Ethics were published. These ethics govern the social, financial, and legal uses of computers worldwide.

New computing technology is powerful and malleable: computers can be programmed to perform a wide range of functions serving diverse applications in our society. Communication that depends on computer technology has grown radically through the widespread use of the internet, cell phones, and positioning systems. Because computing has become woven into society, computer ethics has expanded to address issues in most areas of activity, including the military, government, law, education, and business. Through this extensive growth, computer ethics has become the area of applied ethics that affects and intersects with nearly all other branches of applied ethics.

Prevents Misuse of Personal Information – Computers have made the world come closer, personally and professionally! Most of us find it more convenient to shop online rather than going out. To do so, we are asked to give out personal information like our name, date of birth, and, most importantly, our credit card details! Ask yourself: if we knew that people didn’t follow computer ethics everywhere, would we feel safe giving out all this information? On the contrary, if we have read a site’s privacy policy and know that it abides by cyber laws and computer ethics, we can be confident that our personal information will not be misused.

Prevents Theft of Intellectual Property – What is intellectual property? Unlike physical property such as a car or house, intellectual property refers to property created by the mind! The internet hosts a great deal of intellectual property, including the works of researchers, writers, song artists, and so on. Without computer ethics, the work created by one person’s intellect can easily be copied and plagiarized by someone else. Imagine how we would feel if our poetry were copied and publicized under someone else’s name. Now do we see why following computer ethics and privacy policies is important?

Prevents Loss of Jobs in the Field of Software Development – Thousands of people around the world work in companies that develop computer programs and software. However, if we found a way to get this software without paying for it, most of us would prefer piracy over paying, right? Have you ever wondered how this can cost the employees of these companies their jobs? The general mentality of most people who download software illegally is that these companies are very rich, that piracy wouldn’t really affect them, and that even if it did, who cares? But keep in mind that the one who ends up paying the price could be someone close to us. Imagine thousands of people involved in unethical downloads and distribution; indeed, surveys suggest that a significant number of people prefer never to pay for software and turn to piracy instead!

Keeps us from being Unethical! – We don’t follow computer ethics merely to show others. By following these ethics, we know what it takes to be a responsible user and keep ourselves out of trouble. Trouble? Well, yes: there are various laws that can put us behind bars if we are caught violating the privacy policies and norms of individual websites.

Makes our Computer a Better and Safer place to be At – Our computer is not just an electronic device for communication; it is our data store, our photo album, our work recorder, our social network, our calculator, and what not: it is what we are! If we download information or access portals we are not permitted to, we open the door to threats like viruses and trojans that can enter our system illegally and crash it completely! On the other hand, if our system is used the way it is supposed to be, we create a safer and better environment in which we can rest assured that our work and personal information are safe and secure.

The code of computer ethics, also called the Ten Commandments of Computer Ethics, instructs users not to harass other users, not to use computers to spy, and not to use computers to gain access to private information. It is part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct. These ethics forbid taking intellectual work without compensation and using computing resources without compensation or authorization, and they mandate using machines in a manner considerate and respectful of others.

Computer ethics cannot really be imposed on us; rather, they should be followed out of our own will and desire. The way we use the computer to access information says a lot about us and our ethical values. What would we do if we obtained a colleague’s password? Would we ask him or her to change it, since we found it out accidentally, or would we try to access information we are not supposed to see? There are two kinds of people in this world: those who follow the rules and respect them, and those who don’t really care. Should we care about the ethics involved in being a responsible citizen? We decide! Nevertheless, if we are caught, rest assured the authorities won’t go easy on us either!

Information Sources:

  • reference.com
  • techspirited.com
