07 Jan 2020 - Melbourne and Canberra, Australia
The following is a brief history of the course I co-created in the fall of 2019.
During the latter parts of September and October 2019, Jared Moore and I developed a Computer Ethics class for undergraduates at the University of Washington (UW). This “project report” chronicles the experiences leading up to the course’s creation and some thoughts on the role of “ethics” and “tech and society”-type courses in undergraduate CS curricula.
This was the second course I created, and the first where I could dedicate time exclusively to crafting a syllabus. Having this time to focus on the course is a privilege and experience I am grateful for. During these few weeks, Jared and I met regularly to discuss readings and create a comprehensive syllabus consisting of a reading list, daily questions, instructor notes, summaries for each reading, and additional resources for students who might be interested in going deeper into the context and relevant action taken on a subject.
In 2017 when Jared pitched his undergraduate ethics course to the UW computer science department, students had not been able to take an ethics course for more than a decade. The timing was fortunate, as the Cambridge Analytica scandal was around the corner and with it the widening acknowledgement of the deep social responsibilities held by technologists and large tech firms.
Jared taught the first iteration of the computer ethics seminar in Winter Quarter 2018. It surveyed a range of science and technology studies (STS) perspectives through weekly readings and a one-hour discussion section, touching on issues such as privacy, algorithmic bias, and access to technology. He has subsequently spoken about this experience at several conferences and has stayed up to date on how academics are developing ethics materials and coursework for computer science departments.
In Winter Quarter 2019, I taught the second version of this course. This was the first course I had taught at a university, and the first time I had created my own course syllabus. Coming fresh off assisting with Intelligent Machinery, Identity, and Ethics, I focused on issues concerning “Automation” in the context of Artificial Intelligence, so that students could reflect on the myriad philosophies and values that frame discussions of this topic.
We touched on issues such as the impact of automation on labor, algorithmic bias, accountability, and engineering culture. We met for 2 hours per week, with weekly assigned readings and a final project. I assigned student groups to prepare weekly presentations and discussion questions for the readings. Student final projects were impressive. In one notable project, students took on activist roles by writing letters to department leadership requesting more emphasis on computer ethics and social responsibility in the school of computer science that offered the course.
The Allen School was interested in continuing the ethics course, and approached Jared and me to create its next iteration. Drawing on our experiences, we reflected on our role as instructors in balancing student interest in contemporary issues facing emerging technologies against the enduring ideas students can use in the workplace to confront difficult questions about the projects they’ll work on. In the fast-paced context of CS education, this balance is a perspective frequently voiced by the CS educators we work with. We thus structured our course to explore enduring “critical perspectives” in the first half, and to consider those perspectives in the context of emerging technology in the second.
The field has also grown since we first taught our courses. One effect of the increasing attention paid to the social impact of AI and computing technologies is that our students have access to an almost real-time stream of commentary and analysis on emerging issues. This includes a growing corpus of academic literature on prominent issues in AI ethics, such as fairness, accountability, and privacy. Our intention is that students in our class will critically engage with this material, understand how their own values (and those of their communities) intersect with the systems they build, and ultimately know how to act on these values in ways authentic to themselves.
2019 was also a year of growing tech employee activism. It’s now well acknowledged that tech employees have a unique position, and leverage, to determine which technologies are built and how. Many students taking our 2020 course will be seniors who applied for and accepted tech jobs in the quarter prior to our class. Last year, a reason students often gave for taking my course was to better understand the issues they will face upon graduation, and what to do about them. In the past year, this sentiment seems to have spread: across the Allen School, students are starting to engage each other in discussions concerning computer ethics.
Although I am no longer at UW, I will be observing how students engage with the complexity and depth of the questions, discussions, and research they will encounter in Computer Ethics. Doubtless Jared and I will learn much from our students. The world of technology they are entering is rapidly evolving, and with it the values and perspectives needed to make sense of it.