# Responsible Computer Science

## Washington University McKelvey School of Engineering Department of Computer Science and Engineering

### Ron K. Cytron (PI) Tomas Larsen Russell Scharf

Washington University received financial support for Responsible Computer Science from Mozilla.

### Overview

We are funded to develop material for two courses: our introduction to computer science (CSE131) and our introduction to data science (CSE217A). Our efforts focus on fairness in algorithmic outcomes, bias in machine learning, and the advantages some parties gain from superior information.

This summer, we have been developing assignments to be piloted in the Fall for CSE131 and in Spring 2020 for CSE217A. This page reflects our work in progress.

### Projects underway

We are examining fairness in the following forms:
• Algorithmic fairness
• Austin's cake-cutting algorithm as an example of an inherently fair algorithm. Cake-cutting algorithms consider the fair division of a resource that can be divided without losing value. Austin's algorithm guarantees a fair outcome even if the parties try to cheat.

The goal of this algorithm is to identify a piece of cake between two knives that represents 1/2 the value of the cake to each of two players, even though the players may disagree about where the cake's value lies.

Movie of the algorithm. Still frames and an explanation appear below.

In the picture to the right, the top red line is a knife moving up the cake. P1's preference for the cake is shown in blue, and it increases rapidly as the knife moves up. The progress bars on the left and right show each player's preference for the cake currently between the knives (the two red lines). P2's value for the cake between the knives increases much more slowly, as that player prefers the cake toward the top of the picture.

Because P1's preference for the cake increases faster, that player will say stop. P1 then moves both knives up the cake, keeping 1/2 the value between the knives from P1's point of view, as shown here. Meanwhile, moving the knives up the cake causes P2's preference for the cake between the knives to increase. When P2 sees 1/2 the value between the knives, P2 says stop. Both players have then agreed that 1/2 the cake lies between the knives and 1/2 lies outside them (in two pieces).
The result is that the cake between the knives is 1/2 the value of the cake to each player.

Setting for our students:

• The goal of this game is always to receive 1/2 the value of the cake. Taking a chance on receiving more than 1/2 brings the chance of receiving less than 1/2, in which case the student loses.
• The students are then challenged to try to cheat to ensure they receive more than 1/2 the value of the cake when
• they do not know the other person's preferences
• they do know the other person's preferences
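The two-phase procedure above can be sketched numerically. The following is a minimal Python sketch (not the course's implementation): each player's valuation is modeled as a cumulative value function on the cake [0, 1], and the knife positions are found by bisection. The example valuations at the end are hypothetical.

```python
def austin(F1, F2, tol=1e-9):
    """Austin's moving-knife procedure on a cake modeled as [0, 1].
    F1 and F2 are the players' cumulative value functions:
    F(x) = value that player assigns to the piece [0, x],
    with F(0) = 0 and F(1) = 1."""
    # Phase 1: one knife sweeps up from 0 until some player sees
    # exactly 1/2 of the cake's value below it; that player stops.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if max(F1(mid), F2(mid)) < 0.5:
            lo = mid
        else:
            hi = mid
    y = hi
    Fs, Fo = (F1, F2) if F1(y) >= F2(y) else (F2, F1)  # stopper, other

    def upper(a):
        # Upper-knife position keeping exactly 1/2 of the value
        # between the knives from the stopping player's point of view.
        lo, hi = a, 1.0
        while hi - lo > tol:
            m = (lo + hi) / 2
            if Fs(m) - Fs(a) < 0.5:
                lo = m
            else:
                hi = m
        return hi

    # Phase 2: the stopper slides both knives up; stop when the other
    # player also sees 1/2 of the value between them (guaranteed to
    # happen by the intermediate value theorem).
    lo, hi = 0.0, y
    while hi - lo > tol:
        m = (lo + hi) / 2
        if Fo(upper(m)) - Fo(m) < 0.5:
            lo = m
        else:
            hi = m
    a = hi
    return a, upper(a)

# Hypothetical valuations: P1 values the cake uniformly (F1(x) = x);
# P2's value is concentrated toward the top (F2(x) = x^2).
a, b = austin(lambda x: x, lambda x: x * x)
```

With these valuations P1 stops first at y = 1/2, and the knives settle at roughly a = 0.25 and b = 0.75: the middle piece is worth exactly 1/2 to each player.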
• Block voting as a tyranny of the majority algorithm

What music should we play in lab? We will set up an election in which students select the music they most want to hear during three 20-minute time slots in a given lab section.

The voting protocol will exhibit tyranny of the majority: the majority's choice of music wins every time slot in every section.

The goal is to anger students to the point that they feel the unfairness acutely and personally, and then go on to develop a fairer voting protocol.

They will then be debriefed on the exercise's relevance to school board elections in nearby Ferguson, Missouri, where a similar situation was resolved by the use of cumulative voting.
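The contrast between the two protocols can be illustrated in code. The sketch below (with hypothetical genres and voter counts, not our actual lab data) compares per-slot plurality voting, where the majority wins every slot, against cumulative voting, where each voter receives one vote per slot and may concentrate them on a single candidate:

```python
from collections import Counter

def block_voting(first_choices, slots=3):
    # Each slot is decided by a separate plurality vote, so the
    # majority's first choice wins every slot.
    winner = Counter(first_choices).most_common(1)[0][0]
    return [winner] * slots

def cumulative_voting(ballots, slots=3):
    # Each ballot distributes `slots` votes among candidates;
    # the top `slots` vote-getters each fill one slot.
    tally = Counter()
    for ballot in ballots:
        tally.update(ballot)
    return [candidate for candidate, _ in tally.most_common(slots)]

# 60 voters prefer pop, 40 prefer metal: pop fills every slot.
print(block_voting(['pop'] * 60 + ['metal'] * 40))

# Under cumulative voting, the pop majority spreads its votes over
# three pop songs while the metal minority concentrates on one song,
# so the minority's song earns the most votes and wins a slot.
ballots = ([{'pop A': 1, 'pop B': 1, 'pop C': 1}] * 60
           + [{'metal': 3}] * 40)
print(cumulative_voting(ballots))
```

Here the metal candidate collects 120 concentrated votes against 60 for each pop song, so the minority is guaranteed representation in one of the three slots.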

• Unfairness due to advantage of information

Here we use the adjusted winner protocol to consider the allocation of jointly owned items that cannot be divided (in the way that cake can).

The algorithm is fair and creates equitable outcomes, unless one person knows too much about the other person's preferences. The videos below demonstrate a fair and then unfair allocation.

Fair outcome, with the student not knowing the computer's preferences: the student supplies a value for the piano and the pot of gold, the computer's preferences stay hidden, and the result is an equitable outcome for both the student and the computer.

Unfair outcome, with the computer's preferences revealed: here the student initially states that the piano and the pot of gold are of approximately equal value. The computer's values are then revealed to the student, who can slightly overbid the computer to win the piano at a price less than the student's true value for it. The student thus wins the piano and receives a better share of the pot of gold than the student would have received by bidding honestly.
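The point mechanics behind these allocations can be sketched as a generic implementation of the Brams–Taylor adjusted winner procedure. The item names and point values below are illustrative, not taken from the course materials:

```python
def adjusted_winner(va, vb):
    """Adjusted winner procedure (Brams and Taylor). va and vb map
    items to each party's points (each party's points sum to 100).
    Returns each item's fractional split as (share_a, share_b)."""
    # Phase 1: each item goes provisionally to whoever values it
    # more (ties go to party A in this sketch).
    share_a = {i: 1.0 if va[i] >= vb[i] else 0.0 for i in va}

    def scores():
        return (sum(va[i] * share_a[i] for i in va),
                sum(vb[i] * (1 - share_a[i]) for i in va))

    sa, sb = scores()
    if sb > sa:
        # Symmetric case: solve with the roles swapped, then flip.
        swapped = adjusted_winner(vb, va)
        return {i: (s[1], s[0]) for i, s in swapped.items()}

    # Phase 2: transfer (fractions of) A's items to B, smallest
    # ratio va/vb first, until the two scores are equal.
    movable = sorted((i for i in va if share_a[i] == 1.0 and vb[i] > 0),
                     key=lambda i: va[i] / vb[i])
    for i in movable:
        sa, sb = scores()
        if sa <= sb:
            break
        # Giving B a fraction x of item i changes the scores to
        # sa - x*va[i] and sb + x*vb[i]; equalize (or give it all).
        x = min(1.0, (sa - sb) / (va[i] + vb[i]))
        share_a[i] -= x
    return {i: (share_a[i], 1.0 - share_a[i]) for i in va}

# Illustrative bids: the student values the items equally; the
# computer values the pot of gold more.
split = adjusted_winner({'piano': 50, 'gold': 50},
                        {'piano': 30, 'gold': 70})
```

With these bids the student keeps the piano outright and receives a sliver of the gold, so that both parties end with the same number of points (about 58.3 here) by their own valuations.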
• Bias in training, currently two efforts underway
• Working with a dataset about adults, their preparation, their current jobs, and their salaries. From there, we created two hypothetical schools that present this data with bias, inflating the salaries students might expect to earn on graduation. The bias is introduced by omitting some data and by presenting data that is inappropriate for the question at hand.
• Working with a dataset of adults who had loans, their demographics, and their history of paying (or not paying) their debts. Based on stories from the books we are reading (by O'Neil and Eubanks), we aim to create loan history datasets that contain bias against certain demographics.

We then plan to have students apply for loans, with their requests accepted or rejected by machine learning over the biased data. The students will explore why seemingly reasonable requests are denied.
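As a sketch of how such label bias propagates into decisions, the toy example below (entirely synthetic, not the dataset we are preparing) flips a fraction of one group's repayment labels to "defaulted," then applies a deliberately naive model that approves an applicant whenever the applicant's group clears a historical repayment-rate cutoff:

```python
import random

def make_biased_history(n=1000, flip_rate=0.5, seed=0):
    # Synthetic loan histories: repayment truly depends only on an
    # income score, but a fraction of group Z's records are
    # mislabeled as defaults (bias introduced at labeling time).
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        group = rng.choice(['A', 'Z'])
        income = rng.random()
        repaid = income > 0.3
        if group == 'Z' and rng.random() < flip_rate:
            repaid = False
        rows.append((income, group, repaid))
    return rows

def group_repayment_rates(rows):
    # A naive "model": score each group by its historical
    # repayment rate in the (biased) training data.
    totals, repaid = {}, {}
    for _, group, ok in rows:
        totals[group] = totals.get(group, 0) + 1
        repaid[group] = repaid.get(group, 0) + ok
    return {g: repaid[g] / totals[g] for g in totals}

def approve(group, rates, cutoff=0.5):
    # Deny everyone in a group whose historical rate falls below
    # the cutoff -- regardless of the applicant's own finances.
    return rates[group] >= cutoff

rates = group_repayment_rates(make_biased_history())
# Two applicants with identical incomes get different answers:
print(approve('A', rates), approve('Z', rates))
```

Both groups repay identically in reality, but the mislabeled history drags group Z's apparent rate below the cutoff, so every group Z applicant is denied, which is the kind of outcome we want students to notice and diagnose.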