Nov 28, 2018 | Algorithms, Mobility, and Justice

Wednesday, November 28, 2018

4:00-6:00 PM

Engineering 2, Room 599

Are moral algorithms a reasonable way to realize the life-saving potential of self-driving cars? In this talk, Nassim JafariNaimi (Assistant Professor, Georgia Institute of Technology) engages the utilitarian framings that dominate discourses on self-driving cars, including the assumptions folded into the question above: that algorithms can be moral and that self-driving cars will save lives. Drawing on feminist and care ethics, the talk brings to the fore the injustices built into current and future mobility systems, such as laws and policies that protect car manufacturers and algorithmic biases that will have disproportionate negative impacts on the most vulnerable. It argues, moreover, that a constricted moral imagination dominated by the reductive scenarios of the trolley problem is impairing the design imagination of alternative futures. More specifically, a genuine caring concern for the many lives lost in car accidents now and in the future—a concern that transcends false binary trade-offs and recognizes systemic biases and power structures—could serve as a starting point to rethink mobility as it connects to the design of cities, the well-being of communities, and the future of the planet.
Abhradeep Guha Thakurta (UCSC Assistant Professor of Computer Science and Engineering) will be offering a response and comments.
Event hosted/organized by Neda Atanasoski (UCSC Professor of Feminist Studies and Director of Critical Race and Ethnic Studies)

Neda Atanasoski is Professor of Feminist Studies at UC Santa Cruz, Director of Critical Race and Ethnic Studies and affiliated with the Film and Digital Media Department. Atanasoski has a PhD in Literature and Cultural Studies from the University of California, San Diego. Her research interests include race and technology; war and nationalism; gender, ethnicity, and religion; cultural studies and critical theory; media studies.

Nassim JafariNaimi is an Assistant Professor of Digital Media at the School of Literature, Media, and Communication at Georgia Tech and the director of the Design and Social Interaction Studio, which she established in 2013. JafariNaimi’s research engages the ethical and political dimensions of design practice and technology, especially as related to democracy and social justice. Her research spans both theoretical and design-based inquiries situated at the intersection of Design Studies, Science and Technology Studies, and Human Computer Interaction. Her writing on topics such as participatory media, smart cities, social and educational games, and algorithms has appeared in venues such as Science, Technology, and Human Values, Design Issues, Digital Creativity, and Computer Supported Cooperative Work (CSCW). JafariNaimi received her PhD in Design from Carnegie Mellon University. She also holds an MS in Information Design and Technology from the Georgia Institute of Technology and a BS in Electrical Engineering from the University of Tehran, Iran.

Abhradeep Guha Thakurta is Assistant Professor of Computer Science and Engineering at UC Santa Cruz. Thakurta’s research is at the intersection of machine learning and data privacy. His primary research interests include designing privacy-preserving machine learning algorithms with strong analytical guarantees that are robust to errors in the data. In many instances, Thakurta harnesses the privacy properties of these algorithms to obtain robustness and utility guarantees. A combination of academic and industrial experience has allowed Thakurta to draw non-obvious insights at the intersection of theoretical analysis and practical deployment of privacy-preserving machine learning algorithms.

Co-Sponsored by: Critical Race and Ethnic Studies, the Feminist Studies Department and The Humanities Institute’s Data and Democracy Initiative.

Rapporteur Report

By: Andy Murray, SJRC Graduate Student Researcher

This event spawned from a discussion about the meaning of ‘intelligence’ and ‘smartness’ in so-called ‘smart cities.’ It brought Nassim Parvin, an Assistant Professor of Digital Media at Georgia Tech, into conversation with UC Santa Cruz’s own Abhradeep Thakurta, Assistant Professor of Computer Science and Engineering. The event was organized as a presentation by Parvin, followed by a response from Thakurta. The group that convened for the event was fairly large, consisting of a range of undergraduates, graduate students, and faculty, as well as some community members (one of whom noted, after several attendees had mentioned their publications by way of introduction, that he was “just a layperson; I don’t write books, I just read them”).

Parvin’s discussion was divided into two analyses, both bringing feminist theory to bear on algorithmic technologies: self-driving vehicles and ‘virtual fashion assistance.’ Parvin’s analysis of self-driving cars was fueled by an MIT Technology Review article entitled “Why Self-Driving Cars Must Be Programmed to Kill.” The article argued that self-driving cars necessitate ‘algorithmic morality’ and suggested that this morality could be based on majority responses to a classic philosophical puzzle: the trolley problem. Parvin disagrees, arguing that the article’s premises are false. These thought experiments, which Parvin notes were introduced in the context of abortion debates in the 1970s, are generally intended to illustrate that morality is complicated. Self-driving cars seem to bring the trolley problem to life. The problem, as Parvin sees it, is that the trolley problem is an example of what she calls ‘quandary ethics’—it demands a ‘god’s eye’ view and clearly defined parameters. In contrast, real-life decisions are always uncertain, organic, and ongoing in their effects. Parvin doubts that any set of moral principles for algorithms could ever be agreed upon. During her talk, she demonstrated how the binary logic of algorithms quickly becomes problematic when tested. She worries that in the process of simplifying things for machines and machine learning, humans are ceding moral agency to computers. She also put this more poignantly: “what if Grandma is pushing a stroller?” Parvin rejects algorithmic morality, arguing that we could instead pursue a radical rethinking of design. She notes the power of imagery and the simple shift of calling the machines “killing robots” rather than “self-driving cars,” an example that generated murmurs of recognition and agreement from the audience.

Moving on to ‘algorithmic fashion,’ Parvin showed an advertisement for the technology in question: Amazon’s Echo Look. This device takes photos of users and provides algorithm-based fashion advice. Members of the audience laughed at the advertisement, as did Parvin herself, joking, “I guess the look on everybody’s face… I do not need to say any more.” She broke down the advertisement, noting that it was relatively “homogenous, despite surface-level diversity,” featuring almost exclusively women who appear to be upper middle-class. She also observed that the device’s command to “look up and smile” invokes street harassment. Parvin argues that the Echo Look, like self-driving cars (killing machines?), is a case of “ceding judgment to code.” In this case, the code is “needy and greedy,” and it is supposedly to users’ benefit to share large quantities of personal data, with unclear terms of accountability and ownership. Just as the trolley problem belies the complex and situated nature of morality in practice, Parvin used Ava Fisher’s “How I Learned to Look Believable” to illustrate the same about fashion. In this piece, Fisher describes agonizing over what to wear as a complainant in a sexual harassment case: “I have needed to be ready, at every moment, to be seen as both a poverty-stricken graduate student and a reliable adult. As an accuser, I need to be a news-team-ready correspondent and someone who certainly wasn’t doing this for the limelight.” Using this example, Parvin also argued that fashion is far from frivolous and that we should “see fashion for its substance as we interrogate algorithms’ claim to reason.”

Thakurta’s response was fairly brief. He began with a simple admission: “I’ll be honest. I haven’t thought about this aspect of the problem.” He explained that he would be hesitant to stand in front of a self-driving car that could detect humans with 99.9% accuracy. Members of the audience chuckled at this. He seemed to agree with some of Parvin’s points about the shortcomings of algorithmic morality, explaining that ‘adversarial examples’ can throw off algorithms, though without much explanation of what these examples are (briefly, they are inputs with small, deliberately crafted changes that cause a model to give a confidently wrong answer). However, Thakurta wondered whether algorithms should be thought of as the same kind of moral actors as human beings at all. He noted that the human element of fear, which he called a ‘basal instinct,’ can make reason and ethics ‘go out the window.’ In other words, machines and humans each have different shortcomings as moral actors. Thakurta also noted that self-driving cars will at least have contractual agreements that state whom they will save and under what circumstances, but added that whether these agreements are ‘moral’ or ‘ethical’ is a separate question.
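For readers unfamiliar with the term, the toy sketch below may help; it is not from the talk, and the tiny linear classifier, data, and numbers in it are invented purely for illustration. The general idea is that a small, deliberately chosen perturbation can flip a model’s decision even though the input barely changes.

```python
import numpy as np

# Illustrative sketch only: a toy linear classifier and a hand-crafted
# adversarial perturbation. Nothing here is from the talk; the model and
# data are randomly generated stand-ins.

rng = np.random.default_rng(0)
w = rng.normal(size=100)     # weights of a toy linear classifier
x = rng.normal(size=100)     # an input the classifier currently labels
score = w @ x
label = np.sign(score)       # the classifier's current decision

# Nudge every feature by a small, fixed amount in the direction that most
# efficiently pushes the score across the decision boundary (the same idea
# behind "fast gradient sign" attacks on neural networks).
epsilon = 1.1 * abs(score) / np.abs(w).sum()   # small per-feature budget
x_adv = x - epsilon * label * np.sign(w)

print("per-feature change:", epsilon)             # small relative to the inputs
print("original decision: ", label)
print("perturbed decision:", np.sign(w @ x_adv))  # flipped
```

The point, relevant to Thakurta’s 99.9% remark, is that high average accuracy says little about how a system behaves on inputs crafted to exploit its weaknesses.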

Parvin responded, and the discussion turned into a bit of a debate. She argued again that ethics cannot be reduced to predefined parameters, and that we need to think less about how to solve the problem at hand and more about whether the problem we are solving is the right one. Why, for example, decide whether to program self-driving cars to kill when we could redesign environments to eliminate the question altogether?

Thakurta provided an example of a relatively closed system, the airline industry, which he argued is ‘the most controlled environment.’ He recounted the story of a Canadian plane that ran out of fuel, forcing the pilot to decide whether to land on an old runway that had been converted into a children’s go-kart track. The pilot made the decision to land, and remarkably, no one was hurt. Audible ‘wow’s could be heard from the audience upon hearing the story, but its larger point seemed somewhat lost. He claimed that self-driving cars should only be on the road when there are only self-driving cars on the road, to better approximate a closed system, and that the real issue is how far the constraints of a closed system can be relaxed.

At this point, the event transitioned into a discussion between the presenters and the audience, with many eager to participate. It began with some clarifications about language and imagery, with SJRC Director Jenny Reardon asking whether ‘kill decision,’ which Parvin had mentioned earlier, was a widely used term. Parvin repeated her point about ‘killing machines’ and how imagery changes perception. Thakurta offered the description of algorithms as ‘thinking’ as a usage that is both widespread and ‘utter bogus.’ Along these lines, another participant mentioned that the ‘learning’ in ‘machine learning’ is not what people tend to think, prompting Parvin to reiterate the importance of taking care with terminology. Thakurta, however, argued that in computer science most of these terms are precise and well-defined, and often predate computer science itself. He framed this as the technical definitions being ‘right’ and popular usage being ‘loose’ and ‘wrong.’ Someone asked where the responsibility lies, and Thakurta responded, “with the person using the code,” insisting that “you have to know what the system is doing.” Who ‘you’ or ‘the user’ is in this case remained a bit unclear.

Jennifer Derr noted that there seemed to be two different questions in Parvin’s talk: one about ethics and where algorithms can operate, and a bigger question about how artificial intelligence is entangled with larger social structures, like gender and race. Parvin responded that “both have to deal with the question of action. What is the situation that demands action, and what is the action that it demands? What kinds of questions can algorithms answer?”

Donna Haraway focused on what she called “the creepy desire to cede agency” to “seriously creepy companions.” She suggested that “the allure of the creepy” and a mix of terror and excitement are part of the appeal of these algorithmic technologies and noted that “the autonomous entity is a deep fantasy of Western science.” She asked Parvin what she thought of ‘the creepy factor.’ Parvin responded that part of the allure is the idea of predictability, and that conversations often begin with people being messy and unpredictable. Haraway mentioned the appeal of blaming others after ceding agency. Parvin suggested that it is less about ceding agency than simulating agency and again noted some of the language that makes this possible. Her example of a ‘made-up’ term this time, ‘precision targeting,’ received acknowledgement from the audience. A graduate student later returned the conversation to the question of the displacement of responsibility. He noted the obvious appeal of being able to blame a machine for the decision to wear a tacky shirt (prompting laughter from the audience, and also somewhat contradicting Parvin’s earlier encouragement not to think of fashion as frivolous), but suggested that the case of autonomous vehicles is more troubling.

Drawing on Parvin’s feminist analysis of algorithmic technologies, an audience member brought up a past feminist intervention in the history of robotics: Lucy Suchman, who worked at Xerox for many years, encouraged focusing on interaction with the world rather than on algorithms. Acknowledging that the issue of ceding agency is really a question of what robots will do “without telling anyone,” an audience member agreed with Parvin that this is a reductive way to frame the problem of self-driving cars. Why not take up a different problem altogether, as Parvin suggests, like “how do we devise some computer programs to make driving safer with the driver there?” Parvin agreed but noted the difficulty of changing the conversation at all, observing that follow-up publications to “Why Self-Driving Cars Must Be Programmed to Kill” failed to cite critiques, such as her own, that had been published in the interim. Picking up on this idea of siloed conversations, Jenny Reardon described Suchman, an anthropologist, as ‘embedded’ at Xerox and wondered how to foster those kinds of relationships and change thinking in those spaces.

Lindsey Dillon changed the course of the conversation by insisting on consideration of the corporate and capitalist elements of algorithmic technologies and the fact that many of these technologies exist to generate new and different markets. Thakurta responded that even privacy researchers (like himself) and economists don’t know what an individual piece of data is worth. He suggested that individuals still have the choice not to buy these technologies. Jenny Reardon resisted this by saying that there are ways in which you can be forced to buy things, using the example of having to sacrifice her landline telephone. She wondered if the same could happen with driverless cars. Thakurta responded that it was more likely that cars would become a service. Countering the claim that individuals can simply opt out of algorithmic technologies, Parvin pointed out that, due in part to smart city initiatives in many places, “you are already giving up your data in the service of self-driving cars.” In response, Thakurta further emphasized the complexity of data-driven economies. He pointed out that a company like Apple has a simple business model that one could explain to anyone: you pay them, and they give you a product. On the other hand, he pointed out the difficulty of trying to explain Google’s business model to someone 30 or 40 years ago. The audience laughed again, in recognition of this absurdity. His bigger point, however, was that strict data protection would immediately tank some major companies, doing harm to the economy in ways Thakurta argued would be “worse than the data being released.” This prompted some murmurs of acknowledgement from the audience (while also sounding a bit like a hostage situation).

Donna Haraway provided some more thoughts to close the evening, returning to the topic of Lucy Suchman’s work at Xerox as a way to discuss the possibilities of intervention in algorithmic technologies. Haraway pointed out that a major difficulty is simply that “we don’t know what game-changers we want. We lack what we want, because every single thing that we want, if we probe it, it actually makes things worse.” She wondered once again how to build communities and cultures like the one that existed at Xerox, of the type that can sustain conversations like the one this event fostered. Parvin closed by pointing to the difficulty of academia’s own economies, suggesting that part of the problem lies in what kinds of work academics receive credit for and how different types of scholarship are evaluated.
