January 15, 2020 | Works-in-Progress with Iris Yellum

Wednesday, January 15, 2020

4:00-5:30 PM

SJRC Common Room, Oakes 231

Join SJRC scholars in the SJRC Common Room for an open discussion of works-in-progress! This is a wonderful chance to engage with one another’s ideas and support our own internal work.

At this session, S&J Visitor Iris Yellum will discuss a chapter from her dissertation on the history of legume improvement in India.

Iris Yellum is a South Asian Studies PhD candidate from Harvard University working on Indian agrarian heritage, the history of botanical collection from India, and the governance of biodiversity.

January 14, 2020 | Theorizing Race After Race

5:00-6:30 PM

SJRC Common Room, Oakes 231

Join Science & Justice scholars for an open discussion of Theorizing Race After Race!

At this meeting, we will discuss our plans for the upcoming event with Herman Gray and Alondra Nelson at Kuumbwa on January 22nd, as well as a draft funding proposal to support future TRAR projects.

Readings can be found in this Google Drive folder and include writings by Alondra and Herman, as well as some pieces by Paul Gilroy. All, in different ways, touch on questions of (post)race, science, and futurity, themes we hope to incorporate into the Kuumbwa event.

More information on the cluster can be found at: https://scijust.ucsc.edu/2019/05/17/theorizing-race-after-race/.

November 28, 2018 | Algorithms, Mobility, and Justice

Wednesday, November 28, 2018

4:00-6:00 PM

Engineering 2, Room 599

Are moral algorithms a reasonable solution for realizing the life-saving potential of self-driving cars? In this talk, Nassim JafariNaimi (Assistant Professor, Georgia Institute of Technology) engages the utilitarian framings that dominate discourses on self-driving cars, including the assumptions folded into the question above: that algorithms can be moral and that self-driving cars will save lives. Drawing on feminist and care ethics, the talk brings to the fore the injustices built into current and future mobility systems, such as laws and policies that protect car manufacturers and algorithmic biases that will have disproportionately negative impacts on the most vulnerable. Moreover, she argues that a constricted moral imagination, dominated by the reductive scenarios of the Trolley Problem, is impairing the design imagination of alternative futures. More specifically, a genuine caring concern for the many lives lost in car accidents now and in the future, a concern that transcends false binary trade-offs and recognizes systemic biases and power structures, could serve as a starting point to rethink mobility as it connects to the design of cities, the well-being of communities, and the future of the planet.
Abhradeep Guha Thakurta (UCSC Assistant Professor of Computer Science and Engineering) will offer a response and comments.
The event is hosted and organized by Neda Atanasoski (UCSC Professor of Feminist Studies and Director of Critical Race and Ethnic Studies).

Neda Atanasoski is Professor of Feminist Studies at UC Santa Cruz, Director of Critical Race and Ethnic Studies, and affiliated with the Film and Digital Media Department. Atanasoski has a PhD in Literature and Cultural Studies from the University of California, San Diego. Her research interests include race and technology; war and nationalism; gender, ethnicity, and religion; cultural studies and critical theory; and media studies.

Nassim JafariNaimi is an Assistant Professor of Digital Media in the School of Literature, Media, and Communication at Georgia Tech and the director of the Design and Social Interaction Studio, which she established in 2013. JafariNaimi’s research engages the ethical and political dimensions of design practice and technology, especially as related to democracy and social justice. Her research spans both theoretical and design-based inquiries situated at the intersection of Design Studies, Science and Technology Studies, and Human-Computer Interaction. Her writing on topics such as participatory media, smart cities, social and educational games, and algorithms has appeared in venues such as Science, Technology, and Human Values; Design Issues; Digital Creativity; and Computer Supported Cooperative Work (CSCW). JafariNaimi received her PhD in Design from Carnegie Mellon University. She also holds an MS in Information Design and Technology from the Georgia Institute of Technology and a BS in Electrical Engineering from the University of Tehran, Iran.

Abhradeep Guha Thakurta is Assistant Professor of Computer Science and Engineering at UC Santa Cruz. Thakurta’s research is at the intersection of machine learning and data privacy. His primary research interests include designing privacy-preserving machine learning algorithms with strong analytical guarantees that are robust to errors in the data. In many instances, Thakurta harnesses the privacy property of the algorithms to obtain robustness and utility guarantees. A combination of academic and industrial experience has allowed Thakurta to draw non-obvious insights at the intersection of theoretical analysis and practical deployment of privacy-preserving machine learning algorithms.

Co-Sponsored by: Critical Race and Ethnic Studies, the Feminist Studies Department, and The Humanities Institute’s Data and Democracy Initiative.

Rapporteur Report

By: Andy Murray, SJRC Graduate Student Researcher

This event grew out of a discussion about the meaning of ‘intelligence’ and ‘smartness’ in so-called ‘smart cities.’ It brought Nassim Parvin, an Assistant Professor of Digital Media at Georgia Tech, into conversation with UC Santa Cruz’s own Abhradeep Thakurta, Assistant Professor of Computer Science and Engineering. The event was organized as a presentation by Parvin, followed by a response from Thakurta. The group that convened for the event was fairly large, consisting of a range of undergraduates, graduate students, and faculty, as well as some community members (one of whom noted, after several attendees had mentioned their publications by way of introduction, that he was “just a layperson; I don’t write books, I just read them”).

Parvin’s discussion was divided into two analyses, both bringing feminist theory to bear on algorithmic technologies: self-driving vehicles and ‘virtual fashion assistance.’ Parvin’s analysis of self-driving cars was fueled by an MIT Technology Review article entitled “Why Self-Driving Cars Must Be Programmed to Kill.” The article states that self-driving cars necessitate ‘algorithmic morality’ and suggests that this morality could be based on majority responses to a classic philosophical puzzle: the trolley problem. Parvin disagrees, arguing that the article’s premises are false. These thought experiments, which Parvin notes were introduced in the context of abortion debates in the 1970s, are generally intended to illustrate that morality is complicated. Self-driving cars seem to bring the trolley problem to life. The problem, as Parvin sees it, is that the trolley problem is an example of what she calls ‘quandary ethics’: it demands a ‘god’s eye’ view and clearly defined parameters. In contrast, real-life decisions are always marked by uncertainty, organic in nature, and ongoing in their effects. Parvin doubts that we could ever agree on any set of moral principles for algorithms. During her talk, she demonstrated how the binary logic of algorithms quickly becomes problematic when tested. She worries that in the process of simplifying things for machines and machine learning, humans are ceding moral agency to computers. She also put this more poignantly: “What if Grandma is pushing a stroller?” Parvin argues against accepting algorithmic morality, suggesting that we could instead pursue a radical rethinking of design. She notes the power of imagery and the simple shift of calling the machines “killing robots” rather than “self-driving cars,” an example that generated murmurs of recognition and agreement from the audience.

Moving on to ‘algorithmic fashion,’ Parvin showed an advertisement for the technology in question: Amazon’s Echo Look. This device takes photos of users and provides algorithm-based fashion advice. Members of the audience laughed at the advertisement, as did Parvin herself, joking, “I guess the look on everybody’s face… I do not need to say any more.” She broke down the advertisement, noting that it was relatively “homogenous, despite surface-level diversity,” featuring almost exclusively women who appear to be upper middle-class. She also observed that the device’s command to “look up and smile” invokes street harassment. Parvin argues that, like self-driving cars (killing machines?), the Echo Look is a case of “ceding judgment to code.” In this case, the code is “needy and greedy,” and users are told it is to their benefit to share large quantities of personal data, with unclear terms of accountability and ownership. Just as the trolley problem belies the complex and situated nature of morality in practice, Parvin used Eva Hagberg Fisher’s “How I Learned to Look Believable” to illustrate the same about fashion. In this piece, Fisher describes agonizing over what to wear as a complainant in a sexual harassment case: “I have needed to be ready, at every moment, to be seen as both a poverty-stricken graduate student and a reliable adult. As an accuser, I need to be a news-team-ready correspondent and someone who certainly wasn’t doing this for the limelight.” Using this example, Parvin also argued that fashion is far from frivolous and that we should “see fashion for its substance as we interrogate algorithms’ claim to reason.”

Thakurta’s response was fairly brief. He began with a simple admission: “I’ll be honest. I haven’t thought about this aspect of the problem.” He explained that he would be hesitant to stand in front of a self-driving car that could detect humans with 99.9% accuracy. Members of the audience chuckled at this. He seemed to agree with some of Parvin’s points about the shortcomings of algorithmic morality, explaining that ‘adversarial examples’ (inputs deliberately perturbed in small, often imperceptible ways that cause a machine learning model to misclassify them) can throw off algorithms. However, Thakurta wondered whether algorithms should be thought of as the same kinds of moral actors as human beings at all. He noted that the human element of fear, which he called a ‘basal instinct,’ can make reason and ethics ‘go out the window.’ In other words, machines and humans each have different shortcomings as moral actors. Thakurta also noted that self-driving cars will at least have contractual agreements stating whom they will save and under what circumstances, though whether these agreements are ‘moral’ or ‘ethical’ is a separate question.

Parvin responded, and the discussion began to turn into a bit of a debate. She argued again that ethics cannot be reduced to predefined parameters, and that we need to think less about how to solve the problem at hand and more about what problem we are solving and whether it is the right one. Why, for example, decide whether to program self-driving cars to kill when we could redesign environments to eliminate the question altogether?

Thakurta provided an example of a relatively closed system, the airline industry, which he argued is ‘the most controlled environment.’ He recounted the story of a Canadian plane that ran out of fuel, forcing the pilot to decide whether to land on an old runway that had been converted into a children’s go-kart track. The pilot made the decision to land, and remarkably, no one was hurt. Audible ‘wow’s could be heard from the audience upon hearing the story, but its larger point seemed somewhat lost. He claimed that self-driving cars should only be on the road when there are only self-driving cars on the road, to better approximate a closed system, and that the real issue is how far the constraints of a closed system can be relaxed.

At this point, the event transitioned into a discussion between the presenters and the audience, with many eager to participate. It began with some clarifications about language and imagery, with SJRC Director Jenny Reardon asking if ‘kill decision,’ which Parvin had mentioned earlier, was a widely used term. Parvin repeated her point about ‘killing machines’ and how imagery changes perception. Thakurta offered talk of algorithms ‘thinking’ as an example of usage that is both widespread and ‘utter bogus.’ Along these lines, another participant mentioned that the ‘learning’ in ‘machine learning’ is not what people tend to think, prompting Parvin to reiterate the importance of taking care with terminology. Thakurta, however, argued that in computer science most of these terms are precise and well-defined, and often predate computer science itself. He framed this as the technical definitions being ‘right’ and popular usage being ‘loose’ and ‘wrong.’ Someone asked where the responsibility lies, and Thakurta responded, “with the person using the code,” insisting that “you have to know what the system is doing.” Who ‘you’ or ‘the user’ is in this case remained a bit unclear.

Jennifer Derr noted that there seemed to be two different questions in Parvin’s talk: one about ethics and where algorithms can operate, and a bigger question about how artificial intelligence is structured by larger social structures, like gender and race. Parvin responded that “both have to deal with the question of action. What is the situation that demands action, and what is the action that it demands? What kinds of questions can algorithms answer?”

Donna Haraway focused on what she called “the creepy desire to cede agency” to “seriously creepy companions.” She suggested that “the allure of the creepy” and a mix of terror and excitement is part of the appeal of these algorithmic technologies and noted that “the autonomous entity is a deep fantasy of Western science.” She asked Parvin what she thought of ‘the creepy factor.’ Parvin responded that part of the allure is the idea of predictability, and that conversations often begin with people being messy and unpredictable. Haraway mentioned the appeal of blaming others after ceding agency. Parvin suggested that it is less about ceding agency than simulating agency and again noted some of the language that makes this possible. Her example of a ‘made-up’ term this time, ‘precision targeting,’ received acknowledgement from the audience. A graduate student later returned the conversation to the question of the displacement of responsibility. He noted the obvious appeal of being able to blame a machine for the decision to wear a tacky shirt (prompting laughter from the audience, and also somewhat contradicting Parvin’s earlier encouragement not to think of fashion as frivolous) but suggested that the case of autonomous vehicles is more troubling.

Drawing on Parvin’s feminist analysis of algorithmic technologies, an audience member brought up a past feminist intervention in the history of robotics: Lucy Suchman, who worked at Xerox for many years, encouraged focusing on interaction with the world rather than on algorithms. Acknowledging that the issue of ceding agency is really a question of what robots will do “without telling anyone,” an audience member agreed with Parvin that this is a reductive way to frame the problem of self-driving cars. Why not take up a different problem altogether, as Parvin suggests, like “how do we devise some computer programs to make driving safer with the driver there?” Parvin agreed but noted the difficulty of changing the conversation at all, pointing out that follow-up publications to “Why Self-Driving Cars Must Be Programmed to Kill” failed to cite critiques, such as her own, that had been published in the interim. Picking up on this idea of siloed conversations, Jenny Reardon described Suchman, an anthropologist, as ‘embedded’ at Xerox and wondered how to foster those kinds of relationships and change thinking in those spaces.

Lindsey Dillon changed the course of the conversation by insisting on consideration of the corporate and capitalist elements of algorithmic technologies and the fact that many of these technologies exist to generate new and different markets. Thakurta responded that even privacy researchers (like himself) and economists don’t know what an individual piece of data is worth. He suggested that individuals still have the choice not to buy these technologies. Jenny Reardon resisted this, saying that there are ways in which you can be forced to buy things, using the example of having to sacrifice her landline telephone. She wondered if the same could happen with driverless cars. Thakurta responded that it was more likely that cars would become a service. Countering the claim that individuals can simply opt out of algorithmic technologies, Parvin pointed out that, due in part to smart city initiatives in many places, “you are already giving up your data in the service of self-driving cars.” In response, Thakurta further expressed the complexity of data-driven economies. He pointed out that a company like Apple has a simple business model that one could explain to anyone: you pay them, and they give you a product. On the other hand, he pointed out the difficulty of trying to explain Google’s business model to someone 30 or 40 years ago. The audience laughed again, in recognition of this absurdity. His bigger point, however, was that strict data protection would immediately tank some major companies, doing harm to the economy in ways Thakurta argued would be “worse than the data being released.” This prompted some murmurs of acknowledgement from the audience (while also sounding a bit like a hostage situation).

Donna Haraway provided some closing thoughts for the evening, returning to the topic of Lucy Suchman’s work at Xerox as a way to discuss the possibilities of intervention in algorithmic technologies. Haraway pointed out that a major difficulty is simply that “we don’t know what game-changers we want. We lack what we want, because every single thing that we want, if we probe it, it actually makes things worse.” She wondered once again how to build communities and cultures like the one that existed at Xerox, of the type that can sustain conversations like the one this event fostered. Parvin parted with a reflection on academia’s own economies, suggesting that part of the difficulty lies in what kinds of work academics receive credit for and how different types of scholarship are evaluated.

November 14, 2018 | Works-in-Progress with Lindsey Dillon

Wednesday, November 14, 2018

4:00pm – 5:30pm

SJRC Common Room, Oakes 231

Join SJRC scholars in the SJRC Common Room for an open discussion of works-in-progress! This is a wonderful chance to engage with one another’s ideas and support our own internal work. At this session, we will hear from Assistant Professor of Sociology Lindsey Dillon, who will discuss her research on redevelopment and racial capitalism in San Francisco.

Read more on Lindsey’s work in the Hastings Environmental Law Journal article The Breathers of Bayview Hill: Redevelopment and Environmental Justice in Southeast San Francisco.

Lindsey Dillon is a geographer with research interests in urban environments and social justice. Her research and writing are deeply engaged with political ecology, feminist geography, critical race theory, and science and technology studies. In addition to being an SJRC Steering Committee member, Lindsey is affiliated with the Community Studies Program and the Environmental Studies Department, and she co-founded and serves on the steering committee of the Environmental Data and Governance Initiative.

October 17, 2018 | Meet & Greet

Please join us for a beginning-of-quarter social hour. In addition to a chance to celebrate the new academic year and enjoy each other’s company over nice food and drink, we will be welcoming new members of our community and welcoming back others.

This will be a great chance for everyone to meet the new faces in the Center and foster emerging collaborations! Attendees are highly encouraged to bring and share their objects of study.

Wednesday, October 17, 2018, 4:00-5:30 PM, SJRC Common Room, Oakes 231

Fall 2018 | Science & Justice Writing Together

Mondays, 1:00-4:00pm
SJRC Common Room, Oakes 231

Wanting to establish a regular writing routine exploring science and justice? Beginning October 8th, join SJRC scholars in the SJRC Common Room on Mondays from 1:00-4:00pm for open writing sessions! Engage in six 25-minute writing sessions (with a 5-minute break in between). Open to all students, faculty, and visiting scholars.

We continue to schedule quarterly writing sessions based on interest and availability.

For more information, please contact Lindsey Dillon (Assistant Professor of Sociology).

Writing Together sessions will not be held on campus holidays falling on the following Mondays: Nov 12, Dec 24, Dec 31, Jan 21, Feb 18, March 25 (spring break), May 27.

March 5, 2019 | Can We Build a Trustworthy and Trusted Press: The Trust Project Challenge

Tuesday, March 5, 2019

7:00pm (reception to follow)

Kresge Town Hall

We all think we can tell the difference between information designed to deceive and journalism designed to inform. But how do we really know? Join Sally Lehrman in a discussion of this critical question in a climate of mistrust and misinformation. Lehrman, an award-winning journalist and Visiting Science & Justice Professor, founded and leads The Trust Project, a consortium of top news companies that is developing publicly accessible standards for assessing the quality and credibility of the news you see online.

Read all about The Trust Protocol and The Trust Project and its news partners (the Economist, Globe and Mail, the Independent Journal Review, Mic, Italy’s La Repubblica and La Stampa, Spain’s El País and El Mundo, the Washington Post, and more) at the following links:

The Verge: Facebook adds trust indicators to news articles in an effort to identify real journalism

CNN Tech: Facebook, Google, Twitter to fight fake news with ‘trust indicators’

The Atlantic: What People Really Want From News Organizations

The Trust Project research: Trust Indicators boost readers’ perceptions of news credibility

Sally Lehrman founded and directs the Trust Project, an international consortium of news outlets implementing a transparency standard for journalism to help the public (and news distribution platforms) identify quality news amid the hubbub online. Lehrman was named one of MediaShift’s Top 20 Digital Innovators in 2018 for this work. An award-winning reporter on medicine and science policy with an emphasis on coverage of social diversity, she has also earned a Peabody Award, a duPont-Columbia Award, and the John S. Knight Fellowship at Stanford University, among other honors. Lehrman’s byline credits include Scientific American, Nature, Health, The Boston Globe, The New York Times, Salon.com, and The DNA Files, distributed by NPR. Her book, “News in a New America,” argues for an inclusive U.S. news media. Her co-edited volume on covering structural inequality is due out from Routledge in 2019. She is Science and Justice Professor at the UC Santa Cruz Center for Science and Justice.

Co-Sponsored by the UC Santa Cruz Science and Justice Research Center and Kresge College’s Media & Society Seminar Series.

May 30, 2018 | WiSE’s Science on Tap | The Postgenomic Condition

Wednesday, May 30, 2018 | The Crepe Place: 1134 Soquel Ave, Santa Cruz, CA 95062

(slideshare of presentation; begins about 10:55)

SJRC Director and Sociology Professor Jenny Reardon will discuss with us her new book, The Postgenomic Condition: Ethics, Justice & Knowledge After the Genome. Reardon’s research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices, particularly in modern genomic research. Her training spans molecular biology, the history of biology, science studies, feminist and critical race studies, and the sociology of science, technology and medicine.

Science on Tap is designed to connect the Santa Cruz community to the latest research at UC Santa Cruz. It is not exclusively for scientists and science majors and aims to appeal to all audiences. So come, grab a drink and a meal, relax, and hear some of the interesting, cutting-edge science happening near you!

Science on Tap is generally on the last Wednesday of every month at the Crepe Place. Due to the popularity of these events, to ensure that you’ll have a seat, we highly recommend that you reserve a table by calling the Crepe Place at (831) 429-6994. More information can be found on their website at: http://thecrepeplace.com/.

Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at UC Santa Cruz.

Hosted by the UC Santa Cruz Women in Science and Engineering (WiSE).