Nov 28, 2018 | Algorithms, Mobility, and Justice

Wednesday, November 28, 2018

4:00-6:00 PM

Engineering 2, Room 599

Are moral algorithms a reasonable way to realize the life-saving potential of self-driving cars? In this talk, Nassim JafariNaimi (Assistant Professor, Georgia Institute of Technology) engages the utilitarian framings that dominate discourses on self-driving cars, including the assumptions folded into the question above: that algorithms can be moral and that self-driving cars will save lives. Drawing on feminist and care ethics, the talk brings to the fore the injustices built into current and future mobility systems, such as laws and policies that protect car manufacturers and algorithmic biases that will have disproportionate negative impacts on the most vulnerable. Moreover, it argues that a constricted moral imagination, dominated by the reductive scenarios of the Trolley Problem, is impairing the design imagination of alternative futures. More specifically, a genuine caring concern for the many lives lost in car accidents now and in the future—a concern that transcends false binary trade-offs and recognizes systemic biases and power structures—could serve as a starting point to rethink mobility as it connects to the design of cities, the well-being of communities, and the future of the planet.
Abhradeep Guha Thakurta (UCSC Assistant Professor of Computer Science and Engineering) will be offering a response and comments.
Event hosted/organized by Neda Atanasoski (UCSC Professor of Feminist Studies and Director of Critical Race and Ethnic Studies)

Neda Atanasoski is Professor of Feminist Studies at UC Santa Cruz, Director of Critical Race and Ethnic Studies, and affiliated with the Film and Digital Media Department. Atanasoski has a PhD in Literature and Cultural Studies from the University of California, San Diego. Her research interests include race and technology; war and nationalism; gender, ethnicity, and religion; cultural studies and critical theory; and media studies.

Nassim JafariNaimi is an Assistant Professor of Digital Media at the School of Literature, Media, and Communication at Georgia Tech and the director of the Design and Social Interaction Studio, which she established in 2013. JafariNaimi’s research engages the ethical and political dimensions of design practice and technology, especially as related to democracy and social justice. Her research spans both theoretical and design-based inquiries situated at the intersection of Design Studies, Science and Technology Studies, and Human Computer Interaction. Her writing on topics such as participatory media, smart cities, social and educational games, and algorithms has appeared in venues such as Science, Technology, and Human Values, Design Issues, Digital Creativity, and Computer Supported Cooperative Work (CSCW). JafariNaimi received her PhD in Design from Carnegie Mellon University. She also holds an MS in Information Design and Technology from the Georgia Institute of Technology and a BS in Electrical Engineering from the University of Tehran, Iran.

Abhradeep Guha Thakurta is Assistant Professor of Computer Science and Engineering at UC Santa Cruz. Thakurta’s research is at the intersection of machine learning and data privacy. His primary research interests include designing privacy-preserving machine learning algorithms with strong analytical guarantees that are robust to errors in the data. In many instances, Thakurta harnesses the privacy property of the algorithms to obtain robustness and utility guarantees. A combination of academic and industrial experience has allowed Thakurta to draw non-obvious insights at the intersection of theoretical analysis and practical deployment of privacy-preserving machine learning algorithms.

Co-Sponsored by: Critical Race and Ethnic Studies, the Feminist Studies Department and The Humanities Institute’s Data and Democracy Initiative.

Rapporteur Report

By: Andy Murray

This event grew out of a discussion about the meaning of ‘intelligence’ and ‘smartness’ in so-called ‘smart cities.’ It brought Nassim Parvin, an Assistant Professor of Digital Media at Georgia Tech, into conversation with UC Santa Cruz’s own Abhradeep Thakurta, Assistant Professor of Computer Science and Engineering. The event was organized as a presentation by Parvin, followed by a response from Thakurta. The group that convened for the event was fairly large, consisting of a range of undergraduates, graduate students, and faculty, as well as some community members (one of whom noted, after several attendees had mentioned their publications by way of introduction, that he was “just a layperson; I don’t write books, I just read them”).

Parvin’s discussion was divided into two analyses, both bringing feminist theory to bear on algorithmic technologies: self-driving vehicles and ‘virtual fashion assistance.’ Parvin’s analysis of self-driving cars was fueled by an MIT Technology Review article entitled “Why Self-Driving Cars Must Be Programmed to Kill.” The article states that self-driving cars necessitate ‘algorithmic morality’ and suggests that this morality could be based on majority responses to a classic philosophical puzzle: the trolley problem. Parvin disagrees, arguing that the article’s premises are false. These thought experiments, which Parvin notes were introduced in the context of abortion debates in the 1970s, are generally intended to illustrate that morality is complicated. Self-driving cars seem to bring the trolley problem to life. The problem, as Parvin sees it, is that the trolley problem is an example of what she calls ‘quandary ethics’—it demands a ‘god’s eye’ view and clearly defined parameters. In contrast, real-life decisions are always uncertain, organic, and ongoing in their effects. Parvin doubts that we could ever agree on any set of moral principles for algorithms. During her talk, she demonstrated how the binary logic of algorithms quickly becomes problematic when tested. She worries that in the process of simplifying things for machines and machine learning, humans are ceding moral agency to computers. She also put this more poignantly: “what if Grandma is pushing a stroller?” Parvin rejects algorithmic morality, suggesting that we could instead pursue a radical rethinking of design. She notes the power of imagery and the simple shift of calling the machines “killing robots” rather than “self-driving cars,” an example that generated murmurs of recognition and agreement from the audience.

Moving on to ‘algorithmic fashion,’ Parvin showed an advertisement for the technology in question: Amazon’s Echo Look. This device takes photos of users and provides algorithm-based fashion advice. Members of the audience laughed at the advertisement, as did Parvin herself, joking, “I guess the look on everybody’s face… I do not need to say any more.” She broke down the advertisement, noting that it was relatively “homogenous, despite surface-level diversity,” featuring almost exclusively women who appear to be upper middle-class. She also observed that the device’s command to “look up and smile” invokes street harassment. Parvin argues that the Echo Look, like self-driving cars (killing machines?), is a case of “ceding judgment to code.” In this case, the code is “needy and greedy,” and users are expected to share large quantities of personal data, with unclear terms of accountability and ownership. Just as the trolley problem belies the complex and situated nature of morality in practice, Parvin used the example of Ava Fisher’s “How I Learned to Look Believable” to illustrate the same about fashion. In this piece, Fisher describes agonizing over what to wear as a complainant in a sexual harassment case: “I have needed to be ready, at every moment, to be seen as both a poverty-stricken graduate student and a reliable adult. As an accuser, I need to be a news-team-ready correspondent and someone who certainly wasn’t doing this for the limelight.” Using this example, Parvin also argued that fashion is far from frivolous and that we should “see fashion for its substance as we interrogate algorithms’ claim to reason.”

Thakurta’s response was fairly brief. He began with a simple admission: “I’ll be honest. I haven’t thought about this aspect of the problem.” He explained that he would be hesitant to stand in front of a self-driving car that could detect humans with 99.9% accuracy. Members of the audience chuckled at this. He seemed to agree with some of Parvin’s points about the shortcomings of algorithmic morality, explaining that something called ‘adversarial examples’ can throw off algorithms (unfortunately without much explanation; roughly, these are inputs subtly altered so that a model misclassifies them). However, Thakurta wondered if algorithms should be thought of as the same kinds of moral actors as human beings at all. He noted that the human element of fear, which he called a ‘basal instinct,’ can make reason and ethics ‘go out the window.’ In other words, machines and humans each have different shortcomings as moral actors. Thakurta also noted that self-driving cars will at least have contractual agreements that state whom they will save and under what circumstances, but that the question of whether these agreements are ‘moral’ or ‘ethical’ is separate.

Parvin responded, with the discussion seeming to turn into a bit of a debate. She argued again that ethics cannot be reduced to predefined parameters. She argued that we need to think less about how to solve the problem at hand and more about the problem we are solving and whether it is the right problem. Why, for example, decide whether to program self-driving cars to kill when we could redesign environments to eliminate the question altogether?

Thakurta provided an example of a relatively closed system, the airline industry, which he argued is ‘the most controlled environment.’ He recounted the story of a Canadian plane that ran out of fuel, forcing the pilot to decide whether to land on an old runway that had been converted into a children’s go-kart track. The pilot made the decision to land, and remarkably, no one was hurt. Audible ‘wow’s could be heard from the audience upon hearing the story, but its larger point seemed somewhat lost. He claimed that self-driving cars should only be on the road when there are only self-driving cars on the road, to better approximate a closed system, and that the real issue is how far the constraints of a closed system can be relaxed.

At this point, the event transitioned into a discussion between the presenters and the audience, with many eager to participate. It began with some clarifications about language and imagery, with SJRC Director Jenny Reardon asking if ‘kill decision,’ which Parvin had mentioned earlier, was a widely used term. Parvin repeated her point about ‘killing machines’ and how imagery changes perception. Thakurta offered the example of talk of algorithms ‘thinking’ as both widespread and ‘utter bogus.’ Along these lines, another participant mentioned that the ‘learning’ in ‘machine learning’ is not what people tend to think, prompting Parvin to reiterate the importance of taking care with terminology. Thakurta, however, argued that in computer science most of these terms are precise and well-defined, and often predate computer science itself. He framed this as the technical definitions being ‘right’ and popular usage being ‘loose’ and ‘wrong.’ Someone asked where the responsibility lies, and Thakurta responded, “with the person using the code,” insisting that “you have to know what the system is doing.” Who ‘you’ or ‘the user’ is in this case remained a bit unclear.

Jennifer Derr noted that there seemed to be two different questions in Parvin’s talk: one about ethics and where algorithms can operate, and a bigger question about how artificial intelligence is entangled with larger social structures, like gender and race. Parvin responded that “both have to deal with the question of action. What is the situation that demands action, and what is the action that it demands? What kinds of questions can algorithms answer?”

Donna Haraway focused on what she called “the creepy desire to cede agency” to “seriously creepy companions.” She suggested that “the allure of the creepy” and a mix of terror and excitement is part of the appeal of these algorithmic technologies and noted that “the autonomous entity is a deep fantasy of Western science.” She asked Parvin what she thought of ‘the creepy factor.’ Parvin responded that part of the allure is the idea of predictability, and that conversations often begin with people being messy and unpredictable. Haraway mentioned the appeal of blaming others after ceding agency. Parvin suggested that it is less about ceding agency than simulating agency and again noted some of the language that makes this possible. Her example of a ‘made-up’ term this time, ‘precision targeting,’ received acknowledgement from the audience. A graduate student later returned the conversation to the question of the displacement of responsibility. He noted the obvious appeal of being able to blame a machine for the decision to wear a tacky shirt (prompting laughter from the audience, and also somewhat contradicting Parvin’s earlier encouragement not to think of fashion as frivolous) but suggested that the case of autonomous vehicles is more troubling.

Drawing on Parvin’s feminist analysis of algorithmic technologies, an audience member brought up a past feminist intervention in the history of robotics: Lucy Suchman, who worked at Xerox for many years, encouraged focusing on interaction with the world rather than on algorithms. Acknowledging that the issue of ceding agency is really a question of what robots will do “without telling anyone,” an audience member agreed with Parvin that this is a reductive way to frame the problem of self-driving cars. Why not take up a different problem altogether, as Parvin suggests, like “how do we devise some computer programs to make driving safer with the driver there?” Parvin agreed but noted the difficulty of changing the conversation at all, observing that follow-up publications to “Why Self-Driving Cars Must Be Programmed to Kill” failed to cite critiques, such as her own, that had been published in the interim. Picking up on this idea of siloed conversations, Jenny Reardon described Suchman, an anthropologist, as ‘embedded’ at Xerox and wondered how to foster those kinds of relationships and change thinking in those spaces.

Lindsey Dillon changed the course of the conversation by insisting on consideration of the corporate and capitalist elements of algorithmic technologies and the fact that many of these technologies exist to generate new and different markets. Thakurta responded that even privacy researchers (like himself) and economists don’t know what an individual piece of data is worth. He suggested that individuals still have the choice not to buy these technologies. Jenny Reardon resisted this by saying that there are ways in which you can be forced to buy things, using the example of having to sacrifice her landline telephone. She wondered if the same could happen with driverless cars. Thakurta responded that it was more likely that cars would become a service. Countering the claim that individuals can simply opt out of algorithmic technologies, Parvin pointed out that, due in part to smart city initiatives in many places, “you are already giving up your data in the service of self-driving cars.” In response, Thakurta further expressed the complexity of data-driven economies. He pointed out that a company like Apple has a simple business model that one could explain to anyone: you pay them, and they give you a product. On the other hand, he pointed out the difficulty of trying to explain Google’s business model to someone 30 or 40 years ago. The audience laughed again, in recognition of this absurdity. His bigger point, however, was that strict data protection would immediately tank some major companies, doing harm to the economy in ways Thakurta argued would be “worse than the data being released.” This prompted some murmurs of acknowledgement from the audience (while also sounding a bit like a hostage situation).

Donna Haraway provided some closing thoughts for the evening, returning to the topic of Lucy Suchman’s work at Xerox as a way to discuss the possibilities of intervention in algorithmic technologies. Haraway pointed out that a major difficulty is simply that “we don’t know what game-changers we want. We lack what we want, because every single thing that we want, if we probe it, it actually makes things worse.” She wondered once again how to build communities and cultures like the one that existed at Xerox, of the type that can sustain conversations like the one this event fostered. Parvin parted with a comment on academia’s own economies, suggesting that part of the difficulty lies in what kinds of work academics receive credit for and how different types of scholarship are evaluated.

Nov 14, 2018 | Works-in-Progress with Lindsey Dillon

Wednesday, November 14, 2018

4:00pm – 5:30pm

SJRC Common Room, Oakes 231

Join SJRC scholars in the SJRC Common Room for an open discussion of works-in-progress! This is a wonderful chance to engage with one another’s ideas, and support our own internal work. At this session, we will hear from Assistant Professor of Sociology Lindsey Dillon, who will discuss her research on redevelopment and racial capitalism in San Francisco.

Read more on Lindsey’s work in the Hastings Environmental Law Journal article The Breathers of Bayview Hill: Redevelopment and Environmental Justice in Southeast San Francisco.

Lindsey Dillon is a geographer with research interests in urban environments and social justice. Her research and writing are deeply engaged with political ecology, feminist geography, critical race theory, and science and technology studies. In addition to being a SJRC Steering Committee member, Lindsey is affiliated with the Community Studies Program and the Environmental Studies Department, and co-founded and serves on the steering committee of the Environmental Data and Governance Initiative.

October 17, 2018 | Meet & Greet

Please join us for a beginning of quarter social hour. In addition to a chance to celebrate the new academic year and enjoy each other’s company over nice food and drink, we will be welcoming new members of our community, and welcoming back others.

This will be a great chance for everyone to meet the new faces in the Center and foster emerging collaborations! Attendees are highly encouraged to bring and share their objects of study.

Wednesday, October 17, 2018, 4:00-5:30 PM, SJRC Common Room, Oakes 231

Fall 2018 | Science & Justice Writing Together

Mondays, 1:00-4:00pm
SJRC Common Room, Oakes 231

Wanting to establish a regular writing routine exploring science and justice? Beginning October 8th, join SJRC scholars in the SJRC Common Room on Mondays from 1:00-4:00pm for open writing sessions! Engage in six 25-minute writing sessions (with a 5-minute break in between). Open to all students, faculty, and visiting scholars.

We continue to schedule quarterly writing sessions based on interest and availability.

For more information, please contact Lindsey Dillon (Assistant Professor of Sociology).

Writing Together sessions will not be held on campus holidays falling on the following Mondays: Nov 12, Dec 24, Dec 31, Jan 21, Feb 18, March 25 (spring break), May 27.

May 30, 2018 | WiSE’s Science on Tap | The Postgenomic Condition

Wednesday, May 30, 2018 | The Crepe Place: 1134 Soquel Ave, Santa Cruz, CA 95062

(slideshare of presentation; begins about 10:55)

SJRC Director and Sociology Professor Jenny Reardon will discuss with us her new book, The Postgenomic Condition: Ethics, Justice & Knowledge After the Genome. Reardon’s research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices, particularly in modern genomic research. Her training spans molecular biology, the history of biology, science studies, feminist and critical race studies, and the sociology of science, technology and medicine.

Science on Tap is designed to connect the Santa Cruz community to the latest research at UC Santa Cruz. It is not exclusively for scientists and science majors and aims to appeal to all audiences. So come, grab a drink and a meal, relax, and hear some interesting cutting-edge science that’s happening near you!

Science on Tap is generally held on the last Wednesday of every month at the Crepe Place. Due to the popularity of these events, to ensure that you’ll have a seat, we highly recommend that you reserve a table by calling the Crepe Place at (831) 429-6994. More information can be found on their website.

Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at UC Santa Cruz.


Hosted by the UC Santa Cruz Women in Science and Engineering (WiSE).

May 17, 2018 | Caring for Prairies: A Conversation with Wes Jackson

The ‘Alternative Nobel Prize’ Inaugural Gathering of Changemakers for Social and Environmental Justice

Tues-Thursday, May 15-17, 2018
Refer to schedule for locations

The Right Livelihood Award—widely known as the ‘Alternative Nobel Prize’—was established in 1980 to honor and support courageous people and organizations offering visionary and exemplary solutions to the root causes of global problems. In addition to presenting the annual award, the Right Livelihood Award Foundation also supports the work of its laureates, particularly those whose lives may be in danger due to the nature of their activities.

On Thursday, May 17th, join SJRC Director Jenny Reardon from 9:50am – 11:20am in Social Sciences 2, Room 179, for Caring for Prairies: A Conversation with Wes Jackson. Wes Jackson is founder and president emeritus of The Land Institute, a nonprofit science-based research organization working to develop an alternative to current destructive agricultural practices.

Events are free and open to the public. Register for each event.
Co-Sponsored by the Science and Justice Research Center.

May 16, 2018 | Bioengineering in the Open

Wednesday, May 16, 2018

4:00-6:00 PM

Engineering 2, Room 599 (BitO poster)

Bioengineering is an ascendant and elite field. Advocates of “open” bioengineering propose to expand the participants, methods and scope of practices & ideas for intervening in biology. Drawing on the perceived innovative successes of Silicon Valley, these advocates often promote analogies to computer and information technology to both frame and direct biological engineering’s development as a definitive technology of the twenty-first century. “Bioengineering in the Open” will explore the points of agreement and contention between different versions of “open” bioengineering, including what sources of inspiration and promise they find outside of biotechnology’s conventional borders.

Specifically, the event will compare the versions of “open” biotechnology espoused by a university-based bioengineer and a DIY biohacking collective. Drew Endy, Associate Professor of Bioengineering at Stanford University, promotes the development of standardized biological engineering through the open-source Registry of Standard Biological Parts, working with biological systems to make them more “engineerable.” Oakland-based DIY “biohacking” and citizen science collective Counter Culture Labs compares the innovative powers of community laboratories to the garages that birthed many Silicon Valley startups and emphasizes “democratizing” and “demystifying” biotechnology by taking it outside the ivory towers of universities and research laboratories.

The Science and Justice Working Group will bring these advocates of different visions of “open” bioengineering together to discuss common concerns of innovation, accessibility, and intellectual property. Politics of openness in bioengineering have clear justice implications, as they present cases for who should be allowed to contribute to and benefit from the biology of the future. By shaping bioengineering in the image of computer and information technology, these political visions adopt some familiar models of participation and regulation. This raises some concerns, however: What lessons do these bioengineering advocates take from the less desirable features and outcomes—demographic inequality, monopoly, and information insecurity, for example—of the information technology industry? And does the push to equate biological engineering with computer engineering eclipse features that are unique to working with biomatter, like the ethics or risks of intervening in life forms that grow, mutate, and reproduce?

This working group event aims to discuss the following questions:

  • What ends (Knowledge production? Innovation? Profit? Ethics?) do advocates of “open bioengineering” expect it to better serve?
  • What does approaching biology as a form of engineering accomplish? What distinguishes “open,” as opposed to “closed,” engineering? What other analogies and metaphors do we have for developing and understanding biotechnology?
  • Who does “open” biotechnology help get involved and how? Who should have the right to participate in and benefit from bioengineering? What are the benefits that they receive from open biotechnology?
  • What are the risks and responsibilities of “open” bioengineering, and how are they distributed? How does “open bioengineering” anticipate and mitigate its own potential for harm?

Event Host:

Andy Murray, Sociology PhD Candidate and SJRC Graduate Student Researcher


Drew Endy, Associate Professor, Stanford University

Patrik D’haeseleer, Co-founder and Chair, Counter Culture Labs

Jenny Reardon, Professor and SJRC Director, UCSC


Co-Sponsored by the UCSC Departments of Biomolecular Engineering and Molecular, Cell, and Developmental Biology, and the UCSC Genomics Institute.

May 16, 2018 | Assembling Precision Medicine

Wednesday, May 16, 2018
Engineering 2, Room 599


Join S&J Visiting Scholars Declan Kuch and Matthew Kearnes in an informal discussion of how proponents of the bio-nano sciences, centered around polymer chemistry, have promised a new generation of targeting agents that will carry drug payloads to diseased cells with greater accuracy. Alongside these promises, proponents of precision medicine have sought to build new knowledge about health and illness through massive new databases that combine multiple ‘-omics’ with lifestyle and chemical exposure data. Much has already been written speculating about both the efficacy and the social effects ‘downstream’ of these sciences, especially the likely consequences of precision medicine in the domains of socio-economics, race, and disability (Juengst et al., 2016; Meagher et al., 2016). We instead seek to discuss how these critiques are (or are not) affecting laboratory designs, practices, and methods, starting with a discussion of critiques of bio-nano science (Torrice, 2016; Wilhelm et al., 2016). How can bio-nano science and precision medicine practically address critics in disciplines such as public health and sociology who dismiss them as expensive indulgences that benefit mostly rich white people? What role can data sharing play in building public support? How can the open science ethos of bio-nano and much precision medicine research translate into public benefit, considering the expanding ‘pharmaceuticalisation’ of illness (Dumit, 2012) and rising drug prices?

Declan Kuch is a Research Fellow in the School of Humanities and Languages at UNSW. His research is situated between the fields of Science and Technology Studies and Economic Sociology. He has published on topics including public engagement with science and technology, precision medicine, energy and climate policy, and the sharing economy. He is the author of ‘The Rise and Fall of Carbon Emissions Trading’ (Palgrave MacMillan, 2015) and loves riding bikes.

Matthew Kearnes is an Australian Research Council Future Fellow, a CI with the ARC Centre of Excellence in Convergent Bio-Nano Science & Technology (CBNS), and a member of the Environmental Humanities Group at the School of Humanities and Languages, University of New South Wales. Matthew’s research is situated between the fields of Science and Technology Studies (STS), human geography, and contemporary social theory. His current work is focused on the social and political dimensions of technological and environmental change, including ongoing work on nanotechnology, precision medicine, geoengineering, and the development of negative emission strategies in response to anthropogenic climate change. He has published widely on the ways in which the development of novel and emerging technologies is entangled with profound social, ethical, and normative questions. Matthew serves on the editorial board of Science, Technology and Society (Sage) and is an associate editor for Science as Culture (Taylor & Francis). Matthew is also co-convenor of the 4S 2018 conference, to be held in Sydney in August 2018.


  • Dumit J. (2012) Drugs for life: how pharmaceutical companies define our health, Durham, NC: Duke University Press.
  • Juengst E, McGowan ML, Fishman JR, et al. (2016) From “personalized” to “precision” medicine: the ethical and social implications of rhetorical reform in genomic medicine. Hastings Center Report 46: 21-33.
  • Meagher KM, McGowan ML, Settersten RA, et al. (2016) Precisely Where Are We Going? Charting the New Terrain of Precision Prevention. Annual Review of Genomics and Human Genetics.
  • Torrice M. (2016) Does Nanomedicine Have a Delivery Problem? ACS Central Science 2: 434-437.
  • Wilhelm S, Tavares AJ, Dai Q, et al. (2016) Analysis of nanoparticle delivery to tumours. Nature Reviews Materials 1: 16014.

May 09, 2018 | Timescales, Memory, and Nuclear Geographies: A Conversation with Gabrielle Hecht and Julie Salverson

Wednesday, May 09, 2018

4:00-6:00 PM

Louden Nelson Center, Room 1

301 Center St, Santa Cruz, CA 95060

Source: Michael Brill, Site Design to Mark the Dangers of Nuclear Waste for 10,000 Years. Buffalo: The Buffalo Organization for Social and Technological Innovation Inc. (BOSTI). 1991, pl 15.


Writers and activists researching nuclear things face “the challenge of rendering visible occluded, sprawling webs of interconnectedness” (Nixon 2011, 13). This discussion features two writers whose work traces the sprawling webs of nuclear geographies, binding uranium mining and its dispersed radioactive legacies. Julie Salverson (Professor of Drama and Cultural Studies, Queen’s University) links the mines of Northern Canada with the U.S.’s use of nuclear weapons in Japan, and the later disaster at Fukushima, while Gabrielle Hecht (Professor of History and Nuclear Security, Stanford) examines the afterlives of neocolonial uranium mining by French companies in Gabon. Salverson and Hecht experiment with different conceptual and writerly methods to trace the geographies of these extractive economies and their uneven effects.

This discussion with Salverson and Hecht is moderated by UCSC’s Science and Justice Research Center. We invite event attendees to read a representative article from each author; email Lindsey Dillon for a copy of the readings.

  • Salverson, Julie and Peter C. van Wyck. “Through the Lens of Fukushima.” Forthcoming in Through Post Atomic Eyes, edited by Claudette Lauzon and John O’Brian. McGill-Queen’s University Press.
  • Hecht, Gabrielle. “Interscalar Vehicles for an African Anthropocene: On Waste, Temporality, and Violence.” Cultural Anthropology 33, no. 1 (2018): 109-141.


Nixon, Rob. Slow Violence and the Environmentalism of the Poor. Harvard University Press, 2011.

May 2, 2018 | “Sons and Daughters of Soil?” reflections on Life Sciences and Decoloniality in South Africa

Wednesday, May 02, 2018, 3:30-5:30 PM, Humanities 1, Room 210

Responding, as researchers, to Earth Mastery that includes not only violent machines, but a violation of evidence and epistemes including the scientific episteme, requires accumulating and presenting evidence for existences that do not exist — at least, not in neoliberal discourses. In trying to research and support specific situations of Black environmental struggle in South Africa, I find myself standing with that which has no existence in conventional discourses: for a cliff that no longer exists; for molecules that have no existence in local knowledge; for people who have no existence in the mining companies, for the assassinated Bazooka Radebe, whose existence is now with the Ancestors, and with the soil he died to conserve. Environmental Humanities South had begun by asking a question about how to generate evidence in the geological Anthropocene. By the time our first three years had ticked by and we had encountered the Capitalocene, I had learned that a far more fundamental struggle has to be the focus of our work. What exists? Who exists? In what registers and modes? How do we take on the new conquistadors with their machines called Earth Masters, given that it is their owners’ logic that has come to define who exists and what exists and what can be ground to dust? How can scholarship contribute to the building of a broad-based environmental public? Presented as a dilemma tale, this talk sketches six moves toward an ecopolitics in South Africa, with the question: what else could be in this discussion?

Lesley Green | Fulbright Fellow, Associate Professor of Anthropology in the School of African and Gender Studies, Anthropology and Linguistics, University of Cape Town; Director of Environmental Humanities South, Faculty of Humanities, University of Cape Town

Hosted by the IHR Race, Violence, Inequality and the Anthropocene Cluster.

Co-Sponsored by the Science and Justice Research Center and the Anthropology Department.