Sky-is-falling scenarios distract from risks AI poses today

In a new paper, academic ethicists criticize tech leaders and others who emphasize calamity thinking about artificial intelligence.

Media Contact: Brian Donohue - 206-543-7856, bdonohue@uw.edu


Tech-industry leaders and spotlight-seeking luminaries are monopolizing global discourse about artificial intelligence (AI), often conveying that its adoption carries catastrophic risks beyond humans’ comprehension. Doomsayers claim that AI threatens “an adverse outcome so bad (that) it would either annihilate Earth-originating intelligent life or permanently or drastically curtail its potential.”

These projections of distant-future disasters, and the media coverage that invites them, do a disservice to the public by drawing focus away from the societal problems AI is causing today and from consideration of AI’s benefits.

A group of ethicists argue these points in an open-access article published this month in the Journal of Medical Ethics. Lead author Nancy Jecker is a professor of bioethics at the University of Washington School of Medicine.

“Existential risk, which we call ‘X-Risk,’ refers to activities that threaten grave dangers to humanity, like nuclear weapons, climate change and emerging infectious diseases,” she said. “These have tremendous capacity to wipe out large numbers of people and undermine human well-being. We approach them with a balanced risk assessment that includes risks occurring today.”

With AI, here-and-now concerns involve algorithmic bias leading to gender and racial discrimination; AI-generated child sexual abuse material; labor exploitation, especially in poorer countries; and displacement of human creative work. Misinformation during this election year is another major concern, Jecker said.

Media outlets looking for headline clicks are tempted to instead emphasize AI debates that raise the specter of distant disaster, she suggested. Likewise, technology leaders who have financial stakes in AI’s development know that ratcheting up public fears about far-off calamities can eclipse consideration of AI’s present harms.

“What I’m most concerned about is what’s not being said and where the spotlight isn’t,” Jecker offered. “Most people in the tech industry don’t need to personally worry about being declined for a job or a bank loan because of a sexist or ableist algorithm, or not being considered for parole because of a racist algorithm.”

Technology workers, especially leaders, are overwhelmingly white men without disabilities, and this standpoint informs their ethics assessments, she suggested.

The authors wrote the paper with the hope of broadening the AI ethics conversation to include not only technologically aware voices but also those who are historically marginalized and whose opportunities are at risk.

The paper cited a 2023 Stanford University analysis of scholarly AI ethics literature that found a shift away from academic authors and toward authors with tech-industry affiliations; industry-affiliated authors produced 71% more publications than academics between 2014 and 2022.

“Tech workers lack formal training in ethics,” Jecker said. “They can tell us about choices to be made within AI, but they shouldn’t lead ethics debates in the public square. Not only are tech leaders not trained to do so, but they also have a conflict of interest, given their work in that industry.”

One example of a positive direction for AI discourse, Jecker said, is the public-private partnership just announced by the University of Washington and the University of Tsukuba in Japan, with private-sector investment from Amazon, Nvidia and other companies. The project aims to further research, entrepreneurship, workforce development and the social implementation of artificial intelligence.

“We need to engage in cross-border efforts that involve diverse groups from different sectors of society to come together to work through complex issues,” Jecker said.

Catastrophe-fixated commentary can also divert attention from AI benefits in areas like medicine, the authors wrote. They referenced AI’s ability to help radiologists identify high-risk patient cases, to advance precision medicine based on genomic analysis, and to scour large datasets to better predict patient outcomes.

“A balanced approach to AI must weigh benefits as well as risks,” Jecker said.

 

For details about UW Medicine, please visit http://uwmedicine.org/about.

