Labelers training AI say they're overworked, underpaid and exploited

60 Minutes spoke with data workers, including Data Workers' Inquiry community researcher Fasica Berhane Gebrekidan, about the working conditions they face as precarious employees of large American tech firms.

Watch the full segment on CBS News

Milagros Miceli, researcher: "It is a lie that AI will automate everything."

Spain's El País spoke to Mila about the hidden, precarious labor behind AI's seemingly automatic nature. [In Spanish]

Read the full article on El País

What Africa needs to do to become a major AI player

Abdullahi Tsanni covered this year's Deep Learning Indaba in Senegal and interviewed both Timnit and Kathleen for this MIT Technology Review article. In the process, he covers some of the factors that complicate the use and development of AI technology on the continent. But, says Kathleen: “We’re starting to see a critical mass of people having basic foundational skills. They then go on to specialize.” She adds: “It’s like a wave that cannot be stopped.”

Read the full article on MIT Tech Review

DAIR's Alex Hanna explores AI's impact on communities

Business Insider named Alex to their 2024 AI Power list, recognizing her foundational research and tireless advocacy on behalf of people who are most negatively impacted by AI systems.

Read the full article on Business Insider

Your biggest AI questions, answered

National Geographic included Nyalleng in this wide-ranging feature, where she urges a distributed technology future in which we build our own tech. "We absolutely have got to interrogate the question of power," she says. "We cannot expect the Googlers or the OpenAI people to understand all of us. We cannot ask Silicon Valley to represent all 8 billion of us. The best way is for each one of us to build the systems locally."

Read the full article on National Geographic

Data workers, in their own words

Tech Policy Press covered the Data Workers' Inquiry launch on their podcast.

"One thing that came up again and again in these reports is that the people who do this labor don't get any credit for helping build these powerful platforms and tools. Oftentimes they aren't even told who they're working for.

Not surprisingly, the main focus with this Data Workers' Inquiry is labor. How are these workers being exploited and what can they do to protect themselves? A lot of the conversation from this launch is about that. And one thing that came across in all this dialogue is how remarkable it is that this conversation is happening at all because it's risky for data workers to talk."

Listen to the full episode on Tech Policy Press

Data workers detail exploitation by tech industry in DAIR report

TechCrunch covers the launch of the Data Workers' Inquiry, our worker-led collaboration with the Weizenbaum Institute and 15 community researchers.

"Quantifying experiences like these often fails to capture the real costs — the statistics you end up with are the type that companies love to trumpet (and therefore to solicit in studies): higher wages than other companies in the area, job creation, savings passed on to clients. Seldom are things like moderation workers losing sleep to nightmares or rampant chemical dependency mentioned, let alone measured and presented."

Read the full article on TechCrunch

Artificial intelligence's thirst for electricity

Alex talks to NPR's Morning Edition about Google's admission that AI has driven a 50 percent increase in the company's greenhouse gas emissions in the last five years.

Listen to the interview on NPR

'Overlooked' workers who train AI can face harsh conditions, advocates say

Krystal talks to ABC News about the human labor that goes into training AI, including her own experience as a gig worker, and the precarity many face despite the necessity of this work to tech company profits.

Read the full article from ABC News

‘Hype and Magical Thinking’: The AI Healthcare Boom Is Here

Elaine talks about the ways AI currently fails to serve patients, both through missing context and the embedded biases of its training data. "And, as Nsoesie observes, perhaps we can entirely reframe the opportunity AI poses in health care. Instead of trying to measure the biological qualities of individuals with machines, we might deploy those models to learn something about entire regions and communities."

Read the full article from Rolling Stone

Africa's push to regulate AI starts now

Nyalleng: "If it works with people and works for people, then it has to be regulated."

Read the article

How satellite images and AI could help fight spatial apartheid in South Africa

Our spatial apartheid work was featured in MIT Technology Review!

Read article

‘Stochastic Parrot’: A Name for AI That Sounds a Bit Less Intelligent

"Stochastic Parrots" is the AI-related Word of the Year for 2023! Read this article on the origins of the term.

Read article

The AI startup outperforming Google Translate in Ethiopian languages

Asme: "Chatbots like ChatGPT are utterly broken or useless for these languages."

Read the article

These Women Tried to Warn Us About AI

Timnit and Safiya were featured in this Rolling Stone article that reached more than 1 million readers.

Read article

An interview with Krystal Kauffman, lead organizer of Turkopticon

Remote Mechanical Turk workers train artificial intelligence algorithms and complete other data-related business processes. We hear about the workplace issues they face.

Read the article

Die einsame Jägerin der Menschenhändler

Article in German about our fellow Meron Estefanos. The English translation of the title is "The Lonely Huntress of Human Traffickers."

Read article

Announcing the 2023 Just Tech Fellows

Our fellow Adrienne Williams has won a Just Tech Fellowship. Read her project description here.

Read article

The Human Labor Powering AI Engines

Dylan Baker joins Sarah Roberts to discuss the exploited labor fueling AI systems.

Listen to the interview

Potentially Useful, but Error-Prone: ChatGPT on the Black Tech Ecosystem

In this interview, Asmelash Teka breaks down how ChatGPT fails to serve Black people around the world.

Read the article on The Plug

Taming the algorithms: The future is being coded now. Will it include us?

DAIR was the cover story of issue 9 of The Continent.

Read the article on The Continent

We need to examine the beliefs of today’s tech luminaries

This article in the Financial Times breaks down the TESCREAL bundle of ideologies driving the race to build "artificial general intelligence."

Read the article on the Financial Times

Generative AI: What's all the hype about?

Alex was on NPR's Marketplace discussing what generative AI is and isn't.

Listen to the interview on NPR

Get a clue, says panel about buzzy AI tech: It’s being ‘deployed as surveillance’

Alex and other panelists at a Bloomberg conference in San Francisco remind the audience that AI is primarily being used for surveillance purposes.

Read the article on TechCrunch

Inside the AI factory: the humans that make tech seem human

Mila's work was discussed in this article detailing the labor exploitation fueling AI systems.

Read the article on The Verge

Can AI Avoid Bias?

Alex was on Bloomberg TV discussing bias and AI.

Watch interview

After the Whistle Blows

Silicon Valley likes to celebrate and lionize disruptors. But for women in the tech industry who speak out, there can be a high price to pay for rocking the boat.

Read the article on Harper's Bazaar

Trabajos repetitivos y mal pagados, la otra cara del avance de la Inteligencia Artificial

Article in Spanish discussing Adrienne, Mila and Timnit's work on the labor exploitation fueling AI systems. The English translation of the title is "Repetitive and poorly paid jobs: the other side of the advance of Artificial Intelligence."

Read the article on El Economista

How the AI industry profits from catastrophe

Mila's work is discussed in this article on how the AI industry exploits labor.

Read the article on MIT Technology Review

Alex Hanna left Google to try to save AI’s future

After leaving Google, Alex joined Timnit Gebru’s Distributed AI Research Institute, where work is well underway.

Read the article on MIT Technology Review

AI researcher Timnit Gebru explains why large language models like ChatGPT have inherent bias and calls for oversight in the tech

Timnit appeared on the CBS show 60 Minutes to discuss the dangers of large language models.

Watch the interview on CBS

Google fired its star AI researcher one year ago. Now she’s launching her own institute

The Washington Post covered DAIR's launch.

Read article