Decoder with Nilay Patel
Description
Grok, the chatbot made by Elon Musk’s xAI, can generate all manner of AI images on demand, including nonconsensual intimate images of women and minors. It's the kind of "controversy" that would have completely sunk a platform five or 10 years ago, but now it seems clear that Elon wants Grok to be able to do this.
A lot of people feel like someone should be able to do something about a one-click harassment machine like this. But who has that power, and what they can do with it, is a deeply complicated question, tied up in the thorny mess of history that is content moderation and the legal precedents that underpin it. So I invited Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence to come talk me through it.
Links:
Grok’s gross AI deepfakes problem | The Verge
Grok is undressing children — can the law stop it? | The Verge
Tim Cook and Sundar Pichai are cowards | The Verge
Senate passes a bill to let nonconsensual deepfake victims sue | The Verge
EU looks to ban nudification apps following Grok outrage | Politico
Grok flooded X with millions of sexualized images | The New York Times
The Supreme Court just upended internet law | The Verge
Mother of Elon Musk’s son sues xAI over sexual deepfake images | AP
Subscribe to The Verge to access the ad-free version of Decoder!
Credits:
Decoder is a production of The Verge and part of the Vox Media Podcast Network.
Decoder is produced by Kate Cox and Nick Statt and edited by Ursa Wright. Our editorial director is Kevin McShane.
The Decoder music is by Breakmaster Cylinder.
Learn more about your ad choices. Visit podcastchoices.com/adchoices