‘Nobody Wants the Future Anymore’

As AI comes for podcasting (and everything else), humans are equipping listeners with research, nuance, and ways to fight back.

“Psychotic, honestly!” — Leon Nayfakh
“Orwellian bullshit.” — Linda Holmes
“MAKE IT STOP” — James Cridland

These are but a sampling of the community responses to The Hollywood Reporter’s piece on Inception Point AI, the startup led by ex-Wondery chief financial officer Jeanine Wright. Inception Point promises to deliver thousands of AI-generated podcasts a week, coming soon to an RSS feed near you. Naturally, this announcement was met with disgust from many, many creators, though plenty of people also congratulated Wright when she posted the announcement on LinkedIn. 

It seems uncontroversial to say that AI slop podcasts are a product category with no purpose beyond racking up ad revenue. But according to Wright, even using the word “slop” betrays a character failing. “I think that people who are still referring to all AI-generated content as AI slop are probably lazy luddites,” Wright told THR, in a statement of such exquisitely engineered rage-bait that ChatGPT could have crafted it.

But Wright’s comment speaks to a belief shared by many in the tech industry that critics of AI maximalism are intellectually incurious and/or incapable of understanding generative AI’s potential. 

What, then, might Wright make of the human journalists and podcasters disproving this belief with their informed, probing coverage of the current AI landscape? We spoke with a few such humans to learn more about their approach to reporting on AI and the wider world of technology. Be sure to listen to their (non-slop) shows wherever you get your podcasts. 

Unmaking the nuclear bomb

Of all the journalists sounding the alarm on AI, arguably no one has been more consistently relentless than Brian Merchant. His 2023 book “Blood in the Machine” is likely the reason you’ve heard the word Luddite surface in so many discussions of AI, as it connects the 19th-century Luddite labor movement to the struggles of modern-day workers. Merchant’s newsletter has been tracking AI’s ongoing erosion of labor markets in a devastating series called “AI Killed My Job,” as well as the Luddites’ modern incarnation. If an interview podcast is recording an episode about AI’s impact on workers, Merchant is a go-to guest. He also cohosts a show of his own, “System Crash,” which is currently on hiatus, but the back catalog is worth a listen.

Merchant’s spiciest take might not even be that Luddites are good, actually. It’s that the AI takeover isn’t inevitable. 

“It’s in the technology companies’ interests to make it appear as though the genie is out of the bottle. That we are helpless before it,” Merchant told Good Tape in a conversation in April. “But that’s simply not true. An AI needs human users in order to function, and the human users can collectively decide how we do or do not want to use that technology.”

As proof of concept, Merchant points to another time when humans decided to shelve a seemingly unstoppable invention for the good of humanity. “We haven’t seen a nuclear bomb dropped in a wartime theater for half a century. So there are things that we can do to decide how we want to use the technology, what limits we want to place on it.”

Until that time, he said, AI is “a real and present threat” to workers’ livelihoods across creative industries, “and that’s what we should be thinking about. Less these lofty questions about ‘is AGI [artificial general intelligence] here? Is it going to beget a new religion for the world? Is it going to usher in this era of abundance?’ Again, look at what the companies actually want to do.”  

Doomerism is boosterism

The idea that the AI takeover could still be reversed is a belief shared by Merchant’s “System Crash” cohost, the tech journalist and author Paris Marx. Marx also hosts the long-running interview podcast “Tech Won’t Save Us” in association with The Nation.

AI’s air of inevitability is bolstered not just by its hype, but by its naysayers, Marx told us by phone in April. In fact, he shared, many tech CEOs enjoy stoking the “doomer” narrative that AI could become so powerful that it could destroy humanity.

“If you look at OpenAI, you look at [when GPT-4 was released], you saw Sam Altman deploying that narrative a lot — alongside the narrative that actually AGI is going to be super helpful and a benefit to everybody.” Altman, OpenAI’s CEO, famously warned a global leadership summit in early 2024 that AI is accelerating at a rate that could “make things go horribly wrong.” At the same time, Altman was working to raise trillions of dollars to scale production of AI semiconductor chips.

Why would Altman, an AI accelerationist, give oxygen to the doomer narrative? Marx said, “because I think it helps him to shape the public conversation in a way that distracts from present-day issues.” 

Among those myriad issues, Marx pointed out, is generative AI’s massive environmental impact, from its immense water usage to the toxic processes of chip manufacturing. “These companies that have so long tried to present themselves as socially responsible, environmentally conscious, [they’ve] got these clean offices, doing capitalism… When you actually look at data centers, it starts to kind of pull the lens back and force you to look at the much bigger picture of what this industry is.”

“I should be able to have that data”

To be clear, not everyone digging into AI’s environmental impacts or its disruption of entire job sectors is a hater. “I’m very upfront that I use AI sometimes,” said writer, professor, and documentarian Dexter Thomas when we spoke to him in August. Thomas hosts “kill switch,” a tech podcast produced by Kaleidoscope.

Thomas is so upfront about his use of AI tools that the episode “is using AI worse than driving a car?” begins with Thomas asking the chatbot Claude whether he should buy his own vehicle. The episode didn’t start out intending to be an AI takedown, he told us. It was meant to answer the question: Does using AI to evaluate buying a car have more or less environmental impact than the car itself? 

But as the episode progresses, you can hear Thomas’ frustration as he discovers there’s a reason he can’t find the answer. Minor spoiler: The AI industry doesn’t make their water usage data public. “I truly expected that somebody would be able to tell me or that I’d be able to figure out a mathematical formula that, ‘OK, every time I type something into ChatGPT it’s like taking a bite out of a hamburger in terms of water expenditure.’ I should be able to have that data.”

Thomas’ mission with “kill switch” is not to turn people away from the tech world, he said. Rather it’s to decode it in a way that both technophiles and technophobes can understand. Simply refusing to reckon with AI isn’t an option, although he understands why people try. “There was a point where we were kind of excited about the future… The idea was, ‘Oh, the future is going to be cool. We’re gonna have all this cool stuff, right?’ Nobody wants the future anymore. Everybody is terrified… I don’t believe it’s OK for people to just not understand what’s going on. And I think a lot of companies are really depending on us kind of giving up.”

“A lot of people are drinking the Kool-Aid”

Among those who decidedly haven’t given up are Sam Cole, Joseph Cox, Jason Koebler, and Emanuel Maiberg, journalists and founders of 404 Media. They founded 404 in August 2023, following their exodus from Vice’s tech vertical, Motherboard. (Notably, Brian Merchant is also a former Motherboard contributor, and Dexter Thomas is a former host and contributor for Vice.) 

404 publishes deep original reporting at a blistering pace, which the team then unpacks on their weekly podcast.

Via email in June, Sam Cole shared that she and her colleagues feel “a responsibility for providing as much context as possible” to their tech reporting, especially since they are so frequently the ones breaking the story. Their reporting has uncovered slop e-books flooding public libraries, slop art invading Facebook, and slop history shows drowning out human creators on YouTube. (Full disclosure: I was quoted in one of their stories about AI replacing human voice-over talent.)  

Cole noted that it’s possible for generative AI to be both dangerous and ineffective, citing the frequent and hazardous mistakes in Google’s AI Overview summaries and popular chatbots that send users spiraling into mental health crises. “It seems like a lot of people are drinking the Kool-Aid when it comes to AI companies’ claims about what their products can do,” she said. “[T]he so-called geniuses running these companies swear AGI is coming — it’s always coming soon — and we should all be trembling. Call me to warn me about how independently smart AI is when ChatGPT can reliably do basic math, I guess.”

Even if the Kool-Aid drinkers are wrong, it’s not stopping them from forcing AI tech into the podcast space. Inception Point AI and businesses like it, including voice automation companies like ElevenLabs, seem determined to make human voices redundant. But after speaking with journalists like Merchant, Marx, Thomas, and Cole, one thing seems clear. It’s neither lazy nor ignorant to push back against the AI Podcast Slop era. The laziest thing, in fact, would be to just let it happen.
THREE MORE TECH-CRITICAL SHOWS YOU MIGHT LIKE

“Better Offline” — Ed Zitron has made a solid case for himself as the AI industry’s biggest opp, debunking its business models and long-term viability with the fervor of a Pentecostal preacher and the receipts of a CVS customer. If you want to know how the AI bubble will burst and who will fall in the aftermath, Zitron is the tech-journo Cassandra for you. Just don’t say he didn’t warn you when the bubble finally bursts.

“Dystopia Now!” — Kate Willett is a comedian. Émile Torres is an academic. Together, they examine tech’s biggest weirdos and kookiest pseudo-religions. Squirm as Willett and Torres break down the oddball obsessions of Silicon Valley billionaires (Digital people! Radical life extension! Erection measuring!), and how far they’ll go to bend our present around their idea of a perfect future.

“Understood: Who Broke the Internet?” — This four-part CBC series is written and hosted by the Godfather of tech criticism, Cory Doctorow. This story isn’t about AI; rather, Doctorow and his team set the table for AI by showing how we got here. Doctorow expertly breaks down how the Internet and the tools we use to access it got so “enshittified,” a word he famously coined. Spoiler: It was an inside job.

Katie Clark Gray is an award-winning podcast producer, Pew Fellow, and partner at Uncompromised Creative. She is currently a writer/producer on Wondery’s “The Best Idea Yet,” a guest writer on “Fathom,” and in development on “Do Over,” a show about second chances and horrible mistakes. Learn more at Uncomp.ninja.