
If Google Kills News Media, Who Will Feed the AI Beast?


One of the big worries with the rise of these AI CliffsNotes products is how much they tend to get wrong. You can easily see how AI summarizations, without human intervention, can produce not just incorrect information but sometimes dangerously incorrect advice. For example, in response to a search query asking why cheese isn’t sticking to a pizza, Google’s AI suggested adding “1/8 cup of non-toxic glue to the sauce to give it more tackiness.” (X users later discovered the AI had lifted this suggestion from an 11-year-old Reddit post by a user called “fucksmith.”) Another result told people bitten by a rattlesnake to “apply ice or heat to the wound,” which would do about as much to save your life as crossing your fingers and hoping for the best. Other search queries have simply returned flatly wrong information, like one asking which presidents attended the University of Wisconsin–Madison, to which Google explained that President Andrew Jackson attended college there in 2005, even though he died 160 years earlier, in 1845.

On Thursday, Google said in a blog post that it was scaling back some of its summarization results in certain areas and working to fix the problems it had seen. “We’ve been vigilant in monitoring feedback and external reports, and taking action on the small number of AI Overviews that violate content policies,” Liz Reid, the head of Google Search, wrote on the company’s website. “This means overviews that contain information that’s potentially harmful, obscene, or otherwise violative.”

Google has also tried to allay the concerns of publishers. In another post last month, Reid wrote that the company has seen “the links included in AI Overviews get more clicks than if the page had appeared as a traditional web listing for that query” and that as Google expands this “experience, we’ll continue to focus on sending valuable traffic to publishers and creators.”

While AI can regurgitate facts, it lacks the human understanding and context necessary for truly insightful analysis. The oversimplification and potential misrepresentation of complex issues in AI summaries could further dumb down public discourse and accelerate the spread of misinformation. This isn’t to say that humans aren’t capable of the same; if there’s anything the last decade of social media has taught us, it’s that people are more than capable of spreading misinformation and prioritizing their own biases over facts. But as AI-generated summaries become increasingly prevalent, even those who still value well-researched, nuanced journalism may find it harder and harder to access such content. If the economics of the news industry continue to deteriorate, it may be too late to prevent AI from becoming the primary gatekeeper of information, with all the risks that entails.

The news industry’s response to this threat has been mixed. Some outlets have sued OpenAI for copyright infringement—as The New York Times did in December—while others have decided to do business with it. This week The Atlantic and Vox became the latest news organizations to sign licensing deals with OpenAI, allowing the company to use their content to train AI models, a move that could be seen as training the robots to take their jobs even more quickly. Media giants like News Corp, Axel Springer, and the Associated Press are already on board. Still, proving it’s not beholden to any machine overlords, The Atlantic published a story on the media’s “devil’s bargain” with OpenAI on the same day its CEO, Nicholas Thompson, announced the partnership.

Another investor I spoke with likened the situation to a scene in Tom Stoppard’s Arcadia, in which a character observes that once you’ve stirred jam into your rice pudding by swirling it in one direction, you can’t reconstitute the jam by stirring the opposite way. “The same is going to be true for all of these summarizing products,” the investor continued. “Even if you tell them you don’t want them to make your articles shorter, it’s not like you can un-stir your content out of them.”

But here’s the question I have. Let’s just say Google and OpenAI and Facebook succeed, and we read summaries of the news rather than the real thing. Eventually, those news outlets will go out of business, and then who will be left to create the content the AI needs to summarize? Or maybe it won’t matter by then, because we’ll be so lazy and so hooked on ever-shorter content that the AI will choose to summarize everything into a single word, like Irtnog, the one word into which all published writing gets condensed in E. B. White’s satire of the same name.
