Plus, Trump’s AI-generated images of Taylor Swift fans spark debate over content moderation.
Close to 200 employees from Google DeepMind, the tech giant’s AI research arm, signed a letter earlier this year protesting the company’s ties with military organizations, according to a Time report published Thursday.
The letter named no specific militaries, according to Time, though it cited an earlier Time investigation that found Google had signed a contract with the Israeli military to provide AI and cloud computing technology. That agreement is part of a broader project with the Israel Defense Forces dubbed Project Nimbus, which some Google employees have publicly spoken out against.
OpenAI signs deal with Condé Nast
OpenAI announced Tuesday that it has entered into a content licensing agreement with publishing powerhouse Condé Nast, owner of household-name publications like The New Yorker, Wired and Vanity Fair.
The deal aims to “ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity and respect for quality reporting,” OpenAI chief operating officer Brad Lightcap said.
Condé Nast joins a long and growing list of other major publishers, including The Atlantic and Time, that have signed similar deals with OpenAI.
Following the recent debut of the SearchGPT prototype, OpenAI is testing new search tools to help users discover content sources more easily. SearchGPT also links directly to news stories.
Trump fakes Swifties’ support with AI-generated images
Former President Donald Trump has been posting a spate of AI-generated images on social media designed to garner support for his 2024 presidential bid.
Ahead of the Democratic National Convention in Chicago this week, he posted a synthetic image of what appeared to be Vice President Kamala Harris standing in front of a massive crowd of communist party members, with an enormous flag displaying the hammer and sickle in the center background. (Trump has taken to calling the Democratic candidate “Comrade Harris.”)
He also posted AI-generated images of Taylor Swift fans supporting the former president, along with an Uncle Sam-style image of Swift herself pointing at the viewer and endorsing a vote for Trump.
The posts have ignited new debate about who should be responsible for policing manipulated and synthetic political content online.
Anthropic sued by authors
A proposed class action lawsuit filed in San Francisco on Monday accuses AI startup Anthropic of illegally using pirated copies of copyrighted books to train its popular chatbot, Claude. The plaintiffs also claim they’ve been deprived of revenue because Claude can generate lookalikes of their work, which are then sold online as competitors.
Founded in 2021 by former OpenAI employees, Anthropic has been regarded by many as a beacon of safety and responsibility in an unregulated AI industry. The new lawsuit threatens to tarnish that reputation.
California launches news fund, AI innovation program
California state officials have signed a deal with tech companies and California-based news outlets in the hopes of providing some financial assistance to a journalism industry weakened by the rise of digital platforms.
The five-year deal enables the creation of a news fund, administered by the UC Berkeley School of Journalism, as well as a “National AI Innovation Accelerator” designed to bring new and supportive AI tools to journalism and other sectors.
Google has agreed to contribute $30 million during the first year of the new program, followed by at least $20 million annually over the following two to five years.
For more on the latest happenings in AI and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.