Sunday, December 22, 2024

AI Briefing: Google DeepMind open-sources SynthID for watermarking AI-generated text

Must read

Last week, Google DeepMind expanded the availability of its SynthID tool for watermarking text generated by AI. After implementing SynthID text watermarking in Google Gemini earlier this year, the company is making the tool open source to help improve transparency around AI-generated text from other large language models.

Watermarking AI content has been a high priority since AI-generated content began to proliferate over the past two years. While the focus has largely been on watermarking AI images and videos, watermarking text could help detect AI-generated misinformation and scams, along with fake product reviews and copyrighted materials. The updates, launching in beta, are part of a broader expansion of SynthID across text, music, images and video, with each content type using a different watermarking system.

“Being able to identify AI-generated content is critical to promoting trust in information,” read a DeepMind blog post. “While not a silver bullet for addressing problems such as misinformation or misattribution, SynthID is a suite of promising technical solutions to this pressing AI safety issue.”

In a paper published last week in Nature, the DeepMind research team explained how SynthID text was built and tested. When generating text, the model embeds an invisible signature by using a “random seed” and a score-based statistical pattern to subtly shape which words it outputs. On the other end, a detector that holds the secret watermarking key can check for that pattern to determine whether text was created with generative AI. Researchers also note generative watermarks aren’t perfect and have limitations when text is translated or rewritten, or when applied to shorter text and factual content.

While some experts have previously warned that watermarking AI content doesn’t go far enough on its own, organizations like C2PA, whose members include Google and other tech giants, are working to create new standards for AI content transparency and authenticity.

Google also tested SynthID text in a live experiment covering nearly 20 million Gemini responses. Along with testing the accuracy of the watermarking process, Google had Gemini users rate responses with a thumbs-up or thumbs-down. Researchers found the watermark didn’t noticeably degrade the quality or helpfulness of the AI-generated text. The team also ran a smaller test with 3,000 responses, comparing watermarked and unwatermarked outputs on grammar, relevance, accuracy, helpfulness and quality.

Marketing and brand safety experts said the approach generally seems like a positive step, though its impact could depend on its efficacy and level of uptake. Various polls have found plenty of consumer interest in transparency around generative AI content, with a survey by SOCi finding 76% of consumers want at least some degree of clarity.

It’s too soon to know all the ways that SynthID might bring transparency across various LLMs, but it might help detect AI-generated misinformation on social platforms. Damian Rollison, director of market insight at SOCi, said it would be ideal if all AI-generated content — both helpful and harmful — could be explicitly identified.

“Platforms where AI content is known to be rampant must actually implement this or similar solutions in a way that is helpful to publishers, advertisers and consumers,” Rollison said. “Unfortunately, platforms like Google itself, Meta, X and others benefit indirectly from the proliferation of AI content, so their will to combat fake content will likely be balanced against a disincentive to impact ad revenues and engagement metrics.”

Others agree that scale will be key, though watermarking may not help with everything. Nick Sabharwal, vp of product at Seekr, said it might only catch unsophisticated actors: teachers checking whether a student used Gemini or ChatGPT to write a paper, for instance, or consumers checking whether a marketing or customer service message was AI-generated. According to Sabharwal, detection tools are helpful and needed, but unlikely to prevent harmful content, since malicious actors will simply move to platforms that don’t use watermarks.

“There has already been a significant proliferation of LLMs and LLM providers,” Sabharwal said. “Consolidation does not seem likely. Furthermore, nefarious large actors such as nation-states who want to disseminate misinformation can also develop their proprietary models to bypass these guardrails.”

If scaling SynthID is successful, could it help prevent online ads from being served on AI-generated, made-for-advertising (MFA) websites? Potentially, said Arielle Garcia, director of intelligence at Check My Ads. But unless SynthID provides clarity on its probability scores, it could carry the same false-positive risks that already exist with ad verification. Garcia also noted that MFA sites and other schemes are still largely an incentive problem, not just a technical one.

“There still is a lot of money being made from MFA by ad-tech companies, agencies and other vendors paid on volume,” Garcia said. “And if brands do not or are not able to confirm where their ads ran, this runs the risk of being another empty/false assurance.”

Prompts and Products — AI news and other announcements

  • More than 25,000 creators have signed a statement warning that the unlicensed use of creative works for training generative AI is an “unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
  • A lawsuit against Character AI, filed by the mother of a teenager who died by suicide, alleges the role-playing chatbot app contributed to her son’s death. The news has prompted new discussions about the dangers of AI chatbots, especially for younger and more impressionable users.
  • A new report from Snowflake looks at emerging trends, both related and unrelated to AI, that are shaping the marketing landscape, including the rise of data gravity, generative AI, privacy concerns and commerce media.
  • The AI deepfake detection startup Reality Defender announced an additional $15 million in Series A funding, with new investments from Accenture, IBM Ventures and other sources. (In other news, McAfee and Yahoo News are partnering to verify the authenticity of images and prevent AI misinformation.)
  • OpenAI and Microsoft announced a $10 million deal with numerous publishers, including The Minnesota Star Tribune, The Philadelphia Inquirer and other major metropolitan newspapers.
  • The European Parliament is using Anthropic’s Claude AI models to create and expand an interactive digital archive.
  • Runway released a new motion-capture tool that lets people drive generative AI video using footage shot on a smartphone.
  • The FTC’s new rules banning fake reviews went into effect, regulating deceptive marketing tactics from both humans and AI. (Read more in our recent explainer.)
  • Anthropic debuted a new AI tool that can “control” a PC by moving a mouse like a human. Meanwhile, Microsoft plans to soon start letting companies create their own autonomous AI agents through its Copilot Studio.
  • Google released new generative AI tools for creating music.
  • Perplexity faces new legal pressure as it attempts to win over publishers and advertisers (Digiday)
  • How publishing execs are incorporating generative AI tools into their workflows (Digiday)
  • Mondelēz takes AI in-house to try and curb marketing costs (Digiday)
  • Media Briefing: This year’s search referral traffic shifts are giving publishers whiplash (Digiday)
  • Daze, a creative, AI-powered messaging app for Gen Z, is blowing up prelaunch (TechCrunch)
  • Companies look past chatbots for AI payoff (WSJ)
  • Google, Microsoft and Perplexity are promoting scientific racism in search results (Wired)

Fiverr CMO Matti Yahav talks about new campaign making fun of AI buzz

Everyone is tired of hearing about AI, but Fiverr now has a new Broadway-style song all about just how tired everyone really is. Guess what? It was made with AI.

The online marketplace for freelance services has launched a new campaign poking fun at the current hype cycle, emphasizing AI as a tool rather than snake oil or a corporate salve. The ad, called “Nobody Cares,” launched last week and touts the AI tools increasingly used by freelancers, some of whom employ various generative AI platforms in projects for a range of clients.

“We see it in our feed, in our LinkedIn feed, and whatever social media you’re on all day: AI. AI. AI,” Fiverr CMO Matti Yahav told Digiday. “Of course, AI is amazing. It’s a fantastic tool. But at the end of the day, the important thing is the result. It’s the impact.”

In fact, the commercial itself was created using nearly a dozen generative AI tools, according to Yahav. For images, the team used Midjourney, Ideogram, Leonardo, Flux and Photoshop Generative Fill. Animations were made with Runway Gen-3, Kling and Minimax, while lip-syncs and face-swaps used Runway and Kling along with Liveport.

Fiverr is seeing increased interest in AI-powered projects on the platform, Yahav said. While growth varies by domain and industry, he said music and audio are two categories that have stood out. He thinks the industry should stop asking whether something was made with AI and instead talk more about the services it powers: “Do you know how much of the service was made with AI? I’m not sure, but it will be like the human talent behind it, consolidating everything, making sure you get the best output.”

While AI is boosting productivity for many, concerns about accuracy, quality and legal implications persist. Around 25% of U.S.-based freelancers, and 50% of U.K.-based respondents, are worried about privacy, legal and copyright issues. Fiverr also said it has guardrails to prevent AI misuse on the platform, though Yahav declined to say much about what those tactics are. (Instead, a company spokesperson directed Digiday to Fiverr’s community standards page.)

Here are some other details from Fiverr’s new report on trends in AI use across the platform this year, based on a survey of 3,300 Fiverr freelancers.

  • Top tools include ChatGPT (used by 88% of respondents), Midjourney (37%), Firefly (29%), Quillbot (18%), Hugging Face (18%) and Gemini (15%).
  • AI use among programming and tech freelancers jumped from 10% in 2023 to 86% in 2024, while music and audio jumped from 8% to 41% this year.
  • Text and content generation is now the most popular type of AI-assisted work on Fiverr (used by 40% of respondents), followed by visual and arts projects (19%) and data analysis (18%).
  • More than a third of freelancers are now paying for AI tool subscriptions, with total paying subscribers to AI tools up 10% over 2023.
  • Two-thirds of freelancers say AI helps boost productivity.
