
Hollywood’s Divide on Artificial Intelligence Is Only Growing


The arrival of artificial intelligence hit Hollywood like an earthquake.

It was 2022. Layoffs, cost-cutting and what appeared to be an inevitable writers' strike were looming, and the industry was in flux. That fall, OpenAI released an early demo of ChatGPT, the moment the tech first entered the public consciousness. Everything changed, starting with "Heart on My Sleeve," a song that used AI-generated versions of the voices of Drake and the Weeknd.

SAG-AFTRA mobilized its fleet of lobbyists in Washington, D.C., as did the Recording Industry Association of America, according to people familiar with the situation. They found receptive ears in Senators Chris Coons, Marsha Blackburn, Amy Klobuchar and Thom Tillis, who later unveiled a discussion draft of legislation protecting individuals from unauthorized uses of their likeness and voice in generative AI tools. On July 31, an updated version of the bill was introduced, a landmark entry in the debate over AI guardrails.

“The drafting of No Fakes began well before the strike, right after ChatGPT and Fake Drake,” says SAG-AFTRA general counsel Jeffrey Bennett. “It wasn’t difficult to go to the Senate, because everyone now sees what’s going on.”

The studios, meanwhile, stayed on the sidelines, at least until there was a clearer picture of what the bill would look like. For them, the calculus was different. Executives supported the measure as long as it didn’t interfere with their right to use so-called “digital replicas” in parodies and documentaries, among other things, people familiar with the studios’ lobbying efforts tell The Hollywood Reporter.

When it was introduced, the studios’ trade group, the Motion Picture Association, said in a statement, “We particularly appreciate the sponsors’ inclusion of safeguards intended to prevent the chilling of constitutionally protected speech,” which will “be necessary for any new law to be durable.”

The MPA has other concerns, and interests, in the realm of AI. Its members have a trove of movies and TV shows to protect against AI companies that may be hoovering up their intellectual property to power their systems, and an incentive to stave off machine-generated works they could one day compete against. But they also create a lot of content, which AI tools may have a bigger hand in producing down the line. The MPA’s sights are set, at least in part, on the future of production.

The split marks a growing rift between Hollywood’s unions and studios on issues related to AI.

It’s a divide that executives may need to bridge diplomatically. On Netflix’s July 18 earnings call, co-CEO Ted Sarandos was asked about the potential impact that generative AI would have on content creation. The exec replied that AI “is going to generate a great set of creator tools, a great way for creators to tell better stories” but noted “there’s a better business and a bigger business in making content 10 percent better than it is making it 50 percent cheaper.”

Much of that fight over AI has quietly played out in an unexpected arena: the U.S. Copyright Office, which has been exploring policy questions at the intersection of intellectual property and AI and on Wednesday issued a report warning of the “urgent need” for laws regulating deepfakes. The agency has been in talks with representatives from the Writers Guild of America, the Directors Guild of America, SAG-AFTRA and the MPA, among others. Those conversations signal that Hollywood’s unions and the tech giants leading the charge on developing AI tools are on a collision course over the use of those tools in the production pipeline. Some of those tech companies have gained a foothold in the industry and are members of the Alliance of Motion Picture and Television Producers alongside the legacy studios.

The unions landed on opposite sides of several hot-button issues from the MPA, which was joined by Meta, OpenAI and tech advocacy groups. They clashed most over whether new legislation is warranted to address the unauthorized and uncompensated use of copyrighted material to train AI systems, as well as the mass generation of potentially infringing works that resemble existing content.

Not only did the studios say current laws are sufficient, they also argued for looser standards for copyrighting works created with AI. The MPA said the Copyright Office is “too rigid” in its human authorship requirement, which holds that intellectual property rights can be granted only to works created by humans, because “it does not take into account the human creativity that goes into creating a work using AI as a tool.”

That hotly contested issue is a major battleground in the exploitation of machine-generated materials. One of the main obstacles to large-scale adoption of AI tools in the production pipeline is that the resulting works are not eligible for copyright protection.

“Contracts say you need to ask permission of studios, and a lot of studios’ policies is that it’s simply not allowed,” said showrunner and writer Mark Goffman (Bull, Limitless, The West Wing) at AI on the Lot, a conference on AI in the entertainment industry, in May. He pointed to legal constraints in the chain of title and having to sign a “certificate of authenticity that you wrote it by yourself.”

Still, the tech is increasingly being adopted, even in the writing process. “I use [large language models] to do research,” said Momo Wang (Minions, Despicable Me, Sing), director of animation at Illumination, at the AI conference. “I first write a story in Chinese and translate with the LLM to English, which is easier for me and better than any translation software.”

The utilization of AI tools has split filmmakers too. No tool has piqued the town’s interest more than OpenAI’s Sora, unveiled in February as capable of creating hyperrealistic clips from a text prompt of just a couple of sentences. OpenAI, which is dealing with infighting of its own over safely rolling out the tech, most recently seen in a lawsuit filed on Aug. 5 by ex-board member Elon Musk targeting the startup’s for-profit pivot, has been releasing videos demonstrating the technology from beta testers who provide feedback to the company as it marches on Hollywood. Some creators have dismissed the output, arguing it undermines the integrity of moviemaking, while others have embraced AI as one of many tools in their arsenal, alongside Adobe After Effects or Premiere, without relying on it completely.

The studios’ position, which signals an eagerness to adopt AI in the production pipeline, stands in stark contrast to that of SAG-AFTRA and the WGA, which urged the Copyright Office to recommend that lawmakers pass legislation requiring companies to secure consent from creators before training their tech on copyrighted material. The agency is expected to issue another report on the issue within the year.

“At a fundamental level, AI cannot create,” says Laura Blum-Smith, WGA senior director of research and policy. “We’ve been active in talking about this issue both with regulators and agencies.”

The DGA, meanwhile, hedged. While it expressed concern about misuse of the technology, it said it “fully expects our directors and members” to “integrate it into the filmmaking process.” Along with the WGA, it advocated for the institution of “moral rights” that would recognize writers and directors as the original authors of their work, giving them greater financial and creative control over the exploitation of their material even when they don’t own the copyrights.

“We’re not doomsayers about AI,” says DGA executive director Russell Hollander. “It can be a tool that can be used effectively in filmmaking. But like many other tools, there needs to be appropriate guardrails, and it needs to be a tool that’s assisting filmmaking opposed to destroying it.”
