YouTube endorsed a ban on AI “deepfakes” on Wednesday, as lawmakers renewed efforts to stamp out unauthorized digital clones of people’s voices and images.
Sens. Chris Coons, D-Del., and Marsha Blackburn, R-Tenn., first introduced the No Fakes Act in 2023; the bill would make it illegal to distribute a computer-generated likeness of a person without their consent. SAG-AFTRA, which represents 160,000 performers, and the Recording Industry Association of America have each made the bill a top priority, fearing that artificial intelligence will lead to a deluge of impersonation.
The Motion Picture Association, which represents the major film and TV studios, endorsed the bill last year, after allowances were made for recreations of historical figures. Since then, the RIAA has negotiated with YouTube over the liability that platforms could face if they host AI deepfakes.
“We brought together the people affected by the impacts and challenges of AI and the people who have the power to do something about it,” Coons said at a press conference on Wednesday.
The bill includes a notice-and-takedown provision, similar to existing regulation of online copyright infringement. Under the provision, platforms would be immunized for unwittingly hosting deepfakes, provided they act swiftly once notified.
“The NO FAKES Act provides a smart path forward because it focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down,” YouTube said in a blog post on Wednesday. “This notification process is critical because it makes it possible for platforms to distinguish between authorized content and harmful fakes — without it, platforms simply can’t make informed decisions.”
Many states, including California, already allow performers to sue for misuse of their name and likeness. The advent of AI has threatened to make the problem much worse, leading to a wave of fake celebrity endorsements and “sound-alike” music tracks.
In December, YouTube announced a pilot program, using a version of its Content ID technology, to flag unauthorized AI deepfakes and allow creators to request their removal. On Wednesday, YouTube said that participants include MrBeast, Mark Rober and other major stars on the platform.
In the absence of federal law, several states have already taken steps to outlaw AI deepfakes, including California and Tennessee. The No Fakes Act would not preempt those laws, but would preempt any future state legislation on the issue.