US lawmakers introduce bill to curb AI deepfakes




Lawmakers from both major US parties have overwhelmingly endorsed the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act)

Last week, a bill to combat AI deepfakes and to prevent original content from being used for AI training without consent was introduced in the US Senate.


Another Senate bill, known as the “Take It Down Act,” was introduced last month; it calls for the removal of non-consensual intimate images, including AI-generated deepfakes.

Notably, AI-generated deepfake nude images of Taylor Swift went viral in January on X (formerly Twitter), Facebook, and Instagram, igniting a national conversation about the dangers of AI technology.

In addition to combating deepfakes, the COPIED Act would address the concerns of journalists, artists, musicians, and other content creators who believe AI companies have been profiting from their work without credit or fair compensation.

A Forbes article last month accused the AI-powered search engine Perplexity AI of plagiarizing its content. The New York-headquartered technology magazine Wired then investigated and found that Perplexity was summarizing its articles despite the Robots Exclusion Protocol being in place, crawling portions of the site that were off-limits to search bots.
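For context, the Robots Exclusion Protocol works through a plain-text robots.txt file that a site publishes to tell crawlers which paths they may fetch. Below is a minimal Python sketch of how a compliant crawler consults that file; the domain and bot name are placeholders, not the publishers or bots named above.

```python
from urllib.robotparser import RobotFileParser

# Load the site's robots.txt (example.com and "ExampleBot" are placeholders,
# not the actual sites or crawlers involved in the reporting above).
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

# A compliant crawler asks before fetching; a non-compliant one simply
# ignores the answer, which is what the Wired investigation alleged.
url = "https://example.com/articles/some-story"
if parser.can_fetch("ExampleBot", url):
    print("robots.txt permits fetching", url)
else:
    print("robots.txt disallows fetching", url)
```

The protocol is purely advisory: nothing technically stops a crawler from fetching a disallowed page, which is why the COPIED Act seeks legal rather than technical enforcement.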

The COPIED Act would establish a mechanism for authenticating and detecting AI-generated content. That mechanism rests on a digital record known as “content provenance information,” which works like a logbook for all content, including news articles, creative works, images, and videos.

Additionally, the bill includes provisions that would make it unlawful to remove or alter this provenance data, helping journalists and other creative professionals protect their work from AI. It would also give state officials the authority to enforce it, opening the door to legal action against AI companies that strip watermarks or use content without permission or payment.
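To picture how such a provenance “logbook” could work, here is a rough Python sketch assuming a simple metadata record bound to the content by a cryptographic hash, so any later alteration is detectable. This is an illustration only: the actual technical standard would be developed under the bill, and real provenance schemes such as C2PA use signed manifests rather than a bare digest.

```python
import hashlib
import json

def make_provenance_record(content: bytes, creator: str, source: str) -> dict:
    """Build a simple provenance 'logbook' entry for a piece of content.

    The record stores who made the content and where it came from, plus a
    SHA-256 digest of the content so later edits can be detected.
    (Illustrative only; not the mechanism the COPIED Act specifies.)
    """
    return {
        "creator": creator,
        "source": source,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True only if the content still matches its provenance record."""
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]

if __name__ == "__main__":
    article = b"Original reporting text ..."
    record = make_provenance_record(article, creator="Jane Reporter",
                                    source="example-news.com")
    print(json.dumps(record, indent=2))

    # Any change to the content breaks the match, flagging possible tampering.
    print(verify_provenance(article, record))               # True
    print(verify_provenance(article + b" edited", record))  # False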
