Microsoft has called on Congress to pass new legislation targeting AI-generated deepfakes. Brad Smith, Microsoft's Vice Chair and President, has emphasized the urgency for lawmakers to address the growing threat of deepfake technology.
In a recent blog post, Smith highlighted the importance of adapting laws to combat deepfake fraud and prevent exploitation. According to Smith, prosecutors need a dedicated statute under which to charge deepfake-enabled scams and fraud.
Microsoft proposes a federal deepfake fraud statute
According to Microsoft’s report, several legal interventions could prevent the misuse of deepfake technology. One of the suggestions is to create a federal ‘deepfake fraud statute.’ The proposed law would address both the civil and criminal dimensions of synthetic content fraud, providing remedies such as criminal charges, civil seizure, and injunctions.
The report also calls for mandatory identification of synthetic content. By requiring the use of advanced provenance tools, the public would be able to recognize the origin of the content they encounter online. This is important both for the credibility of digital information and for curbing the spread of fake news.
“Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content. This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”
Brad Smith
Additionally, Microsoft suggests amending existing laws on child exploitation and non-consensual explicit imagery to cover AI-generated images. This would ensure that legal frameworks keep pace with technological developments and continue to protect vulnerable groups.
The US Senate has recently taken a step in this direction by passing a bill targeting sexually explicit deepfakes. The legislation would allow victims of non-consensual sexually explicit AI deepfakes to sue the creators of the content.
FCC says “no” to AI voice robocalls
Microsoft has also responded to AI misuse by strengthening safety measures in its own products. The company recently tightened controls on its Designer AI image creator after a vulnerability was exploited to produce obscene pictures of celebrities. Smith said the private sector must put safeguards in place to prevent AI misuse, and that it falls to technology firms to ensure users are not harmed.
The FCC has already acted against AI misuse by banning AI-generated voices in robocalls. Still, generative AI keeps getting better at producing fake audio, images, and video. The danger was underscored recently when a deepfake video of US Vice President Kamala Harris spread on social media, exemplifying the growing threat posed by the technology.
Nonprofit organizations such as the Center for Democracy and Technology (CDT) are also involved in fighting deepfake abuse. As Tim Harper, a senior policy analyst at the CDT, noted, 2024 marks a critical turning point for AI in elections, and the public needs to prepare for it. The current pushback against deepfakes is an early phase of what may be a protracted struggle against technological manipulation.