Texas-based telecommunications company Lingo Telecom has been fined $1 million by the United States Federal Communications Commission (FCC) for its role in transmitting the illegal Biden deepfake robocalls.

The scam involved using an artificial intelligence-generated recording of President Joe Biden’s voice, which was spread via robocalls to discourage people from voting in the New Hampshire primary election in January.

FCC cracks down

According to a press release by the FCC, the $1 million fine is not merely a punitive measure but also a step toward holding telecommunications companies accountable for the content they allow to be disseminated through their networks. 

In addition to the monetary penalty, Lingo Telecom has been ordered to implement what the FCC describes as a “historic compliance plan.” This plan includes strict adherence to the FCC’s caller ID authentication rules, which are designed to prevent the kind of fraud and deception that occurred in this case.

Moreover, Lingo Telecom must now follow the “Know Your Customer” and “Know Your Upstream Provider” principles, which are crucial in enabling phone carriers to monitor call traffic effectively and ensure that all calls are properly authenticated. 
Danger to democratic processes

The robocalls, orchestrated by political consultant Steve Kramer, were part of a broader effort to interfere with the New Hampshire primary election. By utilizing AI technology to create a convincing imitation of Biden’s voice, the calls sought to manipulate and intimidate voters, undermining the democratic process. 


Kramer, who was working for rival candidate Dean Phillips, was indicted on May 23 for impersonating a candidate during New Hampshire’s Democratic Party primary election.

The use of deepfake technology in this scam is especially concerning, as it marks a new and troubling development in the ongoing fight against disinformation. Deepfakes, which utilize AI to generate highly realistic yet fraudulent audio or video recordings, present a serious threat to the integrity of democratic processes. 

In March, Cointelegraph shed light on the growing issue of AI-created deepfakes in the ongoing election cycle, underscoring the critical need for voters to distinguish between fact and fiction.

In February, a group of 20 leading AI technology firms committed to ensuring their software would not be used to impact electoral outcomes.
