Paubox blog

Bipartisan bill proposes harsher penalties for AI-enabled fraud

Written by Gugu Ntsele | December 06, 2025

A new bipartisan bill introduced in the House aims to increase criminal penalties for fraudsters who use AI tools to create convincing fake audio, video, or text as part of their schemes.

 

What happened

The AI Fraud Deterrence Act, introduced by Representatives Ted Lieu (D-Calif.) and Neal Dunn (R-Fla.), targets scammers who use artificial intelligence to deceive victims. The legislation raises financial penalties and prison sentences for several federal fraud crimes when AI tools are involved. Maximum fines for mail fraud, wire fraud, bank fraud, and money laundering would increase to between $1 million and $2 million, depending on the offense, and committing these crimes with AI-assisted tools would carry maximum prison sentences of 20 to 30 years. Additionally, scammers who use AI to impersonate government officials would face fines of up to $1 million and up to three years in prison.

 

The backstory

Over the past year, multiple incidents have involved AI-assisted impersonation of top US officials. In May 2025, the FBI issued a public service announcement warning that malicious actors were using text messages and AI-generated voice messages to impersonate senior US officials. The scheme targeted current and former senior federal and state government officials and their contacts. The FBI noted that access to these accounts could be used to target additional government officials or their associates and contacts, and could be used to elicit information or funds.

That same month, federal authorities investigated fraudulent calls and texts sent to senators, governors, business leaders, and other VIPs by someone impersonating White House Chief of Staff Susie Wiles' voice and number. Wiles said her phone had been hacked, which President Donald Trump later confirmed publicly, telling the press that the perpetrators had breached the phone and tried to impersonate her. Some recipients reported that the voice sounded AI-generated.

Less than two months later, the State Department warned diplomats that someone had impersonated Secretary of State Marco Rubio in voicemails, texts, and Signal messages. The messages reached at least three foreign ministers, a US senator, and a governor in what appeared to be a scam. Rubio was also targeted by a deepfake video earlier this year that made it appear he was on CNN vowing to persuade Elon Musk to cut off Starlink access to Ukraine.

 

What was said

Representative Ted Lieu stated that both everyday Americans and government officials have been victims of AI-enabled fraud and scams, which can be ruinous for people who fall prey to financial schemes and disastrous for national security when bad actors impersonate government officials.

 

Why it matters

The impersonation of government officials like Susie Wiles and Marco Rubio demonstrates how AI tools can threaten national security by allowing bad actors to communicate with foreign ministers, senators, and governors under false pretenses. These incidents show that AI-assisted fraud is actively targeting the highest levels of government.

 

FAQs

How would prosecutors determine whether a scammer actually used AI in a fraud scheme?

They would rely on digital forensics, metadata, and expert analysis to confirm that AI tools were used to generate the deceptive content.

 

Does the bill affect legitimate uses of AI?

No, the bill targets intentional misuse of AI for deception, not lawful AI applications.

 

Does the bill address AI companies’ responsibility for misuse of their tools?

No, it focuses on criminal penalties for perpetrators, not liability for AI developers.