The NO FAKES Act, introduced in the 119th Congress as H.R. 2794 and S. 1367, is a proposed federal bill that would create a property right over an individual’s voice and visual likeness. If enacted, it would prohibit the creation or distribution of unauthorized ‘digital replicas,’ whether generated by AI or other technologies, without the subject’s consent.
Research from npj Digital Medicine notes, “Synthetic tabular data generation is increasingly important in healthcare research and innovation while preserving patients’ privacy,” but it is also widely acknowledged that “synthetic data is not inherently free from disclosure risks” and that residual vulnerabilities may persist.
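To make the idea of synthetic tabular data concrete, the sketch below builds a fully synthetic table by sampling each column’s marginal distribution from a toy cohort. The columns, values, and numpy/pandas approach are illustrative assumptions only, not a description of any specific healthcare pipeline; real generators add joint modeling and the disclosure-risk checks the quote alludes to.

```python
# A minimal sketch of fully synthetic tabular data, assuming numpy and pandas.
# The toy schema and values are hypothetical, not real patient data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy "real" cohort standing in for source data.
real = pd.DataFrame({
    "age": rng.integers(20, 90, 500),
    "systolic_bp": rng.normal(130, 15, 500).round(),
    "diabetic": rng.random(500) < 0.2,
})

# Independent sampling from each column's marginal distribution: no real row is
# copied verbatim, but joint relationships (and some residual disclosure risk)
# are not fully handled by this simple approach.
synthetic = pd.DataFrame({
    "age": rng.normal(real["age"].mean(), real["age"].std(), 500).round().clip(20, 90),
    "systolic_bp": rng.normal(real["systolic_bp"].mean(), real["systolic_bp"].std(), 500).round(),
    "diabetic": rng.random(500) < real["diabetic"].mean(),
})

print(synthetic.head())
```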
In healthcare, privacy protections under HIPAA focus on protected health information (PHI), while de-identified or fully synthetic data generally falls outside its scope. Although the NO FAKES Act does not directly regulate health data, it could strengthen privacy protections indirectly.
By discouraging the unauthorized creation of AI-generated likenesses of patients or clinicians, the Act may help prevent scenarios such as deepfake videos impersonating physicians, fabricated patient consent recordings, or other deceptive media that could be used in phishing schemes or misinformation campaigns targeting healthcare systems.
If enacted, the NO FAKES Act would prohibit the creation, distribution, or public display of synthetic media that convincingly depicts a real person’s identifiable image or voice unless that individual has given explicit permission.
As a JD Supra article explains, “In a nutshell, the NO FAKES Act creates a federally recognized property right in and to an individual’s ‘digital replica’, [and] this digital replica right can be licensed by a living individual for up to ten years and can be assigned outright by the estate or heirs of a deceased individual, with the US Copyright Office authorized to maintain a registry of post-mortem licenses and assignments of digital replica rights.”
The bill proposes a range of civil remedies for rightsholders, including injunctive relief, statutory damages of up to $5,000 per infringing work, and the recovery of attorney’s fees. Its enforcement framework is modeled in part on the DMCA, requiring online platforms to remove infringing content upon receiving proper notice and to adopt technical measures, such as digital fingerprinting, to limit repeat uploads.
Under the proposal, the rights created by the Act would last for the individual’s lifetime, with renewable ten-year terms, and would extend for seventy years after death, allowing enforcement by estates or designated successors. The bill also seeks to preempt inconsistent state laws governing digital replicas and likeness rights to establish a uniform national standard.
At the same time, it preserves core First Amendment protections by carving out exceptions for parody, satire, commentary, news reporting, and other expressive uses where the use of a replica is materially relevant.
Deepfakes are highly realistic synthetic media created using artificial intelligence, often through methods that generate and refine images or audio by pitting two systems against each other or by gradually constructing them from random noise. They allow one person’s face, voice, or mannerisms to be convincingly placed onto another person’s in video, audio, or images.
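As a rough illustration of the “two systems pitted against each other” idea, the sketch below trains a tiny generator and discriminator on toy 2-D points rather than images or audio. PyTorch is assumed to be available, and the model sizes, learning rates, and data are assumptions for demonstration only, not a production deepfake method.

```python
# A minimal sketch of adversarial training on toy 2-D data (assumes PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" samples: points from a small Gaussian blob standing in for genuine media.
def real_batch(n=64):
    return torch.randn(n, 2) * 0.3 + torch.tensor([2.0, 2.0])

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator: learn to separate real points from generated ones.
    fake = generator(torch.randn(64, 8)).detach()
    real = real_batch()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to produce points the discriminator labels as real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should drift toward the "real" blob.
print(generator(torch.randn(5, 8)))
```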
As Qureshi and Khan note in Artificial Intelligence (AI) Deepfakes in Healthcare Systems: A Double-Edged Sword? Balancing Opportunities and Navigating Risks, “The exponential growth of Artificial Intelligence has birthed a fascinating technology known as deepfakes, capable of generating hyper-realistic audio-visual manipulations that seamlessly mimic individuals.”
These capabilities are the result of advances in machine learning, which allow convincing content to be created quickly from very little source material. Detecting deepfakes remains difficult because the technology can reproduce subtle facial expressions, lighting, and lip movements with high precision. So far the broader societal impact has been limited, but human observers can correctly identify deepfakes only about sixty to seventy percent of the time, even when given contextual information.
Positive uses of deepfakes in healthcare include medical training simulations for physicians and nurses, patient education through virtual avatars, and grief therapy that recreates deceased loved ones. On the other hand, deepfakes can spread false information, such as fabricated endorsements of unproven treatments, or be used to impersonate healthcare providers to gather personal health information or carry out scams.
Tools such as Paubox’s Generative AI feature for inbound email security help mitigate these risks, using AI-driven detection to flag suspicious or manipulated content in email communications.
Liability generally applies to creators, distributors, online platforms, and service providers that help produce or share unauthorized replicas. Rightsholders, such as living individuals, estates, guardians of minors, or holders of personal services contracts, can file federal lawsuits against anyone who creates, distributes, or publicly displays AI-generated representations of a person’s voice or likeness without permission.
Remedies can include court orders to stop the infringement, recovery of actual damages and profits, statutory damages of $5,000 per work for individuals or platforms and $25,000 for larger entities, punitive damages for willful violations, and reimbursement of attorney fees.
Online platforms are offered protections similar to those in the Digital Millennium Copyright Act. They are shielded from liability for user-uploaded content if they promptly remove infringing material after verified notice, use tools such as digital fingerprinting to block repeat uploads, and avoid promoting tools designed mainly to produce unauthorized replicas. Platforms that fail to meet these requirements can face direct liability, while penalties for false notices help prevent abuse.
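As a rough illustration of how fingerprint-based blocking of repeat uploads can work, the sketch below computes a simple 8x8 average hash of an image and compares new uploads against a blocklist. The file names, hash size, and match threshold are hypothetical, and Pillow is assumed; production platforms rely on far more robust perceptual or cryptographic fingerprints.

```python
# A minimal sketch of content fingerprinting for repeat-upload blocking (assumes Pillow).
from PIL import Image

def average_hash(path, size=8):
    """Downscale to a grayscale size x size grid and hash pixels against their mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist of fingerprints taken from content already removed
# after a verified takedown notice.
blocklist = {average_hash("removed_replica.png")}

def should_block(upload_path, threshold=5):
    """Flag an upload whose fingerprint is within a small distance of known-infringing content."""
    h = average_hash(upload_path)
    return any(hamming(h, known) <= threshold for known in blocklist)
```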
As an Attorney at Law Magazine article explains, “The Act would establish a federal property right for all individuals, not just celebrities, in their own voices and likenesses.”
Enforcement can become complex when multiple parties have overlapping claims, such as estates, agents, or licensees pursuing the same violation. Service providers that profit from creating replicas may face strict liability unless they follow safe harbor rules. A three-year statute of limitations would begin when the violation is discovered, and federal law preempts conflicting state rules to ensure consistency.
Deepfakes can be used for impersonation scams that extract personal health information or to spread false claims, such as fabricated endorsements by providers, which undermine patient trust and compliance with privacy standards. Synthetic data also offers potential benefits in research: de-identified replicas allow model training without violating privacy rules, though hybrid forms of synthetic data carry a risk of re-identification.
As one study, Synthetic data in medicine: Legal and ethical considerations for patient profiling, notes, “while fully synthetic data may not constitute personal data, its downstream application in clinical or decision-making systems can still raise fairness, bias, and accountability concerns.”
If the NO FAKES Act requires consent for the creation and distribution of replicas and mandates platform takedowns, it could reduce the misuse of patient likenesses in telehealth deepfakes or fraudulent training datasets. Treating an individual’s digital likeness as protected property would strengthen compliance with privacy regulations, lower phishing and fraud risks, and enable safer applications of AI in diagnostics and virtual patient simulations.
See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)
No. There is no single comprehensive federal statute specifically governing all aspects of AI combined with data privacy.
Regulators like the Federal Trade Commission (FTC) and other agencies have stated that existing consumer protection and privacy authorities apply to AI systems.
There have been legislative proposals to preempt state AI regulations for a period of years, but they have not yet become law.