2 min read
China-linked hackers use ChatGPT to enhance phishing and malware campaigns
Farah Amod
October 29, 2025
AI-powered cyberattacks are on the rise as researchers uncover how threat groups use ChatGPT to create multilingual phishing lures and custom malware.
What happened
According to Cyber Press, security researchers have identified a China-aligned threat group, tracked as UTA0388, using artificial intelligence tools like ChatGPT to scale and automate their spear phishing campaigns. Since June 2025, the group has targeted organizations in North America, Asia, and Europe with highly customized phishing emails and AI-assisted malware.
UTA0388 generated messages in multiple languages, including English, Chinese, Japanese, French, and German. While the emails appeared linguistically fluent, many showed semantically incoherent combinations, such as German bodies with Mandarin subject lines, suggesting AI-generated content.
Going deeper
UTA0388 used fabricated personas and fake research organizations to lure victims into extended message threads. The campaigns often relied on “rapport-building phishing,” in which the initial emails appear harmless and malicious payloads are introduced only after the target's trust has been won.
At the technical level, researchers analyzed five separate variants of the group's malware; each featured major rewrites and introduced new persistence methods, command-and-control mechanisms (such as fake TLS and WebSockets), and obfuscation techniques. Notably, indicators such as Simplified Chinese in developer paths and the use of python-docx, a library that frequently appears in LLM-generated code, point to the role of AI in their creation.
OpenAI’s October 2025 report independently confirmed the group’s use of ChatGPT for malware development and spear phishing, citing telltale signs such as patterned fake organization names, odd malware archive contents, and the use of scraped, unrealistic email addresses.
What was said
Security researchers warn that traditional defenses that rely on spotting spelling errors or other obvious red flags may fail against fluent AI-generated text. Volexity and OpenAI recommend shifting toward behavioral detection: monitoring reply chains and unexpected activity within emails or documents.
The presence of nontraditional artifacts (such as pornographic files and Buddhist chants embedded in malware) and the use of fake yet fluent multilingual communication further demonstrate how AI can produce convincing but contextually flawed attacks at scale.
The big picture
The UTA0388 campaign shows how quickly state-backed hackers are adapting artificial intelligence for cyber operations. Instead of clumsy phishing messages or simple malware, attackers are now using ChatGPT to write fluent, multilingual emails and generate custom code at scale. The result is more believable communication, more varied attacks, and far fewer telltale signs for defenders to spot.
Paubox recommends Inbound Email Security to help organizations defend against this new wave of AI-assisted threats. Its generative AI studies tone, context, and sender behavior to catch messages that don’t align with normal communication—even when they look polished and legitimate. That kind of intent-based detection gives security teams a fighting chance against phishing and malware campaigns built with the same kinds of AI tools defenders themselves rely on.
FAQs
How can organizations detect AI-generated phishing content?
Look for linguistic mismatches (e.g., different languages across subject line and body), reply-chain manipulation, and unnatural sentence construction despite correct grammar; these are signs of LLM-generated text rather than human-authored content.
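A minimal illustrative sketch of one such check, assuming the third-party langdetect package and hypothetical sample strings (this is not Volexity's or OpenAI's tooling): flag messages whose subject line and body appear to be written in different languages.

```python
# Illustrative heuristic only: flag subject/body language mismatches like the
# German bodies with Chinese subject lines seen in this campaign.
# Requires the third-party package: pip install langdetect
from langdetect import detect


def language_mismatch(subject: str, body: str) -> bool:
    """Return True when the detected subject and body languages differ."""
    try:
        return detect(subject) != detect(body)
    except Exception:
        # Very short or empty strings make detection unreliable; don't flag them.
        return False


if __name__ == "__main__":
    # Hypothetical example: Chinese subject line paired with a German body.
    subject = "关于最新研究报告的邀请"
    body = "Sehr geehrte Damen und Herren, anbei finden Sie unseren aktuellen Bericht."
    print("Language mismatch:", language_mismatch(subject, body))
```

A check like this is only one weak signal and would sit alongside reply-chain and sender-behavior analysis rather than replace them.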
What is "rapport-building phishing" and why is it harder to detect?
It's a technique where attackers send benign emails to initiate trust before delivering malicious links or attachments later, making it difficult for traditional filters to block the early-stage communication.
Why is the use of python-docx significant in this context?
The presence of python-docx suggests AI-assisted automation: the library appears frequently in LLM-generated scripts that produce formatted documents at scale.
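As a hedged illustration of why this artifact matters (hypothetical file names and text, assuming python-docx is installed), the sketch below shows how little code is needed to mass-produce personalized, formatted documents.

```python
# Illustrative only: a few lines of LLM-suggested Python can churn out polished
# .docx files for an entire target list. Names and text here are hypothetical.
# Requires the third-party package: pip install python-docx
from docx import Document


def build_briefing(recipient_org: str, path: str) -> None:
    """Generate a formatted .docx 'research briefing' addressed to one organization."""
    doc = Document()
    doc.add_heading("Quarterly Policy Research Briefing", level=1)
    doc.add_paragraph(f"Prepared for {recipient_org}")
    doc.add_paragraph(
        "Our institute would welcome the opportunity to discuss the attached findings."
    )
    doc.save(path)


# One short loop personalizes documents at scale.
for org in ["Example Org A", "Example Org B"]:
    build_briefing(org, f"briefing_{org.replace(' ', '_')}.docx")
```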
What steps should security teams take to defend against AI-assisted malware?
Enhance detection of unusual persistence methods (e.g., DLL search order hijacking), monitor for non-standard command-and-control protocols, and stay current on threat intelligence indicators tied to evolving malware variants; a simple, illustrative check for the first of these is sketched below.
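The sketch below is an illustrative starting point, not a production detection tool; the directory paths and DLL names are examples rather than indicators tied to UTA0388. It flags copies of commonly hijacked Windows DLL names found in application folders, a basic sign of possible search order hijacking.

```python
# Illustrative sketch: look for DLL names that attackers commonly plant next to
# legitimate executables to abuse Windows DLL search order.
import os

# Example list of frequently hijacked DLL names (not exhaustive, not UTA0388-specific).
COMMONLY_HIJACKED = {"version.dll", "wtsapi32.dll", "dbghelp.dll", "userenv.dll"}


def find_suspect_dlls(app_dirs):
    """Yield (directory, dll) pairs where a commonly hijacked DLL name appears
    outside the Windows system directories."""
    for app_dir in app_dirs:
        for root, _dirs, files in os.walk(app_dir):
            for name in files:
                if name.lower() in COMMONLY_HIJACKED:
                    yield root, name


if __name__ == "__main__":
    # Hypothetical paths; point these at real application install locations.
    for folder, dll in find_suspect_dlls([r"C:\Program Files", r"C:\Users\Public"]):
        print(f"Review {dll} found in {folder}")
```

Findings from a scan like this still need triage (some software legitimately ships local copies of these DLLs), which is why it complements rather than replaces threat intelligence and behavioral monitoring.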
How does this relate to CVE vulnerabilities like Follina or Outlook flaws?
UTA0388’s malware may exploit known vulnerabilities such as CVE-2022-30190 (Follina) and CVE-2023-23397 (the Outlook elevation-of-privilege flaw) to execute malicious payloads, underscoring the importance of regular patching and document-based exploit detection.