Attackers are impersonating well-known AI brands to distribute fake mobile apps designed to steal Facebook credentials.


What happened

A phishing campaign is exploiting the names of the popular AI platforms ChatGPT and Gemini to trick users into downloading fake iOS apps that steal Facebook login credentials. According to CyberPress, the attack begins with phishing emails promoting supposed AI business or advertising tools. The emails link to apps that appear to be legitimate listings in the Apple App Store. After installation, the apps ask users to sign in with Facebook credentials under the pretense of unlocking advertising or account management features. Instead of providing any real service, the apps capture the usernames and passwords entered, allowing attackers to take over Facebook accounts, particularly those used by marketers or businesses managing advertising from mobile devices.


Going deeper

The campaign shows a shift in phishing tactics, moving from traditional fake websites to malicious mobile apps. Mobile apps often appear more legitimate because they are delivered through official app stores and mimic familiar interfaces. In this case, attackers used branding associated with well-known artificial intelligence platforms to make the apps look credible and increase downloads. Notably, instead of asking for login details for the AI services themselves, the apps prompt users to authenticate through Facebook, suggesting the real goal is access to business advertising accounts. Once attackers gain control of these accounts, they can run scam advertisements, distribute malicious content, or launch additional phishing campaigns through trusted business pages.


What was said

Security researchers reported that the campaign demonstrates how trusted platforms can still be misused for credential theft. The researchers stated that “trusted platforms are not immune to abuse,” and warned that users should carefully verify the publisher and purpose of any application claiming to be linked to major AI tools before entering login information. The findings were presented in an analysis of the phishing campaign referenced in the CyberPress report in March 2026.


In the know

Besides attackers impersonating well-known AI brands, threat actors are also using the tools themselves to strengthen phishing and malware campaigns. According to reporting from the Paubox blog, security researchers identified a China-aligned threat group tracked as UTA0388 that has used AI tools such as ChatGPT to scale spear phishing operations since June 2025. The group targeted organizations across North America, Asia, and Europe with highly customized phishing emails and AI-assisted malware. Messages were generated in multiple languages, including English, Chinese, Japanese, French, and German. Although the emails appeared fluent, some contained unusual combinations such as German message bodies paired with Mandarin subject lines, suggesting the content had been produced with AI assistance.


The big picture

Impersonation attacks that exploit well-known technology brands are becoming a growing cybersecurity risk as artificial intelligence tools gain widespread adoption. Familiar names such as ChatGPT and Gemini carry a level of trust that attackers can easily exploit. A similar pattern has long existed in email-based communication. In the study Email in healthcare: pros, cons and efficient use, Stephen Ginn notes that “email is a major means of communication in healthcare and it facilitates the fast delivery of messages and information.” The study also explains that “email's ubiquity has brought challenges” and that working days can be “dictated by the receipt and reply of multiple email messages, which drown out other priorities.” In such fast-paced digital environments, attackers can take advantage of familiarity and time pressure by disguising malicious apps or messages as trusted tools, increasing the likelihood that users will download fake applications or enter sensitive credentials.


FAQs

Why are attackers using AI brand names in phishing campaigns?

Recognized technology brands increase user trust, which improves the likelihood that victims will install malicious apps or enter credentials.


Why target Facebook credentials instead of AI accounts?

Compromised Facebook business accounts can be used for advertising fraud, scam promotion, and further phishing campaigns, making them valuable to attackers.


Are official app stores always safe from malicious apps?

App stores implement security reviews; however, attackers sometimes bypass those controls or disguise malicious functionality well enough to appear legitimate.


What warning signs can indicate a malicious mobile app?

Unexpected requests for unrelated credentials, unclear publisher information, and app descriptions that do not match the requested permissions may indicate malicious intent.


How can organizations reduce the risk of similar attacks?

Organizations can educate employees about phishing techniques, encourage verification of app publishers, and monitor for unusual login activity linked to corporate social media or advertising accounts.
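One way to act on the monitoring advice above is to baseline where each account normally logs in from and flag deviations. The sketch below is a minimal, hypothetical illustration of that idea; the field names, baseline data, and events are invented for the example and do not reflect any real platform's API.

```python
# Hypothetical sketch: flag logins to a shared advertising account
# that come from a country not previously seen for that user.
# All field names and sample data here are illustrative assumptions.

def flag_unusual_logins(events, known_locations):
    """Return login events whose (user, country) pair is not in the baseline."""
    flagged = []
    for event in events:
        baseline = known_locations.get(event["user"], set())
        if event["country"] not in baseline:
            flagged.append(event)
    return flagged

# Baseline built from historical logins (illustrative).
baseline = {"ads_manager": {"US"}}

events = [
    {"user": "ads_manager", "country": "US"},  # expected location
    {"user": "ads_manager", "country": "VN"},  # unexpected location
]

print(flag_unusual_logins(events, baseline))
# → [{'user': 'ads_manager', 'country': 'VN'}]
```

In practice, a real deployment would pull login events from the platform's audit logs and consider more signals than country alone (device, time of day, IP reputation); the point is simply that a baseline plus a deviation check can surface the kind of account takeover described in this campaign.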