
Threat Intelligence Briefing: Abuse of Custom GPTs for Brand Impersonation and Phishing

Doppel Team
August 13, 2025

Author: Aarsh Jawa

Doppel's threat intelligence team recently observed threat actors abusing Custom GPT features on trusted AI platforms to create malicious chatbots that impersonate legitimate brands. These GPTs are designed to look like official support assistants.

Affected sectors include cryptocurrency exchanges and commercial airlines, as well as IT help desks across multiple industries. The threat actors’ primary objective is to manipulate users into disclosing sensitive information and clicking phishing links.

A New Social Engineering Attack Vector

This method introduces a new threat vector: platform-hosted social engineering through trusted AI interfaces.

Several publicly available Custom GPTs have been observed impersonating well-known companies, including the examples below:

  • Airline Assistant – Imitates a global airline’s support assistant for refunds and check-in.
  • Cryptocurrency Exchange Expert – Pretends to be a trading or support assistant for a well-known cryptocurrency company.
  • Cryptocurrency Purchase – Simulates guidance for purchasing crypto.

Each of these GPTs is publicly accessible and designed to trick consumers into engaging with a fake assistant as if it were official.

How Threat Actors Set Up the Scam

  1. Create a Custom GPT Using a Brand-Like Name
  • Attackers use names that reference the brand, such as “(Company Name) Support Agent” or “(Company Name) Login Help.”
  • They may include brand terms or logos in the GPT instructions to make it appear official.

  2. Preload the GPT with Helpful-Looking Prompts
  • Instructions are crafted to mimic customer support, such as: “You are a (Company Name) support agent. Help users reset their account.”
  • This makes the GPT respond in a helpful, believable tone.

  3. Use Social Engineering Techniques in the Responses

  The GPT may (a rough detection sketch follows this list):

  • Prompt users to share login credentials, 2FA codes, or wallet seed phrases
  • Provide links to phishing websites
  • Encourage users to download fake apps containing malware
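
The requests described in step 3 tend to follow recognizable patterns. As a rough illustration, the minimal Python sketch below scans a chatbot response for common credential-phishing red flags. The patterns are illustrative assumptions, not indicators drawn from the observed campaigns, and a real detector would need far broader coverage and context.

```python
import re

# Illustrative red-flag patterns only (assumptions, not observed IOCs).
RED_FLAG_PATTERNS = [
    r"\bpasswords?\b|\blogin credentials?\b",
    r"\b2fa\b|\btwo[- ]factor\b|\bverification codes?\b",
    r"\bseed phrase\b|\brecovery phrase\b|\bwallet key\b",
    r"\bdownload\b.{0,40}\.apk\b",  # sideloaded-app lures
]

def flag_social_engineering(response_text: str) -> list[str]:
    """Return the red-flag patterns matched in a chatbot response."""
    text = response_text.lower()
    return [p for p in RED_FLAG_PATTERNS if re.search(p, text)]

if __name__ == "__main__":
    sample = "To reset your account, please confirm your password and 2FA code."
    print(flag_social_engineering(sample))  # two patterns match
```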


Risks

  • Credential Theft: Users may share sensitive account or financial information.
  • Malware Delivery: Users may be tricked into downloading malicious APKs or software.
  • Brand Damage: Legitimate companies may experience reputational harm and increased support load as consumers interact with these fake GPTs.


There have been real-world incidents where AI models, when responding to user queries, surfaced data from malicious websites indexed on Google. In some cases, this included fake customer support numbers from phishing pages. Though unintentional, such behavior demonstrates how AI can amplify access to deceptive or harmful content, especially when it’s trained or prompted without proper safeguards.

While there has not been a confirmed case of a Custom GPT directly executing a phishing redirect or distributing malware, the concern lies in how AI tools can be misused within a broader social engineering chain. These GPTs can impersonate brands, guide users with convincing language, and encourage actions like visiting links or verifying identity — potentially leading to external phishing sites or scams.


Recommendations for End Users

  • Never assume a GPT assistant is official unless verified by the actual brand.
  • Do not share sensitive data (logins, 2FA, wallet keys) with chatbots.
  • Be cautious of GPT links received through untrusted sources (a simple domain check is sketched below).
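
One mechanical check a cautious user (or an automated filter) can apply to a shared GPT link is whether it points at the platform’s own domain at all. The Python sketch below assumes that chatgpt.com and chat.openai.com are the official hosts; note that passing this check proves nothing about the GPT itself, since the impersonating GPTs described above are hosted on the legitimate platform.

```python
from urllib.parse import urlparse

# Hosts assumed to serve official Custom GPT links; verify against
# the platform's current documentation before relying on this list.
OFFICIAL_GPT_HOSTS = {"chatgpt.com", "chat.openai.com"}

def is_platform_hosted(url: str) -> bool:
    """Heuristic: does this URL point at an expected GPT host?

    A True result does NOT mean the GPT is legitimate -- impersonating
    GPTs run on the real platform too. It only rules out lookalike domains.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_GPT_HOSTS

print(is_platform_hosted("https://chatgpt.com/g/g-example"))        # True
print(is_platform_hosted("https://chatgpt-login.example.com/g/x"))  # False
```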


Monitoring GPT Abuse

The misuse of Custom GPTs is a growing concern in the broader phishing and brand impersonation landscape. These AI-hosted chatbots offer a low-cost, scalable method for attackers to run socially engineered scams directly from within a trusted platform. We recommend organizations begin monitoring GPT abuse as part of their threat intelligence and brand protection efforts.
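
A practical starting point for such monitoring is screening newly published GPT display names against a protected brand list, as in the hedged Python sketch below. The brand list, suffix list, and similarity threshold are placeholders; a production pipeline would also inspect GPT descriptions, logos, and preloaded instructions.

```python
from difflib import SequenceMatcher

# Placeholder brand terms and suffixes for illustration only.
PROTECTED_BRANDS = ["acme airlines", "exampleexchange"]
SUSPICIOUS_SUFFIXES = ("support", "help", "login", "agent", "assistant")

def score_gpt_name(gpt_name: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Flag GPT display names that resemble a protected brand."""
    name = gpt_name.lower()
    hits = []
    for brand in PROTECTED_BRANDS:
        ratio = SequenceMatcher(None, name, brand).ratio()
        # Treat "<brand> + support-style suffix" as a strong signal,
        # e.g. "Acme Airlines Support Agent".
        if brand in name and name.endswith(SUSPICIOUS_SUFFIXES):
            ratio = max(ratio, 0.95)
        if ratio >= threshold:
            hits.append((brand, round(ratio, 2)))
    return hits

print(score_gpt_name("Acme Airlines Support Agent"))  # [('acme airlines', 0.95)]
```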
