Threat Intelligence

Threat Intelligence Briefing: Abuse of Custom GPTs for Brand Impersonation and Phishing

This method introduces a new threat vector: platform-hosted social engineering through trusted AI interfaces.

Doppel Team

August 13, 2025


Author: Aarsh Jawa

Doppel's threat intelligence team recently observed threat actors abusing custom chatbot features (Custom GPTs) on trusted AI platforms to create malicious chatbots that impersonate legitimate brands. These GPTs are designed to look like official support assistants.

Affected industries include cryptocurrency exchanges and commercial airlines, as well as IT help desks across multiple sectors. The threat actors’ primary objective is to manipulate users into providing sensitive information and clicking on phishing links.

A New Social Engineering Attack Vector

This method introduces a new threat vector: platform-hosted social engineering through trusted AI interfaces.

Several publicly available Custom GPTs have been observed impersonating well-known companies, including the examples below:

  • Airline Assistant – Imitates a global airline's support assistant for refunds and check-in.
  • Cryptocurrency Exchange Expert – Pretends to be a trading or support assistant for a well-known cryptocurrency company.
  • Cryptocurrency Purchase – Simulates guidance for purchasing crypto.

Each of these GPTs is publicly accessible and set up to trick consumers into interacting with a convincing fake rather than the real brand.

How Threat Actors Set Up the Scam

  1. Create a Custom GPT Using a Brand-Like Name
  • Attackers use names that reference the brand, such as “(Company Name) Support Agent” or “(Company Name) Login Help.”
  • They may include brand terms or logos in the GPT instructions to make it appear official.
  2. Preload the GPT with Helpful-Looking Prompts
  • Instructions are crafted to mimic customer support, such as: “You are a (Company Name) support agent. Help users reset their account.”
  • This makes the GPT respond in a helpful, believable tone.
  3. Use Social Engineering Techniques in the Responses

The GPT may:

  • Prompt users to share login credentials, 2FA codes, or wallet seed phrases
  • Provide links to phishing websites
  • Encourage users to download fake apps containing malware
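
Responses like these leave detectable fingerprints. As a rough illustration only (the pattern list and function names below are hypothetical, not production tooling), a monitoring pipeline might screen chatbot responses for requests that no legitimate support assistant should make:

```python
import re

# Hypothetical red-flag patterns: requests for secrets or risky actions
# that a legitimate support assistant should never make.
RED_FLAG_PATTERNS = [
    r"\b(seed|recovery)\s+phrase\b",
    r"\b(2fa|one[-\s]?time|verification)\s+code\b",
    r"\bpassword\b",
    r"\bprivate\s+key\b",
    r"\bdownload\s+(this|the)\s+(apk|app|installer)\b",
]

def flag_response(text: str) -> list[str]:
    """Return the red-flag patterns matched in a chatbot response."""
    lowered = text.lower()
    return [p for p in RED_FLAG_PATTERNS if re.search(p, lowered)]

# A response typical of the impersonation GPTs described above.
response = "To reset your account, please confirm your 2FA code and seed phrase."
hits = flag_response(response)
if hits:
    print(f"Flagged response: {len(hits)} red-flag pattern(s) matched")
```

Simple keyword rules like these will miss paraphrased requests, so they are best treated as a first-pass filter feeding human review, not a standalone control.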


Risks

  • Credential Theft: Users may share sensitive account or financial information.
  • Malware Delivery: Users may be tricked into downloading malicious APKs or software.
  • Brand Damage: Legitimate companies may experience reputational harm and increased support load as consumers interact with these fake GPTs.


There have been real-world incidents where AI models, when responding to user queries, surfaced data from malicious websites indexed on Google. In some cases, this included fake customer support numbers from phishing pages. Though unintentional, such behavior demonstrates how AI can amplify access to deceptive or harmful content, especially when it’s trained or prompted without proper safeguards.
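
One defensive counterpart to this problem is to verify contact details in AI output against the brand's published records. The sketch below is a minimal, assumed example; the numbers are fictitious, and production tooling would use a dedicated phone-number parsing library rather than a loose regex:

```python
import re

# Official support numbers published by the brand (fictitious values).
OFFICIAL_NUMBERS = {"+1-800-555-0100"}

# Loose pattern for North American phone numbers; real tooling would use a
# proper phone-number parsing library instead.
PHONE_RE = re.compile(r"\+?1?[-.\s]?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}")

def normalize(number: str) -> str:
    """Reduce a phone number to its last ten digits for comparison."""
    digits = re.sub(r"\D", "", number)
    return digits[-10:]

def unverified_numbers(ai_output: str) -> list[str]:
    """Return phone numbers in AI output that are not on the official list."""
    official = {normalize(n) for n in OFFICIAL_NUMBERS}
    return [m for m in PHONE_RE.findall(ai_output) if normalize(m) not in official]

print(unverified_numbers("Call support at (800) 555-0199 to claim your refund."))
# Prints: ['(800) 555-0199']
```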

While there has not been a confirmed case of a Custom GPT directly executing a phishing redirect or distributing malware, the concern lies in how AI tools can be misused within a broader social engineering chain. These GPTs can impersonate brands, guide users with convincing language, and encourage actions like visiting links or verifying identity — potentially leading to external phishing sites or scams.


Recommendations for End Users

  • Never assume a GPT assistant is official unless verified by the actual brand.
  • Do not share sensitive data (logins, 2FA, wallet keys) with chatbots.
  • Be cautious of GPT links received through untrusted sources.


Monitoring GPT Abuse

The misuse of Custom GPTs is a growing concern in the broader phishing and brand impersonation landscape. These AI-hosted chatbots offer a low-cost, scalable method for attackers to run socially engineered scams directly from within a trusted platform. We recommend organizations begin monitoring GPT abuse as part of their threat intelligence and brand protection efforts.
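
As one starting point, a brand protection team could fuzzy-match the names and descriptions of publicly listed GPTs against its protected brand terms. The sketch below is illustrative only; the listing feed, brand terms, and threshold are assumptions, and it uses Python's standard-library SequenceMatcher rather than a purpose-built matcher:

```python
from difflib import SequenceMatcher

# Hypothetical protected brand terms a company wants to monitor.
PROTECTED_TERMS = ["acme airlines", "acmecoin exchange"]

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio between two strings, case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_suspicious(name: str, description: str, threshold: float = 0.6) -> bool:
    """Flag a public GPT whose name or description resembles a protected term."""
    for term in PROTECTED_TERMS:
        if similarity(name, term) >= threshold or term in description.lower():
            return True
    return False

# Example listings modeled on the impersonation patterns in this briefing.
listings = [
    ("Acme Airlines Support Agent", "Help with refunds and check-in"),
    ("Daily Recipe Helper", "Meal planning ideas"),
]
for name, desc in listings:
    if is_suspicious(name, desc):
        print(f"Review for impersonation: {name!r}")
# Prints: Review for impersonation: 'Acme Airlines Support Agent'
```

Flagged listings would then feed a takedown or reporting workflow with the hosting platform, alongside the domain and social-media monitoring a brand protection program already runs.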

Related Articles

Disrupting Deception: A New Era in Cybersecurity with Social Engineering Defense (SED)


