Author: Aarsh Jawa
Doppel's threat intelligence team recently observed threat actors abusing custom features on trusted AI platforms to create malicious chatbots that impersonate legitimate brands. These GPTs are designed to look like official support assistants.
Affected targets include cryptocurrency exchanges, commercial airlines, and IT help desks across multiple industries. The threat actors’ primary objective is to manipulate users into providing sensitive information and clicking on phishing links.
This method introduces a new threat vector: platform-hosted social engineering through trusted AI interfaces.
Several publicly available Custom GPTs have been observed impersonating well-known companies, including the examples below:
Each of these GPTs is publicly accessible and designed to trick consumers into believing they are interacting with an official assistant. A GPT of this kind may:
- Impersonate a brand’s official support channel
- Guide users through interactions with convincing, on-brand language
- Encourage users to click links or “verify” their identity, potentially steering them toward external phishing sites
Risks
There have been real-world incidents where AI models, when responding to user queries, surfaced data from malicious websites indexed on Google. In some cases, this included fake customer support numbers from phishing pages. Though unintentional, such behavior demonstrates how AI can amplify access to deceptive or harmful content, especially when it’s trained or prompted without proper safeguards.
While there has not been a confirmed case of a Custom GPT directly executing a phishing redirect or distributing malware, the concern lies in how AI tools can be misused within a broader social engineering chain. These GPTs can impersonate brands, guide users with convincing language, and encourage actions like visiting links or verifying identity — potentially leading to external phishing sites or scams.
The misuse of Custom GPTs is a growing concern in the broader phishing and brand impersonation landscape. These AI-hosted chatbots offer a low-cost, scalable method for attackers to run socially engineered scams directly from within a trusted platform. We recommend organizations begin monitoring GPT abuse as part of their threat intelligence and brand protection efforts.
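As a minimal illustration of what such monitoring could look like, the sketch below flags publicly listed chatbots whose titles closely resemble a protected brand name. It is a simplified stand-in, not a production pipeline: the brand watchlist, listing data, and URLs are all hypothetical, and the fuzzy matching uses Python's standard-library difflib rather than a dedicated trademark-monitoring service.

```python
import difflib

# Hypothetical watchlist of protected brand names (illustrative only).
BRAND_WATCHLIST = ["Acme Exchange", "Globex Airlines"]

def flag_impersonations(listings, watchlist=BRAND_WATCHLIST, threshold=0.8):
    """Return listings whose title closely matches a protected brand.

    `listings` is a list of dicts with 'title' and 'url' keys, e.g.
    scraped from a public chatbot directory. difflib's similarity ratio
    is a cheap stand-in for a real fuzzy-matching pipeline.
    """
    flagged = []
    for item in listings:
        title = item["title"].lower()
        for brand in watchlist:
            b = brand.lower()
            score = difflib.SequenceMatcher(None, b, title).ratio()
            # Also catch titles that embed the brand name verbatim,
            # e.g. "Acme Exchange Support".
            if score >= threshold or b in title:
                flagged.append(
                    {**item, "matched_brand": brand, "score": round(score, 2)}
                )
                break
    return flagged

# Hypothetical directory listings for demonstration.
listings = [
    {"title": "Acme Exchange Support", "url": "https://example.com/g/1"},
    {"title": "Recipe Helper", "url": "https://example.com/g/2"},
]
print(flag_impersonations(listings))
```

In practice, teams would feed this kind of check with periodic scrapes of public chatbot directories and route matches into their existing brand-protection triage queue.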