The new tempo of social engineering
Generative AI has collapsed the cost of deception. A single impersonation site can launch in minutes, target thousands of potential victims, and disappear just as quickly — and attackers can generate thousands more with a single prompt.
That’s the problem Doppel is solving. The OpenAI × Doppel case study details how we used the latest GPT models to rethink the entire decision loop—from detection to classification to automated action—so we can neutralize attacks before the damage occurs.
Reasoning at production scale
Traditional detection and response systems in cybersecurity were built for manual triage, with analysts reviewing alerts (and all their metadata) one at a time. That approach breaks down when attackers can generate countless threats at the press of a button. Doppel's platform uses LLMs to reason about intent and automate decision-making in order to combat mass-manufactured social engineering.
Every day, Doppel ingests hundreds of millions of URLs into its platform. Each website carries a variety of signals, from the domain registrar to whether a brand logo is detected. Those signals are filtered for relevance, aggregated, and finally analyzed through reasoning layers that evaluate context and intent. The reasoning layers are LLMs aligned with analyst judgment through prompt engineering and Reinforcement Fine-Tuning (RFT), teaching the system to reason clearly and explain its conclusions.
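To make that flow concrete, here is a minimal sketch of the triage loop in Python. The `SiteSignals` record, the relevance filter, the prompt wording, and the model name are all illustrative assumptions on our part, not Doppel's production code.

```python
# Minimal sketch (not Doppel's actual code) of the triage flow described above:
# collect per-site signals, filter and aggregate them, then ask an LLM
# reasoning layer for a verdict plus an explanation.
from dataclasses import dataclass

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


@dataclass
class SiteSignals:
    url: str
    registrar: str            # who registered the domain
    domain_age_days: int      # newly registered domains are higher risk
    brand_logo_detected: bool


def relevant(s: SiteSignals) -> bool:
    # Illustrative relevance filter: only escalate sites showing at least
    # one risk indicator worth spending reasoning tokens on.
    return s.brand_logo_detected or s.domain_age_days < 30


def classify(s: SiteSignals) -> str:
    # Aggregate the filtered signals into a prompt and let the model reason
    # about intent, returning a verdict it must justify.
    prompt = (
        f"URL: {s.url}\n"
        f"Registrar: {s.registrar}\n"
        f"Domain age (days): {s.domain_age_days}\n"
        f"Brand logo detected: {s.brand_logo_detected}\n\n"
        "Is this site likely an impersonation attack? Answer MALICIOUS, "
        "BENIGN, or NEEDS_REVIEW, then explain your reasoning in one sentence."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the case study uses newer GPT models
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    site = SiteSignals("https://examp1e-bank.com", "cheap-registrar", 2, True)
    if relevant(site):
        print(classify(site))
```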
How do the models receive feedback and internalize ground truth? A team of cybersecurity experts meticulously labels data, updates prompts, and corrects mistakes, drawing on years of experience in the field. They are effectively "coaching" the system to handle more and more edge cases and adapt to the latest threats.
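As a rough illustration of that coaching loop, the sketch below turns a hypothetical analyst correction into a JSONL training record of the kind that could feed prompt updates or an RFT grader. The schema and field names are our own assumptions, not Doppel's actual pipeline.

```python
# Hedged sketch of the "coaching" loop: analyst corrections are captured as
# labeled examples for later prompt updates or a Reinforcement Fine-Tuning
# (RFT) job. Record fields and file layout are assumptions.
import json
from dataclasses import dataclass


@dataclass
class AnalystCorrection:
    case_prompt: str      # the signals/context the model was shown
    model_verdict: str    # what the model said
    analyst_verdict: str  # the expert's ground-truth label
    rationale: str        # why the analyst overruled (or confirmed) it


def to_training_record(c: AnalystCorrection) -> dict:
    # One labeled example: the original prompt paired with the verdict and
    # rationale the model should have produced. A grader in an RFT job could
    # score candidate answers against analyst_verdict.
    return {
        "prompt": c.case_prompt,
        "expected_verdict": c.analyst_verdict,
        "expected_rationale": c.rationale,
        "was_model_correct": c.model_verdict == c.analyst_verdict,
    }


def write_training_file(corrections: list[AnalystCorrection], path: str) -> None:
    # JSONL is the common format for fine-tuning datasets; one case per line.
    with open(path, "w") as f:
        for c in corrections:
            f.write(json.dumps(to_training_record(c)) + "\n")


if __name__ == "__main__":
    example = AnalystCorrection(
        case_prompt="URL: examp1e-bank.com, registered 2 days ago, logo detected",
        model_verdict="NEEDS_REVIEW",
        analyst_verdict="MALICIOUS",
        rationale="Typosquatted bank domain plus stolen logo is a clear impersonation.",
    )
    write_training_file([example], "rft_cases.jsonl")
```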
The result is a feedback-driven engine that learns continuously, scales efficiently, and communicates decisions transparently.
Why it matters
As noted above, automation is a requirement for combating social engineering attacks at speed. But our success in replacing manual triage with AI agents also has significant implications for the broader cybersecurity industry, where the tolerance for error is low.
Most AI deployments in cybersecurity are exciting but fail to automate core workflows in production; they are simply too inaccurate and unpredictable. The OpenAI × Doppel case study proved that, with the right data, careful engineering, and a team of expert supervisors, it is possible to automate detection and response in this domain (and likely in many others). More importantly, it can be done while reducing errors and increasing transparency.
Read the full OpenAI × Doppel customer story to see how LLMs and reinforcement fine-tuning helped create a world-class AI cybersecurity agent that can keep pace with attackers.
We're also hosting a live webinar with OpenAI in December about how we've transformed threat response while reducing operational burden. Join us on 12/10.