Threat Intelligence

They Didn’t Hack In, They Logged In: 4 Social Engineering Examples

Cybercriminals are tricking employees into opening the door to corporate data. Explore real-world social engineering examples spanning attack surfaces.

April 7, 2026

Organizations spend billions of dollars annually on next-generation firewalls, endpoint detection, and complex zero-day vulnerability patching.

But the most devastating cyberattacks of the last few years didn't rely on brute-force decryption or highly sophisticated code to break into a network.

Adversaries relied on a simple conversation.

Social engineering is cheaper, faster, and more effective than hacking a server. A cybercriminal doesn't need to break in if an employee simply opens the door and lets them in.

Security leaders, listen up. Stop viewing breaches as unpredictable user errors. Start recognizing the highly engineered, repeatable patterns attackers use to bypass human logic.

Pattern Recognition: Follow Social Engineering's Attack Chain

Analyze real-world social engineering attacks and a distinct pattern emerges.

The attacker's playbook follows a predictable attack chain, whether the attack occurs over a phone call or in a Slack channel.

  1. Pretext: Attackers scrape open-source intelligence (OSINT) from platforms like LinkedIn to craft a highly personalized scenario. They establish authority or urgency, like posing as IT support or a company executive.
  2. Channel Shift: To evade technical controls, attackers initiate contact on or quickly move the target to unmanaged channels, such as SMS, WhatsApp, Microsoft Teams, or personal phone calls.
  3. Credential & Approval Capture: The attacker tricks the user into a specific action: reading a multi-factor authentication (MFA) token aloud, approving a push notification, or logging into an adversary-in-the-middle (AitM) portal.
  4. Escalation: With legitimate access secured, the attacker moves laterally, deploys ransomware, or initiates fraudulent wire transfers.
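The four stages above lend themselves to a simple data model, useful for tagging simulated scenarios and spotting which stages your testing never exercises. A minimal sketch; the scenario names and fields are illustrative, not any real product's API:

```python
from enum import Enum, auto

class Stage(Enum):
    PRETEXT = auto()             # OSINT-driven scenario, authority/urgency
    CHANNEL_SHIFT = auto()       # move to SMS, WhatsApp, Teams, phone
    CREDENTIAL_CAPTURE = auto()  # MFA read-aloud, push approval, AitM login
    ESCALATION = auto()          # lateral movement, ransomware, wire fraud

# Tag a simulated scenario with the stages it exercises, so coverage
# gaps (e.g. never testing escalation) become visible.
scenario = {
    "name": "help-desk vishing",
    "stages": [Stage.PRETEXT, Stage.CHANNEL_SHIFT, Stage.CREDENTIAL_CAPTURE],
}

untested = [s for s in Stage if s not in scenario["stages"]]
```

Running the same coverage check across your whole simulation library shows, at a glance, which links of the chain your organization has never rehearsed.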

Now you can examine how this flow plays out in the real world.

What Happens During a Social Engineering Attack? 4 Real-World Examples

Social engineering is a very real threat, with serious consequences. It costs more than $1 trillion annually, and the average cost of a data breach exceeds $4 million.

Here are real-world social engineering attacks that have occurred recently.

The Help Desk Hack

In late 2023, threat group Scattered Spider bypassed a global security perimeter with a simple, 10-minute phone call.

Attackers used OSINT gathered from LinkedIn to identify a highly privileged IT employee. They called the IT help desk, impersonated that employee, and claimed to be locked out of their account.

The help desk analyst, eager to be helpful, reset the MFA credentials.

This single human bypass led to a catastrophic ransomware deployment that crippled the company's hotel and casino operations, costing an estimated $100+ million.

The lesson from this social engineering attack: Technical controls like MFA are rendered ineffective if the human process that governs them (the help desk, in this case) is vulnerable to manipulation.

Microsoft Teams Backdoor

In a highly sophisticated campaign observed in March 2025, a threat actor linked to the Black Basta ransomware group weaponized Microsoft Teams. The attackers, posing as internal 'technical support,' reached out to targeted employees directly via chat.

What makes this social engineering attack terrifying is the psychological precision: Attackers specifically targeted female, executive-level employees in the finance and tech sectors while timing their messages between 2:00 PM and 3:00 PM local time, capitalizing on the cognitive fatigue of the 'afternoon slump.'

Under the guise of fixing a technical issue, they convinced the targets to open the built-in Windows Quick Assist tool to grant remote access, which they used to alter the Windows Registry and deploy a custom PowerShell backdoor. This evaded almost all antivirus detections.

The lesson from this social engineering attack: Internal collaboration tools like Microsoft Teams and Slack are implicitly trusted by employees. When attackers breach this trust boundary, standard skepticism evaporates.

Lazarus Group's Long Con on LinkedIn

State-sponsored actors, notably North Korea's Lazarus Group, have perfected the art of the professional 'long con.'

In several high-profile cryptocurrency heists, the attacks began not with malware but with a fake LinkedIn profile. The attackers targeted senior engineers and developers, building rapport over days or weeks of messaging. Once trust was established, the 'recruiter' invited the target to complete a technical assessment or review a job description.

The assessment file carries a malicious payload that compromises the target's device and opens a backdoor into the corporate network.

The lesson from this social engineering attack: Hyper-targeted social engineering can stretch over weeks. The initial contact is entirely benign, designed solely to build the trust required to deliver the payload later.

$25 Million Lost to a Deepfake

In early 2024, a finance associate at a global engineering firm received an email requesting a massive fund transfer.

The employee was correctly suspicious of the initial email. However, the attackers anticipated this and invited the employee to a video conference to 'verify' the transaction.

When the employee joined the call, they saw the company's CFO and several colleagues. Recognizing their faces, the finance associate authorized a $25 million transfer.

Everyone on the call, except the victim, was actually an AI-generated deepfake.

The lesson from this social engineering attack: Trusting your eyes and ears isn't a valid security protocol. The verification mechanism itself was spoofed.

Social Engineering in 2026: AI is the Accelerant

In 2026, social engineering attacks are multi-channel and AI-native. Generative AI is the accelerant, pouring gasoline on traditional tactics.

  • Hyper-Personalization & Realism: LLMs write flawless, context-aware scripts. The grammar mistakes and strange formatting that used to be hallmarks of phishing emails are gone.
  • Multi-Channel Assaults: Attackers no longer rely on a single email. They coordinate strikes across SMS, social media, and other channels to create an overwhelming illusion of legitimacy.
  • Rapid Interaction: AI agents automate the response process, acting as chatbots that speak to victims in real time, shrinking the window between initial contact and credential capture from days down to minutes.

Human Risk Management: Resilience in Practice

Technical controls can be bypassed. Deepfakes can fool our senses. So, how do you defend the organization? The answer is human risk management (HRM).

Resilience requires abandoning static compliance training in favor of dynamic, behavioral defense.

Role-Specific Training

Security awareness training (SAT) must reflect the reality that cybercriminals are carrying out highly personalized attacks, also known as spear phishing.

The IT help desk needs rigorous training on identity verification to stop vishing. The finance team needs training on out-of-band transaction verification to stop deepfakes.

A one-size-fits-all approach to SAT leaves your most critical roles exposed to all types of social engineering attacks.
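The out-of-band verification mentioned above is straightforward to express as policy logic. A hedged sketch, assuming a hypothetical payments workflow where the callback number comes from a trusted HR directory and never from the request itself (that detail is what defeats the deepfake scenario):

```python
# Illustrative out-of-band check: any transfer above a threshold must be
# confirmed over a second channel whose contact details come from a
# trusted directory -- never from the request that is being verified.
THRESHOLD = 10_000  # assumed policy limit, in dollars

TRUSTED_DIRECTORY = {"cfo": "+1-555-0100"}  # sourced from HR, not from email

def requires_out_of_band(amount: float) -> bool:
    return amount >= THRESHOLD

def call_and_confirm(number: str, request: dict) -> bool:
    # Placeholder: in practice, a human dials the directory number and
    # reads back the transfer details. Stubbed here so the sketch runs,
    # failing closed until a human actually confirms.
    return False

def verify(request: dict) -> bool:
    """Return True only if the transfer may proceed."""
    if not requires_out_of_band(request["amount"]):
        return True
    number = TRUSTED_DIRECTORY.get(request["approver"])
    if number is None:
        return False  # no trusted contact on file: hard stop
    return call_and_confirm(number, request)

blocked = verify({"approver": "cfo", "amount": 25_000_000})
```

Under this policy, the $25 million deepfake transfer described earlier stalls at the callback step, because the spoofed video call never controls the directory phone number.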

Simulating Real Attacker Flows

Sending a generic, static phishing email template to your employees doesn't prepare them for a multi-channel attack.

Run phishing simulations that match real-world flows. Send a simulated SMS that directs the user to a fake Microsoft login page. Run safe vishing campaigns against your help desk.

If you don't test the unmanaged channels, you're blind to your biggest risks.
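A multi-channel simulation like the smishing flow above can be captured in a small record: the message, the benign training landing page it points to, and a per-target token for tracking outcomes. A minimal sketch; the URL, field names, and event labels are all illustrative, not a real platform API:

```python
import uuid
from datetime import datetime, timezone

def build_sms_simulation(target: str) -> dict:
    """Build a simulated smishing message pointing at a *training* page."""
    token = uuid.uuid4().hex[:8]  # per-target tracking token
    landing = f"https://training.example.com/login?t={token}"  # benign page
    return {
        "channel": "sms",
        "target": target,
        "body": f"IT: your session expired, re-verify here: {landing}",
        "token": token,
        "sent_at": datetime.now(timezone.utc).isoformat(),
        "events": [],  # later appended: "clicked", "submitted_creds", "reported"
    }

sim = build_sms_simulation("+1-555-0142")
```

The token is what turns a blast of test messages into per-person behavioral data: who clicked, who typed credentials, and, just as importantly, who reported it.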

Feedback Loops, Not Punishments

When an employee falls for a simulation, it's a data point. It's not a disciplinary issue.

If an employee approves a simulated malicious MFA prompt, the goal is to identify the process gap. It indicates that your organization needs to move toward phishing-resistant authentication (like FIDO2 keys) and implement micro-coaching to correct the behavior in the moment.
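Phishing-resistant authentication resists AitM portals because the browser binds each assertion to the site's origin: a credential registered for the real domain will not produce a valid login for a look-alike page. A simplified sketch of the server-side origin check (field names follow the WebAuthn model, but this is far from a full implementation, which also verifies the challenge, signature, and RP ID hash):

```python
# Simplified illustration of WebAuthn's origin binding: the relying party
# rejects any assertion whose clientData origin differs from the expected
# one, which is what defeats adversary-in-the-middle login pages.
EXPECTED_ORIGIN = "https://login.example.com"  # assumed relying-party origin

def check_assertion(client_data: dict) -> bool:
    # Origin binding is the piece that makes the credential unphishable:
    # the browser fills in the origin, and the attacker cannot forge it.
    return client_data.get("origin") == EXPECTED_ORIGIN

legit = check_assertion({"type": "webauthn.get",
                         "origin": "https://login.example.com"})
aitm = check_assertion({"type": "webauthn.get",
                        "origin": "https://login.examp1e.com"})  # look-alike
```

This is why the help desk scenario changes with FIDO2 keys: even a fully fooled user on a proxied login page cannot hand over anything the attacker can replay.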

Social Engineering Defense: Train Like You Fight

Once-a-year, hour-long training for employees doesn't defend your organization against a multi-channel, AI-driven adversary. The attackers are innovating at the speed of SaaS.

Doppel delivers AI-powered, multi-channel simulations grounded in real-world TTPs, testing your employees against the exact threats they face today. Coupled with micro-coaching and deep analytics, our platform moves security awareness training and simulations beyond vanity metrics to prove actual risk reduction.

The attackers are already trying to log in. Are your employees ready? See Doppel in action to build your HRM strategy.

Learn how Doppel can protect your business

Join hundreds of companies already using our platform to protect their brand and people from social engineering attacks.