Business Email Compromise Detection: Signals, Patterns, and Response Windows

The most expensive email attacks in mid-market organizations don't contain malware. They don't include phishing links. They don't trigger sandbox analysis. They're plain text messages asking someone to do something: redirect a payment, change bank account details, approve a wire transfer, send W-2 data to a new HR address. Business email compromise works because it doesn't look like an attack. It looks like work.

This is why traditional email security controls have a BEC blind spot. They're built to detect technical artifacts: malicious code, known-bad URLs, sender authentication failures. BEC attacks pass all of those checks because they were designed to. A well-crafted BEC message from a spoofed CEO domain or a compromised vendor account has clean headers, correct DMARC alignment, and no attached files. Gateway filters see nothing to flag.

The Signals That Actually Matter

Detecting BEC requires looking at a different layer entirely. Not technical artifacts, but behavioral and contextual signals. In our experience, the signals that distinguish a real BEC attempt from legitimate business communication fall into three categories:

Relationship anomalies. A message requesting financial action from someone who has never previously sent financial requests to this recipient. A vendor contact reaching out through a different email address than the established one. A message thread that begins with a new message rather than continuing an established conversation. These aren't indicators of compromise in isolation, but they're meaningful when combined.

Urgency and pressure framing. BEC attackers reliably use urgency as a control mechanism. "This needs to happen before the wire cutoff." "Please handle this before you leave today." "My assistant usually does this but she's out, I need you to cover." The language pattern is identifiable. Urgency framing combined with a financial action request is one of the highest-confidence BEC signals available.

Context breaks. A payment request for a vendor relationship that has never involved this type of transaction. A redirect to a bank account in a different country than the established vendor address. A request to keep the transaction confidential from other staff. Each of these is a break from established business context, and each one raises the probability of fraud significantly.
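The three categories above can be folded into a single score. A minimal sketch, with hypothetical field names and weights that are purely illustrative, not Phishaver's actual model:

```python
# Illustrative sketch: combining the three BEC signal categories into one
# rough risk score. All names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class MessageContext:
    sender_has_prior_financial_requests: bool  # relationship anomaly input
    urgency_language_present: bool             # pressure-framing input
    requests_financial_action: bool
    breaks_established_context: bool           # e.g. new bank country, secrecy ask

def bec_signal_score(ctx: MessageContext) -> float:
    """Return a rough 0-1 risk score from the three signal categories."""
    score = 0.0
    if ctx.requests_financial_action and not ctx.sender_has_prior_financial_requests:
        score += 0.4   # relationship anomaly
    if ctx.urgency_language_present and ctx.requests_financial_action:
        score += 0.35  # urgency + financial action: high-confidence pairing
    if ctx.breaks_established_context:
        score += 0.25  # context break
    return min(score, 1.0)
```

The point of the structure, not the specific weights, is that no single category is decisive on its own; the score only climbs when signals combine.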

Why the 22-Minute Window Matters

Here's the thing about BEC dwell time: the median time between a BEC message being delivered and a victim taking action is roughly 22 minutes. Twenty-two minutes from delivery to wire authorization is not an unusual timeline, and in that window, most organizations have no automated detection running at all.

The response window problem is compounded by how BEC attacks are structured. Unlike ransomware, where the attack signature is obvious once damage starts, BEC often isn't discovered until the wire has cleared and the money is gone. Average remediation cost for a mid-market BEC incident runs between $38,000 and $90,000 when you factor in forensics, legal review, bank recall attempts, and staff time. That's before the operational disruption of conducting an internal investigation.

The detection window is before the action is taken, not after. This is why real-time scoring at message delivery matters more than post-incident analysis. By the time a security team is reviewing logs, the damage is typically already done.

How Relationship Graph Anomaly Detection Works in Practice

Phishaver's approach to BEC detection starts with the relationship graph built from 90 days of Gmail API metadata. Every sender-recipient pair in an organization's communication history has an established pattern: frequency, timing, message structure, topic domain based on subject line analysis, and response behavior.

When a message arrives that requests financial action, the system evaluates it against that established context. Has this sender ever sent financial requests to this recipient? What is the typical communication cadence between these two parties? Is the sending domain the same one that appears in previous communications, or has it changed? Does the timing of this message fit the established pattern for this contact?
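Those contextual questions can be sketched as lookups against pair-level history. The real system is built on 90 days of Gmail API metadata; in this hedged sketch, a simple in-memory dictionary of sender-to-recipient records stands in for that graph:

```python
# Hypothetical stand-in for a relationship graph built from message metadata.
from collections import defaultdict

class RelationshipGraph:
    def __init__(self):
        # (sender, recipient) -> list of observed message records
        self.history = defaultdict(list)

    def record(self, sender: str, recipient: str, domain: str, financial: bool):
        """Store one observed message between a sender-recipient pair."""
        self.history[(sender, recipient)].append(
            {"domain": domain, "financial": financial}
        )

    def anomalies(self, sender: str, recipient: str, domain: str, financial: bool):
        """Answer the contextual questions for an incoming message."""
        past = self.history[(sender, recipient)]
        flags = []
        if not past:
            flags.append("no_prior_history")
        elif financial and not any(m["financial"] for m in past):
            flags.append("first_financial_request")
        if past and domain not in {m["domain"] for m in past}:
            flags.append("domain_changed")
        return flags
```

A first-ever financial request from a long-standing contact, or any message from a pair with no history at all, surfaces as an explicit flag rather than disappearing into a content-only score.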

CEO impersonation attacks, where an attacker spoofs or compromises an executive's email account to request urgent wire transfers, are caught by this method because executives almost never send direct financial instructions to individual contributors. The relationship graph knows this. A message claiming to be from the CEO, sent directly to an accounts payable specialist, with no prior communication history between those two addresses, triggers an anomaly score regardless of how authentic the message content looks.

Vendor fraud follows a similar detection pattern. A vendor contact who has sent invoices for 14 months from the same domain, then suddenly sends a bank account change request from a slightly different domain, creates a clear relationship anomaly. The invoice amount may be plausible. The vendor name may be correct. The relationship discontinuity is not.
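The "slightly different domain" check in the vendor-fraud example can itself be sketched in a few lines: a domain that is very similar to, but not identical to, the established one is a strong signal on its own. This uses the standard library's string similarity; the threshold is illustrative:

```python
# Sketch of a lookalike-domain check; threshold chosen for illustration.
from difflib import SequenceMatcher

def lookalike_domain(established: str, observed: str, threshold: float = 0.85) -> bool:
    """True when observed is similar to, but not equal to, the known domain."""
    if established == observed:
        return False  # same domain is not an anomaly
    return SequenceMatcher(None, established, observed).ratio() >= threshold
```

An exact match passes, a wholly unrelated domain is handled by the no-prior-history path, and it is the near-miss in between that this check exists to catch.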

LLM Intent Scoring at the Content Layer

Relationship graph anomalies cover the structural signals. LLM intent scoring covers the content layer. For BEC specifically, the content patterns are identifiable: urgency language, financial action verbs (wire, transfer, pay, redirect, send), authority invocation, and confidentiality requests (don't loop in anyone else on this).
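The content patterns listed above can be sketched with plain keyword matching. In production the scoring is an LLM call, so treat this as a stand-in that only shows which feature families feed the intent score; the pattern lists are invented examples, not a complete vocabulary:

```python
# Keyword stand-in for LLM intent features; patterns are illustrative only.
import re

INTENT_PATTERNS = {
    "urgency": r"\b(urgent|asap|before (the )?(wire )?cutoff|before you leave|today)\b",
    "financial_action": r"\b(wire|transfer|pay|redirect|send)\b",
    "authority": r"\b(ceo|cfo|i need you to|per my instruction)\b",
    "confidentiality": r"\b(keep this (between us|confidential)|don'?t loop in|don'?t tell)\b",
}

def intent_features(body: str) -> set[str]:
    """Return the set of BEC intent feature families present in a message body."""
    text = body.lower()
    return {name for name, pat in INTENT_PATTERNS.items() if re.search(pat, text)}
```

A keyword list is brittle in ways an LLM is not (paraphrase, tone, multilingual text), which is the reason the content layer is modeled rather than pattern-matched; the sketch is only a map of the feature space.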

Our data shows that LLM scoring, completed within roughly 800ms of delivery, catches the language patterns that correlate with BEC attempts when combined with relationship graph context. Neither signal alone is sufficient. A message with urgency language from an established vendor contact in a normal relationship pattern is probably just a legitimate vendor with a tight deadline. The same urgency language from a new contact, or from an established contact using an unfamiliar domain, changes the risk calculation substantially.
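That interaction between the two layers can be sketched as a small decision function: the same content score maps to different dispositions depending on relationship context. The thresholds and disposition names here are invented for illustration:

```python
# Hedged sketch of combining content-layer and relationship-layer signals.
def combined_risk(content_score: float, known_contact: bool, known_domain: bool) -> str:
    """content_score in [0, 1] from intent scoring; context from the graph."""
    if known_contact and known_domain:
        # Established relationship absorbs most content-only signal:
        # an urgent message from a known vendor is usually just a deadline.
        return "review" if content_score > 0.9 else "allow"
    if known_contact and not known_domain:
        # Familiar contact, unfamiliar domain: classic vendor-fraud shape.
        return "quarantine" if content_score > 0.5 else "review"
    # No relationship history at all: lowest bar for escalation.
    return "quarantine" if content_score > 0.3 else "review"
```

The structure makes the section's claim concrete: identical content produces three different outcomes as the relationship context degrades.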

To be clear: the combination of relationship anomaly detection and LLM intent scoring doesn't eliminate false positives entirely. No detection system does. What it does is surface the right messages for analyst review before the 22-minute action window closes. That's the value proposition: not zero misses, but faster detection on the attacks that matter.

What Mid-Market Teams Should Prioritize

In our experience, mid-market organizations that have already deployed an email gateway but are still seeing BEC attempts land in inboxes are dealing with an architecture problem, not a product problem. Their gateway is working as designed. BEC lives above the gateway's detection layer by design.

The practical question is whether to add behavioral context detection before or after an incident. We've spoken with teams that started evaluating BEC-specific tooling following a $50,000 wire transfer fraud, and teams that added coverage proactively because they understood the exposure. The technology is identical in both cases. The timing difference is significant.

For organizations running on Google Workspace or Microsoft 365 with no additional behavioral layer, the coverage gap is real and measurable. Forty-seven percent of organizations reporting BEC incidents in recent surveys had email security products deployed at the time of the attack. The products weren't defective. They just weren't built for this attack category. Understanding that distinction is where a realistic BEC defense starts.