OSINT as Email Threat Context: Domain Registration, CT Logs, and Campaign Infrastructure


Commodity blocklists work until they don't. In our tracking of phishing campaigns over the past two years, we've consistently seen a gap of 6 to 18 hours between when attacker infrastructure goes live and when it starts appearing in shared threat feeds. That window is exactly where successful campaigns operate.

The problem isn't the blocklist providers. It's the data model. Blocklists are reactive by design. They aggregate reports, wait for volume, validate submissions, and propagate entries. By the time a domain shows up in a feed your MTA queries, the phishing kit has already processed a few hundred clicks. Some campaigns burn infrastructure in under four hours. Traditional IOC sharing doesn't catch those.

OSINT signals upstream of the attack change that calculus. Domain registration events and certificate transparency logs don't wait for victims. They fire the moment infrastructure is provisioned.

What Certificate Transparency Actually Tells You

Certificate Transparency (CT) is a public audit framework mandated by browser vendors since 2018. Every publicly trusted TLS certificate issued by any CA must be logged to a public CT log before browsers will accept it. That's tens of millions of new certificate entries every day.

For defenders, this is surveillance infrastructure attackers can't opt out of. When someone registers payroll-portal-corp-acme.com and immediately obtains an HTTPS certificate to make it look legitimate, that certificate appears in CT logs within seconds. Before the phishing email is even drafted.

What we've found in practice: certificate subjects leak attacker intent. Real companies buying certs for new services tend to use their established brand naming conventions. Attackers improvise. They grab brand names, insert hyphens and action words, and sometimes use mismatched organizational names on the cert. A certificate for *.microsoft-secure-login.com issued to an LLC registered two weeks ago in a privacy-shield jurisdiction is a fairly clear signal. The subject mismatch alone, where the cert says one brand in the CN but the issuing org is something like “Digital Solutions Ltd,” is a pattern we can match before any email is sent.
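The subject-mismatch check can be sketched in a few lines. This is an illustrative simplification, not Phishaver's actual matcher; the brand terms, action words, and field shapes are all hypothetical:

```python
# Sketch: flag CT entries whose CN embeds a protected brand term but whose
# issuing organization does not back it up. All term lists are illustrative.

BRAND_TERMS = {"microsoft", "acme"}                      # example brand terms
ACTION_WORDS = {"login", "secure", "verify", "portal", "payroll"}

def subject_mismatch(common_name: str, org_name: str) -> bool:
    """True when the CN borrows a brand term the org name doesn't contain."""
    cn = common_name.lower().lstrip("*.")
    labels = cn.replace("-", ".").split(".")
    brand_hits = BRAND_TERMS & set(labels)
    if not brand_hits:
        return False
    # Real brand certs are typically issued to an org naming the brand itself.
    org = org_name.lower()
    return not any(brand in org for brand in brand_hits)

def looks_like_phish_cert(common_name: str, org_name: str) -> bool:
    """Brand-term mismatch plus an action word in the CN: the pattern above."""
    cn = common_name.lower()
    has_action = any(word in cn for word in ACTION_WORDS)
    return subject_mismatch(common_name, org_name) and has_action
```

On the article's own example, `looks_like_phish_cert("*.microsoft-secure-login.com", "Digital Solutions Ltd")` fires, while a legitimate `login.microsoft.com` cert issued to Microsoft Corporation does not.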

Volume matters here. You can't manually review CT log entries. Phishaver ingests CT log feeds from Google, Cloudflare, DigiCert, and others, filtering for subjects that match customer brand terms and typosquatting variants within an edit distance of two to three. That filtering step cuts the noise from tens of millions to a few hundred candidate entries per day per customer. Actionable, not overwhelming.
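The typosquat filter reduces to a standard Levenshtein comparison against each hyphen-split label. A minimal sketch, with illustrative thresholds rather than Phishaver's actual configuration:

```python
# Sketch of the CT-feed filtering step: keep only entries whose domain labels
# sit within a small edit distance of a customer brand term.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming, O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def matches_brand(domain: str, brand_terms: set, max_dist: int = 2) -> bool:
    """True if any hyphen-split label of the leftmost domain label is a
    near-miss of a protected brand term."""
    labels = domain.lower().lstrip("*.").split(".")[0].split("-")
    return any(
        edit_distance(label, term) <= max_dist
        for label in labels
        for term in brand_terms
    )
```

A homoglyph-style typo like micros0ft-login.com matches at distance 1, while unrelated domains fall well outside the threshold.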

Domain Registration as Early Warning

Newly registered domains follow predictable attacker patterns. Not because attackers are uncreative, but because operational security and speed create constraints. They need the domain quickly. They need it to survive for a campaign window. They need it cheap. Those requirements push behavior into recognizable clusters.

Here's what we see in the data. Roughly 73% of phishing domains we've traced back to delivery events were registered within 14 days of first use. About 40% were registered within 48 hours. Compare that to legitimate business domains, where the median age at first email delivery is over 18 months. Registration recency alone, even without any other signal, has real predictive power.
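Those breakpoints translate naturally into a risk prior keyed on domain age. The sketch below uses the rough figures from the data above; the score values are illustrative, and in practice the registration timestamp would come from WHOIS/RDAP with caching:

```python
# Sketch: map registration recency to an infrastructure risk prior, using the
# approximate breakpoints cited above (48 hours, 14 days, 18+ months).

from datetime import datetime, timezone

def age_days(registered_at, now=None):
    """Domain age in days; both datetimes should be timezone-aware."""
    now = now or datetime.now(timezone.utc)
    return (now - registered_at).total_seconds() / 86400

def recency_prior(days):
    """Illustrative mapping from domain age to a risk prior in [0, 1]."""
    if days <= 2:        # ~40% of traced phishing domains: under 48 hours old
        return 0.9
    if days <= 14:       # ~73% registered within 14 days of first use
        return 0.7
    if days <= 180:
        return 0.3
    return 0.05          # legitimate median age at first delivery: 18+ months
```

The exact values matter less than the shape: risk decays sharply with age, which is why recency works as a prior rather than a verdict.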

The specific patterns that correlate with phishing campaigns:

  • Registrar concentration. Several low-cost registrars with lax abuse response turn up disproportionately. When a newly registered domain comes from one of five registrars that account for 60% of takedown-resistant phishing infrastructure, that's a prior worth incorporating.
  • Privacy shield plus immediate MX records. Privacy-protected WHOIS is table stakes now and not itself a signal. But a privacy-shielded domain that configures MX records within 24 hours of registration, skipping A record setup entirely, is almost certainly not a legitimate business launching a new service.
  • Hosting ASN fingerprint. Phishing kit hosting concentrates in a small number of ASNs. When registration, CT log, and hosting ASN all align with known campaign infrastructure, that's not coincidence. That's the same actor.

Individually, each signal has false positive problems. Together, they start building something more reliable than any single feed.
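One way to picture that combination is an additive score over the three patterns above. This is a deliberately simple sketch; the weights, registrar names, and ASNs are hypothetical, not real campaign data:

```python
# Sketch: combine the three registration-side signals into one infrastructure
# score. Integer points avoid float-rounding surprises; lists are illustrative.

RISKY_REGISTRARS = {"cheapreg-example", "fastdomains-example"}   # hypothetical
PHISH_KIT_ASNS = {64500, 64501}                                  # hypothetical

def infra_score(registrar, privacy_whois, mx_within_24h,
                has_a_record, hosting_asn):
    """Each weak signal adds points; the combination is what matters."""
    points = 0
    if registrar in RISKY_REGISTRARS:
        points += 30
    if privacy_whois and mx_within_24h and not has_a_record:
        points += 40   # MX-first, no web presence: mail infra, not a business
    if hosting_asn in PHISH_KIT_ASNS:
        points += 30
    return min(points, 100) / 100
```

A domain hitting all three patterns scores 1.0; a domain from a mainstream registrar with normal DNS setup scores 0.0. Real scoring would weight these from observed base rates rather than fixed constants.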

Connecting OSINT to Email Delivery Context

Raw OSINT intelligence without delivery context is a threat intelligence report. Useful for teams with dedicated analyst capacity. Not useful for the mid-market security team handling email security alongside six other priorities.

The integration point that makes this actionable: when an inbound email arrives, its sender domain gets queried against a continuously refreshed index of OSINT signals. Not a static blocklist. Not a cached lookup from yesterday's batch job. A live query against a structured data store that's been receiving CT log events, registration events, and shared campaign indicators from sources such as Cofense and Abnormal Security in real time.

That query returns a composite score. A domain registered 3 days ago, with a cert subject that matches a customer brand term plus a hyphen and an action word, hosted on an ASN that shows up in 4 recent campaign reports, gets a high infrastructure risk score. That score merges with Phishaver's content risk score from LLM inspection of the email body. The combined indicator is what drives the delivery decision.
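The fusion step can be sketched as two parallel scores feeding one decision. The blend rule and thresholds here are illustrative assumptions, not Phishaver's actual policy; the key property is that a hot infrastructure score is never washed out by a clean content score:

```python
# Sketch: fuse infrastructure and content risk into one delivery decision.
# Weights and thresholds are illustrative, not production values.

def combined_risk(infra, content):
    """Weighted blend, floored near the stronger individual signal so that
    clean content cannot cancel out suspicious infrastructure (or vice versa)."""
    blend = 0.6 * infra + 0.4 * content
    return max(blend, max(infra, content) - 0.1)

def delivery_action(risk):
    if risk >= 0.75:
        return "quarantine"
    if risk >= 0.5:
        return "flag"
    return "deliver"
```

Note the behavior on the spear-phishing case described below: medium content risk (say 0.3) with high infrastructure risk (0.9) still yields a combined risk near 0.8 and a quarantine decision, because the floor tracks the stronger lane.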

In our experience, the highest-confidence catch rate comes from cases where content risk is medium but infrastructure risk is high. Those are the carefully written spear-phishing emails that would fool a naive content scanner. The email body reads clean. But the infrastructure is screaming.

Real Numbers on Lead Time

Across customer tenants, we've measured average lead time from CT log entry to first email delivery attempt for campaigns that eventually hit commodity blocklists. The median lead time is 9 hours. The 75th percentile is 22 hours. For campaigns that never hit commodity blocklists because they burned out fast, the only detection was OSINT-derived. Full stop.

Nine hours of lead time is meaningful. It's the difference between a phishing email that arrives after the infrastructure domain has been flagged in your MTA's blocklist query, and one that arrives while the domain is still unknown to every feed. It's the difference between a campaign that catches 0 users and one that catches 200 before the first report comes in.

Honest caveat: lead time varies by campaign type and attacker sophistication. Nation-state actors age their infrastructure. They register domains weeks in advance, let them sit, build up organic traffic through benign content, then activate. Against that model, registration recency signals fail. CT log subject matching still applies, but the signal is weaker. Infrastructure fingerprinting from hosting and MX patterns becomes more important in those cases.

No single intelligence source catches everything. But for the commodity phishing campaigns that account for the majority of actual incidents in mid-market organizations, the infrastructure usually looks like infrastructure. It's just that traditional email security tools aren't looking at it until after the damage is done.

What This Means for Your Detection Stack

If your current email security posture relies primarily on AV scanning and blocklist queries, you're operating on attacker-defined timelines. They know the average delay from infrastructure registration to blocklist coverage. They're designing campaigns to fit inside that window. Every time.

OSINT-derived signals shift the detection window upstream. You're not waiting for victim reports to propagate through sharing feeds. You're correlating against observable attacker preparation steps: certificate provisioning, domain registration behavior, hosting choices, and the cross-referenced campaign infrastructure that shows up in analyst-sourced reporting from organizations like Cofense and Abnormal Security.

The practical recommendation: add infrastructure risk as a parallel scoring lane alongside content inspection, not a downstream filter. A clean email body doesn't make suspicious infrastructure safe. An email from a 4-day-old domain with a cert subject spoofing your financial institution partner, hosted on infrastructure in three recent campaign reports, warrants aggressive handling regardless of how benign the email text reads.

Practical note: the phishing emails that cause the most damage aren't the ones that look wrong. They're the ones that look right, while the infrastructure behind them tells a completely different story. OSINT signals are how you read that story before your users become part of it.