Proof of Work

The Suppression 3-Gate Protocol

How ENPITS keeps sender domains clean — and prospects' inboxes respected — through three independent filtering gates.

Published 2026-04-18

Why This Document Exists

Most "AI outbound" services treat deliverability as a warmup problem. We treat it as a data discipline problem.

If you hire us and your domain ends up on a blocklist, your next three years of outreach become three times harder. That risk is not abstract. It is the single most expensive failure mode in this category, and it is almost always caused by one thing: sending to people who should not have been on the list in the first place.

This page documents the three independent gates every prospect passes through before a single email is sent on your behalf. They are not "best practices we try to follow." They are hard-coded rules, backed by scripts and logs, that we cannot bypass without a visible audit trail.


Log vs. Logic

A typical vendor will tell you what they did:

"Today we pulled 100 leads from Apollo and sent them to n8n."

That is a log. It tells you nothing about whether those leads were safe to contact or whether the sequence that hit their inbox was aligned to how they actually work.

We tell you what the system enforces:

"Apollo raw data is filtered through three independent gates — sourcing, writing, sending — that together reject approximately 60 to 80 percent of the original list before any email leaves the server."

The gates below describe that enforcement.


Gate 1 — Sourcing

Question the gate answers: Is this person even a legitimate target?

This gate runs before any outreach copy is written. Most deliverability disasters happen upstream of sending, in the list itself.

What the gate rejects

ICP score under 7/10. Every lead pulled from Apollo is scored against five attributes — industry, headcount, decision-maker role, outbound maturity, and budget proxy. Score below 7 and the record is dropped without review.
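The scoring rule above can be sketched in a few lines. The five attribute names come from the prose; the 0-2 points per attribute and the dict shape are illustrative assumptions, not the actual scoring implementation.

```python
# Hypothetical sketch of the Gate 1 ICP score: five attributes, each
# contributing 0-2 points, summed to a 0-10 score. Attribute names are
# from the prose; the per-attribute point scale is an assumption.
ICP_ATTRIBUTES = ("industry", "headcount", "decision_maker_role",
                  "outbound_maturity", "budget_proxy")

def icp_score(lead: dict) -> int:
    """Sum the sub-scores for the five ICP attributes (missing = 0)."""
    return sum(lead.get(attr, 0) for attr in ICP_ATTRIBUTES)

def passes_gate1_score(lead: dict, threshold: int = 7) -> bool:
    """Below 7/10, the record is dropped without review."""
    return icp_score(lead) >= threshold
```

A lead scored 2/2/2/1/1 totals 8 and passes; a lead with only two strong attributes totals 4 and is dropped.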

Anti-ICP Dossier patterns. We maintain a living list of four structural anti-patterns, labeled A through D, that we have learned to reject. Each pattern was earned from a real incident where a lead looked like ICP on paper and turned out to be noise or friction.

Suppression list. A Google Sheet with three columns (email, domain, date) holds every address that has ever opted out, bounced hard, or explicitly declined. The sourcing gate runs a left anti-join against that list: any lead whose email or domain matches is dropped, no exceptions.
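The suppression check is effectively a left anti-join: keep only sourced leads with no match in the suppression sheet. A minimal sketch, assuming the sheet is exported as rows of dicts with the three column names from the prose:

```python
def apply_suppression(leads, suppression_rows):
    """Drop any lead whose email or email domain appears in the
    suppression list (a left anti-join). Column names follow the
    three-column sheet described in the prose."""
    blocked_emails = {r["email"].lower() for r in suppression_rows if r.get("email")}
    blocked_domains = {r["domain"].lower() for r in suppression_rows if r.get("domain")}
    kept = []
    for lead in leads:
        email = lead["email"].lower()
        domain = email.split("@", 1)[1]
        if email in blocked_emails or domain in blocked_domains:
            continue  # any match is dropped, no exceptions
        kept.append(lead)
    return kept
```

Matching on both the exact address and the whole domain means one hard bounce can retire an entire company from future sourcing, which is the conservative choice for deliverability.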

What the gate does not do

It does not "enrich" or "verify." Those are downstream problems. Gate 1 is purely about whether a human belongs on the list at all.


Gate 2 — Writing

Question the gate answers: If we write to this person, will the message respect how they actually work?

Even a legitimate ICP can be burned by the wrong message. Gate 2 is about the content, not the contact.

What the gate enforces

Deduplication across time. Before a sequence is drafted, the target email and domain are checked against the last 180 days of outreach logs. A prospect who got a message from us in the last six months does not get another one unless they replied. No polite "bumping this" follow-ups outside an active thread.
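The 180-day window rule reduces to a simple predicate over the outreach log. A sketch, simplified to email-only matching (the prose also checks the domain) with a hypothetical tuple shape for log entries:

```python
from datetime import date, timedelta

def may_draft_sequence(prospect_email, outreach_log, today, window_days=180):
    """Return True if a new sequence may be drafted for this prospect.

    outreach_log: iterable of (email, sent_date, replied) tuples, a
    hypothetical shape for the last 180 days of logs. A contact inside
    the window blocks a new sequence unless the prospect replied.
    """
    cutoff = today - timedelta(days=window_days)
    for email, sent, replied in outreach_log:
        if email == prospect_email and sent >= cutoff and not replied:
            return False  # contacted recently, no reply: do not draft
    return True
```

A prospect messaged two months ago with no reply is blocked; the same prospect with a reply on record, or anyone last contacted over six months ago, passes.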

Template gate. The copy passes through a structural check — 60 to 90 words for the first message, signal-first opener tied to something specific about the prospect's recent work, no product pitch, no em dashes, no tool names (n8n, Apollo, Clay are not mentioned to prospects). This is not stylistic preference. It is the minimum bar that keeps the message below the "this is a template" detection threshold used by every B2B inbox filter in 2026.
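Of the checks above, the mechanical ones (word count, em dashes, tool names) can be expressed directly; signal-first openers and "no product pitch" need human or model judgment and are out of scope for this sketch. A minimal version of the mechanical pass:

```python
import re

BANNED_TOOL_NAMES = ("n8n", "apollo", "clay")  # never shown to prospects

def template_gate(first_message: str) -> list:
    """Return the list of structural violations; an empty list passes.

    Covers only the mechanically checkable rules from the prose:
    60-90 words, no em dashes, no tool names.
    """
    violations = []
    words = len(first_message.split())
    if not 60 <= words <= 90:
        violations.append(f"word count {words} outside 60-90")
    if "\u2014" in first_message:  # em dash
        violations.append("contains em dash")
    lowered = first_message.lower()
    for tool in BANNED_TOOL_NAMES:
        if re.search(rf"\b{re.escape(tool)}\b", lowered):
            violations.append(f"mentions tool name: {tool}")
    return violations
```

A clean 70-word draft returns an empty list; a short draft that name-drops a tool and uses an em dash returns all three violations at once.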

Anti-ICP re-scan. After the copy is drafted but before it is queued, the prospect's public footprint is scanned one more time — LinkedIn bio, company homepage, recent posts — for any Pattern A through D signals that slipped past Gate 1. About 5 to 10 percent of otherwise-valid leads get dropped at this stage.

Content reviewer, 85-point threshold. A separate pass scores the final draft on a rubric covering relevance, specificity, tone, and absence of red-flag phrases. Below 85 it does not go out. It gets rewritten.
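The reviewer pass reduces to a scored rubric against a hard threshold. The four dimensions come from the prose; equal weighting is an illustrative assumption, since the prose only fixes the dimensions and the 85-point cutoff.

```python
RUBRIC_DIMS = ("relevance", "specificity", "tone", "red_flag_absence")

def review_score(scores: dict) -> float:
    """Combine per-dimension scores (each 0-100) into a 0-100 total.
    Equal weights are an assumption; only the dimensions and the
    85-point threshold are given in the prose."""
    return sum(scores[d] for d in RUBRIC_DIMS) / len(RUBRIC_DIMS)

def reviewer_gate(scores: dict, threshold: int = 85) -> str:
    """Below the threshold the draft does not go out; it gets rewritten."""
    return "send" if review_score(scores) >= threshold else "rewrite"
```

A draft scoring 90/80/80/90 averages 85 and sends; straight 80s average 80 and go back for a rewrite.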

What the gate does not do

It does not personalize with templated tokens like "I saw you work at {{company}}." Every line that references the prospect is written against their actual material. If we cannot find a real signal, the message is killed, not softened.


Gate 3 — Sending

Question the gate answers: Is this physical send safe for both sides — their inbox and your domain?

Gate 3 is the last chance to stop. Everything upstream is about quality; this gate is about physics — bounces, spam complaints, and sender reputation.

What the gate enforces

ZeroBounce validation. Every address is re-verified at send time, not at sourcing time. Emails go stale. An address that was valid last month may have bounced once between then and now. If the validator returns anything other than "valid," the send is aborted and the address is added to the suppression list.
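The abort-and-suppress logic can be sketched independently of ZeroBounce's actual API, which is not reproduced here; the validator is abstracted behind a callable that returns a status string, with "valid" as the only status that permits a send.

```python
def send_time_check(email, validate, suppression_list):
    """Final send-time gate: re-verify the address now, not at sourcing.

    `validate` is any callable returning a status string ("valid",
    "invalid", "catch-all", ...); the real system calls ZeroBounce.
    Anything other than "valid" aborts the send and adds the address
    to the suppression list so it is never sourced again.
    """
    status = validate(email)
    if status != "valid":
        suppression_list.add(email)
        return False  # send aborted
    return True       # safe to send
```

Treating every non-"valid" status as an abort (including "catch-all" and "unknown") trades a few lost sends for zero hard bounces, which matches the priorities stated above.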

Opt-out blocklist. Any address that has ever replied "unsubscribe," used the one-click unsubscribe header, or sent a polite "please remove me" note is permanently blocked. The check runs on every outgoing message, not just the first one in a sequence. This matters because a prospect who opts out at step 2 must never receive steps 3 or 4.

Bounce guard. If the weekly bounce rate crosses 2 percent, all sending is paused for 48 hours while the affected list is re-verified. Two percent is the commonly cited red line for sender reputation; pausing the moment we touch it, rather than after a sustained breach, buys margin instead of riding the ceiling.

Postmaster monitoring. The spam complaint rate is pulled from Google Postmaster Tools. Google's public red line is 0.3 percent. Our internal pause trigger is 0.1 percent, one-third of that limit. If we hit 0.1, sending stops regardless of list quality, and the content is audited for red-flag language.
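Both circuit breakers above combine into one send/pause decision. A sketch using the thresholds from the prose (2 percent weekly bounces, 0.1 percent internal spam trigger against Google's public 0.3 percent red line); the return strings are illustrative:

```python
def sending_status(weekly_bounce_rate: float, spam_complaint_rate: float) -> str:
    """Decide whether sending continues, using the prose thresholds.

    Rates are fractions: 0.02 = 2%. The spam check runs first because
    a complaint spike triggers a content audit regardless of bounces.
    """
    if spam_complaint_rate >= 0.001:   # 0.1% internal trigger
        return "paused: content audit"
    if weekly_bounce_rate > 0.02:      # 2% bounce guard
        return "paused: 48h re-verify"
    return "sending"
```

Note the asymmetry: the bounce pause is time-boxed (re-verify and resume), while the complaint pause is open-ended until the content audit clears.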

What the gate does not do

It does not use shared IP pools from warmup tools or mass-sender infrastructure. Every message sends from a dedicated cold-outbound domain with aligned SPF, DKIM, and DMARC records.


Receipts

The numbers below are the current state of outbound on a single dedicated cold domain (enpits.dev) across Waves 1 through 7.

Metric                  Value
Total sent              41
Hard bounces            0
Opt-outs                1
Spam complaints         0
Domain blocklists       0
Postmaster spam rate    below 0.1% (our internal pause trigger)

Volume is intentionally small. The interesting number is not the size of the list — it is the zeros. Zero bounces, zero complaints, zero blocklist entries, over a period where the same volume sent from a typical warmup-tool configuration would usually produce at least one of each.

The single opt-out was from a Pattern A (outreach agency) contact who passed Gate 1 and was only identified as Pattern A after the send. That incident is the reason Gate 2's re-scan step exists; it was added the same week.


What This Means For You

If we run your pipeline, these three gates run on your list too. You do not have to trust that we "follow best practice." The enforcement is written into the system, and the receipts are visible every week.

If something ever does get through — a bounce, a complaint, a blocklist hit — we have a standing rule: the incident is documented, the gate that missed it is patched, and the patch is published as a version note on this page.

That is what "Proof of Work" means here. Not a case study with a logo wall. A live document that is updated every time the system learns something new.


This document is part of a four-part infrastructure series. See also: Sniper Formula (Psychology Logic), Architecture (Technical Logic), Vertical Analysis (Insight Logic) — forthcoming.

Your pipeline, engineered to the same standard.

Managed AI outbound for 1-20 person B2B agencies. Founder-operated, not outsourced.

See how it works →