Most webhook guides focus on a single provider pushing events to a single endpoint. Real systems are messier: your platform might receive webhooks from Stripe for payments, GitHub for CI triggers, Twilio for SMS delivery receipts, and Shopify for order events — all at once, all with different payload shapes, authentication schemes, and reliability characteristics.
Fan-in is the architectural pattern for handling this. Instead of treating each provider's webhooks as a separate integration, you funnel all of them into one normalized event pipeline. The benefits compound: unified retry logic, one observability surface, centralized replay, and deduplication that works across every source.
This post covers how to design and implement a webhook fan-in architecture that holds up in production.
## The Problem with N Independent Integrations
Without fan-in, each provider integration looks like this:
```
Stripe  ──► /webhooks/stripe  ──► stripe_handler()  ──► DB
GitHub  ──► /webhooks/github  ──► github_handler()  ──► DB
Twilio  ──► /webhooks/twilio  ──► twilio_handler()  ──► DB
Shopify ──► /webhooks/shopify ──► shopify_handler() ──► DB
```

Every handler is a snowflake. Some do idempotency checks, some don't. Some log failures, some swallow errors silently. Retry logic (if it exists at all) is duplicated. When a provider changes their signature format, you find out via a failed delivery at 2am.
The operational overhead grows with every new provider you add. Fan-in collapses this into:
```
Stripe  ──►
GitHub  ──►  Ingest Layer ──► Normalized Event Store ──► Workers
Twilio  ──►
Shopify ──►
```

The ingest layer handles per-provider concerns (authentication, payload parsing). Everything downstream is provider-agnostic.
## Step 1: Per-Provider Ingest Endpoints
You still need one endpoint per provider — but now each endpoint has a single job: verify the payload and emit a normalized internal event.
```go
type NormalizedEvent struct {
	ID         string            `json:"id"`
	Source     string            `json:"source"`      // "stripe", "github", etc.
	EventType  string            `json:"event_type"`  // "payment.succeeded", "push", etc.
	ReceivedAt time.Time         `json:"received_at"`
	ProviderID string            `json:"provider_id"` // provider's own event ID
	RawPayload json.RawMessage   `json:"raw_payload"`
	Headers    map[string]string `json:"headers"`     // relevant headers only
}
```

Each ingest handler does three things and nothing else:
- Verify the provider's signature (HMAC, RSA, basic token — depends on provider)
- Extract the provider's event ID for deduplication
- Emit a `NormalizedEvent` to the queue
```go
func (h *StripeIngestHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(io.LimitReader(r.Body, 10<<20))
	if err != nil {
		http.Error(w, "read error", http.StatusBadRequest)
		return
	}

	sig := r.Header.Get("Stripe-Signature")
	if !verifyStripeSignature(body, sig, h.webhookSecret) {
		http.Error(w, "invalid signature", http.StatusUnauthorized)
		return
	}

	var envelope struct {
		ID   string `json:"id"`
		Type string `json:"type"`
	}
	if err := json.Unmarshal(body, &envelope); err != nil {
		http.Error(w, "invalid payload", http.StatusBadRequest)
		return
	}

	event := NormalizedEvent{
		ID:         newEventID(),
		Source:     "stripe",
		EventType:  envelope.Type,
		ReceivedAt: time.Now().UTC(),
		ProviderID: envelope.ID,
		RawPayload: json.RawMessage(body),
		Headers:    map[string]string{"Stripe-Signature": sig},
	}

	if err := h.store.Enqueue(r.Context(), event); err != nil {
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}

	w.WriteHeader(http.StatusOK)
}
```

The handler returns 200 OK immediately after enqueueing — before any downstream processing. This is important: providers measure response latency and will retry if you take too long.
## Step 2: Deduplication at the Ingest Layer
Every provider sends duplicate events under certain conditions. Network timeouts, retries after a 500, and at-least-once delivery guarantees all produce duplicates. You need to catch them at the ingest layer before they hit your business logic.
The right primitive is a deduplication table keyed on `(source, provider_id)`:
```sql
CREATE TABLE ingest_dedup (
    source      TEXT NOT NULL,
    provider_id TEXT NOT NULL,
    event_id    UUID NOT NULL,
    received_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    PRIMARY KEY (source, provider_id)
);

-- Supports cleanup of old entries; provider retry windows are bounded,
-- so rows older than the longest window can be dropped safely
CREATE INDEX ingest_dedup_received_at ON ingest_dedup (received_at);
```

In your ingest handler, attempt an `INSERT ... ON CONFLICT DO NOTHING` and check whether a row was actually inserted:
```sql
INSERT INTO ingest_dedup (source, provider_id, event_id)
VALUES ($1, $2, $3)
ON CONFLICT (source, provider_id) DO NOTHING;
```

If zero rows were inserted, you've seen this event before. Return 200 OK (don't return 4xx — the provider will retry) and skip enqueueing.
Run a periodic cleanup job to delete rows older than 48 hours to keep the table from growing unbounded.
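In Go, the duplicate check reduces to "did my INSERT affect a row" (via `database/sql`'s `Result.RowsAffected` against the statement above). The in-memory stand-in below shows the contract without needing a database; the type and method names are ours:

```go
package main

import (
	"fmt"
	"sync"
)

// Deduper mirrors the semantics of the ingest_dedup table: FirstSeen
// returns true only the first time a (source, providerID) pair appears.
// In production this is INSERT ... ON CONFLICT DO NOTHING plus a
// RowsAffected check; the map version here just shows the contract.
type Deduper struct {
	mu   sync.Mutex
	seen map[[2]string]bool
}

func NewDeduper() *Deduper {
	return &Deduper{seen: map[[2]string]bool{}}
}

func (d *Deduper) FirstSeen(source, providerID string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	key := [2]string{source, providerID}
	if d.seen[key] {
		return false
	}
	d.seen[key] = true
	return true
}

func main() {
	d := NewDeduper()
	fmt.Println(d.FirstSeen("stripe", "evt_123")) // true: first delivery, enqueue it
	fmt.Println(d.FirstSeen("stripe", "evt_123")) // false: duplicate, return 200 and skip
	fmt.Println(d.FirstSeen("github", "evt_123")) // true: same ID from another source is not a dup
}
```

Keying on both source and provider ID matters: provider event IDs are only unique within that provider's namespace.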
## Step 3: Event Normalization (Optional but Valuable)
Storing raw payloads is fine for the queue. But if you want to write routing rules, filters, or alerts that work across providers, you need a normalized schema.
Different providers use different conventions:
| Field | Stripe | GitHub | Twilio | Shopify |
|---|---|---|---|---|
| Event type | `type` | `X-GitHub-Event` header | `EventType` | `X-Shopify-Topic` header |
| Event ID | `id` | `X-GitHub-Delivery` header | `MessageSid` | `X-Shopify-Webhook-Id` header |
| Timestamp | `created` (Unix int) | `created_at` (ISO 8601) | Not provided | `created_at` (ISO 8601) |
| Resource ID | `data.object.id` | `repository.id` | `SmsSid` | `id` in body |
Normalization code belongs in a thin mapping layer that runs after deduplication and before your business logic workers:
```go
func NormalizeEvent(raw NormalizedEvent) (Event, error) {
	switch raw.Source {
	case "stripe":
		return normalizeStripe(raw)
	case "github":
		return normalizeGitHub(raw)
	case "twilio":
		return normalizeTwilio(raw)
	case "shopify":
		return normalizeShopify(raw)
	default:
		return Event{}, fmt.Errorf("unknown source: %s", raw.Source)
	}
}
```

Keep the raw payload alongside the normalized form. You will need it when your normalization logic has a bug.
## Step 4: Routing Fan-In Events to Consumers
With a normalized event stream, you can route events to multiple internal consumers using a pattern-matching router:
```go
type Route struct {
	Source    string // "stripe", "*" for any
	EventType string // "payment.succeeded", "payment.*", "*"
	Handler   EventHandler
}
```

A wildcard route (`"*"`, `"*"`) sends every event to an audit log. A specific route (`"stripe"`, `"payment.succeeded"`) triggers your revenue accounting service. Multiple routes can match the same event — this is intentional fan-out from fan-in.
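Matching these patterns can lean on `path.Match` from the standard library, since event types never contain `/` and so its glob semantics line up. The `Matches` method is our sketch, with the `Handler` field omitted for brevity:

```go
package main

import (
	"fmt"
	"path"
)

// Route is the routing rule from above, minus the Handler field.
type Route struct {
	Source    string // "stripe", "*" for any
	EventType string // "payment.succeeded", "payment.*", "*"
}

// Matches reports whether a route applies to an event. Glob patterns
// like "payment.*" (and the bare "*") are resolved with path.Match,
// which works here because event types never contain '/'.
func (r Route) Matches(source, eventType string) bool {
	srcOK := r.Source == "*" || r.Source == source
	typeOK, err := path.Match(r.EventType, eventType)
	return srcOK && err == nil && typeOK
}

func main() {
	audit := Route{Source: "*", EventType: "*"}
	revenue := Route{Source: "stripe", EventType: "payment.*"}
	orders := Route{Source: "shopify", EventType: "order.created"}

	fmt.Println(audit.Matches("stripe", "payment.succeeded"))   // true: wildcard sees everything
	fmt.Println(revenue.Matches("stripe", "payment.succeeded")) // true: glob match
	fmt.Println(orders.Matches("stripe", "payment.succeeded"))  // false: different source
}
```

Dispatching is then a loop over all routes, invoking every handler whose route matches — which is exactly the intentional multi-match fan-out described above.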
If you're using GetHook as the ingest and delivery layer, routes work the same way: each source maps to one or more destinations, and the `event_type_pattern` field supports glob matching, so `payment.*` catches every payment event from Stripe without listing each variant.
## Step 5: Unified Observability
Fan-in's payoff is operational: one dashboard instead of four.
The metrics that matter for a fan-in setup:
| Metric | Description |
|---|---|
| `ingest.events_received` by source | Volume per provider — detects when a provider stops sending |
| `ingest.duplicate_rate` by source | Dedup hits — spikes indicate upstream retry storms |
| `ingest.signature_failures` by source | Auth failures — often means a secret rotation upstream |
| `queue.depth` by source | Per-source backlog — isolates a slow consumer from affecting others |
| `delivery.success_rate` by source | End-to-end delivery health per provider |
The `source` label is what makes this useful. A delivery failure is much easier to triage when you know it's affecting only Twilio events, not the entire pipeline.
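A dependency-free sketch of the labeled-counter idea (in production you would reach for a metrics library rather than this; the point is that every increment carries the source label):

```go
package main

import (
	"fmt"
	"sync"
)

// Counter is a minimal labeled counter: one value per (metric, source)
// pair. It stands in for a real metrics client.
type Counter struct {
	mu sync.Mutex
	v  map[string]int64
}

func NewCounter() *Counter {
	return &Counter{v: map[string]int64{}}
}

func key(metric, source string) string {
	return metric + `{source="` + source + `"}`
}

func (c *Counter) Inc(metric, source string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.v[key(metric, source)]++
}

func (c *Counter) Get(metric, source string) int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.v[key(metric, source)]
}

func main() {
	m := NewCounter()
	m.Inc("ingest.events_received", "stripe")
	m.Inc("ingest.events_received", "stripe")
	m.Inc("ingest.events_received", "twilio")
	fmt.Println(m.Get("ingest.events_received", "stripe")) // 2
	fmt.Println(m.Get("ingest.events_received", "twilio")) // 1
}
```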
## Step 6: Replay and Backfill
Fan-in makes replay dramatically easier. Because every event — regardless of origin — is stored in the same normalized format, you can:
- Replay all events from a specific provider for a time window
- Replay only events of a specific type (e.g., re-process all `order.created` events from Shopify)
- Backfill a new consumer with historical events without touching provider APIs
```bash
# Replay all Stripe payment events from the last 6 hours
curl -X POST https://api.yoursaas.com/v1/events/replay \
  -H "Authorization: Bearer hk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "source": "stripe",
    "event_type_pattern": "payment.*",
    "from": "2026-03-25T00:00:00Z",
    "to": "2026-03-25T06:00:00Z"
  }'
```

Without a unified store, replaying events from multiple providers means writing separate backfill scripts against separate tables, coordinating timing across them, and hoping you didn't miss any edge cases. Fan-in makes it a single query.
## Common Pitfalls
**Mixing normalization with business logic in the ingest handler.** Keep handlers thin. If normalization fails on a malformed payload, you still want the raw event stored for debugging. Process in layers.
**Not accounting for provider downtime windows.** Some providers (notably Shopify) have known maintenance windows. Your fan-in queue depth will spike when a provider resumes delivery after a pause. Size your workers and queue for burst capacity, not just steady-state.
**Assuming event ordering.** Even within a single provider, events can arrive out of order. A `payment.failed` can arrive before the `payment.created` it references. Design your consumers to handle out-of-order events or use a sequencing field (like Stripe's `created` timestamp) to reorder.
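When consumers only care about the latest state of a resource rather than every transition, a last-write-wins guard keyed on the provider timestamp is often enough. A sketch, with illustrative names; in production the per-resource watermark would live in the database and be updated in the same transaction as the state change:

```go
package main

import "fmt"

// lastApplied tracks the newest provider timestamp applied per resource.
var lastApplied = map[string]int64{}

// shouldApply is a last-write-wins guard: an event is applied only if it
// is strictly newer than everything already seen for its resource, so a
// late-arriving older event can't clobber newer state.
func shouldApply(resourceID string, ts int64) bool {
	if ts <= lastApplied[resourceID] {
		return false // stale or duplicate: newer state already landed
	}
	lastApplied[resourceID] = ts
	return true
}

func main() {
	// payment.failed (newer) arrives before payment.created (older).
	fmt.Println(shouldApply("pi_1", 1700000100)) // true: first event seen, applied
	fmt.Println(shouldApply("pi_1", 1700000050)) // false: older event, skipped
	fmt.Println(shouldApply("pi_1", 1700000200)) // true: genuinely newer, applied
}
```

If consumers do need every transition (say, an audit trail), buffer and reorder by the sequencing field instead of dropping.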
**Shared retry configuration.** Different providers have different retry budgets and semantics. Stripe retries for up to 3 days; GitHub does not automatically retry failed deliveries at all (you must redeliver manually); Twilio retries for 48 hours. Your internal retry policy should cover at least the longest provider window — but treat each source's events independently so a retry storm from one provider doesn't exhaust capacity for others.
Fan-in is one of those architectural patterns that seems like overhead at two providers but becomes indispensable at five. The earlier you wire it in, the less you pay to migrate later.
If you want the ingest, deduplication, and delivery layers handled for you, GetHook supports multiple sources per account with per-source routing, unified event history, and pattern-based routing out of the box. Start building →