Every time a webhook fires, your system is participating in an event-driven architecture — whether you've designed it that way or not.
The question is whether you've applied the architectural patterns that make event-driven systems reliable at scale, or whether you've just wired together HTTP calls and hoped for the best.
This post covers the core patterns that separate resilient webhook integrations from the ones that fail silently during the next outage.
Webhooks as Events vs. Webhooks as Commands
The first design decision is the semantic model of your webhooks: are they events or commands?
Event model: "Something happened — payment.succeeded at 14:22 UTC."
Command model: "Do this — process this payment confirmation."
The event model is almost always superior for webhooks:
| | Event model | Command model |
|---|---|---|
| Consumer coupling | Low — consumer decides what to do | High — producer dictates action |
| Multiple consumers | Natural — each processes independently | Awkward — who receives the command? |
| Replay semantics | Clear — "replay the event as it happened" | Ambiguous — "replay the command"? |
| Versioning | Forwards-compatible | Tightly coupled to consumer logic |
Design your webhooks as events: "these are the facts about what happened in our system." Let consumers decide what those facts mean for their own systems.
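As a concrete illustration, here is the same fact of a successful payment rendered under each model (both payload shapes are hypothetical, not any particular provider's schema):

```json
// Event: states a fact; consumers decide what it means
{
  "id": "evt_123",
  "type": "payment.succeeded",
  "occurred_at": "2024-05-01T14:22:00Z",
  "data": { "payment_id": "pay_123", "amount": 4200 }
}

// Command: dictates an action, coupling the producer to one consumer's logic
{
  "action": "process_payment_confirmation",
  "payment_id": "pay_123"
}
```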
The Fan-Out Pattern
A single source event typically needs to trigger multiple downstream actions. A payment.succeeded event might need to:
1. Update your internal order database
2. Send a confirmation email
3. Notify the fulfillment service
4. Update your analytics warehouse
5. Trigger a Slack notification to customer success
Anti-pattern: Sequential processing
```go
func handlePaymentSucceeded(event PaymentEvent) error {
	if err := updateDatabase(event); err != nil {
		return err // If this fails, nothing else runs
	}
	if err := sendEmail(event); err != nil {
		return err // If email fails, fulfillment doesn't run
	}
	if err := notifyFulfillment(event); err != nil {
		return err // Fulfillment outage blocks analytics update
	}
	// ...
	return nil
}
```

If step 2 fails, steps 3–5 never run. If step 3's destination is slow, it blocks all subsequent steps.
Pattern: Fan-out to independent routes
Route the event to independent destinations, each with its own retry policy:
```
payment.succeeded event
├── Route A → order-service (critical, retry aggressively)
├── Route B → email-service (important, retry 5x)
├── Route C → fulfillment-api (critical, retry aggressively)
└── Route D → analytics-warehouse (non-critical, retry 3x)
```

Each route has independent state. A failure on Route D doesn't affect Route A. This is exactly what GetHook's routing engine implements — one event fans out to multiple destinations, each tracked independently.
The Inbox Pattern
The inbox pattern (also called "transactional outbox" on the write side) is the standard approach for reliably connecting a webhook event to a database write.
The problem: You receive a webhook, need to update your database, and return 200. If your database write fails, you've acknowledged the event but didn't process it. The provider thinks it's delivered; you've lost it.
The solution: Write to an inbox table in the same transaction as your business logic, then process from the inbox asynchronously.
```sql
-- Inbox table
CREATE TABLE webhook_inbox (
    id UUID PRIMARY KEY,
    source TEXT NOT NULL,
    event_type TEXT NOT NULL,
    payload JSONB NOT NULL,
    received_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    processed_at TIMESTAMPTZ,
    status TEXT NOT NULL DEFAULT 'pending'
);
```

```go
// Webhook handler: write to inbox, return 200 immediately
func HandleWebhook(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "failed to read body", http.StatusBadRequest)
		return
	}
	// ... verify signature ...
	_, err = db.ExecContext(r.Context(), `
		INSERT INTO webhook_inbox (id, source, event_type, payload)
		VALUES ($1, $2, $3, $4)
		ON CONFLICT (id) DO NOTHING
	`, extractEventID(r, body), "stripe", extractEventType(body), body)
	if err != nil {
		http.Error(w, "failed to persist", http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusOK) // Acknowledge immediately
}
```
```go
// Background worker: process from inbox
func ProcessInbox(ctx context.Context, db *sql.DB) {
	for {
		err := processNext(ctx, db)
		if errors.Is(err, sql.ErrNoRows) {
			time.Sleep(1 * time.Second) // inbox empty, back off
		}
	}
}

// processNext claims one pending event inside a transaction so the
// row lock (FOR UPDATE SKIP LOCKED) is held until commit, letting
// multiple workers run safely in parallel.
func processNext(ctx context.Context, db *sql.DB) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	var event InboxEvent
	err = tx.QueryRowContext(ctx, `
		SELECT id, source, event_type, payload
		FROM webhook_inbox
		WHERE status = 'pending'
		ORDER BY received_at ASC
		LIMIT 1
		FOR UPDATE SKIP LOCKED
	`).Scan(&event.ID, &event.Source, &event.EventType, &event.Payload)
	if err != nil {
		return err // sql.ErrNoRows when the inbox is empty
	}

	if err := processEvent(ctx, tx, event); err != nil {
		return err // rollback leaves the event pending for the next pass
	}
	_, err = tx.ExecContext(ctx,
		`UPDATE webhook_inbox SET status = 'processed', processed_at = NOW() WHERE id = $1`,
		event.ID)
	if err != nil {
		return err
	}
	return tx.Commit()
}
```

The inbox table is a first-class database table — it participates in transactions. Writing to the inbox is either committed (event preserved) or rolled back (event not persisted, the provider gets a 500 and retries). You can never lose an event that was written to the inbox.
The Saga Pattern for Multi-Step Workflows
Some business processes span multiple steps across multiple services. A customer order cancellation might require:
1. Cancel the order in your order service
2. Issue a refund via Stripe
3. Release inventory back to warehouse
4. Send cancellation email

If step 3 fails after steps 1 and 2 succeeded, you have a partially cancelled order. This is a saga — a distributed transaction that needs compensating actions on failure.
Choreography-based saga with webhooks:
```
Step 1: Order service cancels order → emits order.cancelled webhook
Step 2: Payment service receives order.cancelled → issues refund → emits payment.refunded webhook
Step 3: Inventory service receives payment.refunded → releases stock → emits inventory.released webhook
Step 4: Notification service receives inventory.released → sends email
```

Each step listens for the previous step's completion event. Failure at any step triggers compensating events (e.g., refund.failed triggers order.cancel_reversed).
Orchestration-based saga (simpler for most teams):
```
Saga coordinator receives order.cancellation_requested
→ Calls order service: cancel order ✅
→ Calls Stripe: issue refund ✅
→ Calls warehouse: release inventory ❌ (failure)
→ Calls Stripe compensating: reverse refund
→ Calls order service compensating: restore order
→ Emits order.cancellation_failed
```

The coordinator is a single process that manages state and knows what to compensate. Easier to reason about than choreography, but creates a central point of coordination.
Event Sourcing Meets Webhooks
If you're using event sourcing (storing state as a sequence of events rather than current state), webhooks are a natural integration point.
Your event store is the authoritative record of what happened. Webhooks are projections of those events to external systems.
```
Event store: [order.created, payment.captured, order.fulfilled, ...]
        ↓
Webhook projector: publishes relevant events as outbound webhooks
        ↓
Customer endpoint: receives order.fulfilled webhook
```

This separation has powerful benefits:
- Replay: Re-project any time window of events as webhooks
- New integrations: A new customer integration can receive all historical events from the event store
- Consistency: Webhooks are derived from the authoritative event log, not from application code
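A minimal sketch of the projector step, assuming a simple in-memory log and a per-subscription event-type filter (names are illustrative, not an event-sourcing library's API):

```go
package main

import "fmt"

// StoredEvent is a minimal stand-in for an entry in the event store.
type StoredEvent struct {
	Seq  int
	Type string
}

// project selects the events a subscription cares about from the
// authoritative log. Replaying a time window, or backfilling a new
// integration with history, is just running the same function over
// that slice of the log again.
func project(log []StoredEvent, subscribed map[string]bool) []StoredEvent {
	var out []StoredEvent
	for _, e := range log {
		if subscribed[e.Type] {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	log := []StoredEvent{
		{1, "order.created"}, {2, "payment.captured"}, {3, "order.fulfilled"},
	}
	// Only the subscribed event type is projected as an outbound webhook.
	fmt.Println(project(log, map[string]bool{"order.fulfilled": true}))
}
```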
Eventual Consistency: Setting Customer Expectations
Event-driven systems are eventually consistent, not immediately consistent. When you send a payment.succeeded webhook, the customer's system may not see the update in your REST API yet if they immediately query after receiving the webhook.
This is normal and expected, but you need to set expectations clearly in your documentation:
"After receiving a webhook, allow up to 5 seconds before querying the API for updated state. If the API returns the previous state, retry the query."
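On the consumer side, that guidance amounts to a bounded retry loop. This is a hedged sketch: `fetchUntil` is an illustrative helper, and `fetch` stands in for the real API call.

```go
package main

import (
	"fmt"
	"time"
)

// fetchUntil polls fetch until it returns something other than the
// known-stale value, waiting `interval` between attempts, up to
// maxAttempts. It reports the last value seen and whether fresh
// state was observed before giving up.
func fetchUntil(fetch func() string, stale string, interval time.Duration, maxAttempts int) (string, bool) {
	for i := 0; i < maxAttempts; i++ {
		if s := fetch(); s != stale {
			return s, true
		}
		time.Sleep(interval)
	}
	return stale, false
}

func main() {
	calls := 0
	fetch := func() string {
		calls++
		if calls < 3 {
			return "pending" // API hasn't caught up with the webhook yet
		}
		return "succeeded"
	}
	s, ok := fetchUntil(fetch, "pending", time.Millisecond, 5)
	fmt.Println(s, ok)
}
```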
Some platforms include the resource in the webhook payload to avoid this query race:
{
"type": "payment.succeeded",
"data": {
"object": { /* full payment object */ }
}
}Including the full object in the webhook eliminates the need for customers to fetch the updated state — they have it already.
Backpressure and Flow Control
When a downstream service is struggling, event-driven systems can create runaway feedback loops:
```
Events arrive → Service overwhelmed → Retries accumulate → More events → Service crashes
```

Flow control patterns break this loop:
Adaptive batch size: Reduce delivery batch size when destination error rates are high.
Exponential backoff on circuit open: When a destination's circuit breaker opens, the retry schedule extends automatically.
Queue pressure monitoring: When queue depth exceeds a threshold, alert — don't just retry faster.
Consumer-controlled pace: Allow destinations to signal "I'm ready for the next batch" via a polling mechanism as an alternative to push delivery for high-volume, latency-tolerant integrations.
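The first of these patterns, adaptive batch size, fits in a few lines. The thresholds and bounds below are illustrative choices, not values any particular platform uses:

```go
package main

import "fmt"

// nextBatchSize shrinks the delivery batch when the destination's
// recent error rate is high and grows it back slowly once deliveries
// succeed again, so a struggling destination sees less load instead
// of a retry flood.
func nextBatchSize(current int, errorRate float64) int {
	switch {
	case errorRate > 0.5: // destination struggling: halve the batch
		if current/2 < 1 {
			return 1
		}
		return current / 2
	case errorRate < 0.05: // healthy: grow additively, capped
		if current+10 > 500 {
			return 500
		}
		return current + 10
	default: // borderline: hold steady
		return current
	}
}

func main() {
	size := 100
	// Two unhealthy ticks shrink the batch fast; recovery grows it slowly.
	for _, rate := range []float64{0.8, 0.8, 0.0, 0.0} {
		size = nextBatchSize(size, rate)
		fmt.Println(size)
	}
}
```

Multiplicative decrease with additive increase mirrors how TCP reacts to congestion: back off quickly, recover cautiously.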
Choosing Between Push and Pull Delivery
Webhooks are push-based: you send events to the consumer. Some use cases benefit from pull-based delivery instead:
| Push (webhooks) | Pull (polling / event streams) |
|---|---|
| Low-latency delivery | Consumer controls pace |
| No consumer infrastructure required | Consumer can replay and skip events |
| Provider manages retry | Consumer manages offset |
| Simpler for consumers | Better for high-volume, ordered processing |
For most integrations, push webhooks are the right default. For high-volume, ordered event streams (analytics pipelines, data sync), consider offering a pull-based event log API alongside webhooks.
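A pull-based event log can be sketched as a cursor API: the consumer keeps its own offset and asks for the next page when it is ready. `EventLog` and `Read` are illustrative names, not a specific provider's API surface.

```go
package main

import "fmt"

// EventLog is a minimal in-memory stand-in for a provider's event log.
type EventLog struct {
	events []string
}

// Read returns up to limit events starting at offset, plus the offset
// the consumer should persist and pass on its next call. Re-reading
// from an old offset is how the consumer replays or recovers.
func (l *EventLog) Read(offset, limit int) ([]string, int) {
	if offset >= len(l.events) {
		return nil, offset
	}
	end := offset + limit
	if end > len(l.events) {
		end = len(l.events)
	}
	return l.events[offset:end], end
}

func main() {
	log := &EventLog{events: []string{"e1", "e2", "e3", "e4", "e5"}}
	offset := 0
	for {
		page, next := log.Read(offset, 2)
		if len(page) == 0 {
			break
		}
		fmt.Println(page) // consumer processes each page at its own pace
		offset = next     // in practice, persist the offset between polls
	}
}
```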
GetHook supports both: push delivery via standard webhook routing, and event history access via the GET /v1/events API for pull-based reconciliation.