When a webhook delivery returns HTTP 200, most platforms mark it as "delivered" and move on. That's accurate at the transport layer — the HTTP exchange completed — but it tells the sender almost nothing about what actually happened inside the consumer.
Did the consumer parse the payload? Did it write to its database? Did it enqueue a downstream job that will run in 30 seconds, or fail silently in 10 minutes? HTTP 200 doesn't answer any of these questions.
For low-stakes events — analytics pings, notification triggers — this ambiguity is tolerable. For events that drive business processes — payment confirmations, inventory adjustments, subscription lifecycle changes — the gap between "delivered" and "processed" is where incidents live.
This post covers how to design a delivery receipt system that gives webhook senders meaningful signal about consumer outcomes, how to implement it without creating tight coupling, and where the practical limits of this approach are.
## The Problem with HTTP 200 as a Proxy for Success
Consider a payment confirmation webhook. Your platform fires `payment.captured` and receives a 200 response in 180ms. You log it as delivered. Your customer's backend — the consumer — immediately queues the event for async processing in a job worker. Three hours later, that worker crashes due to an OOM error. The payment is never reconciled in the consumer's accounting system.
From your platform's perspective: delivered successfully. From the consumer's perspective: a ghost payment that will surface as a discrepancy in month-end reconciliation.
The mismatch exists because HTTP 200 signals transport success, not processing success. Your webhook endpoint is a receiving buffer, not a processing confirmation.
| Signal | What it means | What it doesn't mean |
|---|---|---|
| HTTP 200 | Bytes received, response formed | Payload parsed, business logic executed |
| HTTP 202 Accepted | Event acknowledged, processing deferred | Processing will succeed |
| HTTP 4xx | Consumer rejected this request | Consumer is healthy in general |
| HTTP 5xx | Consumer had an error | Whether the error is permanent or transient |
| TCP timeout | Consumer unreachable or overwhelmed | Whether the consumer is down or merely slow |
The only way to know that a consumer processed an event correctly is for the consumer to tell you — explicitly, through a mechanism you design for that purpose.
## Pattern 1: Structured Acknowledgment Responses
The simplest extension to the HTTP 200 convention is to standardize the response body. Instead of an empty 200 or an arbitrary payload, define a schema that consumers SHOULD return:
```json
{
  "acknowledged": true,
  "event_id": "evt_01JQMR9KXV4B2P7TNWY63ZHFD",
  "processing_status": "accepted",
  "consumer_version": "2.4.1"
}
```

Or for async processing:
```json
{
  "acknowledged": true,
  "event_id": "evt_01JQMR9KXV4B2P7TNWY63ZHFD",
  "processing_status": "queued",
  "estimated_completion_ms": 5000
}
```

This is a "SHOULD", not a "MUST" — you can't force consumers to implement a specific response format, and your platform must still treat a plain 200 as successful delivery. But you can document the schema, incentivize compliance through dashboard visibility, and surface `processing_status` in your delivery logs.
On the sender side, a Go helper that parses these responses before they're stored:
```go
// AckResponse is the structured acknowledgment schema consumers SHOULD return.
type AckResponse struct {
	Acknowledged     bool   `json:"acknowledged"`
	EventID          string `json:"event_id"`
	ProcessingStatus string `json:"processing_status"` // "accepted" | "queued" | "rejected" | "duplicate"
	ConsumerVersion  string `json:"consumer_version,omitempty"`
	EstimatedMS      int    `json:"estimated_completion_ms,omitempty"`
}

// parseAckResponse interprets a delivery response: it requires a 200 or 202
// status, then attempts to decode the documented schema from the body.
func parseAckResponse(body []byte, statusCode int) (*AckResponse, error) {
	if statusCode != http.StatusOK && statusCode != http.StatusAccepted {
		return nil, fmt.Errorf("non-success status: %d", statusCode)
	}
	var ack AckResponse
	if err := json.Unmarshal(body, &ack); err != nil {
		// Consumer returned a non-JSON 200 — treat as implicit acknowledgment.
		return &AckResponse{
			Acknowledged:     true,
			ProcessingStatus: "unknown",
		}, nil
	}
	return &ack, nil
}
```

The key is the fallback: a non-JSON 200 body still counts as delivery success. You record the processing status when it's present, but you don't break delivery for consumers that haven't adopted the schema.
## Pattern 2: Asynchronous Outcome Callbacks
For events where processing is genuinely long-running — batch imports, ML inference pipelines, multi-step financial reconciliation — a synchronous acknowledgment is not enough. The consumer needs a way to report back minutes or hours later.
This is an outcome callback: a webhook going the other direction.
```text
Sender platform → fires event → Consumer endpoint (returns 202 Accepted)
Consumer → processes event (async, takes N seconds)
Consumer → calls back to sender: POST /v1/webhook-outcomes
```

The outcome callback endpoint on your platform:
```http
POST /v1/webhook-outcomes
Authorization: Bearer hk_...
Content-Type: application/json

{
  "event_id": "evt_01JQMR9KXV4B2P7TNWY63ZHFD",
  "outcome": "processed",
  "consumer_reference": "internal-job-id-8823",
  "processed_at": "2026-04-19T14:33:12Z",
  "metadata": {
    "records_written": 47,
    "duration_ms": 2340
  }
}
```

The corresponding struct on the sender side:

```go
type WebhookOutcome struct {
	EventID           string         `json:"event_id"`
	Outcome           string         `json:"outcome"` // "processed" | "failed" | "skipped"
	ConsumerReference string         `json:"consumer_reference,omitempty"`
	ProcessedAt       time.Time      `json:"processed_at"`
	Metadata          map[string]any `json:"metadata,omitempty"`
	ErrorMessage      string         `json:"error_message,omitempty"`
}
```

Storing the outcome alongside the delivery attempt gives you a complete picture of the event lifecycle: when it was sent, when it was received, and when — if ever — it was processed.
The tradeoff is complexity. The consumer now needs to implement an outbound HTTP call to your platform as part of its processing pipeline. This is a meaningful integration burden, so outcome callbacks work best as an opt-in feature for high-value event types, not as a universal requirement.
## Pattern 3: Consumer-Side Delivery Receipts via Status API
A middle ground between structured response bodies and full callback infrastructure is a pull-based status API. Instead of the consumer pushing outcome data to you, you poll a consumer-provided status endpoint.
Sender fires event → Consumer returns 202 Accepted with a status URL:

```text
Location: https://consumer.example.com/webhook-status/job_abc123
```

Sender polls periodically:

```text
GET https://consumer.example.com/webhook-status/job_abc123
→ { "status": "processing", "progress": 0.4 }

GET https://consumer.example.com/webhook-status/job_abc123
→ { "status": "complete", "processed_at": "2026-04-19T14:33:12Z" }
```

This is the pattern used by the HTTP 202 Accepted + Location header convention from RFC 7231. It's well-understood, doesn't require the consumer to hold a callback URL, and works across network boundaries where the consumer can't reach your platform.
The practical problem: polling adds operational complexity to your delivery infrastructure. You now have two HTTP call patterns — the initial delivery and the status poll — each of which can fail independently.
| Pattern | Consumer effort | Sender complexity | Latency to outcome |
|---|---|---|---|
| Structured sync response | Low | Low | Immediate |
| Async outcome callback | High | Medium | Minutes to hours |
| Pull-based status API | Medium | High | Polling interval |
| None (HTTP 200 only) | Zero | Zero | Never |
Choose based on what the event type demands, not what's technically interesting.
## Storing and Surfacing Delivery Receipt Data
Whatever pattern you use, the receipt data needs to live somewhere queryable. Extend your delivery attempts table to capture consumer-reported outcomes:
```sql
ALTER TABLE delivery_attempts
  ADD COLUMN consumer_ack_status TEXT,          -- 'accepted' | 'queued' | 'rejected' | 'duplicate' | 'unknown'
  ADD COLUMN consumer_version TEXT,
  ADD COLUMN consumer_processed_at TIMESTAMPTZ,
  ADD COLUMN consumer_reference TEXT,           -- consumer's internal job/transaction ID
  ADD COLUMN outcome_source TEXT;               -- 'sync_response' | 'callback' | 'status_poll'
```

With this schema, you can answer questions that HTTP 200 alone cannot:
```sql
-- Events acknowledged as received but never confirmed as processed
SELECT
  e.id,
  e.event_type,
  da.delivered_at,
  da.consumer_ack_status,
  da.consumer_processed_at
FROM events e
JOIN delivery_attempts da ON da.event_id = e.id
WHERE da.outcome = 'success'
  AND da.consumer_ack_status = 'queued'
  AND da.consumer_processed_at IS NULL
  AND da.delivered_at < now() - INTERVAL '1 hour'
ORDER BY da.delivered_at ASC;
```

This query surfaces events the consumer acknowledged as "queued" more than an hour ago but has not yet reported as processed. For a payment pipeline, that's a list of transactions worth investigating before they become a reconciliation problem.
## The Idempotency Key as a Receipt Anchor
If your consumers implement idempotent processing — and they should — the idempotency key provides a natural anchor for delivery receipts. The consumer uses the event ID as its idempotency key, records the processing outcome against it, and can report that outcome via any of the patterns above.
On the sender side, correlating receipts by event ID gives you a clean audit trail:
{
"event_id": "evt_01JQMR9KXV4B2P7TNWY63ZHFD",
"delivery_attempts": [
{
"attempt": 1,
"delivered_at": "2026-04-19T14:30:01Z",
"http_status": 500,
"outcome": "http_5xx"
},
{
"attempt": 2,
"delivered_at": "2026-04-19T14:32:31Z",
"http_status": 200,
"consumer_ack_status": "accepted",
"consumer_processed_at": "2026-04-19T14:32:45Z",
"outcome": "success"
}
]
}When a consumer implements the callback or status-poll pattern, the event ID is the key they use to route the outcome report back to the correct delivery attempt. This is why event IDs must be globally unique, stable, and included in every delivered payload — they're the thread that ties the entire lifecycle together.
## Where This Breaks Down
Delivery receipts don't solve every problem. Three failure modes remain outside their reach:
**Silent consumer failures.** If the consumer's background worker silently drops events — no exception, no error log, no callback — your platform has no visibility. This is why receipts need to be paired with consumer-side monitoring; they can't replace it.

**Network partitions after acknowledgment.** The consumer returns 200, writes the event to a local queue, and then the queue process crashes before draining. The transport layer succeeded; the processing layer failed. No receipt scheme catches this without end-to-end monitoring on the consumer side.

**Malicious or mistaken acknowledgment.** A consumer can acknowledge receipt of an event it never actually processed. Receipts are cooperative infrastructure — they work when both sides implement them honestly and correctly.
These limitations are inherent to asynchronous distributed systems, not specific to webhook infrastructure. The goal of delivery receipts is not perfect certainty — it's reducing the gap between what you know and what you need to know to operate responsibly.
Delivery receipts are most valuable in two scenarios: high-stakes event types where a processing failure has real business consequences, and large platforms where your customers need proof of processing for compliance or SLA purposes. For everything else, a good retry policy and observable delivery logs are sufficient.
If you want to start instrumenting your webhook delivery pipeline with structured delivery metadata, GetHook's delivery dashboard gives you per-attempt HTTP status, response body capture, and the hooks you need to build receipt correlation on top.