Webhooks fail silently. Your provider sent an event, nothing happened on your end, and now you're staring at two systems that each think they did their part correctly. Unlike a failed API call where you have an immediate error response to work from, webhook failures arrive minutes or hours later — as the absence of something that should have happened.
This guide gives you a structured debugging process. Follow it in order and you'll have the root cause within 10 minutes for the vast majority of failures.
## The 5 Places a Webhook Can Break
Before diving into steps, it helps to have a mental model. A webhook delivery has five distinct failure points:
| Layer | What can go wrong | Who owns the fix |
|---|---|---|
| Provider side | Event not generated, delivery paused, destination URL misconfigured | You (config) or provider (bug) |
| Network / DNS | Firewall block, DNS resolution failure, TLS cert mismatch | You or your infrastructure team |
| Your endpoint | Process down, wrong port, route not matched, wrong method | You |
| Signature verification | Mismatch due to wrong secret, body already read, encoding error | You |
| Handler logic | Unhandled event type, database error, crash after 200 | You |
The debugging process below moves through these layers in order — cheapest to verify first.
## Step 1: Check the Provider's Delivery Logs (2 minutes)
Every serious webhook provider has a delivery log in their dashboard. This is your first stop because it tells you whether the problem is on your side or theirs.
What to look for:
- Was the event generated at all? Check the event list, not just the webhook log.
- What HTTP status did your endpoint return? A `200` means your server received and acknowledged the event. A `0` or `connection refused` means your server was unreachable.
- Has the provider been retrying? If there are multiple attempts with the same failure, the problem is consistent.
| Status in provider log | Most likely cause |
|---|---|
| Connection refused / timeout | Your process is down or port is wrong |
| TLS error | Certificate mismatch or expired cert |
| 401 Unauthorized | Signature verification rejecting the request |
| 404 Not Found | Route doesn't exist or URL is misconfigured |
| 500 Internal Server Error | Handler code is crashing |
| 200 OK | Event was received — look in your handler logic |
If the provider dashboard shows 200 OK but your system shows no effect, jump straight to Step 5.
## Step 2: Confirm Your Endpoint is Reachable (2 minutes)
Before suspecting code, confirm the server is up and the endpoint exists.
```bash
# Confirm the server is responding at all
curl -I https://yourapp.com/webhooks/stripe

# Send a POST with a minimal body to confirm the route exists
curl -X POST https://yourapp.com/webhooks/stripe \
  -H "Content-Type: application/json" \
  -d '{"type":"test"}' \
  -v
```

A `405 Method Not Allowed` means the route exists but is registered as GET. A `404` means the route isn't registered. A `connection refused` means the process isn't running or the port is wrong.
If you're debugging a local environment exposed with ngrok or Cloudflare Tunnel, check that the tunnel is still running — they expire after inactivity.
```bash
# Check ngrok status
curl http://localhost:4040/api/tunnels
```

For production environments, verify your load balancer or ingress is routing to the correct backend, and check that no firewall rule is blocking inbound POST requests from the provider's IP ranges.
## Step 3: Replay the Event from the Provider (2 minutes)
If the endpoint is reachable, use the provider's replay feature to resend the exact original event. This is faster than waiting for a real event and gives you a controlled test case.
Most providers support this from their dashboard. From the CLI:
```bash
# Stripe: replay a specific event
stripe events resend evt_1PxABC123def

# GitHub: redeliver a webhook via the API
curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  https://api.github.com/repos/OWNER/REPO/hooks/HOOK_ID/deliveries/DELIVERY_ID/attempts
```

Watch your server logs in real time while replaying:
```bash
# Tail your application logs
tail -f /var/log/app/app.log | grep -i webhook

# Or if you're running in a container
docker logs -f app_container 2>&1 | grep -i webhook

# Or with journald
journalctl -u your-app.service -f
```

If you see a log entry appear when you replay, your endpoint is receiving events and the problem is in the handler. If you see nothing, the problem is network or routing.
## Step 4: Verify Signature Verification Isn't Rejecting Events (2 minutes)
Signature verification rejections are among the most common webhook failures, and they often produce an opaque 401 or 400 with no log entry, because the rejection happens before your handler code runs.
Common causes:
1. Wrong secret. You're using a test mode secret against a live mode event, or you rotated the secret but haven't updated the config.
2. Body already read. If any middleware reads r.Body before your verification code, the body is consumed and HMAC of an empty string won't match.
```go
// ❌ Middleware that reads the body first breaks signature verification
func LoggingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body) // body is now consumed
		log.Printf("body: %s", body)
		next.ServeHTTP(w, r) // handler gets empty body
	})
}

// ✅ Buffer the body so it can be read twice
func LoggingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		r.Body = io.NopCloser(bytes.NewReader(body)) // restore for downstream readers
		log.Printf("body: %s", body)
		next.ServeHTTP(w, r)
	})
}
```

3. Encoding mismatch. The HMAC is being compared as hex but the provider sends base64, or vice versa.
4. Charset/whitespace differences. Some providers send a trailing newline in the body. If you're stripping or trimming the body before hashing, the signature won't match.
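To see why byte-for-byte fidelity matters, compare the HMAC of a body with and without its trailing newline. A minimal sketch; `computeHexHMAC` and the secret value are illustrative:

```go
import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// computeHexHMAC returns the hex-encoded HMAC-SHA256 of body under secret.
func computeHexHMAC(body []byte, secret string) string {
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write(body)
	return hex.EncodeToString(mac.Sum(nil))
}

// A single trailing newline produces a completely different signature:
//   computeHexHMAC([]byte(`{"type":"test"}`), "whsec_example")
//   computeHexHMAC([]byte(`{"type":"test"}`+"\n"), "whsec_example")
// These two values do not match, so hash exactly the bytes you received.
```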
To isolate whether signature verification is the problem, temporarily add a log line that prints the computed HMAC and the received HMAC before the comparison:
```go
computed := computeHMAC(body, secret)
received := r.Header.Get("X-Webhook-Signature")
log.Printf("debug: computed=%s received=%s match=%v",
	computed, received, hmac.Equal([]byte(computed), []byte(received)))
```

Remove this log before you ship — it's a debugging tool, not a production pattern.
## Step 5: Trace the Handler Logic (2 minutes)
If the event arrives and signature verification passes, but nothing happens, the bug is in your handler code. Add structured logging at the entry point of your handler:
```go
func (h *StripeHandler) Handle(w http.ResponseWriter, r *http.Request) {
	var event stripe.Event
	if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
		log.Printf("stripe webhook: decode error: %v", err)
		http.Error(w, "bad request", 400)
		return
	}
	log.Printf("stripe webhook: received type=%s id=%s", event.Type, event.ID)

	switch event.Type {
	case "payment_intent.succeeded":
		if err := h.handlePaymentSucceeded(r.Context(), event); err != nil {
			log.Printf("stripe webhook: handler error type=%s id=%s err=%v",
				event.Type, event.ID, err)
			// Return 500 so the provider retries
			http.Error(w, "internal error", 500)
			return
		}
	default:
		log.Printf("stripe webhook: unhandled type=%s", event.Type)
		// Still return 200 — unhandled events are not errors
	}
	w.WriteHeader(http.StatusOK)
}
```

Key things to check in your handler:
- Are you handling the specific event type the provider sent? An unmatched `switch` case that falls through to a `200` looks like success to the provider but does nothing.
- Are errors in your downstream calls (database writes, API calls) being swallowed? A `_ = doSomething()` that discards errors will silently fail.
- Are you responding `200` before processing finishes? Some teams do this to avoid provider retries, then crash partway through processing. Use a job queue for anything that might fail.
## Using GetHook to Speed Up Debugging
If you're managing multiple webhook sources, GetHook's delivery attempt log gives you a single place to see every attempt — the provider's payload, your endpoint's response code, and the latency — without tailing server logs across multiple services.
The event detail view shows the full request and response for every attempt:
```
GET /v1/events/evt_01HX.../

{
  "id": "evt_01HX...",
  "status": "dead_letter",
  "attempts": [
    {
      "attempt_number": 1,
      "outcome": "http_5xx",
      "response_status": 500,
      "response_body_excerpt": "internal server error",
      "attempted_at": "2026-03-28T09:00:00Z"
    },
    ...
  ]
}
```

From there, you can replay the specific event directly without going back to the provider dashboard — which is helpful when the provider doesn't support individual event replay or has a short replay window.
## Build a Local Debugging Workflow
The best time to set up webhook debugging tools is before you need them. A few things worth having in place:
**Local webhook receiver.** Keep a simple log-everything endpoint in your dev environment:
```go
http.HandleFunc("/webhooks/debug", func(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)
	log.Printf("--- WEBHOOK ---\nHeaders: %v\nBody: %s\n---", r.Header, body)
	w.WriteHeader(http.StatusOK)
})
```

Point your tunnel at this handler when debugging unfamiliar provider payloads.
**Structured logs with event IDs.** Every log line in your webhook handlers should include the event ID and type. This makes it possible to trace a single event through your entire system with grep.
```bash
grep "evt_01HX..." /var/log/app/app.log
```

**Dead-letter queue alerting.** Set up an alert when events enter the dead-letter state. Webhook failures are often silent — by the time a customer reports a problem, it's been broken for hours. Proactive alerting on DLQ growth cuts your mean time to detect from hours to minutes.
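The polling side of that alert can be as small as a comparison between successive counts. A sketch, with `checkDLQGrowth` as an illustrative name and alert delivery left to whatever you already page on:

```go
import "fmt"

// checkDLQGrowth compares successive dead-letter counts and fires alert when
// the queue has grown since the last poll. It returns the current count so
// the caller can feed it back in on the next tick.
func checkDLQGrowth(prev, current int, alert func(msg string)) int {
	if current > prev {
		alert(fmt.Sprintf("dead-letter queue grew: %d -> %d", prev, current))
	}
	return current
}
```

Run it on a timer against whatever exposes your dead-letter count, and route `alert` to Slack, PagerDuty, or similar.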
Most webhook bugs fall into one of three categories: the endpoint wasn't reachable, signature verification was rejecting the request silently, or the handler had a logic error for a specific event type. The steps above narrow it down to one of those categories in under 10 minutes.
If you want delivery logs, replay, and dead-letter management out of the box without building the observability layer yourself, start with GetHook — the event timeline and attempt log are available from day one.