A customer's payment fails. In the next 30 seconds your system receives a payment_intent.payment_failed event from Stripe, a delivered event from SendGrid confirming the failure email went out, and a trigger.sent event from PagerDuty waking up your on-call engineer. Three different providers, three different payload shapes, three different delivery timings — all describing the same business moment.
If you're debugging a support ticket or a production incident, correlating these events manually is painful. You're cross-referencing dashboards, grepping logs, and mentally reconstructing a timeline that your infrastructure should be able to produce automatically.
This post covers how to design a correlation layer that ties webhook events from multiple providers into a unified, queryable event timeline — without building a custom ETL pipeline for each provider.
## Why Provider Events Arrive Out of Context
Each third-party provider operates in its own namespace. Stripe identifies a payment with `pi_3Pq...`. SendGrid identifies an email with a `sg_message_id`. PagerDuty identifies an incident with a UUID in its own format. None of these IDs mean anything to the others.
The correlation problem has two layers:
- **Structural:** every provider sends a different JSON shape. There is no common envelope.
- **Semantic:** the same business concept (a failed payment) generates events with different names, different field paths, and different timestamps across providers.
You cannot solve this at the provider side — they won't change their schemas for you. You solve it at ingest time by extracting a correlation key and attaching it to every event you store.
## Choosing a Correlation Key
The correlation key is the value that connects events from different providers back to the same business object in your system. The right key is whatever your system treats as the canonical transaction or record identifier — typically your internal `order_id`, `subscription_id`, or `user_id`.
The challenge: providers do not send your internal IDs. They send their own. You need a mapping.
**Pattern 1: Pass your ID through as metadata.**
Stripe, SendGrid, and most mature providers let you attach arbitrary metadata to objects:
```json
// Stripe PaymentIntent creation
{
  "amount": 4999,
  "currency": "usd",
  "metadata": {
    "order_id": "ord_8bK2mxP9",
    "account_id": "acct_01HX..."
  }
}
```

When Stripe sends a webhook for this PaymentIntent, the metadata travels with it. Your ingest handler can extract `metadata.order_id` and index the event against it.
**Pattern 2: Use the provider's ID as the foreign key.**
If you create a Stripe customer and store `stripe_customer_id` on your `users` table, you can look up `user_id` at ingest time by querying your database for the Stripe customer ID in the webhook payload.
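A sketch of that lookup, assuming a `users` table with a `stripe_customer_id` column (table and column names are illustrative):

```sql
-- Resolve the internal user from the Stripe customer ID found
-- in the webhook payload (data.object.customer)
SELECT user_id
FROM users
WHERE stripe_customer_id = $1;
```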
**Pattern 3: Embed your ID in email subject lines or tags.**

For providers like SendGrid, you can pass your internal reference in the email's `custom_args` field. These appear in the webhook payload as-is.
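With SendGrid's v3 Mail Send API, for instance, that looks roughly like this (a trimmed sketch — only the `custom_args` field matters here; addresses and content are placeholders):

```json
{
  "personalizations": [
    {
      "to": [{ "email": "customer@example.com" }],
      "custom_args": { "order_id": "ord_8bK2mxP9" }
    }
  ],
  "from": { "email": "billing@example.com" },
  "subject": "Your payment failed",
  "content": [{ "type": "text/plain", "value": "..." }]
}
```

SendGrid's event webhook then echoes `order_id` back as a field on each delivery event for this message, ready for your ingest handler to extract.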
## Normalizing the Envelope at Ingest
Once you can extract a correlation key, normalize every inbound webhook into a common envelope before storing it. This is the step most teams skip — and regret.
Here is a minimal normalized event record:
```go
type NormalizedEvent struct {
	ID              string    `json:"id"`                // your internal event ID
	CorrelationKey  string    `json:"correlation_key"`   // e.g., "ord_8bK2mxP9"
	Provider        string    `json:"provider"`          // "stripe", "sendgrid", "pagerduty"
	ProviderEventID string    `json:"provider_event_id"` // provider's own event ID
	EventType       string    `json:"event_type"`        // normalized: "payment.failed", "email.delivered"
	OccurredAt      time.Time `json:"occurred_at"`       // from provider payload, not ingest time
	RawPayload      []byte    `json:"raw_payload"`       // original bytes, never discarded
	IngestedAt      time.Time `json:"ingested_at"`       // when you received it
}
```

A few decisions worth highlighting:
- **`occurred_at` vs `ingested_at`.** Always extract the event timestamp from the provider's payload — this is when the event actually happened. `ingested_at` is when your server received the webhook. Network delays and retries mean these can differ by seconds or minutes. You want to sort your timeline by `occurred_at`.
- **`raw_payload` is never discarded.** Normalization is lossy by definition. You keep the raw bytes so you can re-parse with updated logic, replay events, or inspect provider-specific fields that didn't make it into your normalized schema.
- **`event_type` is yours, not the provider's.** Map provider-specific event names to your own taxonomy at ingest: `payment_intent.payment_failed` becomes `payment.failed`, `delivered` (SendGrid) becomes `email.delivered`. This lets you write correlation queries against a consistent vocabulary.
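As a concrete sketch, a `mapStripeEventType` helper can be a plain lookup table — the target taxonomy here is an assumption; define whatever vocabulary fits your domain:

```go
package main

import "fmt"

// mapStripeEventType translates Stripe's event names into the
// internal taxonomy. Unknown types pass through with a provider
// prefix so they stay visible in the timeline instead of being
// silently dropped.
func mapStripeEventType(stripeType string) string {
	mapping := map[string]string{
		"payment_intent.payment_failed": "payment.failed",
		"payment_intent.succeeded":      "payment.succeeded",
		"charge.refunded":               "payment.refunded",
	}
	if normalized, ok := mapping[stripeType]; ok {
		return normalized
	}
	return "stripe." + stripeType
}

func main() {
	fmt.Println(mapStripeEventType("payment_intent.payment_failed")) // payment.failed
	fmt.Println(mapStripeEventType("radar.early_fraud_warning.created"))
}
```

The pass-through fallback is a deliberate choice: a mapping table always lags behind the provider's event catalog, and an unmapped event is better recorded under an ugly name than lost.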
## The Provider Normalizer Pattern
Implement a normalizer per provider. Each normalizer implements a common interface:
```go
type ProviderNormalizer interface {
	Provider() string
	CanHandle(r *http.Request) bool
	Verify(r *http.Request, body []byte) error
	Normalize(body []byte) (*NormalizedEvent, error)
}
```

A Stripe normalizer looks like this:
```go
type StripeNormalizer struct {
	webhookSecret string
}

func (s *StripeNormalizer) Provider() string { return "stripe" }

func (s *StripeNormalizer) CanHandle(r *http.Request) bool {
	return r.Header.Get("Stripe-Signature") != ""
}

func (s *StripeNormalizer) Verify(r *http.Request, body []byte) error {
	sig := r.Header.Get("Stripe-Signature")
	return verifyStripeSignature(sig, body, s.webhookSecret)
}

func (s *StripeNormalizer) Normalize(body []byte) (*NormalizedEvent, error) {
	var payload struct {
		ID      string `json:"id"`
		Type    string `json:"type"`
		Created int64  `json:"created"`
		Data    struct {
			Object struct {
				Metadata map[string]string `json:"metadata"`
			} `json:"object"`
		} `json:"data"`
	}
	if err := json.Unmarshal(body, &payload); err != nil {
		return nil, err
	}
	return &NormalizedEvent{
		Provider:        "stripe",
		ProviderEventID: payload.ID,
		EventType:       mapStripeEventType(payload.Type),
		CorrelationKey:  payload.Data.Object.Metadata["order_id"],
		OccurredAt:      time.Unix(payload.Created, 0).UTC(),
		RawPayload:      body,
	}, nil
}
```

Your single ingest endpoint dispatches to the right normalizer based on `CanHandle`:
```go
func (h *IngestHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "failed to read body", http.StatusBadRequest)
		return
	}
	for _, normalizer := range h.normalizers {
		if !normalizer.CanHandle(r) {
			continue
		}
		if err := normalizer.Verify(r, body); err != nil {
			http.Error(w, "signature verification failed", http.StatusUnauthorized)
			return
		}
		event, err := normalizer.Normalize(body)
		if err != nil {
			http.Error(w, "normalization failed", http.StatusBadRequest)
			return
		}
		event.IngestedAt = time.Now().UTC()
		if err := h.store.Save(r.Context(), event); err != nil {
			// A non-2xx response makes the provider retry the delivery.
			http.Error(w, "storage failed", http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
		return
	}
	http.Error(w, "unknown provider", http.StatusBadRequest)
}
```

This pattern scales cleanly. Adding a new provider means writing one new normalizer struct — the ingest path does not change.
## Querying the Unified Timeline
With events stored in a normalized table, building a timeline for a given business object is a single query:
```sql
SELECT
  provider,
  event_type,
  occurred_at,
  provider_event_id
FROM normalized_events
WHERE correlation_key = $1
ORDER BY occurred_at ASC;
```

Result for `ord_8bK2mxP9`:
| provider | event_type | occurred_at | provider_event_id |
|---|---|---|---|
| stripe | payment.failed | 2026-04-08 14:31:02 UTC | evt_3Pq... |
| sendgrid | email.delivered | 2026-04-08 14:31:09 UTC | sg_01HX... |
| pagerduty | incident.triggered | 2026-04-08 14:31:14 UTC | Q2PRBR... |
| pagerduty | incident.acknowledged | 2026-04-08 14:33:55 UTC | Q2PRBR... |
Seven seconds from payment failure to email delivery. Twelve seconds to the PagerDuty alert. Two minutes fifty-three seconds to acknowledgment. This is the timeline you need for a post-incident review — and it required no manual reconstruction.
## Handling Missing and Late Correlation Keys
Not every webhook will carry a correlation key. A few cases to plan for:
1. **The metadata wasn't set.** Some events fire before your code attaches metadata — a Stripe `customer.created` event, for example, fires at creation time, before you've associated the customer with an internal account. Store these events with a null `correlation_key` and back-fill it asynchronously when you process the event downstream.
2. **The provider doesn't support metadata.** Some providers (older PagerDuty integrations, some SMS gateways) have no metadata passthrough. In this case, correlation depends on your own lookup: given the PagerDuty incident ID, query your incidents table for the `order_id` that triggered it. Attach the `correlation_key` at ingest time by querying your own database.
3. **The webhook arrived late.** Webhooks can be delayed by minutes or hours during provider outages. Your `occurred_at` timestamp handles this correctly — the event sorts into the right position in the timeline regardless of when it arrived. `ingested_at` tells you the actual delivery lag.
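The asynchronous back-fill in case 1 can be a single statement once the downstream processor has resolved the mapping — a sketch, assuming the `normalized_events` table from earlier and Postgres parameter syntax:

```sql
-- Attach the resolved correlation key to events that arrived
-- before the mapping existed. $1 = the resolved order_id,
-- $2 = the provider event IDs identified by the processor.
UPDATE normalized_events
SET correlation_key = $1
WHERE correlation_key IS NULL
  AND provider_event_id = ANY($2);
```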
## Surfacing the Timeline to Developers
A queryable database is a foundation, not a product. The payoff comes when your team can access the correlation timeline without writing SQL.
The minimum useful interface is a timeline API:
```
GET /v1/timeline?correlation_key=ord_8bK2mxP9
```

```json
{
  "data": [
    {
      "provider": "stripe",
      "event_type": "payment.failed",
      "occurred_at": "2026-04-08T14:31:02Z",
      "provider_event_id": "evt_3Pq..."
    },
    {
      "provider": "sendgrid",
      "event_type": "email.delivered",
      "occurred_at": "2026-04-08T14:31:09Z",
      "provider_event_id": "sg_01HX..."
    }
  ]
}
```

Expose this in your dashboard alongside your order or subscription detail page. When a customer opens a support ticket about order `ord_8bK2mxP9`, you should be able to pull the complete webhook timeline for that order in under a second. This replaces a multi-tab debugging session with a single view.
If you're using GetHook to receive inbound webhooks from your providers, each inbound event is stored with its full payload and delivery metadata, giving you the raw material to build this correlation layer on top of a reliable event log.
## The Correlation Key Is a Contract
Once you start storing events by correlation key, you're implicitly defining a contract: this key will be present, consistent, and queryable for the lifetime of the object it represents.
Enforce this as a hard requirement in your provider integration code. If your Stripe PaymentIntent creation doesn't attach `order_id` to the metadata, the downstream correlation breaks silently — you just won't see those events in the timeline. Write a test that asserts the metadata is present:
```go
func TestStripePaymentIntentHasCorrelationKey(t *testing.T) {
	pi := createTestPaymentIntent(t, orderID)
	if pi.Metadata["order_id"] == "" {
		t.Fatal("PaymentIntent missing order_id in metadata — correlation will break")
	}
}
```

This is the kind of test that saves a two-hour debugging session six months from now, when a new engineer creates a PaymentIntent through a slightly different code path and forgets the metadata.
Building a correlation layer takes roughly a day of implementation work per provider. The return is permanent: every future incident investigation, SLA audit, and customer support ticket becomes faster. You stop rebuilding timelines from scattered logs and start querying the one you already have.
Start building your unified event timeline with a reliable ingest foundation at https://gethook.to/setup.