
Webhook Event Schema Design: Versioning, Backwards Compatibility, and Migration

The event schemas you ship today will need to evolve. Poor schema design creates breaking changes that knock out customer integrations. Here's how to design webhook payloads for long-term maintainability.

Marcus Webb
Platform Engineer
February 26, 2026
9 min read

Webhook event schemas are a public API. Once you ship a payload format to customers, they build integrations against it. Changing that format breaks those integrations — and unlike REST APIs where you can version the URL, webhook payloads arrive at the customer's endpoint on your schedule, not theirs.

This post covers how to design webhook schemas for durability, how to version them when changes are unavoidable, and how to migrate customers through breaking changes without incident.


The Core Problem: Webhooks Are Pushed, Not Pulled

REST API versioning is relatively forgiving. You can run /v1/ and /v2/ endpoints simultaneously. Clients opt into the new version when they're ready. You deprecate the old version on a public schedule.

Webhooks don't work this way. You push events to customers. If you change the payload schema, customers receive the new format immediately — whether or not they've updated their integration.

This makes backwards compatibility more important for webhook schemas than for REST responses.


Anatomy of a Good Webhook Event

Before discussing versioning, establish what a webhook event payload should contain:

```json
{
  "id": "evt_01HX9P3...",
  "type": "order.completed",
  "api_version": "2024-01",
  "created_at": "2026-02-26T14:22:01Z",
  "data": {
    "object": {
      "id": "ord_abc123",
      "status": "completed",
      "total": 14999,
      "currency": "usd",
      "customer_id": "cus_xyz789",
      "line_items": [...]
    },
    "previous_attributes": {
      "status": "processing"
    }
  }
}
```

Key structural decisions:

id — stable event identifier for idempotency. Required.

type — dot-notation event type. resource.action format (e.g., order.completed, subscription.cancelled). Never change these after shipping.

api_version — the schema version this payload conforms to. Customers can filter or route by this.

created_at — ISO 8601 timestamp, always UTC.

data.object — the resource that changed. Always include the full current state, not just changed fields.

data.previous_attributes — fields that changed and their old values. Optional but extremely useful for customers who need to react to state transitions.


Schema Design Rules for Backwards Compatibility

Rule 1: Only add fields, never remove them

Adding a new field to a JSON payload is backwards compatible. Removing or renaming a field is breaking.

```json
// Version 1
{ "customer_email": "user@example.com" }

// ✅ Adding a field — backwards compatible
{ "customer_email": "user@example.com", "customer_id": "cus_123" }

// ❌ Removing a field — breaking change
{ "customer_id": "cus_123" }

// ❌ Renaming a field — breaking change
{ "email": "user@example.com" }
```

If you want to rename a field, keep the old name and add the new name. Deprecate the old one in documentation, remove it in a future major version.

Rule 2: Never change the type of a field

Changing amount from integer to string breaks every customer who does arithmetic on it. Changing status from an enum to a free-form string breaks every customer who does a switch statement.

If you must change a type, add a new field with a different name.
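For instance, if `amount` originally shipped as a float and you want exact integer cents, add a new field alongside it rather than changing the type (`amount_cents` is an illustrative name, not part of the schema above):

```json
{
  "amount": 149.99,
  "amount_cents": 14999
}
```

Mark `amount` as deprecated in the docs and remove it only in a future major version, per Rule 1.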

Rule 3: Enums can only grow, not shrink

Adding a new enum value is backwards compatible (customers that don't recognize it should handle unknown values gracefully). Removing an enum value is a breaking change.

Document explicitly that customers must handle unknown enum values:

"Your integration should handle unknown status values gracefully — we may add new statuses without notice."

Rule 4: Use null consistently

Decide upfront: does a missing field mean null, or is the field absent entirely? Inconsistency here breaks customers who check if (event.data.refund_id !== null) vs. if ('refund_id' in event.data).

Recommendation: Use null for optional fields that may not apply, and document which fields are always present vs. sometimes present.

Rule 5: Monetary values as integers (cents), not floats

```json
// ❌ Float — precision errors, rounding bugs
{ "amount": 149.99 }

// ✅ Integer cents — exact representation
{ "amount": 14999, "currency": "usd" }
```

149.99 has no exact IEEE 754 representation — the closest double is 149.99000000000000909..., and multiplying it by 100 in floating point yields 14998.999999999998, not 14999. Customers who do accounting on your webhook data will have subtle bugs if you use floats.


Schema Versioning Strategies

Strategy 1: Date-based API versions (Stripe's approach)

Stripe versions its API by date: 2024-01-01, 2023-10-16, etc. When you create a webhook endpoint, you pin it to an API version. Stripe delivers webhooks in the format for your pinned version.

```json
{ "api_version": "2024-01" }
```

Pros: Clear and customer-friendly — customers know exactly which schema to expect. Cons: You must maintain a payload serializer for each supported version.

Strategy 2: Resource-level versioning

Instead of versioning the entire API, version individual event types:

```json
{ "type": "order.completed.v2" }
```

Pros: Simpler to implement — old event types keep working, new types add the version suffix. Cons: Event type lists become messy. Customers have to handle order.completed AND order.completed.v2.

Strategy 3: Content negotiation via webhook registration

When a customer registers their webhook endpoint, they specify which event types and which API version they want. You deliver accordingly.

```json
{
  "endpoint_url": "https://api.customer.com/webhooks",
  "api_version": "2025-01",
  "events": ["order.completed", "subscription.created"]
}
```

Pros: Maximum flexibility. Customers explicitly opt into new schemas. Cons: Complex infrastructure. You must maintain schema transformers for every version pair.

Recommendation for most SaaS products: Start with date-based versioning. It's the most developer-friendly option, and because established platforms like Stripe use it, it's the pattern your customers' developers already know.


Handling Breaking Changes

Sometimes breaking changes are unavoidable — a compliance requirement, a fundamental data model change, a typo in a field name that you'd embarrass yourself by shipping forever.

Breaking change process

Step 1: Announce with a minimum 90-day notice

Email every customer with active webhook subscriptions. Publish a migration guide. Create a changelog entry. Do this before you've written a line of new code — the announcement is the hardest part.

Step 2: Ship the new version alongside the old

Run both schema versions simultaneously. New webhooks in the new format; existing webhooks still in the old format.

Step 3: Migrate non-responsive customers manually

After 90 days, identify customers who haven't migrated. Email them directly. If you have a customer success team, loop them in.

Step 4: Switch default, maintain backwards compat for 6 more months

New webhook registrations default to the new schema version. Old registrations continue to receive the old format.

Step 5: Remove legacy version with 30-day final notice

Send a final notice to any remaining customers still on the old version. Set a hard cutoff date.

This timeline takes 8+ months. That's intentional. Customers with serious integrations (enterprise, payments, fulfillment) need this runway.


The previous_attributes Pattern

One of the most useful patterns for webhook payloads is including the previous state of changed fields:

```json
{
  "type": "subscription.updated",
  "data": {
    "object": {
      "id": "sub_123",
      "status": "active",
      "plan": "enterprise",
      "seats": 50
    },
    "previous_attributes": {
      "plan": "growth",
      "seats": 10
    }
  }
}
```

This lets customers answer "what actually changed?" without fetching the previous state from their own database. It's especially valuable for *.updated events, which are otherwise ambiguous.

Implementation note: Only include fields that actually changed in previous_attributes. Including unchanged fields adds noise and inflates payload size.


Testing Schema Compatibility

Before shipping a schema change, run a compatibility test suite:

```go
package schema

import (
	"encoding/json"
	"testing"

	"github.com/stretchr/testify/assert"
)

// EventV2 adds CustomerID on top of the v1 fields.
type EventV2 struct {
	Amount        int    `json:"amount"`
	CustomerEmail string `json:"customer_email"`
	CustomerID    string `json:"customer_id"` // new in v2
}

func TestSchemaCompatibility(t *testing.T) {
	// Parse an old-format event with the new schema parser
	oldPayload := `{"amount": 14999, "customer_email": "user@example.com"}`

	var event EventV2
	err := json.Unmarshal([]byte(oldPayload), &event)

	// New schema must parse old payloads without error
	assert.NoError(t, err)

	// New fields should have zero values, not errors
	assert.Empty(t, event.CustomerID)    // new field, not in old payload
	assert.Equal(t, 14999, event.Amount) // old field, must still work
}
```

Also test the reverse: new format payloads must be parseable by old clients. Use contract testing (Pact or similar) if you can get your major customers to participate.


Documentation as a Contract

Your webhook schema documentation is a contract with customers. Treat it accordingly:

  • Publish a changelog — every schema change, no matter how minor, gets a dated entry
  • Mark deprecated fields — use [DEPRECATED] in field descriptions with a removal date
  • Include example payloads — one per event type, kept in sync with actual payload
  • Document error payloads — what does a failed delivery look like from the customer's side?

GetHook's event types, payload schemas, and API versioning are documented in the GetHook API reference. Every breaking change ships with a migration guide.
