REST APIs have OpenAPI specs, generated client libraries, and SDKs that can absorb a schema change with a version bump. Webhooks have none of that. You POST a JSON payload to an endpoint you don't control, and if the consumer's code breaks because you renamed a field, you find out when their support ticket arrives.
This is a schema contract problem. The producer and consumer have an implicit agreement about payload shape, and there's no mechanism in HTTP to enforce it. When that agreement breaks, the failure is silent on the producer side and catastrophic on the consumer side.
This post covers how to define webhook schema contracts explicitly, enforce backward compatibility before shipping, and version payloads in a way that lets consumers migrate at their own pace.
Why Webhook Schema Breakage Hits Differently
When a REST API breaks a contract, the client gets an error response — usually immediately. When a webhook breaks a contract, the consumer's handler either throws an exception and returns a 500 (which triggers retries), silently ignores the event (data loss), or misreads the payload and writes corrupt state (the worst case).
The feedback loop is also reversed. In REST, the client can choose not to upgrade — it's in control of when it makes requests. In webhooks, the producer decides when to send, so a breaking change lands on every consumer simultaneously, whether they're ready or not.
The common schema changes, and whether each breaks consumers:
| Change type | Example | Breaking? |
|---|---|---|
| Field removal | Remove user.plan from payload | Yes — consumer code referencing it will throw |
| Field rename | user_id → userId | Yes — old field is missing |
| Type change | amount from string "9.99" to integer 999 (cents) | Yes — consumer parses as wrong type |
| Enum value addition | Add "enterprise" to plan enum | Conditionally — breaks switch statements with no default |
| Field addition | Add user.organization_id | No — safe if consumer ignores unknown fields |
| Field narrowing | status was nullable, now always non-null | No — the values the producer still sends were already valid |
Additions and narrowing are safe. Removals, renames, and type changes are breaking. Enum additions are a landmine that looks safe until someone deploys a switch statement with an exhaustive case list.
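To make the enum landmine concrete, here is a hypothetical consumer-side mapper (the `planLimit` function and plan values are illustrative, not from any real API). The `default` branch is the difference between degrading gracefully and crashing when the producer adds `"enterprise"`:

```go
package main

import "fmt"

// planLimit maps a plan enum value to a seat limit. The default branch is
// what keeps a new enum value ("enterprise") from crashing the handler.
func planLimit(plan string) (int, error) {
	switch plan {
	case "free":
		return 1, nil
	case "pro":
		return 10, nil
	default:
		// Unknown enum value: fail soft with an error instead of
		// panicking or silently mis-processing the event.
		return 0, fmt.Errorf("unknown plan %q", plan)
	}
}

func main() {
	if _, err := planLimit("enterprise"); err != nil {
		fmt.Println("handled unknown enum:", err)
	}
}
```

An exhaustive switch with no `default` would have no path for the new value; whether that means a panic, a zero value, or a compile error depends on the language, which is why enum additions are only conditionally safe.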
Defining the Contract: JSON Schema
The first step is making the implicit contract explicit. JSON Schema is the right tool — it's widely supported, toolable, and produces human-readable documentation as a side effect.
Define a schema per event type, and store schemas in your repository as source of truth:
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://webhooks.example.com/schemas/payment.completed/v1.json",
  "type": "object",
  "required": ["id", "type", "created_at", "data"],
  "additionalProperties": false,
  "properties": {
    "id": { "type": "string", "pattern": "^evt_" },
    "type": { "type": "string", "const": "payment.completed" },
    "created_at": { "type": "string", "format": "date-time" },
    "data": {
      "type": "object",
      "required": ["payment_id", "amount_cents", "currency", "status"],
      "additionalProperties": true,
      "properties": {
        "payment_id": { "type": "string" },
        "amount_cents": { "type": "integer", "minimum": 0 },
        "currency": { "type": "string", "pattern": "^[A-Z]{3}$" },
        "status": {
          "type": "string",
          "enum": ["succeeded", "failed", "refunded"]
        }
      }
    }
  }
}
```

Two notes on `additionalProperties`:

- Set it to `false` on the envelope (the outer object) to prevent typos in top-level fields from going undetected.
- Set it to `true` on the `data` object. This is how you stay forward-compatible: consumers that receive a payload with new fields they don't recognize should ignore them, not fail.
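Under that schema, a payload carrying a field the consumer has never seen still validates, because `additionalProperties` is `true` on `data` (the `processor_ref` field below is a hypothetical addition):

```json
{
  "id": "evt_123",
  "type": "payment.completed",
  "created_at": "2026-04-05T10:00:00Z",
  "data": {
    "payment_id": "pay_456",
    "amount_cents": 999,
    "currency": "USD",
    "status": "succeeded",
    "processor_ref": "ch_789"
  }
}
```

The same addition at the top level, next to `id` and `type`, would fail validation, because the envelope sets `additionalProperties` to `false`.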
Backward Compatibility Checks in CI
Having a schema is necessary but not sufficient. You need automated checks that prevent a breaking change from reaching production. This is the same pattern that Protobuf and Avro registries use — but for JSON Schema.
The json-schema-diff-validator package works well for this, or you can write a small custom check (ajv validates payloads against a schema, but diffing two schemas for compatibility needs your own logic). The approach:
- Store the last published schema version in your repo (or a schema registry).
- On every PR that touches a schema file, run a compatibility check.
- Fail the build if a breaking change is detected.
Here's a minimal Go implementation for a CI compatibility check:
package schemacompat
import (
"encoding/json"
"fmt"
)
type Schema struct {
Required []string `json:"required"`
Properties map[string]interface{} `json:"properties"`
AdditionalProperties interface{} `json:"additionalProperties"`
}
// CheckBackwardCompatibility returns an error if newSchema breaks consumers
// that were written against oldSchema.
func CheckBackwardCompatibility(oldSchema, newSchema Schema) error {
// Fields required in old schema must still exist in new schema
for _, req := range oldSchema.Required {
if _, exists := newSchema.Properties[req]; !exists {
return fmt.Errorf("breaking: required field %q removed from schema", req)
}
}
// New schema must not add required fields (consumer may not provide them)
oldRequired := make(map[string]bool)
for _, f := range oldSchema.Required {
oldRequired[f] = true
}
for _, req := range newSchema.Required {
if !oldRequired[req] {
return fmt.Errorf("breaking: new required field %q added (consumers cannot supply it)", req)
}
}
return nil
}This is a simplified check — a production implementation should also verify type consistency for existing fields. The point is to make compatibility a gate, not a guideline.
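The type-consistency check could be sketched like this. It is a minimal sketch under two assumptions: property schemas carry a plain `type` keyword (no `oneOf`, no type arrays), and properties without a declared type are skipped rather than guessed:

```go
package main

import "fmt"

// propertyType extracts the "type" keyword from a raw property schema,
// returning "" when it is absent or not a string.
func propertyType(prop interface{}) string {
	m, ok := prop.(map[string]interface{})
	if !ok {
		return ""
	}
	t, _ := m["type"].(string)
	return t
}

// CheckTypeConsistency flags existing fields whose declared type changed,
// e.g. "string" -> "integer". Removed fields are left to the required-field
// check; fields with no declared type on either side are skipped.
func CheckTypeConsistency(oldProps, newProps map[string]interface{}) error {
	for name, oldProp := range oldProps {
		newProp, exists := newProps[name]
		if !exists {
			continue
		}
		oldType, newType := propertyType(oldProp), propertyType(newProp)
		if oldType != "" && newType != "" && oldType != newType {
			return fmt.Errorf("breaking: field %q changed type from %s to %s", name, oldType, newType)
		}
	}
	return nil
}

func main() {
	oldP := map[string]interface{}{"amount": map[string]interface{}{"type": "string"}}
	newP := map[string]interface{}{"amount": map[string]interface{}{"type": "integer"}}
	fmt.Println(CheckTypeConsistency(oldP, newP))
}
```

This catches exactly the `"9.99"` → `999` class of change from the table above; handling nested `data` objects would require recursing into `properties`.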
Versioning Strategies
When a breaking change is unavoidable — and sometimes it is — you have three approaches.
Version in the event type
The simplest approach: treat the event type string as a versioned namespace.
```json
{ "type": "payment.completed.v2" }
```

Old consumers subscribed to `payment.completed` (implicitly v1) keep receiving the old format. New consumers subscribe to `payment.completed.v2`. You run both in parallel during the migration window, then deprecate the old type after a notice period.
The downside: consumers that want to handle both versions need to register two webhooks or branch on the type string. For producers with many event types, maintaining v1, v2, and sometimes v3 variants of each becomes operationally expensive.
Version in the envelope
An alternative is a top-level schema_version field:
```json
{
  "id": "evt_01HX9P3...",
  "type": "payment.completed",
  "schema_version": "2",
  "created_at": "2026-04-05T10:00:00Z",
  "data": { ... }
}
```

This lets consumers key on `schema_version` without changing the type string. It works well when you have a small number of event types with infrequent breaking changes.
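On the consumer side, keying on the envelope version might look like the sketch below. The `routeEvent` function and its return strings are illustrative; the one real design decision shown is treating an absent `schema_version` as version 1, so payloads sent before the field existed keep working:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope holds only the fields needed to route by schema version.
type envelope struct {
	Type          string          `json:"type"`
	SchemaVersion string          `json:"schema_version"`
	Data          json.RawMessage `json:"data"`
}

// routeEvent dispatches on schema_version, defaulting an absent field to "1"
// so pre-versioning payloads still parse.
func routeEvent(body []byte) (string, error) {
	var env envelope
	if err := json.Unmarshal(body, &env); err != nil {
		return "", err
	}
	version := env.SchemaVersion
	if version == "" {
		version = "1"
	}
	switch version {
	case "1":
		return "handled as v1", nil
	case "2":
		return "handled as v2", nil
	default:
		// Unknown future version: surface it rather than mis-parse env.Data.
		return "", fmt.Errorf("unsupported schema_version %q for %s", version, env.Type)
	}
}

func main() {
	out, _ := routeEvent([]byte(`{"type":"payment.completed","schema_version":"2","data":{}}`))
	fmt.Println(out)
}
```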
Dual-writing during migration
For a smooth migration, produce both formats simultaneously for a defined period:
```go
func (h *PaymentHandler) publishCompleted(ctx context.Context, p Payment) error {
	// Publish v1 format for consumers that haven't migrated
	if err := h.publish(ctx, "payment.completed", buildV1Payload(p)); err != nil {
		return err
	}
	// Publish v2 format for consumers already on the new schema
	if err := h.publish(ctx, "payment.completed.v2", buildV2Payload(p)); err != nil {
		return err
	}
	return nil
}
```

Dual-writing doubles your event volume temporarily, but it decouples the producer migration from the consumer migration. You can deploy the v2 payload on your timeline; consumers migrate when they're ready; you remove v1 publishing after the deprecation window closes.
Schema Registry: When to Introduce One
A schema registry centralizes schema storage, versioning, and compatibility enforcement. Confluent's Schema Registry (used with Kafka) is the most well-known example, but the concept applies equally to webhooks.
You need a schema registry when:
- You have more than one team producing webhooks, and you need a single source of truth for schema definitions.
- You want consumers to be able to fetch the schema for any event type programmatically (for validation, code generation, or documentation).
- You want compatibility enforcement to be a platform-level concern rather than a per-team CI responsibility.
You don't need a schema registry when:
- A single team owns all event producers.
- Your event catalog has fewer than 20 event types.
- A schemas directory in your monorepo and a CI check are sufficient.
The minimal viable schema registry is a versioned directory in your repository with a compatibility check script in your CI pipeline. A dedicated service adds operational overhead that smaller teams don't yet need.
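One possible layout for that minimal registry (directory and file names are illustrative, not a convention this post prescribes):

```
schemas/
  payment.completed/
    v1.json        # last published version — the compatibility baseline
    v2.json        # current version under review
  invoice.created/
    v1.json
scripts/
  check-compat.go  # CI gate: diff HEAD schemas against the baseline
```

Keeping old versions checked in is what gives the CI check something to diff against, and doubles as a changelog of the contract.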
Consumer-Side Defensive Practices
Schema contracts protect against known breaking changes. Unknown breaking changes — a field that was supposed to stay stable but changed anyway — require defensive consumer code.
Three practices that make consumers resilient:
Ignore unknown fields. Your JSON deserialization library should not reject payloads with fields it doesn't recognize. In Go, encoding/json ignores unknown fields by default. In Python, dataclasses and attrs do not — you need dacite with strict=False or explicit **kwargs handling.
Validate on the fields you use, not the full schema. If your handler only uses data.payment_id and data.amount_cents, validate those two fields are present and of the right type. Don't validate the entire payload — that couples your handler to the full schema and breaks on every benign addition.
Log the raw payload before processing. If your handler fails, you want to inspect what was actually received. Log the raw JSON body (with sensitive fields redacted) before deserialization. This is invaluable when debugging schema mismatches in production.
```go
func (h *WebhookHandler) handlePaymentCompleted(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(io.LimitReader(r.Body, 1<<20)) // 1MB limit
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// Log raw body before deserialization — redact sensitive fields in production
	log.Printf("webhook received type=payment.completed body_bytes=%d", len(body))
	var event struct {
		ID   string `json:"id"`
		Type string `json:"type"`
		Data struct {
			PaymentID   string `json:"payment_id"`
			AmountCents int    `json:"amount_cents"`
			Currency    string `json:"currency"`
		} `json:"data"`
	}
	if err := json.Unmarshal(body, &event); err != nil {
		log.Printf("webhook parse error: %v", err)
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// Validate only the fields this handler actually uses
	if event.Data.PaymentID == "" || event.Data.AmountCents <= 0 {
		log.Printf("webhook validation error: missing required fields in data")
		http.Error(w, "unprocessable entity", http.StatusUnprocessableEntity)
		return
	}
	h.processPayment(r.Context(), event.Data.PaymentID, event.Data.AmountCents, event.Data.Currency)
	w.WriteHeader(http.StatusOK)
}
```

Deprecation and Sunset Policies
A schema contract without a deprecation policy is incomplete. When you need to break compatibility, communicate clearly and give consumers time to migrate.
| Phase | Action | Duration |
|---|---|---|
| Announcement | Publish changelog, send email/Slack notification to webhook consumers | At least 30 days before |
| Dual-write | Run old and new schema side by side | 30–90 days |
| Sunset warning | Add Deprecation and Sunset HTTP headers to old schema events | Last 30 days |
| Removal | Stop publishing old schema | After sunset date |
The Sunset header is defined in RFC 8594; the Deprecation header is an IETF draft on the same track. Adding them to webhook delivery requests gives programmatic notice to consumers who have instrumented their handlers to surface them.
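A delivery carrying both headers might look like this. The URLs are illustrative, and the `Deprecation: true` form follows an earlier draft of that header — its exact syntax has varied across revisions, so treat this as a sketch rather than the normative format:

```
POST /webhooks/payments HTTP/1.1
Host: consumer.example.com
Content-Type: application/json
Deprecation: true
Sunset: Sat, 01 Aug 2026 00:00:00 GMT
Link: <https://webhooks.example.com/schemas/payment.completed/v2.json>; rel="successor-version"

{"id":"evt_123","type":"payment.completed", ...}
```

The `Link` header with a successor relation gives consumers a machine-readable pointer to the schema they should migrate to.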
GetHook's event catalog feature lets you publish schema definitions alongside event type documentation, so consumers can reference the current schema and see deprecation notices in one place — without digging through changelogs.
Putting It Together
Webhook schema contracts aren't glamorous infrastructure, but they're the difference between a webhook platform that producers trust and one that breaks consumers silently. The minimum viable approach:
- Define JSON Schemas for every event type and commit them to your repository.
- Run a backward compatibility check in CI on every schema change.
- Use `additionalProperties: true` on data objects so consumers tolerate additions.
- When breaking changes are unavoidable, version the event type and dual-write during the migration window.
- On the consumer side, validate only the fields you use, ignore unknown fields, and log the raw body before deserialization.
The investment is low relative to the cost of a breaking change that silently corrupts consumer state across every integration simultaneously.
Start building webhook infrastructure with schema versioning support on GetHook →