Every webhook consumer starts the same way. An event arrives, your handler unpacks the JSON, and somewhere near the top of the function you write something like this:
```go
if event.OrderID == "" {
	log.Printf("warn: missing order_id on event %s", event.ID)
	return nil // silently drop it
}
if event.TotalCents < 0 {
	return fmt.Errorf("invalid total_cents: %d", event.TotalCents)
}
if event.Status != "created" && event.Status != "fulfilled" && event.Status != "cancelled" {
	return fmt.Errorf("unrecognized status: %q", event.Status)
}
```

This is validation. It is also in exactly the wrong place.
The Problem With In-Process Validation
Validating inside your application handler feels natural — it's close to the code that uses the data. But it creates a set of problems that compound as your system grows.
Bad events reach your persistence layer first. By the time your handler runs, the event has been accepted at the ingest endpoint, written to your queue or database, and potentially routed to several downstream consumers. A malformed order.fulfilled event doesn't fail at the front door. It fails when your inventory service tries to decrement a count using a missing product ID, or when your reporting pipeline divides by a total that arrived as a string instead of an integer.
Validation logic fragments across services. You validate order_id presence in the order service. You validate currency format in the payments service. You validate customer_id against your user table in the notification service. Now the same payload is being scrutinized — differently — in three places. When the sending team changes a field name, you find out by watching error rates climb across all three, not from a clear rejection at the point of ingestion.
Failures surface late and without context. A handler that rejects a malformed event with a log line and a silent return has no way to tell the sender what was wrong. The sender's retry logic fires. The event enters your retry queue. Dead-letter queues fill up. An on-call engineer eventually investigates and discovers that the root cause was a field that was always null — and always has been.
Defensive code accumulates. Each near-miss adds another null-check. Over time, your handlers become defensive in the way legacy code is defensive: not because there's a real threat being mitigated, but because nobody is confident enough in the incoming data to remove the guards.
The Ingest Gateway Already Validates — Extend That Mental Model
If you're using a webhook gateway like GetHook, you're already benefiting from validation at the edge. Signature verification runs before your event enters the system. If the HMAC signature doesn't match, the request is rejected with a 401. Your application code never sees it.
The same principle applies to payload structure. The gateway receives every inbound event before your application does. It's the right place to enforce the contract between sender and receiver.
Consider what the ingest layer can check before an event is ever persisted:
| Validation type | Example | Caught at gateway? |
|---|---|---|
| Signature verification | HMAC-SHA256 mismatch | Yes — already enforced |
| Required fields | order_id is missing | With schema enforcement |
| Type correctness | total_cents is "500" not 500 | With schema enforcement |
| Enum values | status is "CREATED" not "created" | With schema enforcement |
| Format constraints | created_at is not a valid ISO 8601 timestamp | With schema enforcement |
| Structural shape | data is an array instead of an object | With schema enforcement |
Signature verification and schema enforcement are the same category of concern. Both answer the question: "Is this event trustworthy enough to enter my system?" One checks authenticity. The other checks correctness.
What This Looks Like in Practice
Say your platform receives order.fulfilled events from a third-party logistics provider. The expected shape is:
```json
{
  "id": "evt_01HZ...",
  "type": "order.fulfilled",
  "created_at": "2026-04-22T09:14:00Z",
  "data": {
    "order_id": "ord_9182",
    "fulfilled_at": "2026-04-22T08:55:00Z",
    "line_items": [
      { "sku": "WIDGET-A", "quantity": 2 }
    ],
    "tracking_number": "1Z999AA10123456784"
  }
}
```

You attach a JSON Schema to the source endpoint in GetHook. The schema marks data.order_id, data.fulfilled_at, and data.line_items as required, constrains quantity to a positive integer, and validates fulfilled_at against the date-time format.
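A sketch of what such a schema might look like, using standard JSON Schema keywords (the field names follow the example payload above; the exact schema dialect your gateway accepts is an assumption to verify against its documentation):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["id", "type", "created_at", "data"],
  "properties": {
    "id": { "type": "string" },
    "type": { "const": "order.fulfilled" },
    "created_at": { "type": "string", "format": "date-time" },
    "data": {
      "type": "object",
      "required": ["order_id", "fulfilled_at", "line_items"],
      "properties": {
        "order_id": { "type": "string" },
        "fulfilled_at": { "type": "string", "format": "date-time" },
        "line_items": {
          "type": "array",
          "minItems": 1,
          "items": {
            "type": "object",
            "required": ["sku", "quantity"],
            "properties": {
              "sku": { "type": "string" },
              "quantity": { "type": "integer", "minimum": 1 }
            }
          }
        },
        "tracking_number": { "type": "string" }
      }
    }
  }
}
```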
When the logistics provider accidentally sends this — a bug introduced by a new engineer who serialized quantity as a string:
```json
{
  "id": "evt_01HZ...",
  "type": "order.fulfilled",
  "created_at": "2026-04-22T09:14:02Z",
  "data": {
    "order_id": "ord_9183",
    "fulfilled_at": "2026-04-22T09:01:00Z",
    "line_items": [
      { "sku": "WIDGET-B", "quantity": "1" }
    ],
    "tracking_number": "1Z999AA10123456785"
  }
}
```

The gateway rejects it synchronously:
```http
HTTP/1.1 400 Bad Request
Content-Type: application/json

{
  "error": "payload does not match source schema",
  "violations": [
    {
      "path": "/data/line_items/0/quantity",
      "message": "expected integer, got string",
      "value": "1"
    }
  ]
}
```

The sender's HTTP client receives a 400. Their retry logic does not fire — this is not a transient failure, it's a malformed request. They have a machine-readable description of exactly what to fix. Your application never sees the event. Your database is clean.
Your Application Receives Only Trusted Events
This is the payoff: when schema enforcement runs at the ingest layer, your application handlers can be written with confidence. The contract has already been checked.
Compare these two versions of the same handler:
Before — defensive, noisy:
```go
func handleOrderFulfilled(ctx context.Context, raw json.RawMessage) error {
	var event OrderFulfilledEvent
	if err := json.Unmarshal(raw, &event); err != nil {
		return fmt.Errorf("unmarshal: %w", err)
	}
	if event.Data.OrderID == "" {
		return fmt.Errorf("missing order_id")
	}
	if len(event.Data.LineItems) == 0 {
		return fmt.Errorf("line_items is empty")
	}
	for i, item := range event.Data.LineItems {
		if item.Quantity <= 0 {
			return fmt.Errorf("line_items[%d]: quantity must be positive", i)
		}
		if item.SKU == "" {
			return fmt.Errorf("line_items[%d]: missing sku", i)
		}
	}
	return db.MarkOrderFulfilled(ctx, event.Data.OrderID, event.Data.LineItems)
}
```

After — focused on business logic:
```go
func handleOrderFulfilled(ctx context.Context, raw json.RawMessage) error {
	var event OrderFulfilledEvent
	if err := json.Unmarshal(raw, &event); err != nil {
		// This should never happen if schema enforcement is active,
		// but handle it defensively at the boundary.
		return fmt.Errorf("unmarshal: %w", err)
	}
	return db.MarkOrderFulfilled(ctx, event.Data.OrderID, event.Data.LineItems)
}
```

The business logic is two lines. There is no field-checking noise. If a future reader wants to understand what valid input looks like, they look at the source schema in GetHook — not at scattered guard clauses across multiple service files.
One Place to Update Validation Rules
When the logistics provider announces they are adding a required warehouse_id field to order.fulfilled events in three weeks, you have one place to update: the source schema. You do not grep for every place that touches OrderFulfilledEvent, audit which ones check for warehouse_id, and coordinate a deployment across three services.
You update the schema. You test it in warn mode against the existing event stream for a day to confirm there is no surprise violation. You flip enforcement on. Done.
The same is true when a provider deprecates a field you were relying on, or changes an enum value from uppercase to lowercase. Schema enforcement at the gateway surfaces these changes early — before they become silent corruptions in your database.
Replay After a Schema Update
One operational concern that comes up: if you tighten a validation rule and some past events would now fail it, what happens when you replay those events?
GetHook applies the current source configuration when delivering a replayed event. If the event was accepted under a previous, looser schema, it may fail the updated rule on replay. This is intentional — it means your schema accurately reflects what your application is prepared to handle. If you need to replay events that predate a schema change, you either relax the rule temporarily or process those events through a separate pipeline that acknowledges the schema mismatch.
The replay API makes this explicit:
```bash
# Replay a specific event
curl -X POST https://api.gethook.to/v1/events/{event_id}/replay \
  -H "Authorization: Bearer hk_..."
```

You have full control over which events to replay and when. The schema is applied consistently, so the replay outcome is predictable.
When to Keep Validation in Your Application
Gateway validation catches structural problems. It does not replace business-rule validation — and should not try to.
| Validation | Right layer |
|---|---|
| Required field is missing | Gateway (JSON Schema) |
| Field is wrong type | Gateway (JSON Schema) |
| Enum value is not in the allowed set | Gateway (JSON Schema) |
| Timestamp format is invalid | Gateway (JSON Schema) |
| order_id references a real order in your database | Application code |
| customer_id has permission to perform this action | Application code |
| Business rule: total_cents cannot exceed account credit limit | Application code |
| Idempotency: this event has already been processed | Application code |
The gateway enforces the shape and type contract. Your application enforces business invariants. Both layers validate — but they validate different things, and neither should be doing the other's job.
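As an example of a check that must stay in the application layer, here is a sketch of idempotent event handling. The in-memory map is a stand-in for a durable store (in practice, a unique index on the event ID in your database); the names and the one-argument handler are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// processedEvents tracks which event IDs have already been handled.
// A map is illustration only; production code needs durable storage.
type processedEvents struct {
	mu   sync.Mutex
	seen map[string]bool
}

// markProcessed records an event ID and reports whether it was new.
func (p *processedEvents) markProcessed(eventID string) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.seen[eventID] {
		return false // duplicate delivery: already handled
	}
	p.seen[eventID] = true
	return true
}

func handleOrderFulfilled(store *processedEvents, eventID, orderID string) {
	// Structural validity was already enforced at the gateway;
	// the application still owns business invariants like this one.
	if !store.markProcessed(eventID) {
		return // safe no-op when the sender retries or redelivers
	}
	fmt.Printf("fulfilling order %s (event %s)\n", orderID, eventID)
}

func main() {
	store := &processedEvents{seen: map[string]bool{}}
	handleOrderFulfilled(store, "evt_01HZ", "ord_9182")
	handleOrderFulfilled(store, "evt_01HZ", "ord_9182") // redelivery: skipped
}
```

No schema can express "I have seen this event before" — that state lives in your system, which is why this check belongs in your handler rather than at the gateway.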
Structural validation does not belong in your application handlers. It belongs at the earliest possible point in the event's journey — the ingest gateway — where you can reject bad data before it consumes any resources, notify the sender immediately, and keep your business logic clean.
If you want to configure payload schema enforcement on your ingest sources today, get started with GetHook.