Delivery attempt logs are not audit logs. They tell you what your system tried to do — when a webhook was sent, whether the destination returned a 200, how many retries it took. That's useful for debugging. It's not sufficient for compliance.
An audit log answers a different question: who did what, to what, and when — and can you prove it hasn't been tampered with? When a SOC 2 auditor asks for evidence that access to webhook secrets was controlled, or when a PCI DSS assessor asks which users modified payment notification destinations in the last 90 days, your delivery logs won't help. You need a purpose-built audit trail.
This post covers what belongs in a webhook audit log, how to make it tamper-evident, and how to query it in ways that satisfy real compliance requirements.
## What Belongs in an Audit Log (and What Doesn't)
The instinct is to log everything. That creates a different problem: an audit log that's 90% noise is nearly useless under time pressure during an incident or audit. Focus on events that represent a change in authorization, configuration, or data access.
Log these:
| Event category | Examples |
|---|---|
| Credential operations | API key created, API key revoked, signing secret rotated |
| Destination changes | Destination URL updated, auth config changed, destination deleted |
| Route changes | Route created, event type filter modified, route deleted |
| Replay operations | Event replayed, bulk replay initiated |
| Access events | Delivery attempt viewed (for sensitive event types), raw payload accessed |
| Admin actions | Account settings changed, team member added/removed, custom domain configured |
Don't log these in your audit log:
- Individual webhook delivery attempts (those belong in delivery logs)
- Health check pings
- Read operations that don't access sensitive payload content
The distinction matters because audit logs are subject to retention policies and often end up in compliance tools like Splunk, Datadog, or a SIEM. Flooding them with delivery attempt noise increases cost and makes the meaningful events harder to find.
## The Audit Record Schema
Every audit record needs five components:
- Who — the authenticated principal that took the action
- What — the action taken, in a canonical form
- Which resource — the entity affected, by type and ID
- When — timestamp with millisecond precision, in UTC
- Context — IP address, user agent, request ID for cross-referencing
```go
type AuditEvent struct {
	ID           string    `json:"id"` // audit_01HX9P3...
	AccountID    string    `json:"account_id"`
	ActorID      string    `json:"actor_id"`     // api key ID or user ID
	ActorType    string    `json:"actor_type"`   // "api_key" | "user" | "system"
	ActorPrefix  string    `json:"actor_prefix"` // "hk_live_ab..." — never the full key
	Action       string    `json:"action"`        // "destination.updated"
	ResourceType string    `json:"resource_type"` // "destination"
	ResourceID   string    `json:"resource_id"`
	Changes      []Change  `json:"changes,omitempty"`
	IPAddress    string    `json:"ip_address"`
	UserAgent    string    `json:"user_agent"`
	RequestID    string    `json:"request_id"`
	OccurredAt   time.Time `json:"occurred_at"`
}

type Change struct {
	Field    string `json:"field"`
	OldValue string `json:"old_value"` // redacted for secrets
	NewValue string `json:"new_value"` // redacted for secrets
}
```

The `Changes` field is particularly valuable for destination updates. When you know that `url` changed from `https://old.example.com/hook` to `https://new.example.com/hook` at a specific timestamp, you can correlate the change against a delivery failure spike. Without it, you only know that something changed.
Secret fields must be redacted in changes. Never record old or new values for signing secrets, API keys, or auth tokens. Record the action (secret rotated) and the metadata (key prefix, rotation timestamp) but not the values themselves.
## Making the Log Tamper-Evident
A log that can be quietly edited after the fact isn't an audit log — it's a story. For compliance purposes, you need to be able to demonstrate that records haven't been modified. There are two approaches, depending on your threat model.
### Hash Chaining
Each audit record includes a hash of the previous record. Any modification to a historical record invalidates all subsequent hashes, making tampering detectable.
```go
func computeRecordHash(prev string, record AuditEvent) (string, error) {
	// Serialize deterministically — field order matters
	b, err := json.Marshal(record)
	if err != nil {
		return "", err
	}
	h := sha256.New()
	// Chain the previous hash into this record's hash
	h.Write([]byte(prev))
	h.Write(b)
	return hex.EncodeToString(h.Sum(nil)), nil
}
```

Store the computed hash in the `chain_hash` column of each audit record. Verification is a sequential scan:
```sql
SELECT
  id,
  occurred_at,
  action,
  chain_hash,
  LAG(chain_hash) OVER (ORDER BY occurred_at, id) AS prev_hash
FROM audit_events
WHERE account_id = $1
ORDER BY occurred_at, id;
```

Re-compute the expected hash for each row using `prev_hash` and the record content. If any row's stored `chain_hash` doesn't match the recomputed value, the chain is broken — and you know exactly which record was tampered with.
Hash chaining works when you control the storage layer and your threat model is an attacker who gains write access to your database. It doesn't protect against a superuser who can rewrite both the record and its hash.
### Append-Only Storage with Periodic Notarization
For stronger guarantees, use an append-only store and periodically notarize the chain head with an external, timestamped service. Options include:
- AWS QLDB — purpose-built immutable ledger database with automatically managed SHA-256 hash chains (note that AWS has announced QLDB's end of support, so verify availability before building on it)
- Certificate Transparency-style logs — submit hourly hash roots to a public log
- S3 Object Lock — WORM storage for batched audit log exports
- Blockchain timestamping — submit hash roots to a public blockchain (useful for demonstrating independence from your own infrastructure)
For most companies at Series A and below, hash chaining in your primary database plus quarterly exports to WORM S3 storage satisfies SOC 2 Type II audit requirements. Reserve the more complex approaches for PCI Level 1 or FedRAMP environments.
## Schema and Indexes for Compliance Queries
Auditors ask questions like:
- Show me every change to payment webhook destinations in the last 90 days.
- Who had access to the signing secrets for the Stripe inbound source?
- Were any webhooks replayed after the incident window (2026-03-15T14:00Z to 2026-03-15T18:00Z)?
Your schema needs to answer these efficiently. The minimum viable index set:
```sql
CREATE TABLE audit_events (
    id            TEXT PRIMARY KEY,
    account_id    TEXT NOT NULL,
    actor_id      TEXT NOT NULL,
    actor_type    TEXT NOT NULL,
    actor_prefix  TEXT,
    action        TEXT NOT NULL,
    resource_type TEXT NOT NULL,
    resource_id   TEXT NOT NULL,
    changes       JSONB,
    ip_address    INET,
    user_agent    TEXT,
    request_id    TEXT,
    chain_hash    TEXT NOT NULL,
    occurred_at   TIMESTAMPTZ NOT NULL
);

-- Query by account and time range (most common compliance query)
CREATE INDEX idx_audit_account_time
    ON audit_events (account_id, occurred_at DESC);

-- Query by resource (e.g., "all changes to destination X")
CREATE INDEX idx_audit_resource
    ON audit_events (account_id, resource_type, resource_id);

-- Query by action category (e.g., "all replay operations")
CREATE INDEX idx_audit_action
    ON audit_events (account_id, action, occurred_at DESC);
```

Keep audit records in a separate table from operational data. This makes it easier to apply different retention policies, export to a SIEM without including delivery attempt noise, and restrict who can query audit records independently of who can query events.
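With those indexes in place, the auditor questions above map to straightforward queries. A sketch of the incident-window replay question against the schema defined above (the `action` values are illustrative; use whatever canonical action names your system records):

```sql
-- Were any webhooks replayed during the incident window?
SELECT id, actor_id, resource_id, occurred_at
FROM audit_events
WHERE account_id = $1
  AND action IN ('event.replayed', 'event.bulk_replay_initiated')
  AND occurred_at BETWEEN '2026-03-15T14:00Z' AND '2026-03-15T18:00Z'
ORDER BY occurred_at;
```

Because the query filters on `(account_id, action, occurred_at)`, it is served by `idx_audit_action` without a full table scan.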
## Retention Policies
Audit log retention is a compliance requirement, not a storage optimization problem. Get the minimums wrong and you fail audits. Get them too long and you create unnecessary data liability.
| Framework | Minimum retention | Notes |
|---|---|---|
| SOC 2 Type II | 12 months | Auditor typically reviews 6–12 months of evidence |
| PCI DSS 4.0 | 12 months (3 months immediately accessible) | Requirement 10.5.1 |
| GDPR | Retain only as long as necessary | Audit logs of data access to PII may themselves contain PII |
| HIPAA | 6 years | Business associate agreements may specify longer |
| ISO 27001 | Defined by your ISMS policy | Usually 12–36 months |
GDPR creates a specific tension for audit logs: the log records who accessed what, and "who" might include personal data (name, email, IP address). You have two options:
1. Pseudonymize actor identifiers in the audit log — store the actor's internal ID and resolve it to a name at query time, so deleting a user's account doesn't require scrubbing the audit log.
2. Treat audit log IPs as personal data and apply a 90-day retention specifically to the `ip_address` column, while retaining the rest of the record for 12 months.
Option 1 is cleaner. GetHook stores actor IDs (not names or emails) in audit records, with resolution to human-readable labels happening at query time via a separate identity lookup.
## Exposing Audit Logs to Your Customers
If you're building a platform where customers configure webhook destinations and routes, they have a legitimate interest in their own audit trail. A financial services customer may be required by their own auditors to produce evidence that only authorized personnel modified their webhook configuration.
Expose audit logs via an API with tight scoping:
```http
# Get audit events for a specific resource
GET /v1/audit-events?resource_type=destination&resource_id=dest_01HX&limit=50

# Get audit events within a time range
GET /v1/audit-events?from=2026-03-01T00:00:00Z&to=2026-04-01T00:00:00Z

# Get all credential operations
GET /v1/audit-events?action_prefix=api_key
```

Key design decisions for the customer-facing audit API:
- Scoped to their account — the auth middleware ensures they only see their own audit events. This is non-negotiable.
- Read-only, always — there is no endpoint to delete or modify audit events. If a customer asks to delete specific audit records, the answer is no — you can delete their account and all associated data, but individual audit records can't be surgically removed without breaking the chain.
- Paginated with cursor-based pagination — audit logs can grow large quickly, and offset-based pagination becomes expensive at depth. Use a cursor on `(occurred_at, id)`.
- Exportable as NDJSON — compliance teams want to import into their own tooling. Support `Accept: application/x-ndjson` for streaming export.
## A Note on "We Log Everything"
The phrase "we log everything" is a red flag in a security review. Logging everything without a defined schema means you log things you shouldn't (raw secrets, PII that should have been redacted) and miss things you should (who rotated a secret last Tuesday).
Start with the event categories in the table above. Add new event types deliberately, with explicit decisions about what fields to include and what to redact. Review your audit log schema the same way you'd review any public API surface — because for your compliance auditors, it is one.
If you're building webhook infrastructure that needs to pass a SOC 2 audit or satisfy PCI DSS requirements, start with GetHook — audit logging with hash chaining, scoped customer-facing audit APIs, and WORM-compatible export are part of the platform rather than a feature you have to build yourself.