security · encryption · webhooks · compliance · architecture

Webhook Payload Encryption: End-to-End Confidentiality Beyond HTTPS

TLS protects webhooks in transit, but it doesn't protect them at rest inside a gateway, in logs, or at your destination. Here's how to add payload-level encryption when HTTPS alone isn't enough.

Nadia Kowalski
Security Engineer
March 30, 2026
9 min read

HTTPS is table stakes for webhook delivery. It encrypts the payload in transit and authenticates the server. For most applications, that's sufficient.

But "in transit" is not the same as "end-to-end." Your webhook payload travels through several hops before it reaches its final destination: it enters your gateway, gets persisted to a database, may be logged to an observability platform, queued for retry, and finally forwarded to the destination. At each of those points, the payload is at rest — and TLS protects none of them.

This post explains when payload-level encryption is worth the operational overhead, what the implementation looks like, and what it doesn't protect.


When TLS Alone Is Insufficient

For most teams, TLS is enough. If your webhook payloads contain order confirmations or GitHub push notifications, protecting them in transit is the right tradeoff.

Payload-level encryption becomes worth considering when:

  • The payload contains regulated data. PII, PHI (HIPAA), or financial account details that you're contractually or legally required to protect at rest.
  • Your gateway is a shared multi-tenant system. If multiple teams or customers share an infrastructure layer, a misconfigured access control could expose one tenant's payloads to another. Encryption at rest limits blast radius.
  • You operate in a zero-trust environment. If your threat model includes insider threat or compromised internal systems, encryption at rest means a database dump doesn't expose payload contents.
  • Your logs are forwarded to a third-party SIEM. Webhook payloads often end up in Datadog, Splunk, or Elastic. If those logs contain sensitive fields, you need to either redact before logging or encrypt before storing.
| Threat                                         | TLS Protects | Payload Encryption Protects       |
|------------------------------------------------|--------------|-----------------------------------|
| Network eavesdropping                          | Yes          | Yes                               |
| Compromised gateway database                   | No           | Yes                               |
| Log aggregation exposure                       | No           | Yes (if encrypted before logging) |
| Compromised internal service (not destination) | No           | Yes                               |
| Compromised destination server                 | No           | No                                |
| Side-channel key extraction at destination     | No           | No                                |

The Two Models: Symmetric vs. Asymmetric

There are two practical approaches to payload-level encryption for webhooks.

Symmetric encryption (AES-256-GCM)

A shared secret is used to both encrypt and decrypt. The gateway encrypts before storing; the destination decrypts on receipt.

This is simpler to implement and faster to compute, but it requires the destination to have access to the encryption key — which means you need a secure channel to provision that key, and you need to rotate it without downtime.

go
import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "encoding/base64"
    "fmt"
    "io"
)

func encryptPayload(key []byte, plaintext []byte) (string, error) {
    block, err := aes.NewCipher(key)
    if err != nil {
        return "", err
    }

    gcm, err := cipher.NewGCM(block)
    if err != nil {
        return "", err
    }

    nonce := make([]byte, gcm.NonceSize())
    if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
        return "", err
    }

    // Nonce is prepended to the ciphertext so the receiver can extract it
    ciphertext := gcm.Seal(nonce, nonce, plaintext, nil)
    return base64.StdEncoding.EncodeToString(ciphertext), nil
}

func decryptPayload(key []byte, encoded string) ([]byte, error) {
    data, err := base64.StdEncoding.DecodeString(encoded)
    if err != nil {
        return nil, err
    }

    block, err := aes.NewCipher(key)
    if err != nil {
        return nil, err
    }

    gcm, err := cipher.NewGCM(block)
    if err != nil {
        return nil, err
    }

    nonceSize := gcm.NonceSize()
    if len(data) < nonceSize {
        return nil, fmt.Errorf("ciphertext too short")
    }

    nonce, ciphertext := data[:nonceSize], data[nonceSize:]
    return gcm.Open(nil, nonce, ciphertext, nil)
}

The HMAC signature you already compute (the t=<unix>,v1=<hex> header) covers the ciphertext, not the plaintext. This is correct: the receiver needs to verify the signature before decrypting, and the signature should match what was actually transmitted.

Asymmetric encryption (RSA-OAEP or ECIES)

The destination publishes a public key. The gateway encrypts payloads with that public key. Only the destination — which holds the private key — can decrypt.

This is the model TLS itself uses for key exchange, applied end to end rather than hop by hop. It has a key advantage: once the gateway has encrypted a payload, nothing on the gateway side can read it again. Even a compromised gateway database contains only ciphertext.

go
import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/sha256"
)

func encryptWithPublicKey(pubKey *rsa.PublicKey, payload []byte) ([]byte, error) {
    return rsa.EncryptOAEP(sha256.New(), rand.Reader, pubKey, payload, nil)
}

For large payloads, pure asymmetric encryption is impractical: RSA-OAEP with a 2048-bit key and SHA-256 can encrypt at most 190 bytes. The standard pattern is hybrid encryption: generate a random 256-bit symmetric key, encrypt the payload with AES-256-GCM, then encrypt the symmetric key with the destination's public key. Ship both together.

json
{
  "id": "evt_01HX9P3...",
  "type": "payment.completed",
  "encrypted": true,
  "enc_key": "<base64-encoded RSA-encrypted AES key>",
  "enc_payload": "<base64-encoded AES-GCM ciphertext>"
}

The destination first decrypts enc_key with its RSA private key to recover the AES key, then uses that to decrypt enc_payload.


What to Encrypt: Envelope Encryption

You rarely want to encrypt the entire payload. The event type, timestamp, and event ID are useful for routing, logging, and deduplication — they should remain in plaintext. The sensitive content lives in the data object.

The pattern is commonly called envelope encryption (the same term also describes the key-wrapping scheme in hybrid encryption above; here it refers to the message shape, a plaintext envelope around encrypted contents):

json
{
  "id": "evt_01HX9P3...",
  "type": "patient.record.updated",
  "created_at": "2026-03-30T09:14:00Z",
  "encryption": {
    "algorithm": "AES-256-GCM",
    "key_id": "dest-key-2026-03",
    "ciphertext": "<base64>"
  }
}

The outer envelope stays plaintext. The data field is replaced by an encryption object containing the ciphertext and a key identifier. The destination knows which key to use based on key_id, enabling key rotation without re-encrypting historical events.


Key Management

Encryption is only as strong as its key management. The common mistakes:

Storing the encryption key alongside the encrypted data. If both live in the same database, an attacker who gets the database gets everything. The key must live separately — a secrets manager (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager) or a hardware security module (HSM).

Using one key for all destinations. A single encryption key is a single point of failure. Use per-destination keys. If a destination's key is compromised, you rotate that key and re-encrypt only that destination's pending events — not everyone's.

Never rotating keys. Rotation limits the window of exposure for a compromised key. For symmetric keys shared with destinations, rotate quarterly and use a key ID in the payload so you can run both keys in parallel during the rotation window. This is the same pattern described in zero-downtime secret rotation — the key ID tells the destination which key to use for decryption without needing to try both.

sql
-- Track encryption keys per destination
CREATE TABLE destination_enc_keys (
    id          TEXT PRIMARY KEY,       -- e.g., "dest-key-2026-03"
    dest_id     UUID NOT NULL REFERENCES destinations(id),
    enc_key     BYTEA NOT NULL,         -- encrypted by your KMS
    active      BOOLEAN NOT NULL DEFAULT TRUE,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    retired_at  TIMESTAMPTZ
);

CREATE INDEX ON destination_enc_keys (dest_id) WHERE active = TRUE;

When rotating: insert the new key row, update the active flag, start encrypting new events with the new key. Leave the old key row in place (with active = FALSE) until you've confirmed no in-flight events are waiting on it.
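Concretely, a rotation under this schema might look like the following sketch. The :dest_id and :kms_wrapped_key parameters are placeholders for your driver's bind syntax; the key ID here is illustrative.

sql
-- Retire the old key and activate a new one atomically.
BEGIN;

UPDATE destination_enc_keys
   SET active = FALSE,
       retired_at = NOW()
 WHERE dest_id = :dest_id
   AND active = TRUE;

INSERT INTO destination_enc_keys (id, dest_id, enc_key)
VALUES ('dest-key-2026-06', :dest_id, :kms_wrapped_key);

COMMIT;

-- Retired rows remain readable by key_id for in-flight events; prune them
-- only after your retention window has passed, e.g.:
-- DELETE FROM destination_enc_keys
--  WHERE active = FALSE AND retired_at < NOW() - INTERVAL '30 days';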


Integration with Signature Verification

Payload encryption and HMAC signing serve different security goals:

  • HMAC answers: "Did this payload come from the gateway and was it unmodified?"
  • Encryption answers: "Can anyone other than the intended destination read the payload?"

Both should be present when you're encrypting payloads. The signature should cover the ciphertext (what was transmitted), not the plaintext (what was encrypted). This means your destination's verification flow is:

  1. Extract the t=<unix>,v1=<hex> signature from the header
  2. Verify the signature against the raw request body (which contains the encrypted payload)
  3. If signature is valid, decrypt the enc_payload field
  4. Process the decrypted event

If you sign the plaintext instead of the ciphertext, the destination has to decrypt before it can verify, which means running decryption and parsing on input that nobody has authenticated yet. That inverts the safe order of operations (authenticate first, then decrypt) and gives an adversary a free shot at your crypto and parsing code with arbitrary ciphertext. Always sign the wire format.


Logging and Observability Tradeoffs

Payload encryption has an uncomfortable side effect: it breaks payload-level observability. You can no longer search your logs for customer_id = "cus_123" if that field is encrypted.

Your options:

| Approach                                        | Observability              | Security                               |
|-------------------------------------------------|----------------------------|----------------------------------------|
| Log plaintext                                   | Full                       | Weak — logs are a liability            |
| Log nothing                                     | None                       | Strong — but debugging is painful      |
| Log event ID + type only                        | Moderate                   | Good — no payload in logs              |
| Log encrypted payload                           | Full (if you have the key) | Strong — logs contain only ciphertext  |
| Log selected plaintext fields from the envelope | Partial                    | Good — sensitive fields stay encrypted |

The pragmatic choice for most regulated environments is to log event ID, type, timestamp, and delivery outcome — nothing from the data object. If you need to debug a specific event, use the event ID to retrieve and decrypt the stored ciphertext on demand, with that lookup logged in an audit trail.

GetHook stores signing secrets and destination credentials with AES-256-GCM encryption at rest, so even a direct database read returns ciphertext for those fields. Extending the same pattern to event payloads is a configuration decision for teams that need it.


When Not to Bother

Payload encryption adds operational complexity: key provisioning, rotation procedures, hybrid encryption logic on both sides, and broken log-based debugging. For most webhook use cases, it's not the right investment.

Skip payload encryption if:

  • Your payloads contain no regulated or sensitive data
  • Your gateway runs in a private VPC with no public database access
  • Your destination is a first-party service you own and operate
  • Your compliance requirements are satisfied by encryption at rest on the database level (e.g., AWS RDS encryption with KMS-managed keys)

Adopt payload encryption if:

  • You're sending PHI or financial account details to third-party endpoints
  • You're building a multi-tenant platform where payloads from different customers must not be readable by the same infrastructure operator
  • Your security model explicitly requires end-to-end confidentiality (not just in-transit confidentiality)

The right security control is the one that addresses your actual threat model — not the most sophisticated one available. If TLS plus database-level encryption at rest covers your threats, ship that and move on. If you need end-to-end confidentiality, the patterns above give you a production-ready starting point.

Review GetHook's security architecture and get started →
