webhooks · terraform · infrastructure-as-code · devops · ci-cd

Webhook Infrastructure as Code: Terraform Patterns for Sources, Destinations, and Routes

Webhook config managed through dashboards drifts, diverges across environments, and disappears when engineers leave. Here's how to declare sources, destinations, and routes in Terraform and promote them through environments like any other infrastructure.

Lena Hartmann
Infrastructure Engineer
April 15, 2026
10 min read

Webhook configuration lives in dashboards and gets managed by clicking around. Sources get created in production but forgotten in staging. Signing secrets are stored in someone's Notion doc. New engineers provision destinations by guessing field names. When something breaks, nobody knows who changed what or when.

The same "everything as code" discipline that tamed server infrastructure applies directly to webhook infrastructure. Sources, destinations, and routes are configuration — and configuration belongs in version control, goes through code review, and gets promoted through environments by CI. This post walks through how to get there using Terraform and the REST API, what multi-environment promotion looks like in practice, and what a native provider would eventually enable.


Why Webhook Config Drifts

When webhook infrastructure is managed through a UI or ad-hoc API calls, a predictable set of problems emerges:

  • Environment drift: production has a destination that staging doesn't, so a bug reproduces in only one environment.
  • Secret sprawl: signing secrets live in engineers' heads, password managers, or Slack threads.
  • No audit trail: who created this source? When was the timeout changed? No one knows.
  • Brittle onboarding: new engineers can't reproduce the exact config locally without help from the team.
  • Deployment coupling: webhook config changes require manual dashboard clicks in sync with code deploys.

IaC solves all of these by the same mechanism: the desired state is declared in a file, versioned in git, and applied by CI. A git log on your Terraform directory becomes the audit trail your security team wants.


The Three Resources to Declare

A webhook delivery pipeline has three core config objects:

  • Sources — ingest endpoints, each with a path token, an auth mode, and an optional signature verification preset
  • Destinations — target URLs, each with a signing secret, a timeout, and an auth config
  • Routes — bindings of a source to a destination for an event type pattern

Declaring these in code means you can audit the full routing topology from a single diff. An engineer reviewing a PR can see exactly which new destinations a deploy will register and which event types they'll receive — before anything touches production.


Using the Community REST API Provider

Until a native Terraform provider is available, the mastercard/restapi community provider is the lowest-friction path to IaC webhook config. It wraps any JSON REST API as Terraform resources.

hcl
terraform {
  required_providers {
    restapi = {
      source  = "mastercard/restapi"
      version = "~> 1.19"
    }
  }
}

provider "restapi" {
  uri                  = var.gethook_api_url
  write_returns_object = true
  headers = {
    "Authorization" = "Bearer ${var.gethook_api_key}"
    "Content-Type"  = "application/json"
  }
}
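The provider block references two input variables that also need declarations. A minimal variables.tf to go with it (names carried over from the provider block above; marking the key sensitive keeps it redacted in plan output):

hcl
variable "gethook_api_url" {
  description = "Base URL of the GetHook REST API"
  type        = string
}

variable "gethook_api_key" {
  description = "API key sent in the Authorization header"
  type        = string
  sensitive   = true
}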

With the provider configured, declare a source:

hcl
resource "restapi_object" "order_events_source" {
  path         = "/v1/sources"
  read_path    = "/v1/sources/{id}"
  destroy_path = "/v1/sources/{id}"
  id_attribute = "data/id"

  data = jsonencode({
    name      = "order-events-${var.environment}"
    auth_mode = "hmac"
  })
}

A destination pointing to your fulfillment service:

hcl
resource "restapi_object" "fulfillment_destination" {
  path         = "/v1/destinations"
  read_path    = "/v1/destinations/{id}"
  destroy_path = "/v1/destinations/{id}"
  id_attribute = "data/id"

  data = jsonencode({
    name            = "fulfillment-service-${var.environment}"
    url             = "https://${var.fulfillment_host}/webhooks/orders"
    timeout_seconds = 30
    signing_secret  = var.fulfillment_webhook_secret
  })
}

And the route that wires them together:

hcl
resource "restapi_object" "order_to_fulfillment_route" {
  path         = "/v1/routes"
  read_path    = "/v1/routes/{id}"
  destroy_path = "/v1/routes/{id}"
  id_attribute = "data/id"

  data = jsonencode({
    source_id          = jsondecode(restapi_object.order_events_source.api_response).data.id
    destination_id     = jsondecode(restapi_object.fulfillment_destination.api_response).data.id
    event_type_pattern = "order.*"
  })
}

The event_type_pattern glob order.* routes all order lifecycle events — order.created, order.fulfilled, order.cancelled — to this destination without requiring a route update for each new event type you add later.


Managing Secrets Without Committing Them

The signing_secret in your destination config must never land in version control: not hardcoded in a .tf file, and not via a committed .tfstate. Two patterns cover this well.

Option 1: Pull from Vault at plan time

hcl
data "vault_generic_secret" "fulfillment_webhook_secret" {
  path = "secret/${var.environment}/webhook/fulfillment"
}

resource "restapi_object" "fulfillment_destination" {
  # ...
  data = jsonencode({
    name            = "fulfillment-service-${var.environment}"
    url             = "https://${var.fulfillment_host}/webhooks/orders"
    timeout_seconds = 30
    signing_secret  = data.vault_generic_secret.fulfillment_webhook_secret.data["value"]
  })
}

The secret is resolved from Vault at terraform plan time and sent in the API call. One caveat: values read through a Terraform data source are still persisted in the state file, so this pattern is only safe with an encrypted, access-controlled remote backend and with the field marked sensitive so it is redacted from plan output.

Option 2: Inject via environment variable in CI

bash
export TF_VAR_fulfillment_webhook_secret=$(vault kv get \
  -field=value secret/prod/webhook/fulfillment)

terraform apply

This keeps secrets entirely out of your Terraform source files and is straightforward to wire into any CI system — GitHub Actions, GitLab CI, or CircleCI.


Multi-Environment Promotion

The real payoff of IaC is environment promotion: the same module, parameterized by environment, applied reliably from dev to staging to production.

infrastructure/webhook/
  environments/
    dev/
      main.tf          # instantiates the webhook-pipeline module
      terraform.tfvars # environment = "dev", fulfillment_host = "..."
    staging/
      main.tf
      terraform.tfvars # environment = "staging", fulfillment_host = "..."
    production/
      main.tf
      terraform.tfvars # environment = "production", fulfillment_host = "..."
  modules/
    webhook-pipeline/
      main.tf          # source + destination + route resources
      variables.tf
      outputs.tf

The modules/webhook-pipeline module declares the full topology. Each environment instantiates it with its own variables. Promoting a config change from staging to production is a PR that updates production/terraform.tfvars — the same process you'd use to update any other infrastructure config.
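An environment's main.tf then reduces to a single module call. A sketch of environments/staging/main.tf, assuming the module exposes the variables used earlier in this post:

hcl
module "webhook_pipeline" {
  source = "../../modules/webhook-pipeline"

  # Values come from this environment's terraform.tfvars
  # (secrets arrive via TF_VAR_* in CI).
  environment                = var.environment
  fulfillment_host           = var.fulfillment_host
  gethook_api_url            = var.gethook_api_url
  gethook_api_key            = var.gethook_api_key
  fulfillment_webhook_secret = var.fulfillment_webhook_secret
}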


Exporting the Ingest Token

After apply, you need the source's path token to configure your upstream provider (or your own SDK). Export it as a Terraform output:

hcl
output "source_path_token" {
  description = "The ingest path token for the order events source. Use as the POST /ingest/{token} path."
  value       = jsondecode(restapi_object.order_events_source.api_response).data.path_token
  sensitive   = true
}

output "route_id" {
  description = "The route ID binding the source to the fulfillment destination."
  value       = jsondecode(restapi_object.order_to_fulfillment_route.api_response).data.id
}

Feed source_path_token directly into your provider app's environment config via a downstream aws_ssm_parameter write or GCP Secret Manager secret. This closes the loop: new environment = one terraform apply = fully wired webhook pipeline, with every secret injected programmatically.
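The SSM handoff can be sketched as a resource in the same module (the parameter name here is an assumed convention; adjust it to your own naming scheme):

hcl
resource "aws_ssm_parameter" "source_path_token" {
  name  = "/${var.environment}/gethook/order-events/path-token"
  type  = "SecureString"
  value = jsondecode(restapi_object.order_events_source.api_response).data.path_token
}

Your provider app then reads the parameter at boot or deploy time instead of anyone copy-pasting tokens between dashboards.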


CI/CD Integration

A GitHub Actions workflow that plans on PR and applies on merge:

yaml
name: Webhook Infrastructure

on:
  pull_request:
    paths: ["infrastructure/webhook/**"]
  push:
    branches: [main]
    paths: ["infrastructure/webhook/**"]

jobs:
  plan:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3

      - name: Terraform Plan (staging)
        run: terraform -chdir=infrastructure/webhook/staging plan -out=tfplan
        env:
          TF_VAR_gethook_api_key: ${{ secrets.GETHOOK_STAGING_API_KEY }}
          TF_VAR_fulfillment_webhook_secret: ${{ secrets.FULFILLMENT_SECRET_STAGING }}

  apply:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3

      - name: Terraform Apply (production)
        run: terraform -chdir=infrastructure/webhook/production apply -auto-approve
        env:
          TF_VAR_gethook_api_key: ${{ secrets.GETHOOK_PROD_API_KEY }}
          TF_VAR_fulfillment_webhook_secret: ${{ secrets.FULFILLMENT_SECRET_PROD }}

The plan job runs on every PR touching webhook config. Post the plan output as a PR comment (for example, terraform show -no-color tfplan written to a file and passed to gh pr comment --body-file) so reviewers see exactly what will change before they approve. The apply job runs on merge to main and promotes the reviewed config to production without manual steps.

One operational note: if terraform apply fails mid-run (network error, API rate limit), the state may be partially updated. Always store Terraform state in a remote backend (S3 + DynamoDB for state locking, or Terraform Cloud) so state is never split across engineers' machines.
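A minimal backend block for the S3 + DynamoDB setup described above (bucket, key, and table names are placeholders):

hcl
terraform {
  backend "s3" {
    bucket         = "acme-terraform-state"
    key            = "webhook/production/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # enables state locking across engineers and CI
    encrypt        = true              # server-side encryption for secrets held in state
  }
}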


What a Native Provider Would Look Like

The community REST API approach works, but the jsonencode/jsondecode round-tripping is verbose. A native gethook provider would expose idiomatic Terraform resources with proper type constraints and plan-time validation:

hcl
resource "gethook_source" "order_events" {
  name      = "order-events-${var.environment}"
  auth_mode = "hmac"
}

resource "gethook_destination" "fulfillment" {
  name            = "fulfillment-service"
  url             = "https://${var.fulfillment_host}/webhooks/orders"
  timeout_seconds = 30
  signing_secret  = var.fulfillment_webhook_secret
}

resource "gethook_route" "order_to_fulfillment" {
  source_id          = gethook_source.order_events.id
  destination_id     = gethook_destination.fulfillment.id
  event_type_pattern = "order.*"
}

Three resource blocks, one terraform apply, fully wired pipeline. Plan output would show the actual event_type_pattern being added or changed, not a raw JSON diff. Import support (terraform import gethook_destination.fulfillment dst_abc123) would let you bring existing dashboard-created resources under code control without recreating them.

This is the target: the same experience engineers expect from aws_sqs_queue or google_pubsub_topic, applied to webhook routing topology.


Treating webhook configuration as infrastructure eliminates an entire class of production incidents: the misconfigured destination nobody remembers changing, the staging environment that never got the new route, the signing secret that only exists in one engineer's 1Password vault. Version control gives you the audit trail. CI gives you the guardrails. Environment modules give you the reproducibility.

If you're building on GetHook and want to start managing your webhook infrastructure as code today, every resource in the dashboard is available through the same REST API — ready to be Terraformed. Get started here.

Stop losing webhook events.

GetHook gives you reliable delivery, automatic retry, and full observability — in minutes.