When you need to notify clients about events that happen asynchronously, you have four primary options: polling REST endpoints, webhooks, Server-Sent Events (SSE), and WebSockets. Each has a distinct profile of trade-offs around infrastructure cost, client complexity, reliability, and use case fit.
This post is a practical decision framework. Not "which is best" — that question is meaningless without context. Instead: which pattern fits which situation, and what are the failure modes of getting it wrong.
The Four Patterns at a Glance
REST Polling: The client periodically calls a GET endpoint to check for updates. Simple to implement on both sides; wasteful at scale.
Webhooks: The server pushes events to a client-provided URL via HTTP POST when something happens. The server delivers; the client receives.
Server-Sent Events (SSE): The client opens a long-lived HTTP connection; the server streams events down as they occur. Unidirectional (server to client), carried over a plain HTTP response — no protocol upgrade required.
WebSockets: A full-duplex TCP connection upgraded from HTTP. Both sides can send messages at any time.
| Dimension | REST Polling | Webhooks | SSE | WebSockets |
|---|---|---|---|---|
| Direction | Pull | Push (server → client URL) | Push (server → browser) | Bidirectional |
| Latency | High (poll interval) | Low (event-driven) | Low (event-driven) | Very low |
| Connection | Stateless | Stateless | Persistent | Persistent |
| Infrastructure cost | Low–High (volume-dependent) | Low | Medium | High |
| Client complexity | Low | Medium (requires public URL) | Low (native EventSource API) | Medium |
| Server complexity | Low | Medium | Medium | High |
| Works in browser | Yes | No | Yes | Yes |
| Survives NAT/proxy | Yes | Yes (delivery is outbound HTTP; the receiver still needs a public URL) | Usually | Sometimes (requires WSS) |
REST Polling: When It's the Right Answer
Polling gets a bad reputation it doesn't always deserve. For operations where results arrive within seconds and traffic is light, polling is often the simplest and most reliable option.
Good fit:
- Short-running async jobs (image processing, report generation) where the client actively waits for a result
- Internal services behind a firewall where a public inbound URL isn't available
- Mobile clients with unpredictable network conditions
- Any scenario where you need a result "soon" and can tolerate a few seconds of delay
The implementation pattern that matters most: exponential backoff with a ceiling.
```go
func pollJobResult(ctx context.Context, client *http.Client, jobID string) (*JobResult, error) {
    backoff := 1 * time.Second
    maxBackoff := 30 * time.Second
    for {
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case <-time.After(backoff):
        }
        result, err := fetchJobResult(ctx, client, jobID)
        if err != nil {
            return nil, err
        }
        if result.Status == "complete" || result.Status == "failed" {
            return result, nil
        }
        // min is the built-in generic min (Go 1.21+).
        backoff = min(backoff*2, maxBackoff)
    }
}
```

The failure mode of using polling incorrectly: you poll on a 1-second interval for an event that might take 30 minutes. At 1,000 concurrent clients, that's 1,000 requests/second of pure overhead, all returning "not ready yet." If you're polling at intervals longer than 60 seconds — or for events with no predictable completion time — a push mechanism is almost certainly a better fit.
Webhooks: The Default for Server-to-Server
Webhooks are the right default for asynchronous server-to-server communication. When your server needs to notify another system that something happened — and you don't control that system — webhooks are what you reach for.
Good fit:
- Third-party integrations (payment processors, CRMs, communication platforms)
- Customer-facing event delivery where you're the platform and they're the subscriber
- Microservice communication when guaranteed delivery matters and latency above 100ms is acceptable
- Audit and compliance use cases where every event must be persisted and replayable
What webhooks require that people underestimate:
- The receiver must be publicly accessible. No webhooks to localhost. This is why local webhook development is painful without a tunnel tool.
- The sender must retry. If delivery fails, the sender is responsible for retrying. This requires a durable queue, not a synchronous HTTP call in your request handler.
- The receiver must be idempotent. Retries mean the same event may arrive twice. Your handler must process duplicates without side effects.
- Signature verification is non-negotiable. Anyone can POST to your webhook endpoint. Verify the HMAC signature before trusting the payload.
```bash
# Verify a Stripe-compatible webhook signature (for debugging in bash).
# Requires GNU grep for -P (Perl lookbehind).
TIMESTAMP=$(echo "$HEADER" | grep -oP '(?<=t=)[^,]+')
SIG=$(echo "$HEADER" | grep -oP '(?<=v1=)[^,]+')
COMPUTED=$(printf '%s.%s' "$TIMESTAMP" "$BODY" \
  | openssl dgst -sha256 -hmac "$SECRET" -hex | cut -d' ' -f2)
[ "$SIG" = "$COMPUTED" ] && echo "Valid" || echo "Invalid"
```

The failure mode of using webhooks incorrectly: calling an external HTTP endpoint synchronously in your order creation flow. If that endpoint is slow or down, your checkout breaks. Webhooks are inherently async — dispatch from a queue, never block request handling on delivery.
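On the receiver side, idempotency usually comes down to a dedupe check keyed on the event ID. A minimal in-memory sketch — the `Deduper` type is illustrative, and a real receiver would persist seen IDs in a database with a TTL rather than a map:

```go
package main

import "sync"

// Deduper remembers event IDs it has already processed so webhook
// retries don't trigger duplicate side effects. The in-memory map is
// a sketch; production receivers store IDs durably, with a TTL.
type Deduper struct {
    mu   sync.Mutex
    seen map[string]bool
}

func NewDeduper() *Deduper {
    return &Deduper{seen: make(map[string]bool)}
}

// MarkProcessed returns true the first time an event ID is seen and
// false on any redelivery, so the caller can skip side effects.
func (d *Deduper) MarkProcessed(eventID string) bool {
    d.mu.Lock()
    defer d.mu.Unlock()
    if d.seen[eventID] {
        return false // duplicate delivery; already handled
    }
    d.seen[eventID] = true
    return true
}
```

The handler then becomes: verify the signature, call `MarkProcessed`, and only run business logic when it returns true — acknowledging the duplicate with a 200 either way so the sender stops retrying.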
If you're building the sender side (pushing events to your customers' endpoints), GetHook handles the delivery queue, retry logic, and HMAC signing so you don't have to build that layer yourself.
Server-Sent Events: Browser Real-Time Without WebSocket Complexity
SSE gives you server-push over a plain HTTP connection. The browser's native EventSource API handles reconnection automatically. You get real-time delivery to browser clients without the infrastructure overhead of WebSockets.
Good fit:
- Live dashboards where updates flow server → client only (activity feeds, job progress, delivery status)
- Notification panels and toast alerts
- Any browser use case where the client doesn't need to send messages back
SSE in Go — no framework needed:
```go
func (h *StreamHandler) Events(w http.ResponseWriter, r *http.Request) {
    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "streaming unsupported", http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")
    w.Header().Set("Connection", "keep-alive")
    for {
        select {
        case <-r.Context().Done():
            return
        case event := <-h.eventCh:
            fmt.Fprintf(w, "data: %s\n\n", event)
            flusher.Flush()
        }
    }
}
```

Browser-side consumption is two lines:
```javascript
const source = new EventSource('/events');
source.onmessage = (e) => console.log(JSON.parse(e.data));
```

The failure modes of using SSE incorrectly:
Trying to use it bidirectionally. If you need the client to send messages back, you'll end up implementing a polling hack alongside SSE — combining the worst of both patterns. Reach for WebSockets instead.
Ignoring proxy timeouts. Many load balancers and reverse proxies close idle connections after 60–90 seconds. Send a comment heartbeat (": keepalive\n\n") every 30 seconds to keep the connection alive. The EventSource API will reconnect automatically when a connection drops, but reconnection leaves a brief gap; attach an `id:` field to each event so the browser's automatic `Last-Event-ID` request header lets your server replay whatever was missed.
WebSockets: The Right Tool for True Real-Time
WebSockets provide a persistent, full-duplex channel over a single TCP connection. They're the highest-complexity option and the only right choice when you need low-latency bidirectional communication.
Good fit:
- Collaborative editing (shared documents, multiplayer whiteboards)
- Live chat and messaging applications
- Multiplayer games and shared state
- Trading platforms with real-time order books
- Any scenario where sub-100ms client-to-server messages are required
WebSockets are not free. Each open connection consumes a file descriptor on the server. At 100K concurrent users, that's 100K file descriptors — this requires dedicated connection infrastructure, not a standard web server configured for request/response.
Horizontal scaling is also non-trivial. A message arriving at server A can't be delivered to a client connected to server B without a shared pub/sub layer. The typical architecture looks like this:
```
Client A ──► Server 1 ─┐
                       ├──► Redis Pub/Sub ──► Server 2 ──► Client B
Client C ──► Server 3 ─┘
```

The failure mode of using WebSockets incorrectly: using a WebSocket for a notifications panel that only needs to receive updates from the server. You've paid full WebSocket infrastructure cost for a use case SSE handles with zero server state. The browser-side EventSource API reconnects automatically; the WebSocket API doesn't — you have to implement reconnection, exponential backoff, and event replay yourself.
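The pub/sub layer in the diagram above reduces to a fan-out hub: every message published on one server is delivered to every subscribed connection. An in-process sketch of that role — a production deployment would back it with Redis or NATS so multiple servers share it:

```go
package main

import "sync"

// Hub fans each published message out to every subscriber. This is the
// role Redis Pub/Sub plays between WebSocket servers; here it is a
// single-process sketch.
type Hub struct {
    mu   sync.Mutex
    subs map[chan string]bool
}

func NewHub() *Hub {
    return &Hub{subs: make(map[chan string]bool)}
}

// Subscribe registers a buffered channel standing in for one connected
// client's outbound WebSocket queue.
func (h *Hub) Subscribe() chan string {
    ch := make(chan string, 16)
    h.mu.Lock()
    h.subs[ch] = true
    h.mu.Unlock()
    return ch
}

// Publish delivers msg to all subscribers, dropping it for any client
// whose buffer is full rather than letting one slow reader stall the rest.
func (h *Hub) Publish(msg string) {
    h.mu.Lock()
    defer h.mu.Unlock()
    for ch := range h.subs {
        select {
        case ch <- msg:
        default: // slow client: drop instead of blocking the publisher
        }
    }
}
```

The drop-on-full-buffer choice is deliberate: in a distributed setup, backpressure from one client must never block the publish path shared by everyone else.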
Decision Framework
Work through these questions in order:
1. Does the client have a public inbound URL?
   - No → webhooks are off the table. Choose SSE, WebSockets, or polling.
2. Does the client need to send messages to the server in real time?
   - Yes, with sub-100ms latency → WebSockets.
   - Yes, but occasionally → polling (client sends a POST, then polls or waits).
3. Is the client a browser?
   - Yes, and it only needs server-push → SSE.
   - Yes, and it needs bidirectional → WebSockets.
   - No (server-to-server) → webhooks are the default.
4. Is event volume low and latency tolerance above 10 seconds?
   - Yes → polling is often the simplest, most reliable choice.
   - No → a push mechanism is required.
5. Do you need durable delivery with at-least-once semantics?
   - Yes → webhooks with a queue-backed sender. SSE and WebSockets don't persist events across disconnections without additional infrastructure.
   - No → SSE or WebSockets are fine.
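For illustration, the questions above collapse into a small lookup. The function and its boolean inputs are a sketch of the decision order, not a real API:

```go
package main

// Recommend walks the decision questions in order: bidirectional
// real-time forces WebSockets, browsers that only receive get SSE,
// server-to-server with a public URL gets webhooks, and polling is
// the fallback when nothing else applies.
func Recommend(clientHasPublicURL, needsRealtimeBidirectional, isBrowser, lowVolumeHighLatencyOK bool) string {
    switch {
    case needsRealtimeBidirectional:
        return "WebSockets"
    case isBrowser:
        return "SSE"
    case clientHasPublicURL:
        return "Webhooks"
    case lowVolumeHighLatencyOK:
        return "REST polling"
    default:
        return "REST polling"
    }
}
```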
| Scenario | Recommended Pattern |
|---|---|
| Stripe notifying your backend about a payment | Webhooks |
| Your SaaS pushing events to customers' endpoints | Webhooks |
| Dashboard showing live job progress | SSE |
| Browser-based chat application | WebSockets |
| Client polling a background job result | REST polling with backoff |
| Internal service-to-service, no public URL | REST polling or internal queue |
| Multiplayer collaborative editor | WebSockets |
A Note on Mixing Patterns
Real systems often use more than one pattern. A common combination:
- Webhooks for durable, server-to-server event delivery where reliability guarantees matter
- SSE for browser clients that need to see real-time status without full WebSocket infrastructure
- Polling as a fallback for environments where persistent connections are blocked (certain enterprise proxies kill long-lived HTTP connections)
The patterns aren't mutually exclusive — they solve different layers of the same problem.
Picking the wrong async pattern doesn't usually cause an immediate failure. It causes a slow accumulation of infrastructure cost, operational complexity, or reliability incidents that are hard to attribute to the original architectural choice. Map the pattern to the use case, not to the technology you're most comfortable with.
For the webhook layer specifically — durable delivery, retry, and signature verification — GetHook handles the infrastructure so you can focus on what your events mean, not how they get there. Start building →