AutoKaam Playbook

Cloudflare Workers, the Edge Runtime I Use for the Sharp Bits

100K free daily requests, sub-50ms cold starts, and the empire's ads.txt + redirect layer.

Last reviewed:

The operator take

Cloudflare Workers is where I put the small dynamic pieces that should never touch a real server. Across the empire, that is the ads.txt aggregator (one Worker serving the same AdSense pub-id across 24 empire domains), redirect handlers, the autokaam IndexNow ping, and a handful of webhook receivers that need sub-200ms response times globally.
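The ads.txt aggregator is about as small as a Worker gets. A minimal sketch of the pattern, with a placeholder pub-id and body (not the real empire values):

```javascript
// Sketch of an ads.txt aggregator Worker. The publisher ID below is a
// placeholder, not a real AdSense account.
const ADS_TXT_BODY =
  "google.com, pub-0000000000000000, DIRECT, f08c47fec0942fa0\n";

function handleRequest(request) {
  const url = new URL(request.url);
  if (url.pathname !== "/ads.txt") {
    return new Response("Not found", { status: 404 });
  }
  // One Worker bound to a route on every domain serves the same body,
  // so a pub-id change is a single deploy instead of 24 file edits.
  return new Response(ADS_TXT_BODY, {
    headers: {
      "content-type": "text/plain; charset=utf-8",
      "cache-control": "public, max-age=86400",
    },
  });
}

export default { fetch: (request) => handleRequest(request) };
```

Bind it to a `*/ads.txt` route on each zone and every domain picks it up with no per-site files to keep in sync.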

The 100K free daily requests on the entry tier cover most of this. The few endpoints past that volume run on the paid tier at Rs 500 a month, which is still trivial compared to running an equivalent service on a real server with a load balancer.

What Workers is genuinely good at: edge-locality logic. The ads.txt Worker returns in roughly 20 ms TTFB for India-region users because it runs colocated with them. The same Worker on a Coolify box would take 80-120 ms because of the Singapore round-trip. That difference matters for ads.txt (AdSense crawlers do retry, and faster responses make for better fetch pass rates) and for redirect handlers (every 100 ms shaved off a 301 chain is real).

What Workers is bad at: long-running tasks. The CPU budget per request is 10 ms on the free tier, 50 ms on paid. Anything past that and you are running into limits. The empire pattern is to use Workers for the routing layer and FastAPI on Coolify for the actual work.
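The routing-layer split looks like this in practice: the Worker answers anything cheap at the edge and proxies the rest to the origin. A sketch, with a placeholder origin host standing in for the FastAPI box:

```javascript
// Router-at-the-edge pattern: answer cheap requests in the Worker,
// proxy anything CPU-heavy to an origin. The origin URL is a
// placeholder, not a real empire host.
const ORIGIN = "https://api.example.com";

async function route(request) {
  const url = new URL(request.url);
  if (url.pathname === "/healthz") {
    // Cheap edge-local response: well under the 10 ms free-tier CPU budget.
    return new Response("ok", { status: 200 });
  }
  // Everything else is forwarded; the Worker spends CPU only on routing
  // and the origin does the actual work.
  const originUrl = new URL(url.pathname + url.search, ORIGIN);
  return fetch(new Request(originUrl, request));
}

export default { fetch: route };
```

The Worker never holds the request longer than the proxy round-trip, so the CPU limit is a non-issue even for slow origin endpoints (wall-clock waiting on `fetch` does not count against the CPU budget).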

The setup gotcha: multipart upload of Worker bundles needs the right Content-Disposition on each form part. I lost an hour to this once when wrangler had been replaced by direct REST calls in a CI job. The fix is in the empire memory, but anyone doing direct API uploads should know about it.
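A sketch of what the direct upload looks like when done right, based on Cloudflare's module-Worker upload API. The key detail: the module part's form-field name and filename must match `main_module` in the metadata part, and appending a Blob with an explicit filename is what puts a proper filename into that part's Content-Disposition header. Account ID, script name, and token are placeholders:

```javascript
// Build the multipart form for PUT
// /accounts/{account_id}/workers/scripts/{script_name}.
function buildUploadForm(scriptSource) {
  const form = new FormData();
  // Metadata part: tells the API which module is the entry point.
  form.append(
    "metadata",
    new Blob([JSON.stringify({ main_module: "worker.js" })], {
      type: "application/json",
    }),
    "metadata.json"
  );
  // Module part: field name AND filename both "worker.js", matching
  // main_module above; content type marks it as an ES module.
  form.append(
    "worker.js",
    new Blob([scriptSource], { type: "application/javascript+module" }),
    "worker.js"
  );
  return form;
}

async function uploadWorker(accountId, scriptName, token, source) {
  const url = `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}`;
  return fetch(url, {
    method: "PUT",
    // Do NOT set Content-Type yourself; fetch adds the multipart boundary.
    headers: { Authorization: `Bearer ${token}` },
    body: buildUploadForm(source),
  });
}
```

Hand-rolling the multipart body (or appending the script as a bare string) drops the filename from Content-Disposition, and the API rejects the bundle with an unhelpful error.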

The 2026 thing I am exploring is Workers AI (Cloudflare's edge LLM inference). For very low-latency, non-frontier-quality tasks (slug generation, simple classification), running Llama-3.1 on Workers AI from India is faster than calling Anthropic, even if the quality is lower. I have not productionized this yet but it is on the empire roadmap.
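A sketch of what the slug-generation endpoint could look like. The model name is a real Workers AI catalog entry, but the `AI` binding name and the prompt are my assumptions, and the sanitizer exists because small models pad their answers:

```javascript
// Strip model chatter down to a url-safe kebab-case slug.
function sanitizeSlug(raw) {
  return raw
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything non-alphanumeric
    .replace(/^-+|-+$/g, "")     // trim leading/trailing dashes
    .slice(0, 60);
}

export default {
  async fetch(request, env) {
    const { title } = await request.json();
    // env.AI is the Workers AI binding configured in wrangler.toml
    // (binding name assumed here).
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      messages: [
        { role: "system", content: "Reply with a short URL slug only." },
        { role: "user", content: title },
      ],
    });
    return new Response(sanitizeSlug(result.response), {
      headers: { "content-type": "text/plain" },
    });
  },
};
```

The whole round-trip stays inside Cloudflare's network, which is where the latency win over an external API call comes from.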

The Indian-operator angle is the same as Pages: India-region traffic is materially faster on Cloudflare than on US-only edge providers, and the pricing tiers are reachable for solo founders.

If you find yourself spinning up a tiny VPS just to host a redirect or a webhook, replace it with a Worker.
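The VPS-replacement redirect Worker is a static map plus one handler. A sketch with placeholder destinations:

```javascript
// Redirect map: old path -> new absolute URL. Placeholder entries.
const REDIRECTS = {
  "/old-post": "https://example.com/new-post",
  "/promo": "https://example.com/landing",
};

function redirect(request) {
  const url = new URL(request.url);
  const target = REDIRECTS[url.pathname];
  if (!target) return new Response("Not found", { status: 404 });
  // 301 so crawlers consolidate signals on the target URL.
  return Response.redirect(target, 301);
}

export default { fetch: redirect };
```

No server to patch, no uptime to monitor, and the map lives in version control next to the Worker.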

Why it matters in 2026

Edge compute became table-stakes for any global user base in 2025-26. Cloudflare Workers is the most mature edge runtime, with the best price-performance for India-region traffic and the deepest integration with the rest of the Cloudflare stack (Pages, R2, KV, D1).

Cost in INR

Free up to 100K daily requests; paid from Rs 500/mo for higher volume + larger CPU budgets

Use when

  • Edge routing, redirects, ads.txt-style aggregation
  • Webhook receivers needing global low-latency response
  • Static site dynamic shims (auth proxies, A/B test logic)
  • Webhook signature verification before forwarding to real servers

Skip when

  • Long-running tasks (CPU budget too tight)
  • Heavy stateful workloads (no native long-lived connections)
  • Workloads that need filesystem or arbitrary npm packages

Alternatives I would consider