
Cloudflare just released Containers: here's everything you need to know

Cloudflare Containers let you run any Docker image on Cloudflare's 300-plus edge locations. You control them with a few lines of JavaScript in a Worker, they scale to zero, and you're billed in 10 ms slices while they're awake.
They sit in the gap between:
| Model | Strengths | Trade-offs |
| --- | --- | --- |
| Workers (today) | Sub-ms startup, worldwide | V8 only, 128 MB RAM |
| Always-on PaaS (e.g. sliplane) | Simple, predictable | You pay 24/7, even when idle |
| DIY Kubernetes / Fargate | Full control at scale | Cluster, LB, and IAM overhead |
Cloudflare Containers bring the edge reach and pay-for-use pricing of Workers to workloads that need a full Linux sandbox.
Why would I care?
- Native binaries or full FS, so you can run FFmpeg, Pandas, or AI toolchains.
- Languages beyond JS or Wasm, such as Go, Rust, Python, Java, Ruby, or anything your Dockerfile holds.
- Bigger resource envelope, with up to 4 GiB RAM and half a vCPU per instance (larger sizes are planned).
- Per-tenant state, with one container per Durable Object ID for sticky sessions (sketched below).
- Burst-heavy jobs, such as cron, code evaluation, or on-demand video export.
If your code sleeps a lot, scaling to zero is better than paying for an always-on container (whether that is sliplane, a VPS, or a managed dyno).
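The per-tenant bullet above is just Durable Object addressing. Here is a minimal sketch, assuming the `Container` base class from `@cloudflare/containers` forwards `fetch` to the container's `defaultPort` (as in the template example in the next section) and a hypothetical `SANDBOX` binding in your wrangler config:

```js
import { Container } from "@cloudflare/containers";

export class Sandbox extends Container {
  defaultPort = 8080;
  sleepAfter = "10m";
}

export default {
  async fetch(req, env) {
    // Derive a stable tenant key from the request (illustrative only).
    const userId = new URL(req.url).searchParams.get("user") ?? "anonymous";

    // One Durable Object ID per tenant means one container per tenant:
    // repeat requests from the same user hit the same warm instance.
    const id = env.SANDBOX.idFromName(userId);
    const stub = env.SANDBOX.get(id);
    return stub.fetch(req);
  },
};
```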
How it works
```sh
# 1. Scaffold + deploy
npm create cloudflare@latest -- --template=cloudflare/templates/containers-template
wrangler deploy
```

```js
// 2. Route requests in your Worker
import { Container, getRandom } from "@cloudflare/containers";

export class API extends Container {
  defaultPort = 8080; // port your container process listens on
  sleepAfter = "10m"; // go back to sleep after 10 minutes of inactivity
}

export default {
  async fetch(req, env) {
    const instance = await getRandom(env.API, 3); // pick one of 3 instances at random
    return instance.fetch(req);
  },
};
```
The first hit is a cold start (about 2 to 3 seconds in beta). After that, the container stays warm until it has been idle for the duration set in `sleepAfter`.
Under the hood, each container is coupled to a Durable Object that handles lifecycle and routing. There is no YAML, no nodes, just code.
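Inside the container you just run an ordinary HTTP server on the port declared in `defaultPort`. A minimal sketch of what the Dockerfile could run (a hypothetical `server.js`, not the template's actual code):

```js
// server.js - plain Node HTTP server matching defaultPort = 8080 above.
const http = require("http");

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ message: "hello from inside the container", path: req.url }));
});

// Bind to 0.0.0.0 so the Worker-side proxy can reach the process.
server.listen(8080, "0.0.0.0", () => console.log("listening on :8080"));
```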
Pricing snapshot
| Meter (Workers Paid, $5/mo) | Free quota | Over-quota rate |
| --- | --- | --- |
| Memory | 25 GiB-hours | $0.0000025 / GiB-s |
| CPU | 375 vCPU-minutes | $0.000020 / vCPU-s |
| Disk | 200 GB-hours | $0.00000007 / GB-s |
Instance sizes in beta are dev (256 MiB), basic (1 GiB), and standard (4 GiB). Larger sizes are coming.
Assume a "standard" instance (4 GiB RAM, half a vCPU, 4 GB disk) that runs 24/7 for a 30-day month (2 592 000 seconds) and ships 2 TB of traffic. This is a workload better suited to an always-on PaaS.
| Meter | Raw usage | Free quota | Billable | Rate | Cost |
| --- | --- | --- | --- | --- | --- |
| Memory | 4 GiB × 2 592 000 s = 10 368 000 GiB-s | 25 GiB-h = 90 000 GiB-s | 10 278 000 GiB-s | $0.0000025 / GiB-s | $25.70 |
| CPU | 0.5 vCPU × 2 592 000 s = 1 296 000 vCPU-s | 375 vCPU-min = 22 500 vCPU-s | 1 273 500 vCPU-s | $0.000020 / vCPU-s | $25.47 |
| Disk (ephemeral) | 4 GB × 2 592 000 s = 10 368 000 GB-s | 200 GB-h = 720 000 GB-s | 9 648 000 GB-s | $0.00000007 / GB-s | $0.68 |
| Egress (NA/EU) | 2 TB = 2048 GB | 1 TB | 1024 GB | $0.025 / GB | $25.60 |
Variable total: about $77.44 per month. Add the $5 Workers Paid subscription, and the total is about $82.44 all-in.
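If you want to plug in your own numbers, the arithmetic above fits in a few lines. A throwaway sketch using the rates and quotas from the tables (beta pricing can change, so check Cloudflare's pricing page before relying on it):

```js
// Rough monthly estimate for one always-on "standard" instance (4 GiB, 0.5 vCPU, 4 GB disk).
const SECONDS = 30 * 24 * 3600; // 2 592 000 s in a 30-day month

const meters = [
  { name: "memory", usage: 4 * SECONDS,   free: 25 * 3600,  rate: 0.0000025 },  // GiB-s
  { name: "cpu",    usage: 0.5 * SECONDS, free: 375 * 60,   rate: 0.000020 },   // vCPU-s
  { name: "disk",   usage: 4 * SECONDS,   free: 200 * 3600, rate: 0.00000007 }, // GB-s
  { name: "egress", usage: 2048,          free: 1024,       rate: 0.025 },      // GB (NA/EU)
];

let total = 0;
for (const m of meters) {
  const cost = Math.max(m.usage - m.free, 0) * m.rate;
  total += cost;
  console.log(`${m.name}: $${cost.toFixed(2)}`);
}
console.log(`variable total: $${total.toFixed(2)}`); // ≈ $77.44
console.log(`all-in: $${(total + 5).toFixed(2)}`);   // ≈ $82.44 with the $5 Workers Paid plan
```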
A comparable always-on PaaS instance (such as sliplane or a small VPS) might cost $7 to $15 per month flat, so for high-utilisation, bandwidth-heavy services, Cloudflare Containers can be five to ten times more expensive.
Rule of thumb: workloads that idle most of the day tend to cost less on Containers. Steady-state, high-utilisation services can still be cheaper on an always-on host like sliplane.
Current beta limits
- Manual scaling: you call `get(id)` yourself. Autoscaling and latency-aware routing are planned.
- Ephemeral disk, so you get a fresh filesystem after each sleep.
- 40 GiB RAM and 20 vCPU account cap (temporary).
- Linux/amd64 only, no ARM support yet.
- No inbound TCP or UDP, since everything is proxied through a Worker HTTP call.
When to pick Containers vs. an always-on PaaS
| Scenario | Containers | sliplane / always-on |
| --- | --- | --- |
| Edge-adjacent AI image generation, mostly idle | ✅ | |
| 24/7 REST API with over 70% utilisation | | ✅ simpler, lower steady cost |
| Per-tenant sandbox (one container per user) | ✅ | |
| Database that needs persistent volumes | | ✅ |
A mixed model often wins. You can run your persistent database on sliplane (or similar), bursty compute on Cloudflare Containers, and connect them with a Worker.
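As a sketch of that mixed model (the `EXPORTER` binding, the `/export` route, and the `DB_API_URL` value are all hypothetical): the Worker sends bursty jobs to a Container and proxies everything else to the always-on API that owns the database.

```js
import { Container, getRandom } from "@cloudflare/containers";

export class Exporter extends Container {
  defaultPort = 8080;
  sleepAfter = "5m"; // scale back to zero quickly once the burst is over
}

export default {
  async fetch(req, env) {
    const url = new URL(req.url);

    // Bursty, CPU-heavy work goes to an on-demand container.
    if (url.pathname.startsWith("/export")) {
      const instance = await getRandom(env.EXPORTER, 2);
      return instance.fetch(req);
    }

    // Everything else is proxied to the always-on API (e.g. on sliplane)
    // that fronts the persistent database.
    const upstream = new URL(url.pathname + url.search, env.DB_API_URL);
    return fetch(new Request(upstream, req));
  },
};
```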
Takeaway
Cloudflare just introduced what is essentially serverless Fargate at the edge: Docker images, 10 ms-granularity billing, global points of presence, and no cluster busywork. If Workers' V8 box ever felt cramped, or your always-on container spends most of its time idle, try spinning up a beta Container and see what the edge can do.
Happy hacking!
Cheers,
Jonas, Co-Founder of sliplane