Why we built another router
We didn't need another HTTP router. Nobody did. The ecosystem has Hono, Fastify, Express, and dozens of others. They're all fine.
But "fine" wasn't good enough for what we were building. Our messaging platform handled millions of webhook callbacks per day. Every millisecond of routing overhead multiplied by millions. We profiled the hot paths and found that route matching alone consumed 8% of total request time.
So we built Zero Router. And then we open-sourced it.
The benchmark that started it
We ran a straightforward benchmark: 100 routes, mix of static and parametric paths, realistic middleware chains. The kind of routing table you'd see in a production API.
Zero Router sustained 150,000 operations per second. Not in a synthetic "hello world" test — with actual middleware, parameter parsing, and response serialization.
How Zero Router is different
Compile-time route optimization
Most routers build a route tree at startup and traverse it at runtime. Zero Router takes a different approach: it compiles your route definitions into an optimized matching function during startup.
Think of it like a JIT compiler for routes. Instead of walking a tree for every request, the compiled function is a series of direct comparisons — no tree traversal, no regex matching, no backtracking.
```typescript
// You define routes normally
const router = new ZeroRouter()
  .get("/users/:id", getUser)
  .get("/users/:id/posts", getUserPosts)
  .post("/users", createUser);

// Zero Router compiles this into an optimized matcher
// that resolves any path in O(1) amortized time
```
Zero-allocation matching
Every allocation during request handling is overhead. Most routers allocate at least one object per request for the route match result, plus arrays for parameter values.
Zero Router pre-allocates a parameter buffer and reuses it across requests. The route match result is written into a pre-allocated slot. Zero garbage collection pressure from the routing layer.
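A minimal sketch of the reuse idea, assuming a single parametric route. The buffer and slot names here are assumptions for illustration, not Zero Router's internals:

```typescript
// Illustrative sketch of zero-allocation matching: one params object
// and one match slot are allocated once and overwritten per request.
const paramBuffer: { id: string } = { id: "" };
const matchSlot = { handler: "", params: paramBuffer };

function matchUserRoute(path: string): typeof matchSlot | null {
  if (path.startsWith("/users/")) {
    // Write into the pre-allocated buffer instead of allocating
    paramBuffer.id = path.slice("/users/".length);
    matchSlot.handler = "getUser";
    return matchSlot;
  }
  return null;
}

const m1 = matchUserRoute("/users/1");
const m2 = matchUserRoute("/users/2");
console.log(m1 === m2); // true: the same slot object, reused across requests
```

A consequence of this design is that a handler must consume or copy any parameter it wants to keep before the next request overwrites the buffer.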
Radix tree with path compression
The underlying data structure is a compressed radix tree. Common path prefixes are stored once, and the tree is flattened wherever possible.
For a route table like:
```
/api/v1/users
/api/v1/users/:id
/api/v1/users/:id/posts
/api/v1/posts
/api/v1/posts/:id
```
A naive tree has 10+ nodes. Our compressed tree has 4. Fewer nodes means fewer comparisons means faster matching.
Path compression matters more than algorithmic complexity for realistic route tables. Most APIs have 50-200 routes with significant prefix overlap. Compression eliminates 60-70% of tree nodes in typical applications.
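One way to picture the compressed layout for that route table is the sketch below. The field names are hypothetical, not Zero Router's internal types, and exact node counts depend on how parameter captures are folded into their parents:

```typescript
// Illustrative path-compressed layout: shared prefixes stored once.
interface RouteNode {
  prefix: string;           // a compressed run of path text, stored once
  param?: string;           // parameter name if this node captures a segment
  handler?: string;
  children: RouteNode[];
}

const tree: RouteNode = {
  prefix: "/api/v1/",       // shared by all five routes, stored exactly once
  children: [
    {
      prefix: "users",
      handler: "listUsers",
      children: [{
        prefix: "/",
        param: "id",        // /api/v1/users/:id
        handler: "getUser",
        children: [{ prefix: "/posts", handler: "getUserPosts", children: [] }],
      }],
    },
    {
      prefix: "posts",
      handler: "listPosts",
      children: [{ prefix: "/", param: "id", handler: "getPost", children: [] }],
    },
  ],
};

// A naive tree allocates one node per path segment; counting this
// compressed layout shows how much prefix sharing removes.
function nodeCount(n: RouteNode): number {
  return 1 + n.children.reduce((sum, c) => sum + nodeCount(c), 0);
}
```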
TypeScript-first, not TypeScript-compatible
Zero Router isn't a JavaScript router with type definitions bolted on. Types are a core design constraint.
Route parameters are fully typed:
```typescript
router.get("/users/:id/posts/:postId", (ctx) => {
  // ctx.params is typed as { id: string; postId: string }
  // No type assertions needed
  const { id, postId } = ctx.params;
});
```
Middleware chains preserve types through composition:
```typescript
const authed = middleware((ctx) => {
  // ctx.user is typed from the auth middleware
  return { user: verifyToken(ctx.headers.authorization) };
});

router.get("/profile", authed, (ctx) => {
  // ctx.user is fully typed here
  // TypeScript knows it's the return type of verifyToken
});
```
The middleware system
Middleware in Zero Router is composable and type-safe. Each middleware can:
- Add typed context (auth, logging, rate limiting)
- Short-circuit the request (return early on auth failure)
- Transform the response (compression, headers)
- Measure timing (per-middleware latency tracking)
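A toy sketch of how the first two capabilities compose. The `compose` shape and the context fields are assumptions for illustration; Zero Router's real middleware API is the `middleware(...)` form shown earlier:

```typescript
// Illustrative middleware composition with short-circuiting
// (not Zero Router's actual API).
type Ctx = { headers: Record<string, string>; user?: string };
type Handler = (ctx: Ctx) => Response;
type Middleware = (ctx: Ctx, next: Handler) => Response;

const auth: Middleware = (ctx, next) => {
  const token = ctx.headers["authorization"];
  // Short-circuit: unauthenticated requests never reach the handler
  if (!token) return new Response("Unauthorized", { status: 401 });
  // Add typed context for downstream handlers
  ctx.user = token.replace("Bearer ", "");
  return next(ctx);
};

function compose(mw: Middleware, handler: Handler): Handler {
  return (ctx) => mw(ctx, handler);
}

const profile = compose(auth, (ctx) => new Response(`hello ${ctx.user}`));
```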
| Feature | Zero Router | Hono | Fastify |
|---|---|---|---|
| Typed middleware chain | Yes | Partial | No |
| Zero-alloc matching | Yes | No | No |
| Compile-time optimization | Yes | No | Yes (schema) |
| Bun native | Yes | Yes | Partial |
| Middleware composition | Type-safe | Type-safe | Plugin-based |
Real-world performance
Benchmarks are one thing. Production is another. After we switched our messaging platform from Hono to Zero Router, the P99 improvement was the big win: tail latency dropped from 12ms to 7ms. At millions of requests per day, that's the difference between "consistently fast" and "usually fast with occasional hiccups."
Why open source
We built Zero Router for ourselves. We open-sourced it because the TypeScript ecosystem deserves better router options.
The project is MIT-licensed, has zero dependencies, and works with Bun, Node.js, and Deno. It's the router we use in production, so it gets battle-tested daily at scale.
Getting started
```sh
bun add zero-router
```

```typescript
import { ZeroRouter } from "zero-router";

const router = new ZeroRouter()
  .get("/", () => new Response("Hello"))
  .get("/users/:id", (ctx) => {
    return Response.json({ id: ctx.params.id });
  });

export default { fetch: router.fetch };
```
That's it. No configuration. No plugins. No setup ceremony.
What's next
We're working on:
- Route-level caching — Declarative cache headers per route with automatic invalidation
- OpenAPI generation — Generate spec from route types, not the other way around
- WebSocket routing — Same zero-allocation approach for WebSocket upgrade paths
Zero Router is production-ready and MIT-licensed. If you care about routing performance in TypeScript, check it out or talk to us about your infrastructure.