Performance claims mean nothing without methodology. Every number on this page comes from real benchmark runs using hyperfine, and all raw data is available in the Ant repository. The cold-start benchmark isolates module resolution and runtime initialization overhead, without the noise of an HTTP server actually accepting connections.

Cold-start benchmark

The benchmark loads the same bench-coldstart.js script from examples/npm/hono/ across all four runtimes. The script creates a Hono app, registers two routes, prints "ready", and calls process.exit(0). No HTTP server is started. This isolates module resolution and initialization overhead from request-handling performance. Methodology: 10 warmup runs, 100 timed runs with hyperfine.
hyperfine --warmup 10 --runs 100 \
  'ant  examples/npm/hono/bench-coldstart.js' \
  'node examples/npm/hono/bench-coldstart.js' \
  'bun  examples/npm/hono/bench-coldstart.js' \
  'deno run --allow-read --allow-env examples/npm/hono/bench-coldstart.js'

Results

| Runtime | Mean    | Min     | Max      | Relative     |
|---------|---------|---------|----------|--------------|
| Ant     | 5.7 ms  | 5.0 ms  | 7.3 ms   | 1.00         |
| Bun     | 12.8 ms | 11.6 ms | 16.4 ms  | 2.24× slower |
| Deno    | 24.8 ms | 22.2 ms | 29.4 ms  | 4.32× slower |
| Node    | 31.1 ms | 27.1 ms | 151.7 ms | 5.41× slower |
Node’s maximum of 151.7 ms reflects occasional outliers, likely JIT warm-up spikes. Ant’s worst-case run was 7.3 ms.
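The Relative column is simply each runtime's mean divided by Ant's. hyperfine computes these ratios from the unrounded means, so recomputing from the rounded table values above gives figures that differ in the last digit (e.g. 2.25× rather than 2.24×):

```javascript
// Recompute the relative-slowdown column from the rounded mean times (ms).
const means = { ant: 5.7, bun: 12.8, deno: 24.8, node: 31.1 };
for (const [rt, ms] of Object.entries(means)) {
  console.log(`${rt}: ${(ms / means.ant).toFixed(2)}x`);
}
// ant: 1.00x, bun: 2.25x, deno: 4.35x, node: 5.46x
```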

Test environment

| Detail   | Value                             |
|----------|-----------------------------------|
| Hardware | Apple M4 Pro, 24 GB RAM, 14 cores |
| OS       | macOS 15.7.5 (arm64)              |
| Ant      | 0.9.1                             |
| Node     | 25.9.0                            |
| Bun      | 1.3.13                            |
| Deno     | 2.7.12                            |

Binary size comparison

A smaller binary means faster downloads, less storage on edge nodes, and less time spent loading the executable before any JavaScript runs. Ant ships at approximately 9 MB in its default release build.
| Runtime | Binary size |
|---------|-------------|
| Ant     | ~9 MB       |
| Bun     | ~60 MB      |
| Deno    | ~90 MB      |
| Node    | ~120 MB     |
Ant can also be compiled in size-optimized mode (-Os), producing a binary of approximately 6.5 MB — useful for embedded systems or highly constrained edge environments.
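To check these sizes on your own machine, a quick loop over whichever runtimes are on your `$PATH` works (this is a convenience sketch, not part of the benchmark suite):

```shell
# Report the on-disk size of each runtime binary found on $PATH.
# Runtimes that are not installed are silently skipped.
for rt in ant node bun deno; do
  bin=$(command -v "$rt") || continue
  printf '%-6s %s\n' "$rt" "$(du -h "$bin" | cut -f1)"
done
```

Note that `du` reports on-disk size, which can differ slightly from the download size of a compressed release artifact.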

What these benchmarks measure — and what they don’t

The cold-start benchmark is specifically designed to test the use case where Ant excels: serverless functions, edge deployments, and CLI tools where every millisecond of initialization is user-visible latency. It does not measure:
  • Long-running HTTP server throughput
  • CPU-bound compute workloads
  • Memory usage under load
  • I/O-heavy workloads
For those workloads, the JIT compiler in Ant Silver provides competitive throughput, but no benchmarks covering those scenarios are included here yet.
All benchmark scripts are in the examples/npm/hono/ directory of the Ant repository. You can reproduce these results by running the hyperfine command above on your own hardware.