February 16, 2026 · Technical

Building nanobrew: A 1.2 MB Homebrew Replacement in Zig

How I built nanobrew, a package manager that installs software 10 to 500x faster than Homebrew, with third-party tap support, Linux .deb packages, and zero dependencies.

Rach Pradhan

Design Engineer

Hey everyone, Rach here.

So the other day I ran brew install tree and timed it. 5.5 seconds. For tree. A 60 KB binary with zero dependencies.

I stared at my terminal for a second. 5.5 seconds. Where does all that time go? Ruby spinning up, curl downloading one thing at a time, otool parsing binaries one by one, codesign running sequentially... all for a tiny utility that prints your folder structure.

And I just thought: what if none of that overhead existed? What if the whole thing was one binary, no subprocesses, everything in parallel?

So I built it. It's called nanobrew. It installs tree in 10 milliseconds.

That's not a typo. Let me explain how we got there.

GitHub: github.com/justrach/nanobrew

1.2 MB · Binary Size
10 ms · Warm Install (tree)
500x · Faster Than Brew
0 · Dependencies

The 5.5 Second Problem

Here's the thing about Homebrew. It's not slow because downloading packages is slow or because extracting tarballs is slow. It's slow because of everything around that. The Ruby runtime is 57 MB. Every network call spawns a curl subprocess. Every binary inspection spawns otool. Every codesign is sequential. And after downloading a file, it reads the entire file again just to verify the hash.

Homebrew:
57 MB Ruby runtime
Spawns curl for every download
Spawns otool for binary inspection
Sequential codesigning
Reads file twice (download + hash)
Everything sequential

nanobrew:
1.2 MB static binary
Native HTTP, zero subprocesses
Native Mach-O / ELF parsing
Parallel everything
Streaming SHA256 during download
Content-addressable store

All of that adds up. And it adds up fast when you're installing something like ffmpeg with 11 dependencies.

I had this hypothesis: if you just did everything natively in a single binary, in parallel, with zero subprocess spawning, installs would be dramatically faster. Not like 2x faster. Like 10x, maybe 100x faster.

So I picked Zig, opened a new project, and started writing.


The First Time It Worked

I'll never forget the first time I ran nanobrew on tree and saw the number. I honestly thought it was broken. I ran it again. Same thing. Again. Same.

Package | Homebrew | nanobrew (cold) | nanobrew (warm)
tree (0 deps) | 5.527s | 0.681s | 0.010s
ffmpeg (11 deps) | 19.571s | 2.117s | 0.564s
wget (6 deps) | 5.849s | 3.090s | 0.033s

Warm installs (package already cached locally) were under 10 milliseconds for tree. Under 35 milliseconds for wget. ffmpeg with 11 dependencies in half a second. Homebrew takes 19.6 seconds for the same thing.

Why are warm installs so fast?
Warm installs skip the network entirely. nanobrew checks if the package SHA256 already exists in its local store, then uses APFS clonefile (macOS) or reflink copy (Linux) to "copy" without moving any data. The result: 3.5 ms for tree, because no bytes are actually written to disk.

The hypothesis held up, emphatically. But how? Let me walk through the pieces.


The Trick That Makes Warm Installs Instant

This is the single most important design decision in nanobrew, and it's surprisingly simple.

Every downloaded bottle gets stored by its SHA256 hash in /opt/nanobrew/store/<sha>/. That's it. That's the content-addressable store. When you install something, nanobrew checks if that hash exists. If it does? No download. No extraction. It just clonefiles from the store into the Cellar.

Now here's where macOS does something magical. APFS clonefile is copy-on-write. When nanobrew "copies" from the store to the Cellar, it's not actually copying any data. It just creates a reference to the same blocks on disk. Zero extra disk space. Zero IO time.

Warm install pipeline: look up SHA256 → found in store → APFS clonefile → check symlinks → done (3.5 ms)

Cold install pipeline: resolve deps → parallel download + streaming SHA256 → extract via mmap → store by hash → clonefile & link

3.5 milliseconds. That's the whole thing.
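
Here's roughly what the warm path looks like in Zig. To be clear, this is my simplified sketch, not nanobrew's actual source; warmInstall and the exact error handling are illustrative, but clonefile(2) is the real macOS libc call:

const std = @import("std");

// clonefile(2) is the macOS copy-on-write clone; declare the libc symbol.
// int clonefile(const char *src, const char *dst, int flags);
extern "c" fn clonefile(src: [*:0]const u8, dst: [*:0]const u8, flags: c_int) c_int;

// Hypothetical warm-path sketch (macOS): if the bottle's SHA256 is already
// in the store, "copy" it into the Cellar as a copy-on-write clone.
// No download, no extraction, no data actually written.
fn warmInstall(alloc: std.mem.Allocator, sha256_hex: []const u8, cellar_dst: [:0]const u8) !bool {
    const store_path = try std.fmt.allocPrintZ(alloc, "/opt/nanobrew/store/{s}", .{sha256_hex});
    defer alloc.free(store_path);

    // Cache miss: the caller falls back to the cold pipeline.
    std.fs.accessAbsolute(store_path, .{}) catch return false;

    if (clonefile(store_path, cellar_dst, 0) != 0) return error.CloneFailed;
    return true;
}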


Stop Reading Files Twice

This was one of those "wait, why doesn't everyone do this?" moments that I kept having while building nanobrew.

Homebrew downloads a bottle, saves it to disk, then reads the entire file again to compute the SHA256. Two full passes over the data. For ffmpeg that's about 30 MB read twice.

nanobrew computes the SHA256 during the download. Every chunk that comes in over HTTP feeds into both the hash state and the file writer at the same time. One pass. When the download finishes, the hash is already done.

Streaming SHA256 verification during download eliminates an entire pass over the data. For large packages like ffmpeg (~30 MB), this cuts verification time to zero because the hash is ready the instant the download completes.
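
In code, the idea looks something like this (a simplified sketch; downloadVerified is a made-up name, but the std.crypto hasher is the real Zig API):

const std = @import("std");

// Hypothetical sketch of streaming verification: hash and write each chunk
// as it arrives, so the SHA256 is finished the moment the download is.
fn downloadVerified(reader: anytype, out_file: std.fs.File, expected: [32]u8) !void {
    var hasher = std.crypto.hash.sha2.Sha256.init(.{});
    var buf: [64 * 1024]u8 = undefined;
    while (true) {
        const n = try reader.read(&buf);
        if (n == 0) break; // EOF
        hasher.update(buf[0..n]); // feed the hash state...
        try out_file.writeAll(buf[0..n]); // ...and the file, in one pass
    }
    const digest = hasher.finalResult();
    if (!std.mem.eql(u8, &digest, &expected)) return error.ChecksumMismatch;
}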

It sounds so obvious in retrospect. But it makes a real difference.


The Ruby Problem (Or: Do You Really Need a Full Parser?)

Okay so this is where things got really fun.

I wanted to support third-party taps. If you're not familiar, that's when you do something like nb install steipete/tap/sag and it pulls from someone's custom Homebrew tap hosted on GitHub. It's a pretty common thing in the Homebrew world. Lots of developers maintain their own taps for tools that aren't in homebrew-core.

Here's the catch: no other fast Homebrew alternative supports taps. Zerobrew explicitly rejects them. Every other one I looked at just says "only homebrew-core". And I get why. Tap formulas aren't JSON like the homebrew-core API. They're Ruby files. Actual .rb files with classes, methods, string interpolation, platform conditionals.

The insight
You don't need a full Ruby parser. Tap formulas follow predictable patterns: version "X" on one line, url "X" on another, sha256 "X", depends_on "X". They're structured, just not in a format anyone bothered to formalize. A 500-line line-by-line parser handles every formula I've tested.

So I thought: what if I just go line by line?

500 lines of Zig later, I had a working Ruby formula parser. It pulls out version, url, sha256, dependencies, handles #{version} string interpolation in URLs, tracks on_macos/on_linux platform blocks by monitoring curly brace depth, and parses bottle blocks with per-architecture checksums.
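
To give a flavor of the approach, here's a stripped-down sketch of that kind of line matcher. It's not the real 500-line parser; it skips interpolation and platform blocks entirely:

const std = @import("std");

const Formula = struct {
    version: ?[]const u8 = null,
    url: ?[]const u8 = null,
    sha256: ?[]const u8 = null,
};

// Pulls the quoted value out of a line like: version "1.2.3"
fn quoted(line: []const u8) ?[]const u8 {
    const open = std.mem.indexOfScalar(u8, line, '"') orelse return null;
    const close = std.mem.lastIndexOfScalar(u8, line, '"') orelse return null;
    if (close <= open) return null;
    return line[open + 1 .. close];
}

// Simplified line-by-line pass; the real parser also tracks brace depth
// for on_macos/on_linux blocks and expands #{version} interpolation.
fn parseFormula(src: []const u8) Formula {
    var f = Formula{};
    var lines = std.mem.splitScalar(u8, src, '\n');
    while (lines.next()) |raw| {
        const line = std.mem.trim(u8, raw, " \t");
        if (std.mem.startsWith(u8, line, "version ")) {
            f.version = quoted(line);
        } else if (std.mem.startsWith(u8, line, "url ")) {
            f.url = quoted(line);
        } else if (std.mem.startsWith(u8, line, "sha256 ")) {
            f.sha256 = quoted(line);
        }
    }
    return f;
}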

nb install steipete/tap/sag fetches the .rb straight from raw.githubusercontent.com, parses it, resolves deps through the normal pipeline, and installs. About 2 seconds. No brew tap step.

The funniest bug was with pre-built binary taps. Some formulas (like sag) don't have a build system at all. They just ship a compiled executable in a tarball. My source builder kept crashing with UnknownBuildSystem because it couldn't find cmake or autotools or anything. The fix was embarrassingly simple: just scan the extracted files for executables and copy them to bin/. Done. Sometimes you overthink things.


Going After apt-get

At this point I was feeling ambitious. nanobrew was crushing Homebrew on macOS. Could it also beat apt-get on Linux? Specifically in Docker containers, where everyone knows apt-get is painfully slow?

I built native .deb support from scratch. And when I say native, I mean native. No dpkg binary. No ar binary. No zstd binary. All of it parsed and decompressed in Zig.

.deb install pipeline (all native Zig): fetch Packages.gz → gzip decompress → BFS dep resolution → parallel download + SHA256 → native ar parse → zstd/gzip decompress → extract tar

And the results:

Command | apt-get | nanobrew | Speedup
curl (32 deps) | 34.1s | 12.2s | 2.8x
curl wget git (60+ deps) | 49.7s | 25.0s | 2.0x

2.8x faster! And the output is byte-identical to dpkg-deb extraction. I was paranoid about correctness here so I wrote a CI test that extracts packages with both nanobrew and dpkg-deb and diffs them. Identical.

Bug story
zstd decompression kept failing on large packages like libc6. Everything else worked fine. Just the big ones. The fix: the decompression buffer was too small. You need default_window_len + block_size_max (8 MB + 128 KB) to handle the largest zstd frames in .deb packages.

Now in Docker you can just:

COPY --from=nanobrew/nb /nb /usr/local/bin/nb
RUN nb init && nb install --deb curl wget git

No more sitting around waiting for apt-get in CI.


Why Zig Though?

People ask me this a lot. Honestly it came down to a few things that all compounded together.

Comptime is wild. I generate SIMD byte scanning routines at compile time for tar header detection and JSON parsing. The compiler literally does the work so there's zero runtime dispatch overhead.
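
As a taste of the kind of routine this enables, here's a simplified SIMD byte scan. It's a stand-in I wrote for illustration, not the generated code itself:

const std = @import("std");

// Returns the index of the first `needle` in `haystack`, comparing 16 bytes
// per iteration with a vector. Simplified sketch of the SIMD-scan idea.
fn indexOfByteSimd(haystack: []const u8, needle: u8) ?usize {
    const V = @Vector(16, u8);
    const splat: V = @splat(needle);
    var i: usize = 0;
    while (i + 16 <= haystack.len) : (i += 16) {
        const chunk: V = haystack[i..][0..16].*;
        if (@reduce(.Or, chunk == splat)) {
            // A hit somewhere in this chunk: pin it down with a scalar scan.
            for (haystack[i .. i + 16], 0..) |b, j| {
                if (b == needle) return i + j;
            }
        }
    }
    // Scalar tail for the last few bytes.
    while (i < haystack.len) : (i += 1) {
        if (haystack[i] == needle) return i;
    }
    return null;
}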

mmap means bottle extraction reads directly from page cache. The tar parser walks mapped memory without copying anything. Zero-copy all the way down.

Arena allocators mean the hot install path does zero heap allocations. Everything goes through arenas that get freed in one shot at the end.
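
The pattern is simple (a sketch; installOne is a made-up name):

const std = @import("std");

// Toy sketch of the arena pattern: allocate freely during one install,
// then free everything in a single shot at the end.
fn installOne(name: []const u8) !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // one-shot free of everything allocated below
    const alloc = arena.allocator();

    const cellar = try std.fmt.allocPrint(alloc, "/opt/nanobrew/Cellar/{s}", .{name});
    _ = cellar;
    // ... resolve, download, extract: no individual frees anywhere ...
}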

Cross-compilation just works. zig build linux on my Mac gives me a static Linux binary. Same for ARM. One command, done.

And comptime platform dispatch means @import("builtin").os.tag routes to APFS clonefile on macOS or reflink copy on Linux at compile time. No if-statements at runtime for platform stuff.
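
The dispatch looks something like this (illustrative function names, but the builtin.os.tag switch is the real mechanism):

const std = @import("std");
const builtin = @import("builtin");

// Resolved at compile time: the binary only ever contains one branch.
const cloneTree = switch (builtin.os.tag) {
    .macos => cloneTreeApfs, // APFS clonefile
    .linux => cloneTreeReflink, // FICLONE reflink copy
    else => @compileError("unsupported OS"),
};

fn cloneTreeApfs(src: []const u8, dst: []const u8) !void {
    _ = src;
    _ = dst;
    // ... clonefile(2) ...
}

fn cloneTreeReflink(src: []const u8, dst: []const u8) !void {
    _ = src;
    _ = dst;
    // ... ioctl(FICLONE) ...
}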

The whole thing compiles to 1.2 MB. Homebrew's Ruby runtime is 57 MB. nanobrew is 47x smaller.


Everything Else It Does

What started as "make installs faster" kind of grew into a full package manager. nanobrew now does:

Source builds with cmake, autotools, meson, and make
Cask support for macOS apps (.dmg, .pkg, .zip)
Search across all Homebrew formulas and casks
Upgrade for all packages or specific ones
Outdated to show what's behind
Pin and unpin to freeze packages at a version
Rollback to revert using install history
Bundle dump and install to export and restore your setup
Doctor for health checks on broken symlinks and orphaned files
Cleanup for old caches
Deps with an ASCII tree view
Services management for launchctl on macOS and systemd on Linux
Shell completions for zsh, bash, and fish
And nuke, if you ever want to completely remove nanobrew itself

It's a lot more than I planned to build. But each feature kind of naturally led to the next one.


What I Took Away From All of This

Subprocess spawning is where all the time goes. Every time Homebrew calls curl or otool or codesign, it pays for process creation, shell init, and IPC. Cutting that out was the single biggest win. Everything else was gravy.

Parallelism compounds in ways you don't expect. Downloading 11 bottles at once isn't just 11x faster than sequential. It's better than that because TCP connections overlap, DNS gets cached after the first lookup, and the kernel schedules IO more efficiently when it can see all the work at once.

You don't always need a "real" parser. 500 lines of line-by-line Zig handles every Ruby formula I've thrown at it. I spent way too long worrying about edge cases that never showed up.

And content-addressable storage is one of those ideas that sounds fancy but is actually dead simple to implement, and the payoff is enormous. A SHA256-keyed store gives you dedup, cache invalidation, and instant reinstalls basically for free.


Frequently Asked Questions

Is nanobrew a drop-in replacement for Homebrew?

For most common packages, yes. nanobrew uses the same Homebrew formulae and bottle infrastructure. You can run nb install tree or nb install ffmpeg exactly like you would with brew. The main difference is that some very niche formulas with complex Ruby build logic might not be supported yet since nanobrew uses a simplified Ruby formula parser.

Can I use nanobrew alongside Homebrew?

Yes! nanobrew installs to /opt/nanobrew/ by default, which is separate from Homebrew's /opt/homebrew/. You can run both side by side. Some people use nanobrew for their common packages (where speed matters) and fall back to Homebrew for anything exotic.

Why not just make Homebrew faster instead of building something new?

The core issue is architectural. Homebrew is built on Ruby and relies on shelling out to system tools (curl, otool, codesign) for nearly everything. Making it meaningfully faster would require rewriting the entire download, extraction, verification, and linking pipeline. At that point you're not improving Homebrew, you're building a new tool. Which is what nanobrew is.

What does "content-addressable store" actually mean?

Every package nanobrew downloads gets stored in a folder named after its SHA256 hash, e.g. /opt/nanobrew/store/a1b2c3.../. When you install a package, nanobrew checks if that hash already exists. If it does, there's no need to download or extract anything. On macOS, APFS clonefile creates a zero-copy reference to the stored files. Think of it like git's object store but for package binaries.

How does the Ruby formula parser work without actually running Ruby?

Tap formulas follow very predictable patterns. Lines like version "1.0", url "https://...", and sha256 "abc..." appear in consistent formats. The parser reads line-by-line, matches these patterns, handles #{version} string interpolation, and tracks platform blocks (on_macos/on_linux) by counting curly brace depth. It's 500 lines of Zig and handles every real-world formula I've tested.

Is nanobrew safe to use in production Docker images?

For .deb packages, the extracted output is byte-identical to dpkg-deb. I verify this in CI by extracting the same packages with both tools and diffing. For Homebrew bottles, it uses the same checksums and verification as brew itself. That said, nanobrew is still experimental — I'd recommend testing thoroughly in staging before putting it in production CI pipelines.

Why Zig instead of Rust or Go?

Three things tipped the scale: (1) comptime lets me generate SIMD routines and platform dispatch at compile time with zero runtime cost, (2) arena allocators mean the hot install path does zero heap allocations, and (3) cross-compilation just works — zig build linux on my Mac gives me a static Linux binary. Rust could do most of this but the binary would be larger and the compile-time metaprogramming story isn't as clean. Go would sacrifice the zero-allocation and zero-copy guarantees.

What's on the roadmap?

The big ones are: broader formula compatibility (handling more edge-case Ruby formulas), Windows support via MSYS2/winget, a nb lock command for reproducible environments, and a daemon mode that pre-fetches updates in the background. I'm also looking at adding a nb audit command that checks installed packages against known CVEs.


Try It Out

curl -fsSL https://nanobrew.trilok.ai/install | bash

Or build from source:

git clone https://github.com/justrach/nanobrew.git
cd nanobrew && ./install.sh

It's still experimental. Things will break. But for common packages it works really well, and honestly it's fast enough that installing software starts to feel like it shouldn't take any time at all.

If something breaks, open an issue. Would love to hear what you think.

Until next time,

Rach