Using Homebrew Inside Docker Containers
Homebrew in Docker adds bloat and build time. Here's how to minimize the impact — and why stout's single binary is a better fit for containers.
There are legitimate reasons to use Homebrew inside a Docker container. Maybe you’re building a macOS cross-compilation environment on Linux. Maybe you’re containerizing a development environment that mirrors your team’s local setup. Maybe you’re running CI on Linux and need a tool that’s most easily available through Homebrew.
Whatever the reason, putting Homebrew in a Docker container is painful. The installation is slow, the image is large, layer caching is fragile, and the whole experience fights against Docker’s design principles. Here’s what you’re dealing with, how to minimize the damage, and what changes when you use a tool built for this kind of environment.
The cost of Homebrew in Docker
Installation overhead
Homebrew’s install script (/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)") does the following:
- Installs Homebrew itself (~15MB)
- Clones the homebrew-core tap (~700MB git history, ~200MB on disk after shallow clone)
- Sets up the Ruby environment
- Configures the shell environment
In a Dockerfile, this looks like:
RUN NONINTERACTIVE=1 /bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
This single RUN step takes 30-90 seconds and adds 400-800MB to your image layer. And you haven’t installed a single package yet.
Image size
A minimal Ubuntu 22.04 image is ~77MB. Add Homebrew and install three packages:
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl git build-essential procps
RUN NONINTERACTIVE=1 /bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
ENV PATH="/home/linuxbrew/.linuxbrew/bin:${PATH}"
RUN brew install jq yq ripgrep
The resulting image is typically 1.5-2.5GB. For three CLI tools that total ~15MB of actual binaries, you’ve added over a gigabyte of overhead: the git repository, Ruby runtime, bottle cache, build artifacts, and Homebrew’s own framework.
Layer caching problems
Docker layer caching is based on instruction identity. If nothing in a RUN instruction changes, the cached layer is reused. But Homebrew’s model breaks this in practice:
- The homebrew-core git clone fetches the latest commit. Running the same Dockerfile on different days produces different results, even though the instruction is identical.
- brew install jq resolves to whatever version is current. Version changes invalidate downstream layers.
- Homebrew stores state across its prefix — installing one package can modify metadata files that affect subsequent installs.
The result is that Homebrew layers cache poorly. Rebuilds are frequent and expensive.
Multi-stage build challenges
The standard Docker practice for reducing image size is multi-stage builds: install in a build stage, then copy only the needed artifacts to a slim final stage. With Homebrew, this is harder than it should be:
# Build stage
FROM ubuntu:22.04 AS builder
RUN apt-get update && apt-get install -y curl git build-essential procps
RUN NONINTERACTIVE=1 /bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
ENV PATH="/home/linuxbrew/.linuxbrew/bin:${PATH}"
RUN brew install ffmpeg
# Final stage — but how do you copy ffmpeg?
FROM ubuntu:22.04
COPY --from=builder /home/linuxbrew/.linuxbrew/opt/ffmpeg /opt/ffmpeg
# This doesn't work because ffmpeg has runtime dependencies
# scattered across the Homebrew prefix
Homebrew installs packages across a cellar structure with symlinks, and packages reference shared libraries via rpaths pointing into the Homebrew prefix. You can’t just copy a single directory — you need the entire dependency tree with its directory structure intact. This largely defeats the purpose of multi-stage builds.
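The practical workaround is to copy the entire Homebrew prefix so the Cellar symlinks and rpaths stay valid, which forfeits most of the size savings a multi-stage build is supposed to deliver. A sketch:

```dockerfile
# Workaround: copy the whole prefix so the Cellar symlinks and
# rpath-referenced libraries remain intact. ffmpeg works, but most
# of Homebrew's weight comes along with it.
FROM ubuntu:22.04
COPY --from=builder /home/linuxbrew/.linuxbrew /home/linuxbrew/.linuxbrew
ENV PATH="/home/linuxbrew/.linuxbrew/bin:${PATH}"
```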
Optimizing Homebrew in Docker (if you must)
If you’re stuck with Homebrew in Docker, here are the mitigations:
Install from the JSON API (which avoids cloning the full homebrew-core tap) and disable features you don’t need:
ENV HOMEBREW_NO_AUTO_UPDATE=1 \
HOMEBREW_NO_INSTALL_CLEANUP=1 \
HOMEBREW_NO_ANALYTICS=1 \
HOMEBREW_INSTALL_FROM_API=1
RUN NONINTERACTIVE=1 /bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Combine install and cleanup in a single layer:
RUN brew install jq yq ripgrep && \
brew cleanup --prune=all && \
rm -rf "$(brew --cache)" && \
rm -rf /home/linuxbrew/.linuxbrew/Homebrew/.git
Removing the git history after installation saves ~500MB, but it means brew update will no longer work in the container. For immutable container images, this is usually fine.
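If you rebuild frequently, BuildKit cache mounts can keep downloaded bottles out of the image while persisting them between builds. A sketch, assuming BuildKit is enabled and that Homebrew’s download cache is at its default Linux location for the linuxbrew user:

```dockerfile
# Persist downloaded bottles across rebuilds without baking them into
# a layer. The cache path is an assumption: Homebrew's default on
# Linux is ~/.cache/Homebrew for the installing user.
RUN --mount=type=cache,target=/home/linuxbrew/.cache/Homebrew \
    brew install jq yq ripgrep && \
    brew cleanup --prune=all
```

The mount exists only during the RUN step, so the bottle cache never lands in an image layer.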
Pin the Homebrew version to improve reproducibility:
RUN NONINTERACTIVE=1 /bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" && \
cd /home/linuxbrew/.linuxbrew/Homebrew && \
git checkout 4.2.0
Even with all of these optimizations, you’re looking at 600MB-1GB of overhead for the Homebrew framework.
How stout works in Docker
stout is a single static binary with no runtime dependencies. Here’s the equivalent Dockerfile:
FROM ubuntu:22.04
RUN curl -fsSL https://get.stout.dev/binary/linux-x86_64 -o /usr/local/bin/stout && \
chmod +x /usr/local/bin/stout
RUN stout install jq yq ripgrep
That’s it. No git clone, no Ruby, no framework. The install step takes 2-3 seconds and adds ~3MB for the stout binary plus the actual size of the installed packages.
Image size comparison
| Setup | Image size | Build time |
|---|---|---|
| Ubuntu + Homebrew + 3 packages | ~1.8GB | ~90s |
| Ubuntu + Homebrew (optimized) + 3 packages | ~900MB | ~60s |
| Ubuntu + stout + 3 packages | ~120MB | ~8s |
The stout-based image is 7-15x smaller than the Homebrew equivalent. Build times are 7-11x faster.
Multi-stage builds work naturally
Because stout installs packages into a clean prefix without scattered symlinks, multi-stage builds work as expected:
FROM ubuntu:22.04 AS builder
RUN curl -fsSL https://get.stout.dev/binary/linux-x86_64 -o /usr/local/bin/stout && \
chmod +x /usr/local/bin/stout
RUN stout install --prefix /opt/tools ffmpeg
FROM ubuntu:22.04
COPY --from=builder /opt/tools /opt/tools
ENV PATH="/opt/tools/bin:${PATH}"
The --prefix flag installs everything into a single directory tree with no external dependencies. Copy that directory to your final stage and you’re done.
Layer caching works predictably
stout’s 3MB SQLite index is versioned. When you pin the index version (or use a lock file), the RUN stout install layer produces identical output every time:
COPY stout.lock .
RUN stout install --lockfile stout.lock
If stout.lock hasn’t changed, Docker reuses the cached layer. If it has changed, only the packages that differ are downloaded. There’s no git repository with constantly-changing history to invalidate the cache.
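The caching behavior follows from Docker keying a COPY layer on the file’s checksum: same bytes, same cache key, same layer. A toy sketch of the idea (the lockfile name and contents here are hypothetical):

```shell
# Toy illustration: Docker keys a COPY layer on the file's content
# hash, so an unchanged lockfile reuses the cached layer.
printf 'jq 1.7.1\nyq 4.44.1\nripgrep 14.1.0\n' > stout.lock
key_before=$(sha256sum stout.lock | cut -d' ' -f1)
key_after=$(sha256sum stout.lock | cut -d' ' -f1)
# Identical content produces an identical key, i.e. a cache hit.
if [ "$key_before" = "$key_after" ]; then
  echo "cache hit: layer reused"
fi
```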
Scratch and distroless base images
For security-sensitive deployments, you might use scratch or distroless base images that contain almost nothing. Homebrew can’t work in these environments at all — it requires bash, git, curl, and Ruby at minimum.
stout’s static binary runs on any Linux kernel without additional dependencies:
FROM gcr.io/distroless/static
COPY --from=builder /usr/local/bin/stout /usr/local/bin/stout
COPY --from=builder /opt/tools /opt/tools
ENV PATH="/opt/tools/bin:${PATH}"
When Homebrew in Docker makes sense
There are cases where Homebrew in Docker is the right choice:
- You’re replicating a developer’s exact local environment for debugging
- You need a package that’s only available as a Homebrew formula (rare, but it happens)
- You’re running Homebrew’s own test infrastructure
For everything else — CI containers, build environments, production-adjacent containers, developer environment standardization — a tool designed for non-interactive, headless use in minimal environments is a better fit. Containers are supposed to be small, fast to build, and reproducible. Homebrew makes them large, slow to build, and non-deterministic. stout aligns with how containers are meant to work.
Need Rust performance engineering or AI agent expertise?
Neul Labs — the team behind stout — consults on Rust development, performance optimization, CLI tool design, and AI agent infrastructure. We build fast, reliable systems that ship.