Managing Developer Dependencies at Scale
When your team grows past 10 developers, 'just brew install it' stops working. Here's how to standardize, audit, and scale developer tooling.
At five developers, everyone installs what they need and it works. Someone runs into a build issue, they ask in Slack, another developer says “oh you need to brew install pkg-config,” and the problem is solved in two minutes.
At fifty developers, this breaks down. New hires spend half a day getting their machine set up. Builds fail because Alice has openssl 3.1 and Bob has openssl 3.2. The platform team gets ten tickets a week that all boil down to “my local environment doesn’t match production.” Nobody knows exactly what’s installed on any given machine, and no one can reproduce anyone else’s environment.
This is the developer dependency scaling problem. Here’s how it manifests, and how to solve it systematically.
The symptoms of unmanaged dependencies
You’re likely dealing with this problem if you recognize these patterns:
Onboarding takes more than an hour. New developers follow a wiki page (last updated eight months ago) that says to run a list of brew install commands. Half the commands work. The other half fail because of changed formula names, missing taps, or version conflicts. The new developer spends the afternoon debugging their setup with help from someone who “went through this last month.”
“Works on my machine” is a weekly occurrence. Developer A’s build passes locally but fails in CI. Or developer B can’t reproduce a bug that developer C sees consistently. The root cause is different versions of system dependencies — different openssl, different protobuf, different libpq. But nobody knows this because nobody tracks system dependency versions.
Shadow IT for developer tools. Without a standard way to install tools, developers find their own paths. Some use Homebrew. Some use Nix. Some download binaries from GitHub releases. Some compile from source. Some use Docker for everything. Each approach creates a different environment with different behavior.
Security has no visibility. When a CVE drops for curl or libxml2, the security team asks “who’s affected?” The answer is “we don’t know, because we don’t know what version of curl is on each developer’s machine.” Patching is manual, unverified, and incomplete.
CI doesn’t match local. CI uses a specific runner image with specific package versions. Developer machines have whatever accumulated over months of brew install and brew upgrade. The two environments drift over time, causing builds that pass locally to fail in CI (or vice versa).
The manual approach: scripts and wikis
Most teams’ first attempt at standardization is a setup script:
#!/bin/bash
# setup.sh — Developer machine setup
# Last updated: 2025-11-03 (probably)
brew update
brew install git node python@3.12 go protobuf pkg-config openssl
brew install --cask docker visual-studio-code
# Project-specific dependencies
brew tap some-company/internal
brew install some-company/internal/internal-tool
echo "Done! If something failed, ask in #dev-setup on Slack."
This is better than a wiki page. But it still has problems:
- No version pinning. Each developer gets whatever version is current when they run the script. Two developers running the script a week apart get different versions.
- No idempotency. Running the script twice might upgrade packages unexpectedly, or fail because packages are already installed.
- No verification. There’s no way to check that a developer’s machine matches the expected state after running the script.
- No audit trail. You can’t answer “what version of openssl is installed across the engineering team?”
The Nix approach
Nix solves the reproducibility problem through functional package management: every package is built in isolation and identified by a hash of its inputs, so the same inputs always yield the same environment.
But Nix has adoption challenges at scale:
- Steep learning curve. The Nix expression language is unfamiliar to most developers. Writing and maintaining Nix expressions requires specialized knowledge.
- Slow builds. Building from source (Nix’s default mode) is significantly slower than downloading pre-built binaries.
- Large disk usage. Nix stores every version of every package separately. /nix/store can grow to tens or hundreds of gigabytes.
- Organizational friction. Convincing fifty developers to learn Nix is a harder sell than “it works like Homebrew but faster.”
Nix is the right choice for some organizations, particularly those with strong platform engineering teams and a culture of technical rigor. For most, the adoption cost is too high.
A systematic approach with stout
stout’s design targets exactly this scaling problem: give platform teams the control they need while keeping the developer experience simple.
Step 1: Define the standard toolset
Create a stout.toml at the root of your repository (or in a shared configuration repository):
# stout.toml — Engineering team standard dependencies
[packages]
git = "2.44"
node = "20.12"
python = "3.12"
go = "1.22"
protobuf = "26.1"
pkg-config = "0.29.2"
openssl = "3.2.1"
redis = "7.2"
[packages.dev]
# Additional packages for local development only
shellcheck = "0.10"
jq = "1.7"
ripgrep = "14.1"
This file is version-controlled and code-reviewed. Changes go through the same pull request process as any other code change.
Step 2: Lock and distribute
Generate a lock file from the configuration:
stout lock --config stout.toml > stout.lock
The lock file captures the exact version, checksum, and dependency tree for every package. Commit both stout.toml and stout.lock to the repository.
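The checksums are what make the lock file more than a version list: they let the installer prove that the artifact it downloaded is the artifact that was locked. A sketch of that check (the lock-entry shape here is an assumption for illustration, not stout's actual format):

```python
import hashlib

# Hypothetical lock entry; real lock files would also record the
# dependency tree and download URL.
LOCK_ENTRY = {
    "name": "openssl",
    "version": "3.2.1",
    "sha256": hashlib.sha256(b"example artifact bytes").hexdigest(),
}

def verify_artifact(data: bytes, entry: dict) -> bool:
    """True only if the downloaded bytes hash to the locked checksum."""
    return hashlib.sha256(data).hexdigest() == entry["sha256"]
```

A tampered or corrupted download fails the comparison and is never installed.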
Step 3: Developer setup
A new developer’s setup is one command:
curl -fsSL https://get.stout.dev | sh
stout install --lockfile stout.lock
This takes 15-30 seconds (depending on how many packages are in the lock file) and produces an environment that’s identical to every other developer’s machine. There’s no wiki to follow, no Slack questions, no debugging.
If a developer’s environment has drifted (they installed something manually, or upgraded a package outside the standard workflow), they can verify and reset:
# Check if local state matches the lock file
stout lock verify --lockfile stout.lock
# Reset to the locked state
stout install --lockfile stout.lock --exact
The --exact flag removes packages that aren’t in the lock file, ensuring the environment matches exactly.
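Conceptually, a reset like this reduces to set arithmetic between the installed state and the locked state. A hypothetical sketch of the plan such a flag implies (not stout's actual implementation):

```python
def reset_plan(installed: dict[str, str], locked: dict[str, str]) -> dict:
    """Compute what an exact reset must install, reinstall, and remove."""
    return {
        "install": sorted(set(locked) - set(installed)),     # missing entirely
        "reinstall": sorted(p for p in locked                # wrong version
                            if p in installed and installed[p] != locked[p]),
        "remove": sorted(set(installed) - set(locked)),      # not in lock file
    }
```

Everything outside the locked set is removed; everything inside it converges on the locked version.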
Step 4: CI uses the same lock file
- name: Install dependencies
  run: |
    curl -fsSL https://get.stout.dev | sh
    stout install --lockfile stout.lock
CI and local development use the same lock file. If a build passes locally, it passes in CI. If it fails in CI, it fails locally. The “works on my machine” class of bugs is eliminated.
Step 5: Visibility and auditing
Platform teams need to answer questions like “how many developers are running openssl 3.1?” and “has everyone updated to the patched version of curl?”
stout supports reporting installed package state to a central endpoint:
# Report installed packages to the platform team's dashboard
stout report --endpoint https://platform.internal/stout-inventory
This can run automatically (via a launch agent on macOS or a cron job on Linux) or be triggered by the platform team. The report includes:
- Machine identifier
- stout version
- All installed packages with exact versions
- Whether the local state matches the team’s lock file
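The payload itself can be small. A hypothetical sketch of such a report (field names are assumptions for illustration, not stout's actual wire format):

```python
import json
import platform
import uuid

def build_report(installed: dict[str, str], locked: dict[str, str]) -> str:
    """Serialize one machine's inventory report as JSON."""
    payload = {
        "machine_id": hex(uuid.getnode()),        # stable-ish hardware id
        "os": platform.system(),
        "packages": installed,                    # name -> exact version
        "matches_lockfile": installed == locked,  # drift flag
    }
    return json.dumps(payload, sort_keys=True)
```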
When a CVE drops, the platform team queries the inventory:
$ curl "https://platform.internal/stout-inventory/query?package=curl&version=8.5.0"
{
"affected_machines": 12,
"total_machines": 48,
"users": ["alice", "bob", "charlie", ...]
}
Then they update the lock file, and affected developers get a notification to run stout install --lockfile stout.lock.
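Server-side, a query like the one above is plain aggregation over the collected reports. A hypothetical sketch (the report shape mirrors the JSON above; `user` is an assumed field):

```python
def affected(reports: list[dict], package: str, version: str) -> dict:
    """Count machines whose report shows the given package version."""
    hit = [r for r in reports
           if r.get("packages", {}).get(package) == version]
    return {
        "affected_machines": len(hit),
        "total_machines": len(reports),
        "users": sorted(r["user"] for r in hit),
    }
```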
The organizational change
Tooling alone doesn’t solve the scaling problem. You also need process:
Assign ownership. Someone (or some team) owns the stout.toml and stout.lock files. They review and approve dependency changes. They respond to security advisories. Without ownership, the files rot.
Automate verification. Run stout lock verify as a CI check. If a developer’s PR was built on an environment that doesn’t match the lock file, the check fails. This catches drift before it causes problems.
Schedule updates. Set a cadence for dependency updates — monthly is reasonable for most teams. The owner runs stout update --all, regenerates the lock file, tests the new versions, and merges the update. Between updates, everything is pinned.
Make it easy to deviate. Developers sometimes need packages that aren’t in the standard set. stout supports local overrides that extend (but don’t conflict with) the team lock file:
# stout.local.toml — not committed, gitignored
[packages]
lua = "5.4"
This lets individual developers install additional tools without affecting the team standard.
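The "extend but don't conflict" rule is simple to state precisely: a local override may add packages, but may not re-pin one the team file already pins. A hypothetical sketch of that merge (stout's actual resolution logic is not shown here):

```python
def merge_overrides(team: dict[str, str], local: dict[str, str]) -> dict[str, str]:
    """Extend the team pins with local ones; reject any re-pin."""
    conflicts = {p for p in local if p in team and team[p] != local[p]}
    if conflicts:
        raise ValueError(f"local overrides conflict with team pins: {sorted(conflicts)}")
    return {**team, **local}
```

Additions merge cleanly; a re-pin of a team-managed package fails loudly instead of silently diverging.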
The goal isn’t to control every binary on every developer’s machine. It’s to ensure that the packages that affect build output and application behavior are consistent, tracked, and updateable. When you can do that reliably, an entire category of engineering friction disappears — and the platform team can answer “what’s installed across the org?” in seconds instead of days.
Need Rust performance engineering or AI agent expertise?
Neul Labs — the team behind stout — consults on Rust development, performance optimization, CLI tool design, and AI agent infrastructure. We build fast, reliable systems that ship.