Why Rust Over Go for a Package Manager
Both Rust and Go produce fast native binaries. Here's why stout chose Rust — memory control, error handling, the type system, and the crate ecosystem.
Go and Rust are the two dominant languages for new CLI tools and systems software in 2026. Both compile to native binaries. Both have strong concurrency models. Both have large ecosystems. When we started building stout, we prototyped critical paths in both languages before committing. This article lays out the specific technical factors that led us to Rust — not as a general “Rust vs Go” debate, but through the lens of what a package manager actually needs to do.
Memory usage: GC versus ownership
A package manager routinely handles large data. stout downloads bottle files that range from a few kilobytes to several hundred megabytes, decompresses them, verifies checksums, and writes them to disk — often for dozens of packages in parallel. Memory behavior during these operations matters.
In Go, the garbage collector manages heap allocations. This works well for most applications, but it introduces two problems for our use case:
Unpredictable peak memory. When downloading and extracting 20 bottles concurrently, Go’s GC may not collect intermediate buffers fast enough. In our prototype, peak memory usage during a parallel install of ffmpeg (25 dependencies) fluctuated between 180MB and 400MB across runs, depending on GC timing.
GC pause latency. Go’s GC targets sub-millisecond pauses in most workloads, but large heap sizes push pause times higher. During a parallel download with progress bars, GC pauses caused visible stutter in terminal output.
In Rust, memory is freed deterministically when values go out of scope. There is no GC and no pause. stout’s peak memory during the same ffmpeg install is consistently 120-140MB, because buffers are dropped as soon as extraction completes:
```rust
async fn download_and_extract(spec: &BottleSpec, cellar: &Path) -> Result<()> {
    let bytes = download_bottle(&spec.url).await?; // allocated
    verify_sha256(&bytes, &spec.sha256)?;
    extract_tar_zstd(&bytes, cellar)?;
    // `bytes` is dropped here — memory freed immediately
    Ok(())
}
```
There is no equivalent guarantee in Go. You can call runtime.GC() manually, but it blocks the calling goroutine until collection completes, which defeats the purpose of a concurrent collector.
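Deterministic destruction also means you can free a dead buffer before its scope ends with an explicit drop. A minimal sketch, with illustrative names and sizes rather than stout's real API:

```rust
// Illustrative only: free a large intermediate buffer as soon as it is
// no longer needed, instead of holding it until the function returns.
fn decompress_and_measure() -> usize {
    let compressed = vec![0u8; 8 * 1024 * 1024]; // stand-in for downloaded bytes
    let decompressed: Vec<u8> = compressed.iter().map(|b| b.wrapping_add(1)).collect();
    drop(compressed); // the 8 MB buffer is freed here, deterministically
    decompressed.len() // only the decompressed data remains alive
}
```

Peak memory for this function is bounded by the two buffers coexisting for exactly one statement, not by when a collector happens to run.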
Error handling: Result types versus error returns
Go’s error handling pattern is explicit, which is good. But it relies on convention, not the type system:
```go
func downloadBottle(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    // Easy to forget: check status code
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, err
    }
    return body, nil
}
```
The compiler does not enforce that you check err. You can ignore the error return and the code compiles. Linters catch many of these, but not all — particularly in complex control flow with multiple error sources.
Rust’s Result<T, E> is a type. If a function returns Result, you must do something with it: the type is #[must_use], so the compiler warns when a Result is silently discarded, and that warning can be promoted to a hard error in CI. The ? operator propagates errors concisely without losing explicitness:
```rust
async fn download_bottle(url: &str) -> Result<Vec<u8>> {
    let response = reqwest::get(url)
        .await?
        .error_for_status()?; // non-2xx statuses become an Err you must handle
    let bytes = response.bytes().await?;
    Ok(bytes.to_vec())
}
```
In a codebase with hundreds of fallible operations — network requests, filesystem writes, SQLite queries, compression, signature verification — the difference between compiler-enforced and convention-enforced error handling is the difference between a class of bugs that exists and one that does not.
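A common pattern for making that enforcement ergonomic across subsystems is one error enum with From conversions, so ? works at every call site. A minimal sketch, with hypothetical variant names rather than stout's actual error type:

```rust
use std::io;

// Hypothetical unified error type for illustration.
#[derive(Debug)]
enum InstallError {
    Io(io::Error),
    ChecksumMismatch { expected: String, actual: String },
}

// With this From impl, `?` on any io::Result inside a function
// returning Result<_, InstallError> converts automatically.
impl From<io::Error> for InstallError {
    fn from(e: io::Error) -> Self {
        InstallError::Io(e)
    }
}

fn verify_checksum(expected: &str, actual: &str) -> Result<(), InstallError> {
    if expected == actual {
        Ok(())
    } else {
        Err(InstallError::ChecksumMismatch {
            expected: expected.to_owned(),
            actual: actual.to_owned(),
        })
    }
}
```

Crates like thiserror generate the From and Display boilerplate; the point is that every fallible path is visible in the function signature and checked by the compiler.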
The type system: enums and pattern matching
Go’s type system is intentionally simple. It uses interfaces for polymorphism and lacks sum types (tagged unions). When modeling package states in Go, you end up with string constants or iota enums that the compiler cannot exhaustively check:
```go
type PackageState int

const (
    NotInstalled PackageState = iota
    Installed
    Outdated
    Pinned
)
```
Nothing forces a switch over PackageState to cover every value; a newly added state can fall through silently, and the compiler cannot check exhaustiveness. Rust’s enums with pattern matching close this gap:
```rust
enum PackageState {
    NotInstalled,
    Installed { version: Version, path: PathBuf },
    Outdated { current: Version, available: Version },
    Pinned { version: Version },
}

fn status_line(state: &PackageState) -> String {
    match state {
        PackageState::NotInstalled => "not installed".into(),
        PackageState::Installed { version, .. } => format!("installed {version}"),
        PackageState::Outdated { current, available } => {
            format!("{current} -> {available}")
        }
        PackageState::Pinned { version } => format!("pinned at {version}"),
    }
    // Adding a new variant forces updating every match — the compiler ensures it.
}
```
This is especially valuable for dependency resolution, where package states carry associated data and the resolution algorithm must handle every case. In Go, the equivalent requires discipline and testing. In Rust, it requires compilation.
Concurrency: goroutines versus async/await
Go’s goroutine model is simpler to learn. You write go func() and it runs concurrently. Channels handle communication. For many applications, this simplicity is a genuine advantage.
Rust’s async model with Tokio is more complex but provides two things stout needs:
Structured concurrency. Tokio’s JoinSet gives us a bounded set of concurrent tasks with explicit error propagation. When downloading 25 bottles, we spawn them into a JoinSet with a concurrency limit and await all results:
```rust
let mut set = JoinSet::new();
let semaphore = Arc::new(Semaphore::new(8)); // max 8 concurrent downloads

for spec in bottles {
    let permit = semaphore.clone().acquire_owned().await?;
    set.spawn(async move {
        let result = download_and_extract(&spec).await;
        drop(permit);
        result
    });
}

while let Some(result) = set.join_next().await {
    result??; // propagate both join and download errors
}
```
Data race prevention at compile time. Rust’s Send and Sync traits make it a compiler error to share mutable data across tasks without synchronization. In Go, the race detector catches data races at runtime — but only for code paths that are exercised during testing. Rust’s model catches them for all code paths at compile time.
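To see what the compiler enforces, here is a minimal sketch using std threads (the same Send and Sync rules govern Tokio tasks). Removing the Mutex and mutating through a bare Arc, or sharing a &mut across threads, is rejected at compile time rather than racing at runtime:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Shared mutable state must go through a synchronization primitive;
// the type system makes the unsynchronized version a compile error.
fn parallel_count(threads: usize, per_thread: u64) -> u64 {
    let count = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let count = Arc::clone(&count);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *count.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *count.lock().unwrap();
    total
}
```

The function names and counts here are illustrative; the guarantee is the point: this always returns threads × per_thread, and the racy variant never compiles.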
Crate ecosystem versus Go modules
Both ecosystems have strong libraries for CLI development. The comparison for our specific needs:
| Capability | Rust crate | Go equivalent |
|---|---|---|
| Arg parsing | clap (derive) | cobra |
| Async HTTP | reqwest + tokio | net/http (stdlib) |
| SQLite | rusqlite | mattn/go-sqlite3 |
| Serialization | serde | encoding/json (stdlib) |
| Compression | zstd (binding) | klauspost/compress |
| Ed25519 | ed25519-dalek | crypto/ed25519 (stdlib) |
| Progress bars | indicatif | various |
Go has the advantage of a larger standard library — net/http, encoding/json, and crypto/ed25519 are built in. Rust requires external crates for these capabilities.
However, the Rust crates are generally faster. serde deserializes JSON 2-5x faster than encoding/json. reqwest with hyper handles HTTP/2 with zero-copy streaming. rusqlite with the bundled feature statically links SQLite, avoiding the CGo overhead that go-sqlite3 pays for every call crossing the Go/C boundary.
That CGo overhead was a real concern. Every SQLite query in Go crosses the CGo bridge, which costs approximately 50-100ns per call. stout runs thousands of SQLite queries during dependency resolution. In our benchmarks, the rusqlite path completed a full dependency resolve for ffmpeg in 2.3ms. The Go equivalent with go-sqlite3 took 8.7ms — almost 4x slower, primarily from CGo overhead.
The tradeoff: development velocity
Rust’s compile times are slower than Go’s. A clean build of stout takes approximately 45 seconds; an incremental build takes 3-8 seconds. Go compiles the equivalent in under 5 seconds clean, under 1 second incremental.
Rust’s learning curve is steeper. Lifetimes, the borrow checker, and async trait boundaries take weeks to internalize. Go can be productive in days.
These are real costs. We accepted them because the resulting binary is faster, uses less memory, handles errors more rigorously, and prevents entire categories of bugs at compile time. For a package manager that runs on hundreds of thousands of developer machines, the one-time development cost is small compared to the per-execution performance gain.
Conclusion
If we were building a web service, a Kubernetes operator, or a deployment tool, Go would be a strong choice. Its simplicity, fast compilation, and excellent standard library are genuine strengths.
For a package manager — where startup time, memory control, data integrity, and raw throughput determine the user experience — Rust’s ownership model, type system, and zero-cost abstractions produce a measurably better result. stout’s 5ms startup, deterministic memory usage, and compiler-enforced correctness are direct consequences of that choice.
Need Rust performance engineering or AI agent expertise?
Neul Labs — the team behind stout — consults on Rust development, performance optimization, CLI tool design, and AI agent infrastructure. We build fast, reliable systems that ship.