
A high-performance local compiler cache daemon


zccache


A blazing fast compiler cache for C/C++ and Rust


Inspired by sccache, but optimized for local-first use with aggressive file metadata caching and filesystem watching.

Performance

Rust Benchmark: 50 .rs files, 5 warm trials

| Scenario | Bare rustc | sccache | zccache | vs sccache | vs bare rustc |
|---|---|---|---|---|---|
| Build, Cold | 7.119s | 10.023s | 8.507s | 1.2x faster | 1.2x slower |
| Build, Warm | 6.592s | 8.604s | 0.045s | 193x faster | 148x faster |
| Check, Cold | 4.289s | 7.056s | 5.060s | 1.4x faster | 1.2x slower |
| Check, Warm | 3.716s | 5.922s | 0.049s | 121x faster | 76x faster |

Build = --emit=dep-info,metadata,link (cargo build). Check = --emit=dep-info,metadata (cargo check). Cold = first compile (empty cache). Warm = median of 5 subsequent runs. Each file is an independent rustc --crate-type lib invocation with --out-dir (same flags cargo passes).

sccache gets cache hits but each hit still costs ~170ms subprocess overhead. zccache serves hits in ~1ms via in-process IPC — no subprocess, no re-hashing.

Why is zccache 121-193x faster than sccache on warm hits?

The difference comes from architecture, not better caching:

| | sccache | zccache |
|---|---|---|
| IPC model | Subprocess per invocation (fork + exec + connect) | Persistent daemon, single IPC message per compile |
| Cache lookup | Client hashes inputs, sends to server, server checks disk | Daemon has inputs in memory (file watcher + metadata cache) |
| On hit | Server reads artifact from disk, sends back via IPC | Daemon hardlinks cached file to output path (1 syscall) |
| Per-hit cost | ~170ms (process spawn + hash + disk I/O + IPC) | ~1ms (in-memory lookup + hardlink) |

sccache was designed for distributed caching (S3, GCS, Redis) where network latency dwarfs local overhead. zccache is designed for local-first use where every millisecond of wrapper overhead matters.

C++ Benchmark: 50 C++ files, 5 warm trials

| Scenario | Bare Clang | sccache | zccache | vs sccache | vs bare clang |
|---|---|---|---|---|---|
| Single-file, Cold | 12.641s | 20.632s | 13.430s | 1.5x faster | 1.1x slower |
| Single-file, Warm | 11.705s | 1.576s | 0.050s | 32x faster | 236x faster |
| Multi-file, Cold | 11.358s | 11.759s | 12.867s | 1.1x slower | 1.1x slower |
| Multi-file, Warm | 11.553s | 11.530s | 0.017s | 695x faster | 696x faster |

Cold = first compile (empty cache). Warm = median of 5 subsequent runs. Single-file = 50 sequential clang++ -c unit.cpp invocations. Multi-file = one clang++ -c *.cpp invocation. sccache cannot cache multi-file compilations — its "warm" multi-file time is a full recompile.

Response-file benchmark: 50 C++ files, ~283 expanded args, 5 warm trials

| Scenario | Bare Clang | sccache | zccache | vs sccache | vs bare clang |
|---|---|---|---|---|---|
| Single-file RSP, Cold | 12.063s | 20.607s | 14.087s | 1.5x faster | 1.2x slower |
| Single-file RSP, Warm | 12.540s | 1.558s | 0.047s | 33x faster | 267x faster |
| Multi-file RSP, Cold | 13.030s | 25.303s | 13.975s | 1.8x faster | 1.1x slower |
| Multi-file RSP, Warm | 12.049s | 12.434s | 0.019s | 669x faster | 648x faster |

All args passed via nested response files: flags.rsp -> @warnings.rsp + @defines.rsp. 200 -D defines + 50 -I paths + 30 warning flags = ~283 total expanded args per compile.

Run the benchmark yourself: ./perf

Install

pip install zccache

This installs native Rust binaries (zccache and zccache-daemon) directly onto your PATH — no Python runtime dependency. Pre-built wheels are available for:

| Platform | Architecture |
|---|---|
| Linux | x86_64, aarch64 |
| macOS | x86_64, Apple Silicon |
| Windows | x86_64 |

Verify the install:

zccache --version

Use it as a drop-in replacement for sccache — just substitute zccache:

Rust / Cargo integration

# cargo build (cached)
RUSTC_WRAPPER=zccache cargo build

# cargo check (cached)
RUSTC_WRAPPER=zccache cargo check

Add to .cargo/config.toml for automatic use:

[build]
rustc-wrapper = "zccache"

Supports --emit=metadata (cargo check), --emit=dep-info,metadata,link (cargo build), extern crate content hashing (dependency changes cause cache misses), and all cacheable crate types (lib, rlib, staticlib). Proc-macro and binary crates are passed through without caching (same as sccache).

C/C++ build system integration (ninja, meson, cmake, make)

zccache is a drop-in compiler wrapper. Point your build system's compiler at zccache <real-compiler> and it handles the rest:

# meson native file
[binaries]
c = ['zccache', '/usr/bin/clang']
cpp = ['zccache', '/usr/bin/clang++']
# CMake
set(CMAKE_C_COMPILER_LAUNCHER zccache)
set(CMAKE_CXX_COMPILER_LAUNCHER zccache)
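
For plain make, which has no launcher variable, an untested sketch is to fold zccache into the compiler variables themselves (assumes clang; adjust to your toolchain):

```make
# Makefile
CC  := zccache clang
CXX := zccache clang++
```

The same override also works per-invocation without editing the Makefile: `make CC="zccache clang" CXX="zccache clang++"`.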

The first build (cold cache) runs at near-bare speed. Subsequent rebuilds (ninja -t clean && ninja, or touching source files) serve cached artifacts via hardlinks in under a second.

Single-roundtrip IPC: In drop-in mode, zccache sends a single CompileEphemeral message that combines session creation, compilation, and session teardown — eliminating 2 of 3 IPC roundtrips per invocation.

Session stats: Track hit rates per-build with --stats:

eval $(zccache session-start --stats --log build.log)
export ZCCACHE_SESSION_ID=...
# ... build runs ...
zccache session-stats $ZCCACHE_SESSION_ID   # query mid-build
zccache session-end $ZCCACHE_SESSION_ID     # final stats

Persistent cache: Artifacts are stored in ~/.zccache/artifacts/ and survive daemon restarts. No need to re-warm the cache after a reboot.

Compile journal (build replay): Every compile and link command is recorded to ~/.zccache/logs/compile_journal.jsonl as a JSONL file with enough detail to replay the entire build:

{"ts":"2026-03-17T10:30:00.123Z","outcome":"hit","compiler":"/usr/bin/clang++","args":["-c","foo.cpp","-o","foo.o"],"cwd":"/project/build","env":[["CC","clang"]],"exit_code":0,"session_id":"uuid","latency_ns":1234567}

Fields: ts (ISO 8601 UTC), outcome (hit/miss/error/link_hit/link_miss), compiler (full path), args (full argument list), cwd, env (omitted when inheriting daemon env), exit_code, session_id (null for ephemeral), latency_ns (wall-clock nanoseconds). One JSON object per line — pipe through jq to filter, or replay builds by extracting compiler + args + cwd.
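For example, a sketch of filtering and replaying with jq (run here against a hypothetical one-line journal.jsonl so it is self-contained; point it at ~/.zccache/logs/compile_journal.jsonl in practice):

```shell
# A sample journal line matching the schema above (hypothetical values)
echo '{"ts":"2026-03-17T10:30:00.123Z","outcome":"hit","compiler":"/usr/bin/clang++","args":["-c","foo.cpp","-o","foo.o"],"cwd":"/project/build","exit_code":0,"session_id":null,"latency_ns":1234567}' > journal.jsonl

# Tally outcomes (hit/miss/error/...) across the journal
jq -r .outcome journal.jsonl | sort | uniq -c

# Reconstruct each command line for replay: cd into cwd, then compiler + args
jq -r '"cd \(.cwd) && \(.compiler) \(.args | join(" "))"' journal.jsonl
```

The second filter prints one runnable command per journal entry, e.g. `cd /project/build && /usr/bin/clang++ -c foo.cpp -o foo.o`.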

Per-session compile journal: Pass --journal <path> to session-start to write a dedicated JSONL log containing only the commands from that session. The path must end in .jsonl:

result=$(zccache session-start --journal build.jsonl)
session_id=$(echo "$result" | jq -r .session_id)
export ZCCACHE_SESSION_ID=$session_id

# ... build runs ...

# Inspect this session's commands only (no noise from other sessions)
jq . build.jsonl

zccache session-end $session_id

The session journal uses the same JSONL schema as the global journal. Entries are written to both the global and session journals simultaneously. The session file handle is released when session-end is called.

Multi-file compilation (fast path)

When a build system passes multiple source files to a single compiler invocation (e.g. gcc -c a.cpp b.cpp c.cpp), zccache treats this as a fast path:

  1. Each source file is checked against the cache in parallel.
  2. Cache hits are served immediately — their .o files are written from the cache.
  3. Remaining cache misses are batched into a single compiler process, preserving the compiler's own process-reuse and memory-sharing benefits.
  4. The outputs of the batched compilation are cached individually for future hits.

This hybrid approach means the first build populates the cache per-file, and subsequent builds serve as many files as possible from cache while still letting the compiler handle misses efficiently in bulk.

Recommendation: Configure your build system to pass multiple source files per compiler invocation whenever possible. This gives zccache the best opportunity to parallelize cache lookups and minimize compiler launches.
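
With make, for instance, batching can look like this (untested sketch; target and variable names are hypothetical) — one compiler invocation covers every source file, and zccache splits it into parallel per-file cache lookups internally:

```make
# Makefile: batch all translation units into a single invocation
SRCS := $(wildcard src/*.cpp)

objs: $(SRCS)
	zccache clang++ -c $(SRCS)
```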

Concurrency

The daemon uses lock-free concurrent data structures (DashMap) for artifact and metadata lookups, so parallel compilation requests from multiple build workers never serialize on a global lock.

Status

Early development — architecture and scaffolding phase.

Goals

  • Extremely fast on local machines (daemon keeps caches warm)
  • Portable across Linux, macOS, and Windows
  • Correct under heavy parallel compilation (no stale cache hits)
  • Simple deployment (single binary)

Architecture

See docs/ARCHITECTURE.md for the full system design.

Key components

| Crate | Purpose |
|---|---|
| zccache-cli | Command-line interface (zccache binary) |
| zccache-daemon | Daemon process (IPC server, orchestration) |
| zccache-core | Shared types, errors, config, path utilities |
| zccache-protocol | IPC message types and serialization |
| zccache-ipc | Transport layer (Unix sockets / named pipes) |
| zccache-hash | blake3 hashing and cache key computation |
| zccache-fscache | In-memory file metadata cache |
| zccache-artifact | Disk-backed artifact store with redb index |
| zccache-watcher | File watcher abstraction (notify backend) |
| zccache-compiler | Compiler detection and argument parsing |
| zccache-test-support | Test utilities and fixtures |

Building

cargo build --workspace

Testing

cargo test --workspace

License

Licensed under either of Apache License, Version 2.0 or MIT license at your option.
