
A high-performance local compiler cache daemon


zccache

Platforms: Linux, macOS, Windows

Tools: C/C++ (clang, clang++, clang-tidy, IWYU) · Rust (rustc, clippy, rustfmt) · Emscripten (emcc, em++, wasm-ld)

A blazing fast compiler cache for C/C++ and Rust


Inspired by sccache, but optimized for local-first use with aggressive file metadata caching and filesystem watching.

Performance

50 files per benchmark, median of 5 trials. Run it yourself: `./perf`

Cache Hit (warm cache)

| Benchmark | Bare compiler | sccache | zccache | vs sccache | vs bare |
|---|---|---|---|---|---|
| C++ single-file | 11.705s | 1.576s | 0.050s | 32x | 236x |
| C++ multi-file | 11.553s | 11.530s | 0.017s | 695x | 696x |
| C++ response-file (single) | 12.540s | 1.558s | 0.047s | 33x | 267x |
| C++ response-file (multi) | 12.049s | 12.434s | 0.019s | 669x | 648x |
| Rust build | 6.592s | 8.604s | 0.045s | 193x | 148x |
| Rust check | 3.716s | 5.922s | 0.049s | 121x | 76x |

Cache Miss (cold compile)

| Benchmark | Bare compiler | sccache | zccache | vs sccache | vs bare |
|---|---|---|---|---|---|
| C++ single-file | 12.641s | 20.632s | 13.430s | 1.5x | 0.9x |
| C++ multi-file | 11.358s | 11.759s | 12.867s | 0.9x | 0.9x |
| C++ response-file (single) | 12.063s | 20.607s | 14.087s | 1.5x | 0.9x |
| C++ response-file (multi) | 13.030s | 25.303s | 13.975s | 1.8x | 0.9x |
| Rust build | 7.119s | 10.023s | 8.507s | 1.2x | 0.8x |
| Rust check | 4.289s | 7.056s | 5.060s | 1.4x | 0.8x |
Benchmark details
  • Single-file = 50 sequential clang++ -c unit.cpp invocations
  • Multi-file = one clang++ -c *.cpp invocation (sccache cannot cache these — its "warm" time is a full recompile)
  • Response-file = args via nested .rsp files: 200 -D defines + 50 -I paths + 30 warning flags (~283 expanded args)
  • Rust build = --emit=dep-info,metadata,link (cargo build)
  • Rust check = --emit=dep-info,metadata (cargo check)
  • Cold = first compile (empty cache). Warm = median of 5 subsequent runs.
  • sccache gets cache hits but each hit still costs ~170ms subprocess overhead. zccache serves hits in ~1ms via in-process IPC.
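The per-hit figures above can be sanity-checked directly from the warm-cache table. A quick back-of-envelope in Python, assuming the warm Rust-build number covers 50 cached invocations:

```python
# Back-of-envelope check of per-hit overhead, using the warm-cache
# "Rust build" row from the benchmark table (50 files per benchmark).
sccache_warm_s = 8.604   # total wall time for 50 cached invocations
zccache_warm_s = 0.045

sccache_per_hit_ms = sccache_warm_s / 50 * 1000
zccache_per_hit_ms = zccache_warm_s / 50 * 1000

print(f"sccache: ~{sccache_per_hit_ms:.0f} ms per cache hit")  # ~172 ms
print(f"zccache: ~{zccache_per_hit_ms:.1f} ms per cache hit")  # ~0.9 ms
```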

Why is zccache so much faster on warm hits?

The difference comes from architecture, not better caching:

| | sccache | zccache |
|---|---|---|
| IPC model | Subprocess per invocation (fork + exec + connect) | Persistent daemon, single IPC message per compile |
| Cache lookup | Client hashes inputs, sends to server, server checks disk | Daemon has inputs in memory (file watcher + metadata cache) |
| On hit | Server reads artifact from disk, sends back via IPC | Daemon hardlinks cached file to output path (1 syscall) |
| Multi-file | Compiles every file (no multi-file cache support) | Parallel per-file cache lookups; only misses go to the compiler |
| Per-hit cost | ~170ms (process spawn + hash + disk I/O + IPC) | ~1ms (in-memory lookup + hardlink) |
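To make the "cache lookup" difference concrete, here is a minimal sketch of a metadata-keyed lookup. This is only an illustration, not zccache's code: the daemon is written in Rust and hashes with blake3, so stdlib `hashlib.blake2b` stands in, and `content_hash` / `metadata_cache` are hypothetical names.

```python
# Illustrative sketch: answer "what is this input's hash?" from an
# in-memory map keyed by (path, size, mtime), so unchanged files are
# never re-read or re-hashed on a cache hit.
import hashlib
import os

metadata_cache = {}  # (path, size, mtime_ns) -> content hash

def content_hash(path: str) -> str:
    st = os.stat(path)
    key = (path, st.st_size, st.st_mtime_ns)
    if key not in metadata_cache:      # hash only when metadata changed
        with open(path, "rb") as f:
            metadata_cache[key] = hashlib.blake2b(f.read()).hexdigest()
    return metadata_cache[key]
```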

Architecture enhancements that make the difference:

  • Filesystem watcher — a background notify watcher tracks file changes in real time, so the daemon already knows whether inputs are dirty before you even invoke a compile. No redundant stat/hash work on hit.
  • In-memory metadata cache — file sizes, mtimes, and content hashes live in a sharded concurrent DashMap. Cache key computation is a memory lookup, not disk I/O.
  • Single-roundtrip IPC — each compile is one length-prefixed bincode message over a Unix socket (or named pipe on Windows). No subprocess spawning, no repeated handshakes.
  • Hardlink delivery — cache hits are served by hardlinking the cached artifact to the output path — a single syscall instead of reading + writing the file contents.
  • Multi-file fast path — when a build system passes N source files in one invocation, zccache checks all N against the cache in parallel, serves hits immediately, and batches only the misses into a single compiler process.
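Hardlink delivery fits in a few lines. This is an illustrative Python sketch rather than zccache's code; `deliver` is a hypothetical helper, and the copy fallback covers the case where the cache and the output path live on different filesystems (hardlinks cannot cross devices):

```python
# Serve a cache hit by hardlinking the stored artifact to the requested
# output path instead of reading and rewriting its contents.
import os
import shutil

def deliver(cached_artifact: str, output_path: str) -> None:
    if os.path.exists(output_path):
        os.unlink(output_path)        # link() fails if the target exists
    try:
        os.link(cached_artifact, output_path)  # single syscall on a hit
    except OSError:
        # Different filesystem (or FS without hardlinks): fall back to a copy.
        shutil.copy2(cached_artifact, output_path)
```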

Broader tool coverage — zccache supports modes that other compiler caches don't:

| Mode | Description |
|---|---|
| Multi-file compilation | `clang++ -c a.cpp b.cpp c.cpp`: per-file caching with parallel lookups |
| Response files | Nested `.rsp` files with hundreds of flags, fully expanded and cached |
| clang-tidy | Static analysis results cached and replayed |
| include-what-you-use | IWYU output cached per translation unit |
| Emscripten (emcc/em++) | WebAssembly compilation cached end-to-end |
| wasm-ld | WebAssembly linking cached |
| rustfmt | Formatting results cached |
| clippy | Lint results cached |
| Rust check & build | `cargo check` and `cargo build` with extern crate content hashing |

Install

```bash
pip install zccache
```

This installs native Rust binaries (zccache and zccache-daemon) directly onto your PATH — no Python runtime dependency. Pre-built wheels are available for:

| Platform | Architectures |
|---|---|
| Linux | x86_64, aarch64 |
| macOS | x86_64, Apple Silicon |
| Windows | x86_64 |

Verify the install:

```bash
zccache --version
```

Use it as a drop-in replacement for sccache — just substitute zccache:

Rust / Cargo integration

```bash
# cargo build (cached)
RUSTC_WRAPPER=zccache cargo build

# cargo check (cached)
RUSTC_WRAPPER=zccache cargo check
```

Add to .cargo/config.toml for automatic use:

```toml
[build]
rustc-wrapper = "zccache"
```

Supports --emit=metadata (cargo check), --emit=dep-info,metadata,link (cargo build), extern crate content hashing (dependency changes cause cache misses), and all cacheable crate types (lib, rlib, staticlib). Proc-macro and binary crates are passed through without caching (same as sccache).

C/C++ build system integration (ninja, meson, cmake, make)

zccache is a drop-in compiler wrapper. Point your build system's compiler at zccache <real-compiler> and it handles the rest:

```ini
# meson native file
[binaries]
c = ['zccache', '/usr/bin/clang']
cpp = ['zccache', '/usr/bin/clang++']
```

```cmake
# CMake
set(CMAKE_C_COMPILER_LAUNCHER zccache)
set(CMAKE_CXX_COMPILER_LAUNCHER zccache)
```

The first build (cold cache) runs at near-bare speed. Subsequent rebuilds (ninja -t clean && ninja, or touching source files) serve cached artifacts via hardlinks in under a second.

Single-roundtrip IPC: In drop-in mode, zccache sends a single CompileEphemeral message that combines session creation, compilation, and session teardown — eliminating 2 of 3 IPC roundtrips per invocation.
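The framing idea is easy to see with a stand-in sketch. zccache's actual wire format is length-prefixed bincode over a Unix socket (or named pipe); this hypothetical Python version substitutes JSON so the single-message shape is visible:

```python
# Length-prefixed framing: one self-contained message per compile, so a
# request costs exactly one send and one receive on the socket.
import json
import struct

def encode(msg: dict) -> bytes:
    payload = json.dumps(msg).encode()
    # 4-byte little-endian length prefix, then the payload.
    return struct.pack("<I", len(payload)) + payload

def decode(buf: bytes) -> dict:
    (length,) = struct.unpack_from("<I", buf)
    return json.loads(buf[4:4 + length])

frame = encode({"type": "CompileEphemeral",
                "compiler": "/usr/bin/clang++",
                "args": ["-c", "foo.cpp", "-o", "foo.o"]})
```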

Session stats: Track hit rates per-build with --stats:

```bash
eval $(zccache session-start --stats --log build.log)
export ZCCACHE_SESSION_ID=...
# ... build runs ...
zccache session-stats $ZCCACHE_SESSION_ID   # query mid-build
zccache session-end $ZCCACHE_SESSION_ID     # final stats
```

Persistent cache: Artifacts are stored in ~/.zccache/artifacts/ and survive daemon restarts. No need to re-warm the cache after a reboot.

Compile journal (build replay): Every compile and link command is recorded to ~/.zccache/logs/compile_journal.jsonl as a JSONL file with enough detail to replay the entire build:

```json
{"ts":"2026-03-17T10:30:00.123Z","outcome":"hit","compiler":"/usr/bin/clang++","args":["-c","foo.cpp","-o","foo.o"],"cwd":"/project/build","env":[["CC","clang"]],"exit_code":0,"session_id":"uuid","latency_ns":1234567}
```

Fields: ts (ISO 8601 UTC), outcome (hit/miss/error/link_hit/link_miss), compiler (full path), args (full argument list), cwd, env (omitted when inheriting daemon env), exit_code, session_id (null for ephemeral), latency_ns (wall-clock nanoseconds). One JSON object per line — pipe through jq to filter, or replay builds by extracting compiler + args + cwd.
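For analyses that outgrow jq, the journal is just as easy to consume from a script. A small sketch assuming the schema above (`hit_rate` is a hypothetical helper, not a zccache command):

```python
# Compute a cache hit rate from a compile journal: one JSON object per
# line, with "outcome" in {hit, miss, error, link_hit, link_miss}.
import json

def hit_rate(journal_path: str) -> float:
    outcomes = []
    with open(journal_path) as f:
        for line in f:
            if line.strip():
                outcomes.append(json.loads(line)["outcome"])
    hits = sum(1 for o in outcomes if o in ("hit", "link_hit"))
    return hits / len(outcomes) if outcomes else 0.0
```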

Per-session compile journal: Pass --journal <path> to session-start to write a dedicated JSONL log containing only the commands from that session. The path must end in .jsonl:

```bash
result=$(zccache session-start --journal build.jsonl)
session_id=$(echo "$result" | jq -r .session_id)
export ZCCACHE_SESSION_ID=$session_id

# ... build runs ...

# Inspect this session's commands only (no noise from other sessions)
jq . build.jsonl

zccache session-end $session_id
```

The session journal uses the same JSONL schema as the global journal. Entries are written to both the global and session journals simultaneously. The session file handle is released when session-end is called.

Multi-file compilation (fast path)

When a build system passes multiple source files to a single compiler invocation (e.g. gcc -c a.cpp b.cpp c.cpp -o ...), zccache treats this as a fast path:

  1. Each source file is checked against the cache in parallel.
  2. Cache hits are served immediately — their .o files are written from the cache.
  3. Remaining cache misses are batched into a single compiler process, preserving the compiler's own process-reuse and memory-sharing benefits.
  4. The outputs of the batched compilation are cached individually for future hits.

This hybrid approach means the first build populates the cache per-file, and subsequent builds serve as many files as possible from cache while still letting the compiler handle misses efficiently in bulk.
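The four steps above can be sketched as follows. This illustrates the strategy, not zccache's internals; `cache_lookup`, `serve_hit`, and `run_compiler` are hypothetical callbacks:

```python
# Multi-file fast path: parallel per-file cache lookups, immediate
# delivery of hits, and one batched compiler process for the misses.
from concurrent.futures import ThreadPoolExecutor

def compile_batch(sources, cache_lookup, serve_hit, run_compiler):
    # 1. Check every source against the cache in parallel.
    with ThreadPoolExecutor() as pool:
        cached = dict(zip(sources, pool.map(cache_lookup, sources)))
    hits = [s for s in sources if cached[s] is not None]
    misses = [s for s in sources if cached[s] is None]
    # 2. Serve hits immediately (e.g. hardlink the cached artifact).
    for src in hits:
        serve_hit(src, cached[src])
    # 3. Batch all misses into a single compiler invocation.
    if misses:
        run_compiler(misses)
    return hits, misses
```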

Recommendation: Configure your build system to pass multiple source files per compiler invocation whenever possible. This gives zccache the best opportunity to parallelize cache lookups and minimize compiler launches.

Concurrency

The daemon uses sharded concurrent hash maps (DashMap) for artifact and metadata lookups, so parallel compilation requests from multiple build workers never serialize on a global lock.
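DashMap achieves this by striping keys across many independent shards, each with its own lock, so concurrent writers rarely touch the same lock. A minimal Python illustration of the sharding idea (DashMap itself is a Rust crate; this is only a conceptual analogue):

```python
# Lock striping: hash each key to one of N shards, each guarded by its
# own lock, so most concurrent operations proceed without contention.
import threading

class ShardedMap:
    def __init__(self, shards: int = 16):
        self._shards = [({}, threading.Lock()) for _ in range(shards)]

    def _shard(self, key):
        return self._shards[hash(key) % len(self._shards)]

    def put(self, key, value):
        data, lock = self._shard(key)
        with lock:                     # only this shard is locked
            data[key] = value

    def get(self, key, default=None):
        data, lock = self._shard(key)
        with lock:
            return data.get(key, default)
```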

Status

Early development — architecture and scaffolding phase.

Goals

  • Extremely fast on local machines (daemon keeps caches warm)
  • Portable across Linux, macOS, and Windows
  • Correct under heavy parallel compilation (no stale cache hits)
  • Simple deployment (single binary)

Tool Compatibility

zccache works as a drop-in wrapper for clang and clang++ (plus clang-tidy and include-what-you-use), rustc (plus clippy and rustfmt), and the Emscripten toolchain (emcc, em++, wasm-ld). See the mode table above for what each integration caches.

Architecture

See docs/ARCHITECTURE.md for the full system design.

Key components

| Crate | Purpose |
|---|---|
| zccache-cli | Command-line interface (`zccache` binary) |
| zccache-daemon | Daemon process (IPC server, orchestration) |
| zccache-core | Shared types, errors, config, path utilities |
| zccache-protocol | IPC message types and serialization |
| zccache-ipc | Transport layer (Unix sockets / named pipes) |
| zccache-hash | blake3 hashing and cache key computation |
| zccache-fscache | In-memory file metadata cache |
| zccache-artifact | Disk-backed artifact store with redb index |
| zccache-watcher | File watcher abstraction (notify backend) |
| zccache-compiler | Compiler detection and argument parsing |
| zccache-test-support | Test utilities and fixtures |

Building

```bash
cargo build --workspace
```

Testing

```bash
cargo test --workspace
```

Documentation

License

Licensed under either of Apache License, Version 2.0 or MIT license at your option.
