
django-vcache

A fast, async-native Django cache backend for Valkey (and Redis). Opinionated and secure by default.

It powers the GlitchTip open-source error tracking platform.

Why django-vcache?

  • Fast — Uses msgpack serialization (via ormsgpack) instead of pickle. 2-3x faster than Django's built-in RedisCache for typical workloads, up to 5x faster under concurrent async load.
  • Async-native — Real async implementations for aget, aset, etc. No sync_to_async thread-pool wrappers. This is the only Django cache backend with true native async support.
  • Secure by default — No pickle. Msgpack cannot execute arbitrary code on deserialization. No special configuration needed.
  • Efficient — At most two connections (one sync, one async) per backend. Lazy-loaded. Automatic zstd compression for large values. Uses libvalkey C parser out of the box.
  • Raw Access — Borrow the underlying valkey-py client for advanced operations (locking, pipelines, pub/sub) without spinning up new connections. Use with django-vtask.
  • Python 3.14 ready — Uses stdlib compression.zstd on 3.14+, no third-party compression dependency needed.
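The "no pickle" point is worth illustrating. The stdlib-only demo below shows why pickle is risky as a cache format: pickle.loads invokes whatever callable an object's __reduce__ returns, so anyone who can write bytes into the cache can run arbitrary code in the process that reads them. Msgpack-family formats encode only plain data types, so deserialization cannot trigger calls like this.

```python
import pickle

class Hijacked:
    def __reduce__(self):
        # On loads(), pickle calls list((1, 2, 3)) -- any callable could go here,
        # e.g. os.system, which is the classic cache-poisoning attack
        return (list, ((1, 2, 3),))

payload = pickle.dumps(Hijacked())
restored = pickle.loads(payload)  # not a Hijacked instance: the callable ran instead
```

Here the attacker-chosen callable is harmless (`list`), but the point stands: the deserializer executed code picked by whoever produced the bytes.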

Benchmarks

Measured on Python 3.14 against Django's built-in RedisCache, both hitting the same local Valkey instance. No special tuning on either side — just drop-in configuration.

Sequential operations (5,000 iterations):

| Payload | Mode | Django RedisCache | django-vcache | Speedup |
|---|---|---|---|---|
| Small (dict, 20 items) | sync | 1,438 ops/s | 4,041 ops/s | 2.8x |
| Small (dict, 20 items) | async | 784 ops/s | 2,147 ops/s | 2.7x |
| Medium (user session) | sync | 1,508 ops/s | 3,837 ops/s | 2.5x |
| Medium (user session) | async | 767 ops/s | 2,118 ops/s | 2.8x |
| Large (2KB+, compressed) | sync | 1,789 ops/s | 2,784 ops/s | 1.6x |
| Large (2KB+, compressed) | async | 863 ops/s | 2,216 ops/s | 2.6x |

Concurrent async (50 tasks, 1,000 set+get pairs):

| Backend | ops/sec | Speedup |
|---|---|---|
| Django RedisCache | 1,979 | |
| django-vcache | 11,477 | 5.8x |

Django's RedisCache wraps every async call in sync_to_async, which pushes work to a thread pool. django-vcache uses native async I/O — the difference grows with concurrency.
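The difference is easy to see in miniature. This sketch uses plain asyncio to stand in for the two backends: a blocking call dispatched via a thread pool (what sync_to_async does under the hood) versus a native coroutine. With real network I/O, the thread-pool version is additionally capped by the pool's worker count, which is why the gap widens with concurrency.

```python
import asyncio
import time

def blocking_get(key):
    """Stand-in for a synchronous cache call: blocks its thread."""
    time.sleep(0.001)
    return key

async def native_get(key):
    """Stand-in for a native async cache call: yields the event loop."""
    await asyncio.sleep(0.001)
    return key

async def main():
    # sync_to_async-style: each call occupies a thread-pool worker,
    # so concurrency is limited by the pool size
    wrapped = await asyncio.gather(*(asyncio.to_thread(blocking_get, i) for i in range(100)))
    # native async: all 100 waits overlap on a single event loop, no threads needed
    native = await asyncio.gather(*(native_get(i) for i in range(100)))
    return wrapped, native

wrapped, native = asyncio.run(main())
```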

Status: Stable and used in production.

Installation

pip install django-vcache

Usage

Update your settings.py to configure the cache backend:

CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": "valkey://your-valkey-host:6379/1",
        "OPTIONS": {
            "max_connections": 200,         # cap the size of the connection pool
            "connection_pool_timeout": 5,   # seconds to wait for a free connection before erroring
            "socket_connect_timeout": 5,    # seconds to wait when establishing a connection
            "retry_on_timeout": True,       # retry commands that hit a socket timeout
        }
    },
}

The max_connections and connection_pool_timeout options together enable sensible blocking behavior: once max_connections is reached, further requests for a connection wait up to connection_pool_timeout seconds for one to become available before raising an error. Setting both is recommended to prevent connection exhaustion.
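As a rough mental model (this is not valkey-py's actual pool code), the blocking behavior works like a bounded queue of connections: acquiring pops one, releasing pushes it back, and an empty queue blocks the caller up to the timeout.

```python
import queue

class TinyPool:
    """Minimal sketch of blocking-pool semantics, not a real connection pool."""

    def __init__(self, max_connections, timeout):
        self._q = queue.Queue(maxsize=max_connections)
        self._timeout = timeout
        for i in range(max_connections):
            self._q.put(f"conn-{i}")  # pre-create placeholder "connections"

    def acquire(self):
        try:
            # Block for up to `timeout` seconds waiting for a free connection
            return self._q.get(timeout=self._timeout)
        except queue.Empty:
            raise ConnectionError("pool exhausted: no connection within timeout")

    def release(self, conn):
        self._q.put(conn)

pool = TinyPool(max_connections=2, timeout=0.1)
a = pool.acquire()
b = pool.acquire()
# A third acquire times out because both connections are checked out
try:
    pool.acquire()
    exhausted = False
except ConnectionError:
    exhausted = True
pool.release(a)  # returning a connection unblocks future acquires
```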

You can then use Django's cache framework as usual:

from django.core.cache import cache

cache.set('my_key', 'my_value', 30)
value = cache.get('my_key')

To access the underlying raw valkey-py client instance, you can use the get_raw_client method:

# Get the synchronous client
sync_client = cache.get_raw_client()

# Get the asynchronous client
async_client = cache.get_raw_client(async_client=True)

Async usage

The asynchronous methods require an ASGI server such as granian or uvicorn. Due to limitations in the valkey-py library, calls like aget will not run reliably under WSGI.

Example equivalent of Django runserver: granian --interface asgi --host 0.0.0.0 --port 8000 sample.asgi:application --reload

WSGI Compatibility

The primary ValkeyCache backend is designed for modern ASGI applications and provides native async support. However, for legacy systems running in a synchronous WSGI environment (like Gunicorn or uWSGI with default workers), calling async cache methods can be problematic.

For these specific cases, a WSGI-compatible backend is available. It ensures that async cache methods are safely wrapped, preventing errors related to event loop management in a synchronous context.
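Conceptually, such a wrapper checks whether an event loop is already running and, if not, drives the coroutine to completion itself. The following is a sketch of that pattern, not the actual ValkeyWSGICache implementation:

```python
import asyncio

class AsyncBackend:
    async def aget(self, key):
        await asyncio.sleep(0)  # stand-in for real async network I/O
        return f"value:{key}"

class WSGISafeBackend(AsyncBackend):
    """Sketch: expose a sync get() that safely drives the async method
    when no event loop is running (the WSGI case)."""

    def get(self, key):
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            # No running loop: safe to run the coroutine to completion here
            return asyncio.run(self.aget(key))
        # A loop IS running: blocking on it would deadlock, so refuse
        raise RuntimeError("call aget() from async code instead")

backend = WSGISafeBackend()
value = backend.get("k")
```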

To use it, update your settings.py:

CACHES = {
    "default": {
        "BACKEND": "django_vcache.wsgi.ValkeyWSGICache",
        "LOCATION": "valkey://your-valkey-host:6379/1",
        # ... other options
    },
}

Note: django-vcache is optimized for ASGI. If your project is primarily WSGI-based, you may find that other cache backends like django-redis better suit your needs. The ValkeyWSGICache is provided as a compatibility layer, not a performance-focused feature.

Contributing

Development Environment

This project uses Docker for development. To get started:

  1. Clone the repository.

  2. Build and start the services:

    docker compose up -d --build
    

This will start a Valkey container and an app container with the Django sample project running on http://localhost:8000. The development server uses granian with auto-reload, so changes you make to the code will be reflected automatically.

Using Valkey Sentinel

To run the development environment with Valkey Sentinel enabled, use the override compose file:

docker compose -f compose.yml -f compose.sentinel.yml up -d --build

You will also need to configure your sample/settings.py to use the Sentinel URL. The recommended way is to set the VALKEY_URL environment variable before starting the services:

export VALKEY_URL="sentinel://localhost:26379/mymaster/1"
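For illustration, settings can read that variable with a fallback like this (the actual sample project's settings may differ):

```python
import os

# Illustrative settings.py fragment: honor VALKEY_URL when set,
# otherwise fall back to a plain local Valkey URL
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": os.environ.get("VALKEY_URL", "valkey://localhost:6379/1"),
    },
}
```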

The application will then be available at http://localhost:8000.

Using Valkey Cluster

To use django-vcache with a Valkey Cluster, set the CLUSTER_MODE option to True in your cache configuration. The LOCATION should point to one of the cluster's nodes; valkey-py will automatically discover the rest of the cluster nodes.

CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": "valkey://your-cluster-node-1:6379/1",
        "OPTIONS": {
            "CLUSTER_MODE": True,
            "socket_connect_timeout": 5,
            "retry_on_timeout": True,
        }
    },
}

Note that distributed locking (via cache.lock() and cache.alock()) is not supported when CLUSTER_MODE is enabled, as this functionality is not provided by the underlying valkey-py library in cluster environments. Attempting to use these methods will raise a NotImplementedError.
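The restriction follows from how a cluster shards data: every key hashes to one of 16,384 slots, and multi-key operations only work when all keys land in the same slot. The standard cluster key-slot computation (CRC16-XMODEM of the key, or of its {hash tag} portion if present) can be sketched as:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum used for cluster key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16,384 cluster hash slots."""
    # Only the {tag} portion is hashed when a non-empty hash tag exists,
    # so keys sharing a tag are guaranteed to land on the same node
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

Keys written as `{user:1}:a` and `{user:1}:b` therefore share a slot, which is how applications co-locate related keys in a cluster.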

To run the development environment with Valkey Cluster enabled, use the override compose file and environment variables:

VALKEY_URL='valkey://valkey-1:6379/1' VALKEY_CLUSTER_MODE='true' \
    docker compose -f compose.yml -f compose.cluster.yml up -d --build

The application will then be available at http://localhost:8000.

Running Tests

To run the test suite, execute the following command:

docker compose run --rm app bash -c "python sample/manage.py test"

Credits

Inspired by the excellent work of django-valkey and django-redis, but re-architected for strict resource efficiency and modern async/sync hybrid stacks.
