django-vcache
A specialized, lightweight, and very fast Django cache backend for Valkey (and Redis). This hybrid async + sync backend is designed to be resource-efficient.
It powers the GlitchTip open-source error tracking platform.
Why django-vcache?
- Zero "Sync-to-Async" Overhead: Native implementations for both Sync (get) and Async (aget) methods. No thread-switching wrappers.
- Connection Efficiency: Maintains at most two client instances (one Sync, one Async) per configured backend.
- Lazy Loading: Connections are established only when a command is issued, keeping startup time instant and memory usage low.
- Raw Access: Easily borrow the underlying valkey-py client for advanced operations (locking, pipelines, custom data structures) without spinning up new connections; django-vtask, for example, can reuse the existing client.
- Opinionated and Fast: We use libvalkey and focus on simplicity and speed rather than covering every possible use case. Stop thinking about which parser class to use and write your fast application.
Status: Feature complete. Alpha quality. Do not use in production. Once 1.0 is released, we'll use semantic versioning.
Installation
pip install django-vcache
Usage
Update your settings.py to configure the cache backend:
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": "valkey://your-valkey-host:6379/1",
        "OPTIONS": {
            "max_connections": 200,  # Limit the number of connections in the pool
            "connection_pool_timeout": 5,  # Seconds to wait for a free connection before raising an error
            "socket_connect_timeout": 5,  # Connection timeout in seconds
            "retry_on_timeout": True,  # Retry commands that time out
        },
    },
}
The max_connections and connection_pool_timeout options enable sensible blocking behavior: once max_connections is reached, subsequent requests wait up to connection_pool_timeout seconds for a connection to become available before raising an error. Setting both is recommended to prevent connection exhaustion.
You can then use Django's cache framework as usual:
from django.core.cache import cache
cache.set('my_key', 'my_value', 30)
value = cache.get('my_key')
To access the underlying raw valkey-py client instance, you can use the get_raw_client method:
# Get the synchronous client
sync_client = cache.get_raw_client()
# Get the asynchronous client
async_client = cache.get_raw_client(async_client=True)
Async usage
You must use an ASGI server, such as granian or uvicorn, to run the asynchronous client. Due to limitations in the valkey-py library, aget will not run reliably on a WSGI server.
Example equivalent of Django runserver:
granian --interface asgi --host 0.0.0.0 --port 8000 sample.asgi:application --reload
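If you prefer uvicorn, an equivalent invocation (assuming the same sample.asgi:application entry point) would be:

```shell
uvicorn sample.asgi:application --host 0.0.0.0 --port 8000 --reload
```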
WSGI Compatibility
The primary ValkeyCache backend is designed for modern ASGI applications and provides native async support. However, for legacy systems running in a synchronous WSGI environment (like Gunicorn or uWSGI with default workers), calling async cache methods can be problematic.
For these specific cases, a WSGI-compatible backend is available. It ensures that async cache methods are safely wrapped, preventing errors related to event loop management in a synchronous context.
To use it, update your settings.py:
CACHES = {
    "default": {
        "BACKEND": "django_vcache.wsgi.ValkeyWSGICache",
        "LOCATION": "valkey://your-valkey-host:6379/1",
        # ... other options
    },
}
Note: django-vcache is optimized for ASGI. If your project is primarily WSGI-based, you may find that other cache backends like django-redis better suit your needs. The ValkeyWSGICache is provided as a compatibility layer, not a performance-focused feature.
Contributing
Development Environment
This project uses Docker for development. To get started:
1. Clone the repository.
2. Build and start the services:
docker compose up -d --build
This will start a Valkey container and an app container with the Django sample project running on http://localhost:8000. The development server uses granian with auto-reload, so changes you make to the code will be reflected automatically.
Using Valkey Sentinel
To run the development environment with Valkey Sentinel enabled, use the override compose file:
docker compose -f compose.yml -f compose.sentinel.yml up -d --build
You will also need to configure your sample/settings.py to use the Sentinel URL. The recommended way is to set the VALKEY_URL environment variable before starting the services:
export VALKEY_URL="sentinel://localhost:26379/mymaster/1"
The application will then be available at http://localhost:8000.
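For reference, a minimal settings sketch that reads VALKEY_URL from the environment (illustrative only; the sample project's actual settings may differ):

```python
import os

CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        # Falls back to a local instance when VALKEY_URL is unset
        "LOCATION": os.environ.get("VALKEY_URL", "valkey://localhost:6379/1"),
    },
}
```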
Using Valkey Cluster
To use django-vcache with a Valkey Cluster, set the CLUSTER_MODE option to True in your cache configuration. The LOCATION should point to one of the cluster's nodes; valkey-py will automatically discover the rest of the cluster nodes.
CACHES = {
    "default": {
        "BACKEND": "django_vcache.backend.ValkeyCache",
        "LOCATION": "valkey://your-cluster-node-1:6379/1",
        "OPTIONS": {
            "CLUSTER_MODE": True,
            "socket_connect_timeout": 5,
            "retry_on_timeout": True,
        },
    },
}
Note that distributed locking (via cache.lock() and cache.alock()) is not supported when CLUSTER_MODE is enabled, as this functionality is not provided by the underlying valkey-py library in cluster environments. Attempting to use these methods will raise a NotImplementedError.
To run the development environment with Valkey Cluster enabled, use the override compose file and environment variables:
VALKEY_URL='valkey://valkey-1:6379/1' VALKEY_CLUSTER_MODE='true' \
  docker compose -f compose.yml -f compose.cluster.yml up -d --build
The application will then be available at http://localhost:8000.
Running Tests
To run the test suite, execute the following command:
docker compose run --rm app bash -c "python sample/manage.py test"
Credits
Inspired by the excellent work of django-valkey and django-redis, but re-architected for strict resource efficiency and modern async/sync hybrid stacks.