Synchronize Sanic contexts between instances when using multiple workers
Sanic-Synchro-Ctx
Plugin to provide an App context that is shared across multiple workers
It can use the native Python SyncManager backend, or Redis if you prefer (Redis is much faster).
Installation
$ pip3 install sanic-synchro-ctx
Or in a Python virtualenv (these example command-line instructions are for a Linux/Unix-based OS):
$ python3 -m virtualenv --python=python3 --no-site-packages .venv
$ source ./.venv/bin/activate
$ pip3 install sanic sanic-synchro-ctx
To exit the virtual environment:
$ deactivate
Redis Extension
You can install the relevant Redis libraries for this plugin with the optional redis extra:
$ pip3 install sanic-synchro-ctx[redis]
That is the same as running:
$ pip3 install "sanic-synchro-ctx" "aioredis>=2.0" "hiredis>=1.0"
Compatibility
- Works with Python 3.8 and greater.
- Works with Sanic v21.9.0 and greater.
- If you are installing the redis library separately, use aioredis >= 2.0
Usage
A very simple example: it uses the native Python SyncManager backend and doesn't require a Redis connection.
from sanic import Sanic, Request
from sanic.response import html

from sanic_synchro_ctx import SanicSynchroCtx

app = Sanic("sample")
s = SanicSynchroCtx(app)

@app.after_server_start
def handler(app, loop=None):
    # This will only set this value if it doesn't already exist,
    # so only the first worker will set this value
    app.ctx.synchro.set_default({"counter": 0})

@app.route("/inc")
def increment(request: Request):
    # Atomic increment operation
    counter = request.app.ctx.synchro.increment("counter")
    print("counter: {}".format(counter), flush=True)
    return html("<p>Incremented!</p>")

@app.route("/count")
def count(request: Request):
    # Get from shared context:
    counter = request.app.ctx.synchro.counter
    print("counter: {}".format(counter), flush=True)
    return html(f"<p>count: {counter}</p>")

app.run("127.0.0.1", port=8000, workers=8)
Redis example:
import aioredis
from sanic import Sanic, Request
from sanic.response import html

from sanic_synchro_ctx import SanicSynchroCtx

redis = aioredis.from_url("redis://localhost")
app = Sanic("sample")
s = SanicSynchroCtx(app, backend="redis", redis_client=redis)

@app.after_server_start
async def handler(app, loop=None):
    # This will only set this value if it doesn't already exist,
    # so only the first worker will set this value
    await app.ctx.synchro.set_default({"counter": 0})

@app.route("/inc")
async def increment(request: Request):
    # Atomic increment operation
    counter = await request.app.ctx.synchro.increment("counter")
    print(f"counter: {counter}", flush=True)
    return html("<p>Incremented!</p>")

@app.route("/count")
async def count(request: Request):
    # Get from shared context:
    counter = await request.app.ctx.synchro.counter
    print(f"counter: {counter}", flush=True)
    return html(f"<p>count: {counter}</p>")

app.run("127.0.0.1", port=8000, workers=8)
Changelog
A comprehensive changelog is kept in the CHANGELOG file.
Benchmarks
I've run some basic benchmarks: the SyncManager backend performs surprisingly well, but the Redis backend is much faster.
License
This repository is licensed under the MIT License. See the LICENSE deed for details.
Project details
File details
Details for the file sanic-synchro-ctx-0.1.0.tar.gz
File metadata
- Download URL: sanic-synchro-ctx-0.1.0.tar.gz
- Upload date:
- Size: 34.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.7.1 importlib_metadata/4.10.0 pkginfo/1.8.2 requests/2.27.1 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7a136d4da6b491a5c2634699da3e2ecd1f2f2cd8484fd110a35f6de470a693b5
MD5 | a3bbd8cccb7b14d6439855262ba03e1d
BLAKE2b-256 | ea7f626bc4a842d8369c6eb19c816895d3ebc04ffad557faf69bdf10aa95870f
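To check a downloaded archive against the SHA256 digest above, a quick standard-library sketch (the filename is assumed to be in the current directory):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large archives needn't fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "7a136d4da6b491a5c2634699da3e2ecd1f2f2cd8484fd110a35f6de470a693b5"
# After downloading: sha256_of("sanic-synchro-ctx-0.1.0.tar.gz") == EXPECTED
```

`pip` performs this check automatically when a hash is pinned in a requirements file (e.g. with `--hash=sha256:...`).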
File details
Details for the file sanic_synchro_ctx-0.1.0-py3-none-any.whl
File metadata
- Download URL: sanic_synchro_ctx-0.1.0-py3-none-any.whl
- Upload date:
- Size: 9.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.7.1 importlib_metadata/4.10.0 pkginfo/1.8.2 requests/2.27.1 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7e347564586ef3449f65be180eb142dfe331f12b6eecbb23b3dca6bca7a92931
MD5 | d738e024a421a9ed83f6fe1d4238778e
BLAKE2b-256 | 60494efc74e5bb5c8d51249d2f53a103b54eb33fa153f4c6d9874a730b80fab6