autolog
Zero-config auto-instrumentation logging for Python
Zero-config, structured, async-safe logging for Python. Drop one line into your entrypoint and every function call gets instrumented — inputs, outputs, duration, errors, traces — without touching a single line of business logic.
Table of contents
- Why autolog
- Installation
- Quick start
- Output formats
- The start() API
- Three ways to use it
- Decorators
- Trace IDs
- Bound context fields
- Sampling
- Exclude patterns
- Per-package log levels
- The trunc parameter
- Sensitive data redaction
- Framework middleware
- Environment variables
- pyproject.toml config
- CLI reference
- Configuration resolution order
- Production checklist
- How it works internally
- Comparison with alternatives
- Troubleshooting
- License
Why autolog
Most Python logging libraries make you do the work — call logger.info(...) in every function, format your strings, manage trace IDs by hand, redact secrets manually. autolog flips this: you configure once, and every function in your project gets instrumented automatically.
You get this output without writing a single log statement:
[2026-05-08 12:43:36.589] [INFO] [3f4a1b2c-...] [billing] [services.create_invoice] [{"customer_id": "cust_1", "amount": 99.5}] [{"id": "inv_42"}] [4.2ms]
[2026-05-08 12:43:36.591] [ERROR] [3f4a1b2c-...] [billing] [services.charge_card] [{"card": "[REDACTED]"}] [ERROR: ConnectionError(...)] [120.3ms]
What autolog handles for you
- ✅ Wraps every function in your packages — sync, async, generators, async generators
- ✅ Captures inputs, outputs, duration, errors with full tracebacks
- ✅ Async-safe trace ID propagation across requests
- ✅ Auto-redacts passwords, tokens, API keys, JWTs
- ✅ Three output formats: compact (default), pretty (multi-line), JSON (production)
- ✅ Glob-pattern excludes for hot paths
- ✅ Per-package log levels
- ✅ Probabilistic sampling for high-throughput services
- ✅ Rotating file output, no disk-fill risk
- ✅ FastAPI / Flask middleware that doesn't OOM on uploads
- ✅ Honors upstream X-Request-ID / traceparent headers
- ✅ Pretty-printed exception tracebacks
- ✅ Configurable per-value truncation
Installation
pip install T-autolog
With optional framework support:
pip install "T-autolog[fastapi]" # FastAPI middleware
pip install "T-autolog[flask]" # Flask middleware
Install from source for development:
git clone https://github.com/YOUR_USERNAME/autolog.git
cd autolog
pip install -e ".[dev]"
pytest tests/ -v
Requirements: Python 3.8+. On Python 3.10 or older, tomli is installed automatically as a dependency for reading pyproject.toml.
Quick start
Three lines and you're done:
import autolog
autolog.start(["myapp"])
from myapp.services import process_order
process_order(order_id=123)
Output:
[2026-05-08 12:43:36.589] [INFO] [-] [myapp] [services.process_order] [{"order_id": 123}] [{"status": "ok"}] [3.1ms]
That's the full lifecycle of a function call — inputs, outputs, timing — for free.
Output formats
autolog ships with three formats, selectable via format=. The default is compact.
compact (default) — one line per call
Best for: terminals, dev work, real-time tail, grep-friendly logs.
[2026-05-08 12:43:36.589] [INFO] [3f4a1b2c-...] [billing] [services.create_invoice] [{"customer_id": "c1", "amount": 99.5}] [{"id": "inv_42"}] [4.2ms]
Layout (every block always present):
[time] [LEVEL] [trace] [service] [function] [inputs] [output] [duration]
If bind() was used, an extra block is appended:
[time] [LEVEL] [trace] [service] [function] [inputs] [output] [duration] [user_id=42 org=acme]
pretty — multi-line, human-readable
Best for: focused debugging when you want maximum legibility.
[2026-05-08 12:43:36.589] [INFO ] [billing.services.create_invoice]
▸ trace : 3f4a1b2c-...
▸ inputs : {"customer_id": "c1", "amount": 99.5}
▸ result : {"id": "inv_42"}
▸ duration : 0.0042 s
────────────────────────────────────────────────────────────
json — structured, machine-readable
Best for: production, log shippers (Datadog, Loki, ELK, Splunk).
{"ts": "2026-05-08T12:43:36.589", "level": "INFO", "location": "billing.services.create_invoice", "trace_id": "3f4a1b2c-...", "inputs": {"customer_id": "c1", "amount": 99.5}, "result": {"id": "inv_42"}, "duration_s": 0.004215}
One JSON object per line — directly ingestible by any log pipeline.
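Since every record is one self-contained JSON object per line, ad-hoc analysis needs nothing beyond the standard library. A quick sketch (field names match the sample record above; the file path is just an example):

```python
import json

# Scan a JSON-format log file for calls slower than 50 ms.
with open("logs/app.log") as fh:
    for line in fh:
        record = json.loads(line)
        if record["duration_s"] > 0.05:
            print(record["location"], record["duration_s"])
```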
Switching formats
autolog.start(format="json")
autolog.start(format="pretty")
autolog.start(format="compact") # default
Or via env var (no code changes):
AUTOLOG_FORMAT=json python myapp.py
Colors
In compact and pretty formats, log levels are color-coded when output is a terminal:
| Level | Color |
|---|---|
| DEBUG | cyan |
| INFO | green |
| WARN | yellow |
| ERROR | red |

- NO_COLOR=1 — disable colors
- FORCE_COLOR=1 — force colors even in non-TTY contexts (Docker logs, CI, IDE consoles)
The start() API
start() is the single entry point for all configuration.
autolog.start(
packages=None, # list of packages to instrument; None = auto-discover
*,
exclude=None, # list of glob patterns to skip
service=None, # service name shown in logs
level="INFO", # str or {package: level}
format="compact", # "compact" | "pretty" | "json"
log_file=None, # path to mirror logs to a rotating file
sample=1.0, # 0.0–1.0; fraction of calls to log
trunc=100, # max chars per logged value
config=None, # explicit pyproject.toml path
)
Parameter reference
| Parameter | Type | Default | Description |
|---|---|---|---|
| packages | list[str] or None | None | Top-level packages to instrument. None → auto-discover from CWD. |
| exclude | list[str] | [] | Glob patterns matched against module.function (e.g. myapp.hot.*). |
| service | str | first package | Label shown in the [service] block. |
| level | str or dict | "INFO" | Either a level name or a per-package map. |
| format | str | "compact" | Output format. |
| log_file | str | None | Path to mirror logs to a rotating file (10 MB × 5 backups). |
| sample | float | 1.0 | Probabilistic sampling rate. 0.5 = log half the calls. |
| trunc | int | 100 | Max chars per logged input/output value before ... truncation. |
| config | str | None | Explicit path to pyproject.toml. Default: walk up from CWD. |
Three ways to use it
1. In code
import autolog
autolog.start(["myapp", "utils"])
2. In pyproject.toml
[tool.autolog]
packages = ["myapp", "utils"]
exclude = ["myapp.cache.*", "myapp.metrics.*"]
service = "billing-api"
level = { myapp = "DEBUG", default = "INFO" }
format = "json"
log_file = "logs/app.log"
sample = 0.5
trunc = 200
import autolog
autolog.start() # automatically reads pyproject.toml
3. CLI runner — zero entrypoint changes
autolog run myapp.main
autolog run myapp.main --format json --level DEBUG --trunc 200
autolog run scripts/start.py -- --port 8080
autolog show # show current config & discovered packages
The CLI is a thin wrapper around start() + runpy. Your project's code stays 100% untouched.
Decorators
For per-function or per-class instrumentation (without using the import hook):
@log — unified decorator
Auto-detects function vs class.
from autolog import log
@log
def calculate_tax(amount, rate):
return amount * rate
@log
async def fetch_user(user_id):
return await db.get(user_id)
@log
class OrderService:
def create(self, data): ...
async def fetch(self, id): ...
@log(sample=0.01) # 1% sampling for hot path
def is_ratelimited(ip):
return cache.get(ip)
Handles @staticmethod, @classmethod, @property, sync, async, generators, and async generators automatically.
@no_log — explicit opt-out
Useful inside auto-instrumented packages when you want to skip a specific function (e.g., a hot loop or a function returning a giant payload).
from autolog import no_log
@no_log
def hot_path():
"""Even though this package is auto-patched, this function won't be wrapped."""
return cache[key]
@log_function and @log_class — explicit forms
from autolog import log_function, log_class
@log_function(service="billing", level="DEBUG", trunc=500)
def big_payload(data): ...
@log_class
class Service: ...
These exist for backward compatibility and explicit control. New code should prefer @log.
Trace IDs
Async-safe per-request correlation via contextvars. Every wrapped function automatically picks up the current trace ID and includes it in its log line.
from autolog import new_trace_id, get_trace_id, set_trace_id
new_trace_id() # generates UUID4 and stores in current context
get_trace_id() # → "3f4a1b2c-..."
set_trace_id("custom-id-123") # use an externally-supplied ID
Why ContextVar matters
Each coroutine / task gets its own copy of the trace ID — they don't leak across concurrent requests. Same goes for thread pools.
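You can see the isolation directly. This snippet uses only the documented new_trace_id / get_trace_id helpers; the two concurrent tasks each print their own ID:

```python
import asyncio
from autolog import new_trace_id, get_trace_id

async def handler(name):
    new_trace_id()                 # stored in this task's ContextVar copy
    await asyncio.sleep(0.01)      # yield so the two tasks interleave
    print(name, get_trace_id())    # each task still sees only its own ID

async def main():
    await asyncio.gather(handler("task-a"), handler("task-b"))

asyncio.run(main())                # prints two different trace IDs
```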
Automatic propagation in middleware
Both FastAPI and Flask middleware honor upstream headers if present:
X-Request-ID, X-Trace-ID, X-Correlation-ID, traceparent
If none is set, a fresh UUID4 is generated.
Bound context fields
structlog-style: attach fields once, every downstream log includes them.
from autolog import bind, unbind
bind(user_id=42, org_id="acme", request_id="req-001")
# every log emitted from here on (in this async context) will include these fields
do_something()
do_something_else()
unbind() # clear all
unbind("user_id") # clear specific keys only
Output (compact format with bound fields):
[12:43:36.589] [INFO] [-] [myapp] [foo] [{}] [42] [3.1ms] [user_id=42 org_id=acme request_id=req-001]
Per-request example with FastAPI
from autolog import bind, unbind
@app.middleware("http")
async def attach_user_context(request, call_next):
bind(user_id=request.user.id, ip=request.client.host)
try:
return await call_next(request)
finally:
unbind()
Every log line emitted while handling this request will carry user_id and ip automatically — no need to thread them through every function.
Sampling
For hot paths where you don't want to log every call:
Globally (via start())
autolog.start(packages=["myapp"], sample=0.1) # log 10% of all calls
Per-function
@log(sample=0.01) # 1% of calls
def hot_path(): ...
@log(sample=1.0) # always (default)
def critical_path(): ...
@log(sample=0.0) # never
def silent_op(): ...
Sampling is probabilistic per call. Each call rolls a random number — if it lands inside the sample fraction, the call is logged in full; otherwise, the function runs without instrumentation overhead.
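Conceptually the roll is just a comparison against random.random(). A minimal sketch of the idea (illustrative only, not autolog's actual wrapper):

```python
import functools
import random

def sampled(rate):
    """Log roughly `rate` of calls; the call itself always runs."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < rate:           # this call wins the roll
                print(f"LOG {fn.__name__} args={args!r}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@sampled(0.01)                                   # ~1% of calls logged
def is_ratelimited(ip):
    return False
```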
Trade-offs
| Sample | Use case |
|---|---|
| 1.0 | Normal endpoints, debugging |
| 0.1–0.5 | High-traffic but interesting paths |
| 0.001–0.01 | Hot loops, cache reads, request counters |
| 0.0 | Disable for a specific function (or use @no_log) |
Exclude patterns
Skip noisy modules without modifying their code. Patterns are standard fnmatch globs matched against module.function_name.
Via start()
autolog.start(
packages=["myapp"],
exclude=[
"myapp.cache.*", # entire cache submodule
"myapp.metrics.*", # entire metrics submodule
"myapp.utils.now", # one specific function
"*.internal_*", # any function starting with internal_
],
)
Via pyproject.toml
[tool.autolog]
exclude = ["myapp.cache.*", "myapp.metrics.*"]
Via CLI
autolog run myapp.main --exclude "myapp.cache.*" "myapp.metrics.*"
When to use exclude vs @no_log
| Situation | Use |
|---|---|
| Skip an entire submodule | exclude=["pkg.submodule.*"] |
| Skip many functions matching a pattern | exclude=["*.internal_*"] |
| Skip one specific function in your own code | @no_log |
| Skip a third-party-style hot path | exclude=["pkg.hot.*"] |
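Because the matching is plain fnmatch, you can sanity-check a pattern before shipping it. A sketch of the check (the helper name is an assumption, not autolog's API):

```python
from fnmatch import fnmatch

def is_excluded(qualname, patterns):
    """qualname is "module.function", e.g. "myapp.cache.get"."""
    return any(fnmatch(qualname, pat) for pat in patterns)

assert is_excluded("myapp.cache.get", ["myapp.cache.*"])
assert is_excluded("myapp.db.internal_flush", ["*.internal_*"])
assert not is_excluded("myapp.orders.create", ["myapp.cache.*"])
```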
Per-package log levels
Different parts of your codebase can have different verbosity:
autolog.start(
packages=["myapp", "utils", "billing"],
level={
"myapp.routes": "DEBUG", # most-specific match wins
"myapp": "INFO",
"utils": "WARNING",
"billing": "DEBUG",
"default": "INFO",
},
)
Resolution
For each function's module, autolog picks the longest matching prefix:
| Function module | Level used |
|---|---|
| myapp.routes.users | DEBUG (matches myapp.routes) |
| myapp.services.foo | INFO (matches myapp) |
| utils.helpers | WARNING (matches utils) |
| billing.invoices | DEBUG (matches billing) |
| something_else | INFO (uses default) |
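Longest-prefix resolution fits in a few lines. A sketch of the rule the table illustrates (the helper name is hypothetical):

```python
def resolve_level(module, level_map):
    """Return the level of the longest configured prefix matching `module`."""
    best = None
    for prefix in level_map:
        if prefix == "default":
            continue
        if module == prefix or module.startswith(prefix + "."):
            if best is None or len(prefix) > len(best):
                best = prefix
    return level_map[best] if best else level_map.get("default", "INFO")

levels = {"myapp.routes": "DEBUG", "myapp": "INFO", "default": "INFO"}
assert resolve_level("myapp.routes.users", levels) == "DEBUG"
assert resolve_level("myapp.services.foo", levels) == "INFO"
assert resolve_level("something_else", levels) == "INFO"
```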
Via pyproject
[tool.autolog.level]
myapp = "DEBUG"
utils = "WARN"
default = "INFO"
The trunc parameter
Caps the number of characters shown for inputs and outputs in logs. Anything longer ends with ....
Default
trunc = 100 — a reasonable balance between visibility and log noise.
Examples
@log(trunc=20)
def get_user():
return {"id": 1, "name": "Alice", "email": "alice@example.com", "city": "Paris"}
Output: [INFO] ... [{"id": 1, "name": "Ali...] [...]
@log(trunc=10000) # essentially never truncate
def export_report():
return huge_dict
Where to set it
| Where | How |
|---|---|
| start() | autolog.start(trunc=200) |
| @log_function | @log_function(trunc=50) |
| pyproject.toml | trunc = 200 |
| CLI | --trunc 200 |
| Env var | (not available; use one of the above) |
What it applies to
- ✅ Inputs (function arguments)
- ✅ Outputs (return values)
- ❌ Timestamp, level, trace, service, function name, duration — never truncated
Recommended values
| Use case | trunc |
|---|---|
| Production, high-volume | 50–100 |
| Development | 100–200 (default 100) |
| Debugging large payloads | 500–2000 |
| Forensic / no truncation | 100000 |
Sensitive data redaction
Auto-masks values whose key contains a sensitive token. Token-aware: matches whole sub-words, not substrings.
Auto-redacted tokens
password, passwd, token, secret, auth, credential, credentials,
apikey, jwt, bearer, api_key, access_token, refresh_token, private_key
Token splitting
Keys are split on _, -, ., space, and camelCase boundaries. If any token matches, the value is replaced with [REDACTED].
| Key | Tokens | Redacted? |
|---|---|---|
| password | [password] | ✅ |
| apiKey | [api, key] | ✅ |
| client.secret | [client, secret] | ✅ |
| JWT_TOKEN | [jwt, token] | ✅ |
| monkey | [monkey] | ❌ no false positive |
| keyword | [keyword] | ❌ no false positive |
| author | [author] | ❌ no false positive |
Example
@log
def login(username, password, api_key):
return {"session_id": "abc123", "access_token": "xyz789"}
login("alice", "hunter2", "sk-prod-...")
Output:
[INFO] ... [login] [{"username": "alice", "password": "[REDACTED]", "api_key": "[REDACTED]"}] [{"session_id": "abc123", "access_token": "[REDACTED]"}]
Redaction is recursive
Nested dicts, lists, tuples — all walked. Sensitive keys at any depth are masked.
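The tokenization and recursive walk are easy to reproduce in isolation. A standalone sketch of the behavior described above (a subset of the token list; not autolog's actual source):

```python
import re

SENSITIVE = {"password", "passwd", "token", "secret", "auth", "apikey",
             "jwt", "bearer", "credential", "credentials"}

def _tokens(key):
    # Split on _, -, ., space, and camelCase boundaries.
    spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", str(key))
    return [p.lower() for p in re.split(r"[_\-. ]+", spaced) if p]

def _is_sensitive(key):
    toks = _tokens(key)
    joined = {a + b for a, b in zip(toks, toks[1:])}   # "apiKey" -> "apikey"
    return bool(SENSITIVE & (set(toks) | joined))

def redact(value):
    """Recursively mask sensitive values in dicts, lists, and tuples."""
    if isinstance(value, dict):
        return {k: "[REDACTED]" if _is_sensitive(k) else redact(v)
                for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return type(value)(redact(v) for v in value)
    return value

assert redact({"user": {"apiKey": "sk-1", "name": "alice"}}) == \
       {"user": {"apiKey": "[REDACTED]", "name": "alice"}}
```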
Framework middleware
FastAPI
from fastapi import FastAPI
from autolog.middleware.fastapi import AutoLogMiddleware
app = FastAPI()
app.add_middleware(
AutoLogMiddleware,
service="my-api",
level="INFO",
log_file="api.log",
max_body_len=500, # max chars per body when JSON-serializing
max_body_bytes=64 * 1024, # absolute byte cap; bodies above this are skipped/truncated
)
Production-safe defaults:
- ✅ Honors X-Request-ID, X-Trace-ID, X-Correlation-ID, traceparent upstream headers
- ✅ Skips body capture for multipart/, application/octet-stream, text/event-stream, image/*, video/*, audio/*, application/x-ndjson
- ✅ Honors the Content-Length header — refuses to read bodies larger than max_body_bytes
- ✅ Truncates oversized bodies that lied about their Content-Length
- ✅ Won't OOM on uploads or break streaming responses (SSE, video)
- ✅ Logs run in a BackgroundTask — never blocks the response
Flask
from flask import Flask
from autolog.middleware.flask import init_autolog
app = Flask(__name__)
init_autolog(app, service="my-api", level="INFO")
Same upstream-trace-ID propagation. Synchronous, but body handling is bounded the same way.
Environment variables
All can be set without code changes — useful for Docker / 12-factor apps:
| Variable | Effect | Example |
|---|---|---|
| AUTOLOG_LEVEL | Override level | AUTOLOG_LEVEL=DEBUG |
| AUTOLOG_FILE | Mirror logs to file | AUTOLOG_FILE=/var/log/app.log |
| AUTOLOG_FORMAT | compact / pretty / json | AUTOLOG_FORMAT=json |
| AUTOLOG_SERVICE | Service name | AUTOLOG_SERVICE=billing |
| AUTOLOG_DISABLE | Disable all instrumentation | AUTOLOG_DISABLE=1 |
| AUTOLOG_FILE_MAX_BYTES | Rotation size, default 10 MB | AUTOLOG_FILE_MAX_BYTES=52428800 |
| AUTOLOG_FILE_BACKUPS | Rotated backups, default 5 | AUTOLOG_FILE_BACKUPS=10 |
| NO_COLOR | Disable ANSI colors | NO_COLOR=1 |
| FORCE_COLOR | Force colors in non-TTY | FORCE_COLOR=1 |
pyproject.toml config
Full reference:
[tool.autolog]
packages = ["myapp", "utils"]
exclude = ["myapp.cache.*", "myapp.metrics.*"]
service = "billing-api"
format = "json" # "compact" | "pretty" | "json"
log_file = "logs/app.log"
sample = 1.0 # 0.0 - 1.0
trunc = 100
# `level` can be a string OR a per-package map:
level = "INFO"
# OR
[tool.autolog.level]
myapp = "DEBUG"
utils = "WARNING"
default = "INFO"
Then:
import autolog
autolog.start() # picks up pyproject.toml automatically
start() walks upward from the CWD looking for the first pyproject.toml. To force a specific path:
autolog.start(config="/etc/myapp/pyproject.toml")
CLI reference
autolog run TARGET
Run a Python module or script with autolog enabled — no entrypoint changes needed.
autolog run myapp.main # module
autolog run scripts/start.py # script file
autolog run myapp.main -- --port 8080 # everything after `--` goes to your app
Flags (all optional, all override pyproject.toml):
--packages PKG [PKG ...] Override packages to instrument
--exclude GLOB [GLOB ...] Glob patterns to skip
--service SERVICE Service name shown in logs
--level LEVEL DEBUG | INFO | WARNING | ERROR
--format {compact,pretty,json}
--log-file PATH Mirror to file (rotating)
--sample 0.0-1.0 Sampling rate
--trunc N Max chars per logged value
--config PATH Explicit pyproject.toml
-m, --module Force target to be treated as a module name
autolog show
Print the resolved configuration & discovered packages — useful for verifying what start() would do:
autolog show
autolog config:
config file : /home/me/myapp/pyproject.toml
packages : ['myapp', 'utils']
service : billing-api
level : DEBUG
log_file : logs/app.log
Configuration resolution order
When the same option is set in multiple places, the order of precedence (highest first) is:
1. Function/decorator kwargs — start(trunc=200), @log(sample=0.1)
2. CLI flags — autolog run --trunc 200
3. Environment variables — AUTOLOG_LEVEL=DEBUG
4. [tool.autolog] in pyproject.toml
5. Built-in defaults
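The same precedence can be expressed as a ChainMap, where the first source that defines a key wins. An illustrative sketch (autolog's internals will differ):

```python
from collections import ChainMap

# Highest-precedence source first; lookups stop at the first map with the key.
kwargs   = {"trunc": 200}
cli      = {"format": "json"}
env      = {"level": "DEBUG"}
toml     = {"sample": 0.5, "level": "INFO"}
defaults = {"trunc": 100, "sample": 1.0, "level": "INFO", "format": "compact"}

config = ChainMap(kwargs, cli, env, toml, defaults)
assert config["level"] == "DEBUG"   # env beats pyproject.toml
assert config["trunc"] == 200       # explicit kwarg beats the default
```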
Production checklist
Before deploying autolog to a real production service:
- Set format="json" for log shipper compatibility
- Set log_file="..." for file output (rotates automatically)
- Add exclude=[...] for known hot paths
- Set sample=0.1 (or lower) on services with >1k req/sec
- Verify body limits in middleware match your max payload size
- Confirm trace ID propagation if behind a proxy that adds X-Request-ID
- Audit redaction — add custom sensitive keys if your codebase uses non-standard names
- Set AUTOLOG_DISABLE=1 as a kill switch in case logging causes issues
How it works internally
A high-level walkthrough of the architecture:
1. Import hook (zero-touch mode)
When you call autolog.start(["myapp"]), autolog inserts a MetaPathFinder at position 0 of sys.meta_path. Python's import machinery consults it before any other finder.
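A skeletal sketch of such a finder; together with the step 2 and step 3 sketches below it forms a toy version of the hook. All class and helper names here (_Finder, _PatchingLoader, patch_module) are assumptions, not autolog's API:

```python
import sys
from importlib.abc import Loader, MetaPathFinder

class _PatchingLoader(Loader):
    def __init__(self, inner):
        self._inner = inner

    def create_module(self, spec):
        return self._inner.create_module(spec)

    def exec_module(self, module):
        self._inner.exec_module(module)   # execute the module normally first
        patch_module(module)              # then wrap its functions (see step 2)

class _Finder(MetaPathFinder):
    def __init__(self, packages):
        self._packages = set(packages)

    def find_spec(self, fullname, path=None, target=None):
        if fullname.split(".")[0] not in self._packages:
            return None                   # not ours — defer to other finders
        for finder in sys.meta_path:      # delegate the real lookup
            if finder is self or not hasattr(finder, "find_spec"):
                continue
            spec = finder.find_spec(fullname, path, target)
            if spec is not None and spec.loader is not None:
                spec.loader = _PatchingLoader(spec.loader)
                return spec
        return None

sys.meta_path.insert(0, _Finder(["myapp"]))
```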
2. Module patching
When import myapp.services happens:
- autolog's finder claims the import
- Delegates to the original loader to actually execute the module
- After the module is fully loaded, walks its top-level functions and classes
- Replaces each with a wrapped version using setattr(module, name, wrap(...))
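Step 2 in sketch form, continuing the toy hook above (patch_module is the hypothetical helper the loader calls; wrap comes from step 3):

```python
import inspect

def patch_module(module):
    """Wrap every function defined in `module` (classes omitted for brevity)."""
    for name, obj in list(vars(module).items()):
        if inspect.isfunction(obj) and obj.__module__ == module.__name__:
            if getattr(obj, "_autolog_wrapped", False):
                continue                  # already wrapped by a decorator
            setattr(module, name, wrap(obj))
```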
3. The wrapper
Each wrapper is a closure that:
- Captures the start time (time.perf_counter())
- Reads the current trace ID from the ContextVar
- Reads bound fields from the ContextVar
- Scrubs sensitive args by name
- Truncates inputs to trunc chars
- Calls the original function
- Captures the result, redacts it, truncates it
- Builds a LogRecord with all fields and dispatches it
- On exception: captures the full traceback, builds an ERROR record, re-raises
The four wrapper variants (sync, async, generator, async-generator) are picked at wrap time using inspect.iscoroutinefunction, etc.
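The sync variant in sketch form, completing the toy hook (real autolog also redacts, truncates, samples, and routes through its logger rather than print):

```python
import functools
import time

def wrap(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
        except Exception as exc:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"ERROR {fn.__qualname__} {exc!r} [{elapsed_ms:.1f}ms]")
            raise                          # never swallow the exception
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"INFO {fn.__qualname__} -> {result!r} [{elapsed_ms:.1f}ms]")
        return result
    wrapper._autolog_wrapped = True        # lets the import hook skip it
    return wrapper
```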
4. The logger
Logger creation goes through logging.getLogger("autolog.{module_name}"). Loggers are cached by (name, level, file, service, trunc, format). propagate = False so autolog's handlers don't leak into the root logger.
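The propagate = False detail is what keeps autolog's output from being duplicated by handlers on the root logger. In plain stdlib terms:

```python
import logging

logger = logging.getLogger("autolog.myapp.services")
logger.propagate = False        # records stop here; root handlers never see them
logger.addHandler(logging.StreamHandler())
```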
5. Async safety
All per-request state — trace ID, bound fields — lives in contextvars.ContextVar. Each coroutine, task, and thread gets isolated copies; no leakage across concurrent requests.
6. Redaction
Recursive walk of dicts/lists/tuples. Each key is split into tokens (on separators + camelCase boundaries) and tested against the sensitive set.
Comparison with alternatives
| | autolog | logging (stdlib) | loguru | structlog | OpenTelemetry |
|---|---|---|---|---|---|
| Zero-touch instrumentation | ✅ | ❌ | ❌ | ❌ | ⚠️ via auto-instrumentors |
| Structured output | ✅ | ⚠️ via formatter | ⚠️ basic | ✅ | ✅ |
| JSON output | ✅ | ⚠️ manual | ⚠️ manual | ✅ | ✅ |
| Async-safe trace IDs | ✅ | ❌ | ❌ | ✅ via contextvars | ✅ |
| Auto sensitive-key redaction | ✅ | ❌ | ❌ | ❌ | ❌ |
| Per-call sampling | ✅ | ❌ | ❌ | ❌ | ✅ |
| Per-package level | ✅ | ✅ | ⚠️ workarounds | ✅ via filters | ✅ |
| Setup complexity | one line | high | low | medium | very high |
| HTTP middleware | ✅ FastAPI / Flask | ❌ | ❌ | ❌ | ✅ |
autolog's niche: zero-touch instrumentation + structured output + production safety, all in one library. If you want full distributed tracing across services, pair it with OpenTelemetry. If you want to write log.info(...) by hand, use loguru or stdlib.
Troubleshooting
My functions aren't being logged
import autolog
autolog.start(["myapp"])
# ❌ wrong order
from myapp.foo import bar
bar()
start() must be called before importing the modules you want logged. Move imports below start():
import autolog
autolog.start(["myapp"])
from myapp.foo import bar # now patched
bar()
trunc doesn't seem to apply
- trunc is a maximum, not an exact length. Short values stay short.
- If you change trunc in code, fully restart Python (long-running servers cache wrappers).
- Check what's actually configured with autolog show or autolog._logger._loggers.keys().
Logs appear twice
You're probably using start() AND a decorator. Pick one. The decorator-applied wrapper has _autolog_wrapped = True, so the import hook will skip it — but if you imported the module before calling start(), retroactive patching may have wrapped non-decorated functions a second time. Restart Python.
start() says "no packages discovered"
Auto-discovery only finds folders with __init__.py in the current working directory. Either:
- Run from your project root, or
- Pass packages=[...] explicitly, or
- Set packages in pyproject.toml.
Colors aren't showing in Docker logs
Docker strips TTY by default. Set FORCE_COLOR=1 in your container env.
Can I use autolog with stdlib logging?
Yes — autolog uses stdlib logging underneath. Other handlers attached to other loggers continue to work unchanged. autolog only configures its own autolog.* namespace.
License
MIT. See LICENSE.
Contributing
Bug reports and PRs welcome. Run the test suite:
pip install -e ".[dev]"
pytest tests/ -v
79 tests cover every feature including async functions, generators, sampling, exclusion patterns, sensitive-key redaction, formatter output, and CLI/config integration.