BigBrotr
Nostr Relay Discovery, Monitoring, and Event Archiving System
Discovers relays across clearnet and overlay networks, monitors health with NIP-11/NIP-66 compliance checks, and archives events into PostgreSQL.
What It Does
BigBrotr runs 8 independent async services that continuously map and monitor the Nostr relay ecosystem. Each service runs on its own schedule, reads and writes a shared PostgreSQL database, and has no direct dependency on any other service.
┌──────────────────────────────────────────────┐
│ PostgreSQL Database │
│ │
│ relay ── event_relay ── event │
│ service_state │
│ metadata ── relay_metadata │
│ 11 materialized views │
└──┬───┬───┬───┬───┬───┬───┬───┬──────────────┘
│ │ │ │ │ │ │ │
┌───────────────────┘ │ │ │ │ │ │ └──────────────────┐
│ ┌───────────────┘ │ │ │ │ └──────────────┐ │
│ │ ┌───────────┘ │ │ └──────────┐ │ │
▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
┌────────┐┌──────┐┌─────────┐┌───────┐┌────────────┐┌────────┐┌───┐┌─────┐
│ Seeder ││Finder││Validator││Monitor││Synchronizer││Refresh.││Api││ Dvm │
│one-shot││ ││ ││ ││ ││ ││ ││ │
│ boot ││disco.││ test ws ││health ││archive evts││refresh ││REST││NIP90│
└───┬────┘└──┬───┘└────┬────┘└───┬───┘└─────┬──────┘└───┬────┘└─┬─┘└──┬──┘
│ │ │ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │ ▼ ▼
│ ┌─────────┐┌───────┐ ┌────────┐ ┌─────────┐ │ HTTP Nostr
▼ │ APIs ││Relays │ │ Relays │ │ Relays │ │ clients clients
seed file│ (nostr. ││(WS │ │(NIP-11,│ │ (fetch │ (no I/O)
│ watch) ││ shake)│ │ NIP-66)│ │ events) │
└─────────┘└───────┘ └───┬────┘ └─────────┘
│
▼
Nostr Network
(kind 10166/30166)
Services
| Service | Schedule | What it does | Reads | Writes | External I/O |
|---|---|---|---|---|---|
| Seeder | One-shot | Loads relay URLs from a seed file | -- | relay or service_state (candidates) | Seed file |
| Finder | Every 5 min | Discovers relay URLs from event tag values and external APIs | relay, event_relay, service_state | service_state (candidates + cursors) | HTTP (nostr.watch APIs) |
| Validator | Every 5 min | Tests candidates via WebSocket handshake, promotes valid relays | service_state (candidates) | relay, service_state | WebSocket to relays |
| Monitor | Every 5 min | Runs 7 health checks per relay, publishes NIP-66 events | relay, service_state | metadata, relay_metadata, service_state | HTTP, WebSocket, DNS, SSL, GeoIP |
| Synchronizer | Every 5 min | Connects to relays, fetches and archives signed events | relay, service_state | event, event_relay, service_state | WebSocket to relays |
| Refresher | Every 60 min | Refreshes 11 materialized views in dependency order | (implicit via views) | 11 materialized views | None |
| Api | Continuous | Read-only REST API with auto-generated paginated endpoints | all tables, views | -- | HTTP (FastAPI) |
| Dvm | Continuous | NIP-90 Data Vending Machine for database queries | all tables, views | -- | WebSocket (Nostr) |
Services are loosely coupled through the database: Seeder and Finder populate candidates, Validator promotes them to relays, Monitor and Synchronizer operate on relays, Refresher materializes analytics. But each runs independently -- stopping one does not break the others.
Architecture
Code Organization (Diamond DAG)
Imports flow strictly downward:
services src/bigbrotr/services/
/ | \
core nips utils src/bigbrotr/{core,nips,utils}/
\ | /
models src/bigbrotr/models/
- models -- Pure frozen dataclasses (Relay, Event, Metadata, ServiceState). Zero I/O, stdlib logging only.
- core -- Pool (asyncpg with retry), Brotr (DB facade), BaseService (lifecycle), Logger (structured kv/JSON), Metrics (Prometheus), YAML loader.
- nips -- NIP-11 relay info fetch/parse, NIP-66 health checks (RTT, SSL, DNS, Geo, Net, HTTP). Never raises -- errors are reported via logs and a success flag.
- utils -- DNS resolution, Nostr key management, WebSocket/HTTP transport, SSL fallback, SOCKS5 proxy support.
- services -- 8 independent services + shared queries, configs, and mixins.
Database Schema
┌─────────────────────┐ ┌──────────────────────────────────────┐
│ relay │ │ event │
│─────────────────────│ │──────────────────────────────────────│
│ url PK │◄──┐ ┌──►│ id PK (BYTEA, 32B) │
│ network TEXT │ │ │ │ pubkey BYTEA (32B) │
│ discovered_at BIGINT│ │ │ │ created_at BIGINT │
└─────────┬───────────┘ │ │ │ kind INTEGER │
│ │ │ │ tags JSONB │
│ │ │ │ tagvalues TEXT[] (GENERATED) │
│ │ │ │ content TEXT │
│ │ │ │ sig BYTEA (64B) │
│ │ │ └──────────────────────────────────────┘
│ │ │
│ ┌──────────┴─┴──────────┐
│ │ event_relay │
│ │───────────────────────│
├───►│ relay_url FK ──► relay.url
│ │ event_id FK ──► event.id
│ │ seen_at BIGINT
│ │ PK(event_id, relay_url)
│ └───────────────────────┘
│
│ ┌───────────────────────┐
│ │ relay_metadata │
│ │───────────────────────│
├───►│ relay_url FK ──► relay.url
│ │ metadata_id FK ──┐
│ │ metadata_type FK ─┤► metadata(id, type)
│ │ generated_at BIGINT
│ │ PK(relay_url, generated_at, metadata_type)
└──────────┬────────────┘
│
┌──────────┴────────────┐
│ metadata │
│───────────────────────│
│ id PK (BYTEA, SHA-256)
│ type PK (TEXT, 7 types)
│ data JSONB
└───────────────────────┘
┌───────────────────────┐
│ service_state │
│───────────────────────│
│ service_name PK (TEXT)│ candidate, cursor, checkpoint
│ state_type PK (TEXT)│
│ state_key PK (TEXT)│ typically relay URL
│ state_value JSONB │
│ updated_at BIGINT │
└───────────────────────┘
Key relationships:
- relay is the central entity. Cascade deletes propagate to event_relay and relay_metadata.
- metadata is content-addressed: SHA-256 hash of canonical JSON + type as composite PK. Same data = same hash.
- service_state is a generic key-value store used by Finder (cursors), Validator (candidates), Monitor (checkpoints), Synchronizer (cursors).
- event.tagvalues is a generated column (from tags_to_tagvalues(tags)) indexed with GIN for fast containment queries.
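The content-addressing scheme for metadata can be sketched in a few lines. The exact canonicalization BigBrotr uses may differ; this sketch assumes sorted keys and compact separators:

```python
import hashlib
import json

def metadata_id(data: dict) -> bytes:
    """Sketch of content addressing: SHA-256 over a canonical JSON
    serialization. Canonicalization details here are assumptions."""
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).digest()

a = metadata_id({"name": "relay", "software": "strfry"})
b = metadata_id({"software": "strfry", "name": "relay"})
print(a == b)  # True: same data, same hash, regardless of key order
```

Deduplication falls out for free: inserting the same NIP-11 document twice produces the same 32-byte id, so the second insert is a no-op.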
Service-Database Interaction Map
relay event event_ meta- relay_ service_ materialized
relay data metadata state views (11)
─────────────┬────────┬───────┬────────┬───────┬─────────┬─────────┬────────────
Seeder │ W(1) │ │ │ │ │ W │
Finder │ R │ │ R │ │ │ R/W │
Validator │ W │ │ │ │ │ R/W │
Monitor │ R │ │ │ W │ W │ R/W │
Synchronizer │ R │ W │ W │ │ │ R/W │
Refresher │ │ │ │ │ │ │ W
Api │ R │ R │ R │ R │ R │ R │ R
Dvm │ R │ R │ R │ R │ R │ R │ R
─────────────┴────────┴───────┴────────┴───────┴─────────┴─────────┴────────────
R = reads W = writes (1) = only when to_validate=False
Quick Start
Prerequisites
- Docker and Docker Compose
- (Optional) Python 3.11+ and uv for local development
Deploy with Docker Compose
git clone https://github.com/BigBrotr/bigbrotr.git
cd bigbrotr/deployments/bigbrotr
# Configure secrets
cp .env.example .env
# Edit .env: set DB_ADMIN_PASSWORD, DB_WRITER_PASSWORD, DB_READER_PASSWORD, PRIVATE_KEY, GRAFANA_PASSWORD
# Start everything
docker compose up -d
# Watch services start
docker compose logs -f seeder
This starts PostgreSQL 16, PGBouncer, Tor proxy, all 8 services, Prometheus, Alertmanager, and Grafana.
| Endpoint | URL |
|---|---|
| Grafana | http://localhost:3000 |
| Prometheus | http://localhost:9090 |
| Alertmanager | http://localhost:9093 |
| PostgreSQL | localhost:5432 |
| PGBouncer | localhost:6432 |
Run a Single Service Locally
uv sync --group dev
cd deployments/bigbrotr
# One cycle
python -m bigbrotr seeder --once
# Continuous with debug logging
python -m bigbrotr finder --log-level DEBUG
Deployments
BigBrotr supports multiple deployment configurations from the same codebase via a single parametric Dockerfile (deployments/Dockerfile with ARG DEPLOYMENT).
BigBrotr (Full Archive)
Stores complete Nostr events (id, pubkey, created_at, kind, tags, content, sig). 11 materialized views for analytics. Tor enabled. All 8 services + Prometheus + Grafana.
cd deployments/bigbrotr && docker compose up -d
LilBrotr (Lightweight)
Stores event metadata only (id, pubkey, created_at, kind, tagvalues). Omits tags JSON, content, and sig for approximately 60% disk savings. Same eight services and all 11 materialized views.
cd deployments/lilbrotr && docker compose up -d
Custom Deployment
cp -r deployments/bigbrotr deployments/myrelay
# Edit config, SQL schema, docker-compose.yaml
cd deployments/myrelay && docker compose up -d
Database
PostgreSQL 16 with PGBouncer (transaction-mode pooling) and asyncpg async driver. All mutations via stored functions with bulk array parameters.
Schema
| Table | Purpose |
|---|---|
| relay | Validated relay URLs with network type and discovery timestamp |
| event | Nostr events (BYTEA ids/pubkeys/sigs for space efficiency) |
| event_relay | Junction: which events were seen at which relays (with seen_at) |
| metadata | Content-addressed NIP-11/NIP-66 documents (SHA-256 dedup, composite PK (id, type)) |
| relay_metadata | Time-series snapshots linking relays to metadata records |
| service_state | Per-service operational data (candidates, cursors, checkpoints) |
Stored Functions (25)
- 1 utility: tags_to_tagvalues (extracts single-char tag values for GIN indexing)
- 10 CRUD: relay_insert, event_insert, metadata_insert, event_relay_insert, relay_metadata_insert, event_relay_insert_cascade, relay_metadata_insert_cascade, service_state_upsert, service_state_get, service_state_delete
- 2 cleanup: orphan_event_delete, orphan_metadata_delete (batched)
- 12 refresh: one per materialized view + all_statistics_refresh
All functions use SECURITY INVOKER, bulk array parameters, and ON CONFLICT DO NOTHING.
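The bulk-array calling convention can be illustrated by unzipping row tuples into the parallel arrays a stored-function call expects. The relay_insert signature shown in the comment is a hypothetical shape, not the documented one:

```python
def to_arrays(rows: list[tuple[str, str, int]]) -> tuple[list, list, list]:
    """Unzip (url, network, discovered_at) tuples into parallel arrays.

    A bulk call through an asyncpg connection might then look like
    (hypothetical signature):
        await conn.execute(
            "SELECT relay_insert($1::text[], $2::text[], $3::bigint[])",
            urls, networks, discovered)
    """
    if not rows:
        return [], [], []
    urls, networks, discovered = zip(*rows)
    return list(urls), list(networks), list(discovered)

urls, nets, ts = to_arrays([
    ("wss://a.example", "clearnet", 1700000000),
    ("wss://b.onion", "tor", 1700000100),
])
print(urls)  # ['wss://a.example', 'wss://b.onion']
```

Passing whole arrays in one statement amortizes the round trip, and ON CONFLICT DO NOTHING inside the function makes retries of the same batch idempotent.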
Materialized Views (11, BigBrotr Only)
relay_metadata_latest, event_stats, relay_stats, kind_counts, kind_counts_by_relay, pubkey_counts, pubkey_counts_by_relay, network_stats, relay_software_counts, supported_nip_counts, event_daily_counts -- all support REFRESH CONCURRENTLY via unique indexes.
Monitoring
Prometheus Metrics
Every service exposes /metrics on its configured port with four metric types:
| Metric | Type | Description |
|---|---|---|
| service_info | Info | Static service metadata |
| service_gauge | Gauge | Point-in-time state (consecutive_failures, last_cycle_timestamp, progress) |
| service_counter | Counter | Cumulative totals (cycles_success, cycles_failed, errors by type) |
| cycle_duration_seconds | Histogram | Cycle latency with 10 buckets (1s to 1h) |
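The histogram's cumulative-bucket layout (what Prometheus scrapes as le-labeled series) can be shown with a pure-Python sketch. The bucket bounds below are an assumed log-ish spacing from 1 s to 1 h, not necessarily the exact bounds BigBrotr configures:

```python
import bisect

# Hypothetical 10 bucket upper bounds from 1 s to 1 h.
BUCKETS = [1, 2.5, 5, 10, 30, 60, 300, 600, 1800, 3600]

class CycleHistogram:
    """Minimal histogram with the cumulative shape Prometheus expects."""

    def __init__(self) -> None:
        self.counts = [0] * (len(BUCKETS) + 1)  # last slot is +Inf
        self.total = 0.0

    def observe(self, seconds: float) -> None:
        # bisect_left finds the first bound >= value, matching le semantics.
        self.counts[bisect.bisect_left(BUCKETS, seconds)] += 1
        self.total += seconds

    def cumulative(self) -> list[int]:
        out, running = [], 0
        for c in self.counts:
            running += c
            out.append(running)
        return out

h = CycleHistogram()
for d in (0.5, 4.0, 45.0, 7200.0):
    h.observe(d)
print(h.cumulative())  # [1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 4]
```

Each scraped le series is one entry of the cumulative list, so p99 queries reduce to interpolating between bucket boundaries on the server side.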
Alert Rules (6)
| Alert | Condition | Severity |
|---|---|---|
| ServiceDown | up == 0 for 5m | critical |
| HighFailureRate | error rate > 0.1/s for 5m | warning |
| ConsecutiveFailures | 5+ consecutive cycle failures for 2m | critical |
| SlowCycles | p99 cycle duration > 300s for 5m | warning |
| DatabaseConnectionsHigh | > 80 active connections for 5m | warning |
| CacheHitRatioLow | buffer cache hit ratio < 95% for 10m | warning |
Grafana Dashboard
Auto-provisioned dashboard with per-service panels: cycle duration, error counts, consecutive failures, and service-specific progress metrics.
Structured Logging
info finder cycle_completed relay_count=100 duration=2.5
error validator retry_failed attempt=3 url="wss://relay.example.com"
JSON mode available for cloud aggregation:
{"timestamp": "2026-02-09T12:34:56+00:00", "level": "info", "service": "finder", "message": "cycle_completed", "relay_count": 100}
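The key=value format above can be reproduced with a small formatter. The quoting rule (quote string values containing spaces or slashes) is an assumption for illustration, not BigBrotr's exact Logger behavior:

```python
def format_kv(level: str, service: str, message: str, **fields) -> str:
    """Sketch of key=value log formatting; quoting rules are assumptions."""
    parts = [level, service, message]
    for k, v in fields.items():
        if isinstance(v, str) and (" " in v or "/" in v):
            v = f'"{v}"'  # quote values that would break naive parsing
        parts.append(f"{k}={v}")
    return " ".join(parts)

print(format_kv("info", "finder", "cycle_completed",
                relay_count=100, duration=2.5))
# info finder cycle_completed relay_count=100 duration=2.5
print(format_kv("error", "validator", "retry_failed",
                attempt=3, url="wss://relay.example.com"))
# error validator retry_failed attempt=3 url="wss://relay.example.com"
```

The same field dict can be dumped through json.dumps for the JSON mode, so both output formats share one logging call site.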
Nostr Protocol Support
NIPs Implemented
| NIP | Usage |
|---|---|
| NIP-01 | Event model, relay communication |
| NIP-02 | Contact list relay discovery (kind 3) |
| NIP-11 | Relay information document fetch and parse |
| NIP-65 | Relay list metadata (kind 10002) |
| NIP-66 | Relay monitoring and discovery (kinds 10166, 30166) |
Event Kinds
| Kind | Direction | Purpose |
|---|---|---|
| 0 | Published | Monitor profile metadata |
| 2 | Consumed | Deprecated relay recommendation |
| 3 | Consumed | Contact list (relay URLs from tag values) |
| 10002 | Consumed | NIP-65 relay list (r tags) |
| 10166 | Published | Monitor announcement (capabilities, networks, timeouts) |
| 30166 | Published | Relay discovery (addressable, one per relay, health check tags) |
NIP-66 Health Checks
| Check | What It Measures | Networks |
|---|---|---|
| RTT | WebSocket open/read/write latency (ms), 3-phase with verification | All |
| SSL | Certificate validity, expiry, issuer, SANs, cipher, fingerprint | Clearnet |
| DNS | A/AAAA/CNAME/NS/PTR records, TTL | Clearnet |
| Geo | Country, city, coordinates, timezone, geohash (GeoLite2 City) | Clearnet |
| Net | IP address, ASN, organization, network ranges (GeoLite2 ASN) | Clearnet |
| HTTP | Server header, X-Powered-By (from WebSocket handshake) | All |
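One piece of the SSL check, computing days until certificate expiry, can be done with the stdlib alone. This is a narrow sketch: the real NIP-66 check also gathers issuer, SANs, cipher, and fingerprint:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(cert: dict) -> float:
    """Given the dict shape returned by ssl.SSLSocket.getpeercert(),
    compute days remaining until the notAfter timestamp."""
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    now = datetime.now(timezone.utc).timestamp()
    return (not_after - now) / 86400

# Sample notAfter string in the format getpeercert() uses.
sample = {"notAfter": "Jan  1 00:00:00 2031 GMT"}
print(round(days_until_expiry(sample)))
```

In a live check the cert dict would come from a TLS handshake against the relay's host; here a canned notAfter string stands in so the sketch stays offline.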
Configuration
Environment Variables
| Variable | Required | Description |
|---|---|---|
| DB_ADMIN_PASSWORD | Yes | PostgreSQL admin password |
| DB_WRITER_PASSWORD | Yes | Writer role password (all eight services) |
| DB_READER_PASSWORD | Yes | Reader role password (read-only access) |
| PRIVATE_KEY | For Monitor, Synchronizer, Validator, Dvm | Nostr private key (hex or nsec) for event signing |
| GRAFANA_PASSWORD | For Grafana | Grafana admin password |
Configuration Files
deployments/bigbrotr/config/
├── brotr.yaml # Pool, batch size, timeouts
└── services/
├── seeder.yaml # Seed file path, validate mode
├── finder.yaml # API sources (JMESPath), event scanning, concurrency
├── validator.yaml # Networks, cleanup, processing chunk size
├── monitor.yaml # Health checks, retry per type, publishing, GeoIP
├── synchronizer.yaml # Networks, filter, time range, per-relay overrides
├── refresher.yaml # View list, refresh interval
├── api.yaml # Host, port, pagination, CORS
└── dvm.yaml # NIP-90 kind, relay list, response format
All configs use Pydantic v2 validation with typed defaults and constraints.
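A typed, constrained config model in Pydantic v2 might look like the sketch below. The field names and bounds are invented for illustration; they are not finder.yaml's actual schema:

```python
from pydantic import BaseModel, Field

class FinderConfig(BaseModel):
    """Illustrative service config model (field names are assumptions)."""
    interval_seconds: int = Field(default=300, ge=60)
    concurrency: int = Field(default=10, ge=1, le=100)
    api_sources: list[str] = Field(default_factory=list)

cfg = FinderConfig(concurrency=20)
print(cfg.interval_seconds, cfg.concurrency)  # 300 20
```

Loading the YAML into such a model means an out-of-range value (say, concurrency=0) fails fast at startup with a precise validation error instead of surfacing mid-cycle.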
Development
Setup
git clone https://github.com/BigBrotr/bigbrotr.git && cd bigbrotr
curl -LsSf https://astral.sh/uv/install.sh | sh # install uv (one-time)
uv sync --group dev
pre-commit install
Quality Checks
make lint # ruff check src/ tests/
make format # ruff format src/ tests/
make typecheck # mypy src/bigbrotr (strict mode)
make test # pytest unit tests (~2400 tests)
make test-integration # pytest integration tests (requires Docker)
make test-fast # pytest -m "not slow"
make coverage # pytest --cov with HTML report
make ci # all checks: lint + format-check + typecheck + test + sql-check + audit
make docs # build MkDocs documentation site
make docs-serve # serve docs locally with live reload
make build # build Python package (sdist + wheel)
make docker-build # build Docker image (DEPLOYMENT=bigbrotr)
make docker-up # start Docker stack
make docker-down # stop Docker stack
make clean # remove build artifacts and caches
Test Suite
- ~2,400 unit tests + ~94 integration tests (testcontainers PostgreSQL)
- asyncio_mode = "auto" -- no @pytest.mark.asyncio marker needed
- Global timeout: 120s per test
- Shared fixtures via tests/fixtures/relays.py (registered as pytest plugin)
- Coverage threshold: 80% (branch coverage enabled)
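What asyncio_mode = "auto" buys in practice: a plain async def test function is collected and driven by pytest-asyncio without any marker. The test body below is an invented example, not one of the suite's tests:

```python
# With asyncio_mode = "auto", pytest-asyncio runs plain `async def` tests;
# no @pytest.mark.asyncio decorator is required.
import asyncio

async def test_relay_url_scheme():
    await asyncio.sleep(0)  # any awaitable works inside the test body
    assert "wss://relay.example.com".startswith("wss://")

# Outside pytest, the coroutine can still be driven manually:
asyncio.run(test_relay_url_scheme())
```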
CI/CD Pipeline
| Stage | Tool | Purpose |
|---|---|---|
| Pre-commit | ruff, mypy, yamllint, detect-secrets, markdownlint, hadolint, sqlfluff, codespell | Code quality gates |
| Unit Test | pytest (Python 3.11--3.14 matrix) | Unit tests + coverage |
| Integration Test | pytest + testcontainers | PostgreSQL integration tests |
| Build | Docker Buildx (matrix) | Multi-deployment image builds + Trivy scan |
| Security | uv-secure, Trivy, CodeQL | Dependency vulns, container scanning, static analysis |
| Release | PyPI (OIDC) + GHCR | Package + Docker image publishing, SBOM generation |
| Docs | MkDocs Material | Auto-generated API docs deployed to GitHub Pages |
| Dependencies | Dependabot | Weekly updates for uv, Docker, GitHub Actions |
Project Structure
bigbrotr/
├── src/bigbrotr/ # Main package
│ ├── __main__.py # CLI entry point (service registry)
│ ├── core/ # Infrastructure
│ │ ├── pool.py # asyncpg connection pool with retry/backoff
│ │ ├── brotr.py # DB facade (stored procedures, bulk inserts)
│ │ ├── base_service.py # Abstract service with run_forever loop
│ │ ├── logger.py # Structured key=value / JSON logging
│ │ ├── metrics.py # Prometheus metrics server
│ │ └── yaml.py # YAML config loader
│ ├── models/ # Pure frozen dataclasses (zero I/O)
│ │ ├── relay.py # URL validation (rfc3986), network detection
│ │ ├── event.py # Nostr event wrapper (nostr_sdk.Event)
│ │ ├── metadata.py # Content-addressed metadata (SHA-256)
│ │ ├── event_relay.py # Event-relay junction (cascade insert)
│ │ ├── relay_metadata.py # Relay-metadata junction (cascade insert)
│ │ ├── service_state.py # Operational state persistence
│ │ ├── constants.py # NetworkType, ServiceName, EventKind enums
│ │ └── _validation.py # Shared validation and sanitization
│ ├── nips/ # NIP protocol implementations (I/O)
│ │ ├── base.py # Base data, logs, metadata models
│ │ ├── parsing.py # Declarative field parsing (FieldSpec)
│ │ ├── event_builders.py # Kind 0/10166/30166 event construction
│ │ ├── nip11/ # Relay information document
│ │ └── nip66/ # Health checks: rtt, ssl, dns, geo, net, http
│ ├── utils/ # Network primitives
│ │ ├── protocol.py # Nostr client, relay connection, broadcasting
│ │ ├── transport.py # Insecure WebSocket transport, stderr filter
│ │ ├── dns.py # Async hostname resolution (A/AAAA)
│ │ ├── keys.py # Nostr key loading from environment
│ │ ├── http.py # Bounded HTTP response reading
│ │ └── parsing.py # Tolerant model factory parsing
│ └── services/ # Business logic
│ ├── seeder/ # Seed file loading (one-shot)
│ ├── finder/ # Relay discovery (APIs + event scanning)
│ ├── validator/ # WebSocket protocol validation
│ ├── monitor/ # Health check orchestration + publishing
│ ├── synchronizer/ # Event collection (cursor-based)
│ ├── refresher/ # Materialized view refresh
│ ├── api/ # REST API (FastAPI, read-only)
│ ├── dvm/ # NIP-90 Data Vending Machine
│ └── common/ # Shared queries, configs, mixins
├── deployments/
│ ├── Dockerfile # Single parametric (ARG DEPLOYMENT)
│ ├── bigbrotr/ # Full archive deployment
│ │ ├── config/ # YAML configs (brotr + 8 services)
│ │ ├── postgres/init/ # SQL schema (10 files, 25 functions)
│ │ ├── monitoring/ # Prometheus + Alertmanager + Grafana
│ │ └── docker-compose.yaml # 15 containers, 2 networks
│ └── lilbrotr/ # Lightweight deployment
├── tests/
│ ├── fixtures/relays.py # Shared relay fixtures
│ ├── unit/ # ~2,400 tests (mirrors src/ structure)
│ └── integration/ # ~94 tests (testcontainers PostgreSQL)
├── docs/ # MkDocs Material documentation
├── Makefile # Development targets
└── pyproject.toml # All config: deps, ruff, mypy, pytest, coverage
Docker Infrastructure
Container Stack
| Container | Image | Purpose | Resources |
|---|---|---|---|
| postgres | postgres:16-alpine | Primary storage | 2 CPU, 2 GB |
| pgbouncer | edoburu/pgbouncer:v1.25.1-p0 | Transaction-mode connection pooling | 0.5 CPU, 256 MB |
| tor | osminogin/tor-simple:0.4.8.10 | SOCKS5 proxy for .onion relays | 0.5 CPU, 256 MB |
| seeder | bigbrotr (parametric) | Relay bootstrapping (one-shot) | 0.5 CPU, 256 MB |
| finder | bigbrotr (parametric) | Relay discovery | 1 CPU, 512 MB |
| validator | bigbrotr (parametric) | Candidate validation | 1 CPU, 512 MB |
| monitor | bigbrotr (parametric) | Health monitoring + event publishing | 1 CPU, 512 MB |
| synchronizer | bigbrotr (parametric) | Event archiving | 1 CPU, 512 MB |
| refresher | bigbrotr (parametric) | Materialized view refresh | 0.25 CPU, 256 MB |
| api | bigbrotr (parametric) | REST API (FastAPI) | 0.5 CPU, 256 MB |
| dvm | bigbrotr (parametric) | NIP-90 Data Vending Machine | 0.5 CPU, 256 MB |
| postgres-exporter | prometheuscommunity/postgres-exporter:v0.16.0 | PostgreSQL metrics | 0.25 CPU, 128 MB |
| prometheus | prom/prometheus:v2.51.0 | Metrics collection (30d retention) | 0.5 CPU, 512 MB |
| alertmanager | prom/alertmanager:v0.27.0 | Alert routing and grouping | 0.25 CPU, 128 MB |
| grafana | grafana/grafana:10.4.1 | Dashboards | 0.5 CPU, 512 MB |
Networks
- data-network -- postgres, pgbouncer, tor, all services
- monitoring-network -- prometheus, grafana, alertmanager, postgres-exporter, all services
Security
- All ports bound to 127.0.0.1 (no external exposure)
- Non-root container execution (UID 1000)
- tini as PID 1 for proper signal handling
- SCRAM-SHA-256 authentication (PostgreSQL + PGBouncer)
- Healthchecks via pg_isready and /metrics HTTP endpoint
Technology Stack
| Category | Technologies |
|---|---|
| Language | Python 3.11+ (fully typed, strict mypy) |
| Database | PostgreSQL 16, asyncpg, PGBouncer |
| Async | asyncio, aiohttp, aiohttp-socks |
| Nostr | nostr-sdk (Rust FFI via UniFFI) |
| Web Framework | FastAPI, uvicorn |
| Validation | Pydantic v2, rfc3986 |
| Monitoring | Prometheus, Grafana, Alertmanager, structured logging |
| Networking | dnspython, geoip2, geohash2, tldextract, cryptography |
| Testing | pytest, pytest-asyncio, pytest-cov, testcontainers |
| Quality | ruff (lint+format), mypy (strict), pre-commit (23 hooks) |
| CI/CD | GitHub Actions, uv-secure, Trivy, CodeQL, Dependabot |
| Containers | Docker, Docker Compose, tini |
| Build | uv (dependency management + build) |
Documentation
Full documentation is available at bigbrotr.github.io/bigbrotr.
| Section | Description |
|---|---|
| Getting Started | Installation, quick start tutorial, first deployment |
| User Guide | Architecture, configuration, database, monitoring |
| How-to Guides | Docker deploy, manual deploy, Tor setup, troubleshooting |
| Development | Setup, testing, contributing |
| API Reference | Auto-generated Python API docs |
| Changelog | Version history and migration guides |
Contributing
See the Contributing Guide for detailed instructions.
- Fork and clone
- uv sync --group dev and pre-commit install
- Write tests for new functionality
- make ci -- all checks must pass
- Submit a pull request
Conventional commits: feat:, fix:, refactor:, docs:, test:, chore:
License
MIT -- see LICENSE.