Lumenova Beacon SDK

A Python observability tracing SDK that sends spans in OpenTelemetry-compatible format, designed for AI/LLM applications.

Features

  • OpenTelemetry Integration - Automatic instrumentation for Anthropic, OpenAI, FastAPI, Redis, HTTPX, and more
  • Manual & Decorator Tracing - Create spans manually or use @trace decorator
  • LangChain/LangGraph Integration - Automatic tracing for chains, agents, tools, and retrievers
  • Strands Agents Integration - Callback handler for AWS Strands agent tracing
  • CrewAI Integration - Event listener for CrewAI crew tracing
  • LiteLLM Integration - Callback logger for LiteLLM proxy tracing
  • Dataset Management - ActiveRecord-style API for managing test datasets
  • Prompt Management - Version-controlled prompt templates with labels (staging, production)
  • Experiment & Evaluation Management - Run experiments over datasets and evaluate results
  • Data Masking - Built-in PII detection and redaction via Beacon Guardrails
  • Flexible Transport - HTTP or file-based span export
  • Full Async Support - Async/await throughout

Requirements

  • Python 3.10+

Installation

# Base installation
pip install lumenova-beacon

# With OpenTelemetry support
pip install lumenova-beacon[opentelemetry]

# With LangChain/LangGraph support
pip install lumenova-beacon[langchain]

# With LiteLLM support
pip install lumenova-beacon[litellm]

# With Strands Agents support
pip install lumenova-beacon[strands]

# With CrewAI support
pip install lumenova-beacon[crewai]

Quick Start

from lumenova_beacon import BeaconClient, trace

# Initialize client with your Beacon credentials
client = BeaconClient(
    endpoint="https://your-beacon-endpoint.lumenova.ai",  # Your Beacon endpoint
    api_key="your-api-key",  # API key from your Beacon account
    session_id="my-session"
)

# Use decorator for automatic tracing
@trace
def my_function(x, y):
    return x + y

result = my_function(10, 20)  # Automatically traced

Configuration

Environment Variables

All environment variables act as fallbacks; constructor parameters override them:

Variable             Purpose                                                              Default
BEACON_ENDPOINT      API base URL for OTLP export (required unless using file_directory)
BEACON_API_KEY       Authentication token
BEACON_SESSION_ID    Default session ID for spans
BEACON_SERVICE_NAME  Service name for OTEL resource (fallback: OTEL_SERVICE_NAME)
BEACON_ENVIRONMENT   Deployment environment (e.g., "production", "staging")
BEACON_VERIFY        SSL certificate verification                                         true
BEACON_EAGER_EXPORT  Export spans eagerly on end                                          true

# Bash/Linux/macOS
export BEACON_ENDPOINT="https://your-beacon-endpoint.lumenova.ai"
export BEACON_API_KEY="your-api-key"
export BEACON_SESSION_ID="my-session"

# PowerShell
$env:BEACON_ENDPOINT = "https://your-beacon-endpoint.lumenova.ai"
$env:BEACON_API_KEY = "your-api-key"
$env:BEACON_SESSION_ID = "my-session"
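
With these variables set, the client picks up its configuration automatically:

from lumenova_beacon import BeaconClient

# No arguments needed: endpoint, API key, and session ID
# are read from the environment.
client = BeaconClient()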

Configuration Options

from lumenova_beacon import BeaconClient

client = BeaconClient(
    # Connection
    endpoint="https://your-beacon-endpoint.lumenova.ai",
    api_key="your-api-key",
    verify=True,
    headers={"Custom-Header": "value"},

    # Span Configuration
    session_id="my-session",
    service_name="my-service",
    environment="production",

    # OpenTelemetry
    auto_instrument_opentelemetry=True,   # Auto-configure OTEL (default: True)
    isolated=False,                        # Use private TracerProvider (default: False)
    auto_instrument_litellm=False,         # Auto-configure LiteLLM (default: False)

    # Data Masking
    masking_function=None,                 # Custom masking function (optional)

    # General
    enabled=True,
    eager_export=True,
)

File Transport

For local development or testing, use file_directory instead of endpoint:

from lumenova_beacon import BeaconClient

client = BeaconClient(
    file_directory="./traces",
)
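
Spans are then written to the local directory instead of being sent over HTTP. A minimal sketch (assuming the @trace decorator and flush() behave the same as with the HTTP transport):

from lumenova_beacon import BeaconClient, trace

client = BeaconClient(file_directory="./traces")

@trace
def local_work():
    return "done"

local_work()
client.flush()  # make sure pending spans are written under ./traces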

Core Features

1. Tracing

Decorator Tracing

The @trace decorator automatically captures function execution:

from lumenova_beacon import trace

# Simple usage
@trace
def process_data(data):
    return data.upper()

# With custom name
@trace(name="custom_operation")
def another_function():
    pass

# Capture inputs and outputs
@trace(capture_args=True, capture_result=True)
def calculate(x, y):
    return x + y

# Works with async functions
@trace
async def async_operation():
    await some_async_call()

Manual Tracing

For more control, use context managers:

from lumenova_beacon import BeaconClient
from lumenova_beacon.types import SpanKind, StatusCode

client = BeaconClient()

# Context manager
with client.trace("operation_name") as span:
    span.set_attribute("user_id", "123")
    span.set_input({"query": "search term"})

    try:
        result = do_work()
        span.set_output(result)
        span.set_status(StatusCode.OK)
    except Exception as e:
        span.record_exception(e)
        span.set_status(StatusCode.ERROR, str(e))
        raise

# Async context manager
async with client.trace("async_operation") as span:
    result = await async_work()
    span.set_output(result)

# Direct span creation
span = client.create_span(
    name="manual_span",
    kind=SpanKind.CLIENT,
)
span.start()
# ... do work ...
span.end()
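
Because the SDK is OpenTelemetry-compatible, nested trace contexts can be expected to form a parent/child hierarchy (a sketch, assuming the active span propagates via context as in OpenTelemetry):

with client.trace("pipeline") as parent:
    parent.set_input({"query": "search term"})
    with client.trace("fetch_documents") as child:
        # child should be recorded as a child span of "pipeline"
        child.set_attribute("source", "vector-store")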

2. OpenTelemetry Integration

Beacon automatically configures OpenTelemetry to export spans:

from lumenova_beacon import BeaconClient
from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Initialize (auto-configures OpenTelemetry)
client = BeaconClient(
    endpoint="https://your-beacon-endpoint.lumenova.ai",
    api_key="your-api-key",
    auto_instrument_opentelemetry=True  # Default
)

# Instrument libraries
AnthropicInstrumentor().instrument()
OpenAIInstrumentor().instrument()

# Now all API calls are automatically traced!
from anthropic import Anthropic
anthropic = Anthropic()
response = anthropic.messages.create(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Hello!"}]
)  # Automatically traced with proper span hierarchy

Supported Instrumentors

Install additional instrumentors as needed:

pip install opentelemetry-instrumentation-anthropic
pip install opentelemetry-instrumentation-openai
pip install opentelemetry-instrumentation-fastapi
pip install opentelemetry-instrumentation-redis
pip install opentelemetry-instrumentation-httpx
pip install opentelemetry-instrumentation-requests
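
These instrumentors follow the standard OpenTelemetry contrib pattern. For example, FastAPI and HTTPX (using those packages' usual entry points):

from fastapi import FastAPI
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor

app = FastAPI()
FastAPIInstrumentor.instrument_app(app)  # trace incoming requests
HTTPXClientInstrumentor().instrument()   # trace outgoing HTTPX calls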

3. LangChain Integration

Automatically trace all LangChain operations:

from lumenova_beacon import BeaconClient, BeaconLangGraphHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

client = BeaconClient()
handler = BeaconLangGraphHandler(
    session_id="session-123"
)

# Use with request-time callbacks (recommended)
llm = ChatOpenAI(model="gpt-4")
response = llm.invoke(
    "What is the capital of France?",
    config={"callbacks": [handler]}
)

# Works with chains
prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
chain = prompt | llm
response = chain.invoke(
    {"topic": "AI"},
    config={"callbacks": [handler]}
)

# Traces agents, tools, retrievers, and more
from langchain.agents import create_react_agent, AgentExecutor

agent = create_react_agent(llm, tools, prompt)  # tools: your list of LangChain tools
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke(
    {"input": "What's the weather?"},
    config={"callbacks": [handler]}
)

4. Strands Agents Integration

Trace AWS Strands Agent executions with automatic span hierarchy:

from lumenova_beacon import BeaconClient, BeaconStrandsHandler
from strands import Agent

client = BeaconClient()
handler = BeaconStrandsHandler(
    session_id="my-session",
    agent_name="My Agent",
)

agent = Agent(model=model, callback_handler=handler)
result = agent("Hello, world!")
print(handler.trace_id)  # Link to Beacon trace

5. CrewAI Integration

Trace CrewAI Crew executions via the event listener:

from lumenova_beacon import BeaconClient, BeaconCrewAIListener
from crewai import Agent, Crew, Task

client = BeaconClient()

# Auto-registers with CrewAI event bus
listener = BeaconCrewAIListener(
    session_id="my-session",
    crew_name="My Research Crew",
)

crew = Crew(agents=[...], tasks=[...])
result = crew.kickoff()
print(listener.trace_id)  # Link to Beacon trace

6. Dataset Management

Manage test datasets with an ActiveRecord-style API. Both sync and async methods are available:

  • Sync methods (simple names): Dataset.method(...) or dataset.method(...)
  • Async methods ('a' prefix): await Dataset.amethod(...) or await dataset.amethod(...)

from lumenova_beacon import BeaconClient
from lumenova_beacon.datasets import Dataset, DatasetRecord

client = BeaconClient()

# Create dataset (sync)
dataset = Dataset.create(
    name="qa-evaluation",
    description="Question answering test cases"
)

# Create dataset (async)
dataset = await Dataset.acreate(
    name="qa-evaluation",
    description="Question answering test cases"
)

# Add a single record with flexible column-based data (sync)
dataset.create_record(
    data={
        "prompt": "What is AI?",
        "expected_answer": "Artificial Intelligence is...",
        "difficulty": "easy",
        "category": "definitions"
    }
)

# Add a single record (async)
await dataset.acreate_record(
    data={
        "prompt": "What is AI?",
        "expected_answer": "Artificial Intelligence is...",
        "difficulty": "easy"
    }
)

# Bulk create records (sync)
records = [
    {
        "data": {
            "question": "What is ML?",
            "expected_answer": "Machine Learning...",
            "difficulty": "medium"
        }
    },
    {
        "data": {
            "question": "What is DL?",
            "expected_answer": "Deep Learning...",
            "difficulty": "hard"
        }
    }
]
dataset.bulk_create_records(records)

# Bulk create records (async)
await dataset.abulk_create_records(records)

# List datasets (sync)
datasets, pagination = Dataset.list(page=1, page_size=20, search="qa")
for ds in datasets:
    print(f"{ds.name}: {ds.description}")

# List datasets (async)
datasets, pagination = await Dataset.alist(page=1, page_size=20, search="qa")
for ds in datasets:
    print(f"{ds.name}: {ds.description}")

# Get dataset (sync)
dataset = Dataset.get(dataset_id="dataset-uuid", include_records=True)

# Get dataset (async)
dataset = await Dataset.aget(dataset_id="dataset-uuid", include_records=True)

# List records with pagination (sync)
records, pagination = dataset.list_records(page=1, page_size=50)

# List records with pagination (async)
records, pagination = await dataset.alist_records(page=1, page_size=50)

# Update dataset (sync)
dataset.update(name="updated-name", description="New description")

# Update dataset (async)
await dataset.aupdate(name="updated-name", description="New description")

# Delete dataset (cascade deletes records) (sync)
dataset.delete()

# Delete dataset (async)
await dataset.adelete()

7. Prompt Management

Version-controlled prompt templates with labels:

Creating Prompts

from lumenova_beacon import BeaconClient
from lumenova_beacon.prompts import Prompt

client = BeaconClient()

# Create text prompt (sync)
prompt = Prompt.create(
    name="greeting",
    template="Hello {{name}}! Welcome to {{company}}.",
    description="Customer greeting template",
    tags=["customer-support", "greeting"]
)

# Create chat prompt (async)
prompt = await Prompt.acreate(
    name="support-bot",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for {{product}}."},
        {"role": "user", "content": "{{question}}"}
    ],
    tags=["support"]
)

# Quick sync example
prompt = Prompt.create(
    name="quick-prompt",
    template="Hi {{name}}!"
)

Fetching and Using Prompts

# Get latest version (sync)
prompt = Prompt.get("greeting")

# Get specific version (async)
prompt = await Prompt.aget("greeting", version=2)

# Get labeled version (sync)
prompt = Prompt.get("greeting", label="production")

# Get by ID (async)
prompt = await Prompt.aget(prompt_id="prompt-uuid")

# Format prompt with variables
message = prompt.format(name="Alice", company="Acme Corp")
# Result: "Hello Alice! Welcome to Acme Corp."

# Chat prompt formatting (async)
prompt = await Prompt.aget("support-bot")
messages = prompt.format(product="CloudSync", question="How do I sync?")
# Result: [{"role": "system", "content": "..."}, {"role": "user", "content": "..."}]

Versioning and Labels

# Publish new version (async)
new_version = await prompt.apublish(
    template="Hi {{name}}! Welcome to {{company}}. We're excited to have you!",
    message="Added enthusiastic tone"
)
print(f"Published version {new_version.version}")

# Set labels (sync)
prompt.set_label("staging", version=2)
prompt.set_label("production", version=2)

# Promote staging to production after testing (async)
staging_prompt = await Prompt.aget("greeting", label="staging")
# ... test the prompt ...
await staging_prompt.aset_label("production")

LangChain Conversion

from langchain_core.prompts import PromptTemplate, ChatPromptTemplate

# Convert text prompt to LangChain (sync)
prompt = Prompt.get("greeting", label="production")
lc_prompt = prompt.to_langchain()  # Returns PromptTemplate
result = lc_prompt.format(name="Bob", company="TechCorp")

# Convert chat prompt to LangChain (async)
chat_prompt = await Prompt.aget("support-bot", label="production")
lc_chat = chat_prompt.to_langchain()  # Returns ChatPromptTemplate
messages = lc_chat.format_messages(product="DataHub", question="Reset password?")

# Use in chain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")
chain = lc_chat | llm
response = await chain.ainvoke({"product": "CloudSync", "question": "Why is sync failing?"})

List and Search

# List all prompts (sync)
prompts = Prompt.list(page=1, page_size=20)

# Filter by tags (async)
support_prompts = await Prompt.alist(tags=["customer-support"])

# Search by text (sync)
results = Prompt.list(search="greeting")

# Async version
prompts = await Prompt.alist(page=1, page_size=10)

8. Experiment Management

Run experiments over datasets with step-by-step pipelines:

from lumenova_beacon.experiments import Experiment

# Create an experiment (sync)
experiment = Experiment.create(
    name="qa-eval-v1",
    dataset_id="dataset-uuid",
    description="Evaluate QA model accuracy",
)

# Run the experiment with a processing function (see the sketch below)
run = experiment.run(process_fn=my_pipeline)

# List experiments (async)
experiments, pagination = await Experiment.alist(page=1, page_size=20)
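
The processing function passed to run() is user-supplied; its exact contract isn't documented here. A hypothetical sketch, assuming it receives each record's data dict and returns the output to record:

# Hypothetical signature - adapt it to what Experiment.run expects.
def my_pipeline(record_data: dict) -> dict:
    # Replace with a real model call; here we just echo the prompt.
    return {"answer": f"Echo: {record_data.get('prompt', '')}"}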

9. Evaluation Management

Evaluate experiment results with custom evaluators:

from lumenova_beacon.evaluations import Evaluation, EvaluationRun

# Create an evaluation (sync)
evaluation = Evaluation.create(
    name="accuracy-check",
    experiment_id="experiment-uuid",
)

# List evaluations (async)
evaluations, pagination = await Evaluation.alist(page=1, page_size=20)

10. LLM Config Management

Retrieve LLM configurations from the Beacon platform:

from lumenova_beacon.llm_configs import LLMConfig

# Get an LLM config (sync)
config = LLMConfig.get(config_id="config-uuid")

# List configs (async)
configs, pagination = await LLMConfig.alist(page=1, page_size=20)

11. Data Masking

Automatically mask sensitive data (PII) before spans are exported:

from lumenova_beacon import BeaconClient
from lumenova_beacon.masking.integrations.beacon_guardrails import (
    create_beacon_masking_function,
    MaskingMode,
    PIIType,
)

# Create a masking function backed by Beacon Guardrails API
masking_fn = create_beacon_masking_function(
    pii_types=[PIIType.PERSON, PIIType.EMAIL_ADDRESS, PIIType.US_SSN],
    mode=MaskingMode.REDACT,
)

# Pass it to the client - all span data is masked before export
client = BeaconClient(
    endpoint="https://your-beacon-endpoint.lumenova.ai",
    api_key="your-api-key",
    masking_function=masking_fn,
)

You can also provide a custom masking function:

def my_masking_fn(text: str) -> str:
    return text.replace("secret", "***")

client = BeaconClient(masking_function=my_masking_fn)
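
For instance, a regex-based email redactor (a minimal sketch; any callable mapping str to str works):

import re

from lumenova_beacon import BeaconClient

def mask_emails(text: str) -> str:
    # Replace anything that looks like an email address.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)

client = BeaconClient(masking_function=mask_emails)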

12. Guardrails

Apply content guardrail policies via the Beacon API:

from lumenova_beacon.guardrails import Guardrail

guardrail = Guardrail(guardrail_id="guardrail-uuid")

# Sync
result = guardrail.apply("some user input")

# Async
result = await guardrail.aapply("some user input")

API Reference

Main Exports

from lumenova_beacon import (
    BeaconClient,           # Main client
    BeaconConfig,           # Configuration class
    get_client,             # Get current client singleton
    trace,                  # Tracing decorator
    # Integrations (lazy-loaded)
    BeaconLangGraphHandler,  # LangChain/LangGraph
    BeaconLangGraphConfig,  # LangGraph configuration (adds support for interruptions)
    BeaconStrandsHandler,   # Strands Agents
    BeaconCrewAIListener,   # CrewAI
)

from lumenova_beacon.datasets import Dataset, DatasetRecord
from lumenova_beacon.prompts import Prompt
from lumenova_beacon.experiments import Experiment
from lumenova_beacon.evaluations import Evaluation, EvaluationRun
from lumenova_beacon.llm_configs import LLMConfig
from lumenova_beacon.guardrails import Guardrail
from lumenova_beacon.types import SpanKind, StatusCode, SpanType
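
get_client returns the current client singleton, so shared code can reuse the instance created at startup rather than constructing a new one (a sketch, assuming a no-argument call):

from lumenova_beacon import BeaconClient, get_client

BeaconClient(file_directory="./traces")  # initialize once at startup

client = get_client()  # elsewhere: retrieve the same singleton
with client.trace("shared_operation") as span:
    span.set_attribute("component", "worker")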

BeaconClient

client = BeaconClient(
    endpoint: str | None = None,
    api_key: str | None = None,
    file_directory: str | None = None,
    headers: dict[str, str] | None = None,
    session_id: str | None = None,
    service_name: str | None = None,
    environment: str | None = None,
    auto_instrument_opentelemetry: bool = True,
    isolated: bool = False,
    auto_instrument_litellm: bool = False,
    masking_function: Callable | None = None,
    verify: bool | None = None,
    eager_export: bool | None = None,
    enabled: bool = True,
)

# Methods
span = client.create_span(name, kind, span_type, session_id)
ctx = client.trace(name, kind, span_type)  # Context manager (sync & async)
client.export_span(span)     # Export a single span
client.export_spans(spans)   # Export multiple spans
client.flush()               # Flush pending spans

Dataset

# Class methods (sync - simple names)
dataset = Dataset.create(name: str, description: str | None = None, column_schema: list[dict[str, Any]] | None = None)
dataset = Dataset.get(dataset_id: str, include_records: bool = False)
datasets, pagination = Dataset.list(page=1, page_size=20, search=None)

# Class methods (async - 'a' prefix)
dataset = await Dataset.acreate(...)
dataset = await Dataset.aget(...)
datasets, pagination = await Dataset.alist(...)

# Instance methods (sync - simple names)
dataset.save()
dataset.update(name=None, description=None)
dataset.delete()
record = dataset.create_record(data: dict[str, Any])
dataset.bulk_create_records(records: list[dict])
records, pagination = dataset.list_records(page=1, page_size=50)

# Instance methods (async - 'a' prefix)
await dataset.asave()
await dataset.aupdate(...)
await dataset.adelete()
record = await dataset.acreate_record(...)
await dataset.abulk_create_records(...)
records, pagination = await dataset.alist_records(...)

# Properties
dataset.id
dataset.name
dataset.description
dataset.record_count
dataset.created_at
dataset.updated_at
dataset.column_schema

DatasetRecord

# Class methods (sync - simple names)
record = DatasetRecord.get(dataset_id: str, record_id: str)
records, pagination = DatasetRecord.list(dataset_id: str, page=1, page_size=50)

# Class methods (async - 'a' prefix)
record = await DatasetRecord.aget(...)
records, pagination = await DatasetRecord.alist(...)

# Instance methods (sync - simple names)
record.save()
record.update(data: dict[str, Any] | None = None)
record.delete()

# Instance methods (async - 'a' prefix)
await record.asave()
await record.aupdate(...)
await record.adelete()

# Properties
record.id
record.dataset_id
record.data  # dict[str, Any] - flexible column data
record.created_at
record.updated_at

Prompt

# Class methods (sync - simple names)
prompt = Prompt.create(name, template=None, messages=None, description=None, tags=None)
prompt = Prompt.get(name=None, prompt_id=None, label="latest", version=None)
prompts = Prompt.list(page=1, page_size=10, tags=None, search=None)

# Class methods (async - 'a' prefix)
prompt = await Prompt.acreate(...)
prompt = await Prompt.aget(...)
prompts = await Prompt.alist(...)

# Instance methods (sync - simple names)
prompt.update(name=None, description=None, tags=None)
prompt.delete()
new_version = prompt.publish(template=None, messages=None, message="")
prompt.set_label(label: str, version: int | None = None)

# Instance methods (async - 'a' prefix)
await prompt.aupdate(...)
await prompt.adelete()
new_version = await prompt.apublish(...)
await prompt.aset_label(...)

# Rendering (always sync)
result = prompt.format(**kwargs)
result = prompt.compile(variables: dict)
template = prompt.to_template()  # Convert to Python f-string format
lc_prompt = prompt.to_langchain()  # Convert to LangChain template

# Properties
prompt.id
prompt.name
prompt.type  # "text" or "chat"
prompt.version
prompt.template  # For TEXT prompts
prompt.messages  # For CHAT prompts
prompt.labels  # list[str]
prompt.tags  # list[str]

Span

span = Span(name, kind, span_type)

# Lifecycle
span.start()
span.end(status_code=StatusCode.OK)

# Status
span.set_status(StatusCode.ERROR, "description")
span.record_exception(exc: Exception)

# Attributes
span.set_attribute("key", value)
span.set_attributes({"k1": "v1", "k2": "v2"})
span.set_input(data: dict)
span.set_output(data: dict)
span.set_metadata("key", value)

# Properties
span.trace_id
span.span_id
span.parent_id
span.name
span.kind
span.span_type

Type Enums

from lumenova_beacon.types import SpanKind, StatusCode, SpanType

# SpanKind
SpanKind.INTERNAL
SpanKind.SERVER
SpanKind.CLIENT
SpanKind.PRODUCER
SpanKind.CONSUMER

# StatusCode
StatusCode.UNSET
StatusCode.OK
StatusCode.ERROR

# SpanType
SpanType.SPAN
SpanType.GENERATION
SpanType.CHAIN
SpanType.TOOL
SpanType.RETRIEVAL
SpanType.AGENT
SpanType.FUNCTION
SpanType.REQUEST
SpanType.SERVER
SpanType.TASK
SpanType.CACHE
SpanType.EMBEDDING
SpanType.HANDOFF
SpanType.CONDITIONAL

Error Handling

Exception Hierarchy

from lumenova_beacon.exceptions import (
    BeaconError,                # Base exception
    ConfigurationError,         # Configuration issues
    TransportError,             # Transport errors
    HTTPTransportError,         #   HTTP transport errors
    FileTransportError,         #   File transport errors
    SpanError,                  # Span-related errors
    DatasetError,               # Dataset errors
    DatasetNotFoundError,       #   Dataset not found
    DatasetValidationError,     #   Dataset validation
    PromptError,                # Prompt errors
    PromptNotFoundError,        #   Prompt not found
    PromptValidationError,      #   Prompt validation
    PromptCompilationError,     #   Template compilation
    PromptNetworkError,         #   Network errors
    ExperimentError,            # Experiment errors
    ExperimentNotFoundError,    #   Experiment not found
    ExperimentValidationError,  #   Experiment validation
    EvaluationError,            # Evaluation errors
    EvaluationNotFoundError,    #   Evaluation not found
    EvaluationValidationError,  #   Evaluation validation
    LLMConfigError,             # LLM config errors
    LLMConfigNotFoundError,     #   LLM config not found
    MaskingError,               # Masking errors
    MaskingAPIError,            #   Masking API errors
    MaskingNotFoundError,       #   Masking not found
    MaskingValidationError,     #   Masking validation
    GuardrailError,             # Guardrail errors
    GuardrailNotFoundError,     #   Guardrail not found
    GuardrailValidationError,   #   Guardrail validation
)
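
Because every SDK exception derives from BeaconError, a broad handler can catch anything the SDK raises:

from lumenova_beacon.datasets import Dataset
from lumenova_beacon.exceptions import BeaconError

try:
    dataset = Dataset.get(dataset_id="dataset-uuid")
except BeaconError as e:
    print(f"Beacon SDK error: {e}")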

Retry Logic

All HTTP operations automatically retry up to 3 times with exponential backoff:

from lumenova_beacon.exceptions import PromptNetworkError, PromptNotFoundError
from lumenova_beacon.prompts import Prompt

try:
    prompt = await Prompt.aget("my-prompt")
except PromptNetworkError as e:
    # Failed after 3 automatic retries
    print(f"Network error: {e}")
except PromptNotFoundError as e:
    # Prompt doesn't exist
    print(f"Not found: {e}")

Graceful Degradation

from lumenova_beacon import BeaconClient, trace

# Disable tracing in development
client = BeaconClient(enabled=False)

# Tracing becomes no-op when disabled
@trace
def my_function():
    return "result"  # No tracing overhead

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
