Python client for GrandJury server API - collective intelligence for model evaluation

grandjury

Python client for the GrandJury ML evaluation and verdict analysis API.

This package provides comprehensive access to the GrandJury server for ML model evaluation and voting analysis, supporting:

  • Model scoring with decay-adjusted algorithms
  • Vote analysis across multiple dimensions (time, completeness, confidence)
  • Multiple data formats (pandas, polars, CSV, parquet, dict/list)
  • Performance optimizations with optional dependencies
  • Backward compatibility with existing code

Patent Pending.

Installation

pip install grandjury

Optional performance dependencies:

pip install grandjury[performance]  # Installs msgspec, pyarrow, polars
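To see which of these optional accelerators are actually importable in your environment, a quick stdlib-only check (no GrandJury APIs involved) is:

```python
import importlib.util

# The three optional packages the [performance] extra installs.
OPTIONAL_DEPS = ("msgspec", "pyarrow", "polars")

def available_accelerators() -> dict[str, bool]:
    """Report which optional performance packages can be imported."""
    return {name: importlib.util.find_spec(name) is not None
            for name in OPTIONAL_DEPS}

print(available_accelerators())
# e.g. {'msgspec': True, 'pyarrow': True, 'polars': False}
```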

Quick Start

Basic Model Evaluation

from grandjury import GrandJuryClient

# Initialize client
client = GrandJuryClient(api_key="your-api-key")

# Evaluate model performance
result = client.evaluate_model(
    previous_score=0.7,
    votes=[0.9, 0.8, 0.6],
    reputations=[1.0, 1.0, 0.8]
)
print(f"Score: {result['score']:.4f}")
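The actual score is computed server-side. Purely to build intuition for the inputs, a toy decay-adjusted update might blend the previous score with a reputation-weighted mean of the new votes; the formula and the decay factor below are illustrative assumptions, not the GrandJury algorithm:

```python
def toy_decay_score(previous_score: float,
                    votes: list[float],
                    reputations: list[float],
                    decay: float = 0.5) -> float:
    """Illustrative only: blend the old score with a reputation-weighted vote mean."""
    weighted_mean = (sum(v * r for v, r in zip(votes, reputations))
                     / sum(reputations))
    return decay * previous_score + (1 - decay) * weighted_mean

print(f"{toy_decay_score(0.7, [0.9, 0.8, 0.6], [1.0, 1.0, 0.8]):.4f}")  # 0.7393
```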

Vote Analysis with Multiple Data Formats

import pandas as pd
import polars as pl

# Your vote data
vote_data = [
    {
        "inference_id": 1,
        "vote": True,
        "voter_id": 101,
        "vote_time": "2024-07-07T19:22:30",
        # ... other fields
    }
    # ... more votes
]
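Before sending records, it can be worth verifying that each one carries the fields used above (`inference_id`, `vote`, `voter_id`, `vote_time`; whether the API requires further fields is not specified here):

```python
REQUIRED_FIELDS = {"inference_id", "vote", "voter_id", "vote_time"}

def missing_fields(records: list[dict]) -> list[tuple[int, set]]:
    """Return (index, missing-field-set) for every incomplete record."""
    return [(i, REQUIRED_FIELDS - rec.keys())
            for i, rec in enumerate(records)
            if not REQUIRED_FIELDS <= rec.keys()]

bad = missing_fields([
    {"inference_id": 1, "vote": True, "voter_id": 101,
     "vote_time": "2024-07-07T19:22:30"},
    {"inference_id": 2, "vote": False},
])
print(bad)  # record 1 is missing voter_id and vote_time
```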

# No authentication needed for analysis endpoints  
client = GrandJuryClient()

# Use with different data formats
histogram = client.vote_histogram(vote_data)  # dict/list
histogram = client.vote_histogram(pd.DataFrame(vote_data))  # pandas
histogram = client.vote_histogram(pl.DataFrame(vote_data))  # polars
histogram = client.vote_histogram("votes.csv")  # CSV file
histogram = client.vote_histogram("votes.parquet")  # Parquet file

# Vote completeness analysis
completeness = client.vote_completeness(
    data=vote_data,
    voter_list=[101, 102, 103]
)

# Population confidence
confidence = client.population_confidence(
    data=vote_data,
    voter_list=[101, 102, 103]
)

# Majority vote analysis
majority = client.majority_good_votes(
    data=vote_data,
    good_vote=True,
    threshold=0.5
)

# Vote distribution per inference
distribution = client.votes_distribution(vote_data)
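As a sanity check on server results, the completeness, majority, and per-inference distribution metrics can be approximated locally. The exact server-side definitions may differ; these are plain-reading interpretations of the method names:

```python
from collections import Counter

votes = [
    {"inference_id": 1, "vote": True,  "voter_id": 101},
    {"inference_id": 1, "vote": True,  "voter_id": 102},
    {"inference_id": 1, "vote": False, "voter_id": 103},
    {"inference_id": 2, "vote": True,  "voter_id": 101},
]
voter_list = [101, 102, 103]

# Completeness: share of expected voters who cast at least one vote.
completeness = len({v["voter_id"] for v in votes} & set(voter_list)) / len(voter_list)

# Majority: per inference, does the share of "good" votes clear the threshold?
per_inference = Counter((v["inference_id"], v["vote"] is True) for v in votes)
def majority_good(inference_id, threshold=0.5):
    good = per_inference[(inference_id, True)]
    total = good + per_inference[(inference_id, False)]
    return good / total > threshold

# Distribution: raw vote counts per inference.
distribution = Counter(v["inference_id"] for v in votes)

print(completeness, majority_good(1), dict(distribution))
```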

Backward Compatibility

# Original function still works
from grandjury import evaluate_model

result = evaluate_model(
    predictions=["Model output 1", "Model output 2"],
    references=["Expected 1", "Expected 2"],
    api_key="your-api-key"
)

API Endpoints

Method                     Description                           Authentication
evaluate_model()           Model scoring with decay algorithms   Required
vote_histogram()           Vote time distribution analysis       Optional
vote_completeness()        Voting completeness metrics           Optional
population_confidence()    Population confidence analysis        Optional
majority_good_votes()      Majority vote counting                Optional
votes_distribution()       Vote distribution per inference       Optional

Performance Features

The client automatically uses performance optimizations when available:

  • msgspec: Faster JSON serialization
  • PyArrow: Efficient Parquet file reading
  • Polars: Native DataFrame support

Install with: pip install msgspec pyarrow polars

Error Handling

try:
    result = client.vote_histogram(invalid_data)
except Exception as e:
    print(f"API Error: {e}")
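For transient network failures, a thin retry wrapper around any client call is often enough. The backoff parameters below are arbitrary defaults, and which exception types the client actually raises is an assumption worth checking:

```python
import time

def with_retries(call, *, attempts=3, base_delay=1.0):
    """Run `call()` with exponential backoff, re-raising the final failure."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:
            if attempt == attempts - 1:
                raise
            delay = base_delay * 2 ** attempt
            print(f"attempt {attempt + 1} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Usage: result = with_retries(lambda: client.vote_histogram(vote_data))
```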

Server URL Configuration

# Default: https://grandjury-server.onrender.com/api/v1
client = GrandJuryClient()

# Custom server
client = GrandJuryClient(base_url="https://your-server.com")
# Automatically appends /api/v1 if missing
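The "appends /api/v1 if missing" behaviour suggests normalization along these lines (a sketch of the likely logic, not the client's actual source):

```python
def normalize_base_url(base_url: str, api_path: str = "/api/v1") -> str:
    """Append the API path unless the URL already ends with it."""
    trimmed = base_url.rstrip("/")
    return trimmed if trimmed.endswith(api_path) else trimmed + api_path

print(normalize_base_url("https://your-server.com"))         # https://your-server.com/api/v1
print(normalize_base_url("https://your-server.com/api/v1/"))  # https://your-server.com/api/v1
```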
