ModelSignature Python SDK

Cryptographic identity verification for AI models: like SSL certificates for AI conversations.

Model Feedback & Reports, Right in Your Chat!

Receive end-user feedback & bug reports on your AI model, no matter who's hosting it.
Installation
```shell
# Core SDK - API client for model management
pip install modelsignature

# With embedding - includes LoRA fine-tuning for baking feedback links into models
pip install 'modelsignature[embedding]'
```
The embedding extra adds PyTorch, Transformers, and PEFT for fine-tuning.
Requirements: Python 3.8+
Quick Start
Embed a feedback link directly into your model using LoRA fine-tuning. Users can ask "Where can I report issues?" and get your feedback page URL - works anywhere your model is deployed.
```python
import modelsignature as msig

# One-line embedding with LoRA fine-tuning
result = msig.embed_signature_link(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    link="https://modelsignature.com/models/model_abc123",
    api_key="your_api_key",  # Validates ownership
    mode="adapter",          # or "merge"
    fp="4bit",               # Memory optimization
)

# After deployment, users can ask:
# "I'd like to report a bug" → "Submit feedback at https://modelsignature.com/models/model_abc123"
```
Why embed feedback links?
- Users can report bugs & issues directly from the chat
- Works on HuggingFace, Replicate, or any hosting platform
- Feedback channel persists with the model
- One-time setup, no runtime overhead
Training time: ~40-50 minutes on T4 GPU (Google Colab free tier)
Model Registration
Register your model to get a feedback page where users can submit reports:
```python
from modelsignature import ModelSignatureClient

client = ModelSignatureClient(api_key="your_api_key")

model = client.register_model(
    display_name="My Assistant",
    api_model_identifier="my-assistant-v1",  # Immutable - used for versioning
    endpoint="https://api.example.com/v1/chat",
    version="1.0.0",
    description="Customer support AI assistant",
    model_type="language",
    is_public=True,
)

print(f"Feedback page: https://modelsignature.com/models/{model.model_id}")
```
Note: Provider registration can be done via web dashboard or API. See full documentation for details.
Receiving User Feedback
View Incident Reports
```python
# Get all incidents reported for your models
incidents = client.get_my_incidents(status="reported")

for incident in incidents:
    print(f"Issue: {incident['title']}")
    print(f"Category: {incident['category']}")
    print(f"Severity: {incident['severity']}")
    print(f"Description: {incident['description']}")
```
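Once fetched, reports can be triaged locally with ordinary Python, since each incident is a plain dict with the fields shown above. A minimal sketch (no SDK or API key required; the sample dicts here are invented for illustration) that buckets incidents by severity:

```python
from collections import defaultdict

def group_by_severity(incidents):
    """Bucket incident dicts by their 'severity' field for triage."""
    buckets = defaultdict(list)
    for incident in incidents:
        buckets[incident["severity"]].append(incident["title"])
    return dict(buckets)

# Invented sample data mirroring the fields returned above
reports = [
    {"title": "Wrong arithmetic", "severity": "medium"},
    {"title": "Unsafe reply", "severity": "critical"},
    {"title": "Typo in greeting", "severity": "low"},
    {"title": "Fabricated citation", "severity": "medium"},
]

print(group_by_severity(reports))
```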
Categories & Severity Levels
Users can report issues in these categories:
- Technical Error - Bugs, incorrect outputs, failures
- Harmful Content - Safety concerns, inappropriate responses
- Hallucination - False or fabricated information
- Bias - Unfair or skewed responses
- Other - General feedback
Severity levels: low, medium, high, critical
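The SDK exposes these as the `IncidentCategory` and `IncidentSeverity` enums used in the programmatic reporting example further down. As a rough illustrative stand-in (the member string values for categories are assumptions here, not the SDK's actual definitions; the severity values match the list above):

```python
from enum import Enum

class IncidentCategory(Enum):
    # Illustrative stand-in; the real SDK defines its own member values.
    TECHNICAL_ERROR = "technical_error"
    HARMFUL_CONTENT = "harmful_content"
    HALLUCINATION = "hallucination"
    BIAS = "bias"
    OTHER = "other"

class IncidentSeverity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

print(IncidentCategory.TECHNICAL_ERROR.value)
print(IncidentSeverity.MEDIUM.value)
```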
Key Features
Direct Feedback Channel
- Users report bugs & issues directly from chat
- Incident dashboard for tracking reports
- Community statistics and trust metrics
- Verified vs. anonymous reports
Model Management
- Versioning with immutable identifiers
- Health monitoring and uptime tracking
- Archive/unarchive model versions
- Trust scoring system (unverified → premium)
Optional: Cryptographic Verification
- JWT tokens for identity verification (enterprise use case)
- mTLS deployment authentication
- Response binding to prevent output substitution
- Sigstore bundle support for model integrity
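JWTs are just dot-separated base64url-encoded JSON segments, so a client can inspect a token's claims with the standard library alone. A minimal sketch (the toy token and its claims are fabricated for the demo; note this does not verify the signature, which real identity verification must do against the issuer's key):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) claims segment of a JWT.

    WARNING: no signature check is performed here; real
    verification must validate the signature segment too.
    """
    payload_b64 = token.split(".")[1]
    # Restore the base64 padding that JWT encoding strips
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token (header.payload.signature) to demonstrate
claims = {"sub": "model_abc123", "iss": "modelsignature"}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJub25lIn0.{seg}.sig"

print(decode_jwt_payload(token))
```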
Alternative: Runtime Wrapper
For self-hosted deployments, you can generate verification links at runtime instead of embedding:
```python
from modelsignature import ModelSignatureClient, IdentityQuestionDetector

client = ModelSignatureClient(api_key="your_api_key")
detector = IdentityQuestionDetector()

# In your inference loop
if detector.is_identity_question(user_input):
    verification = client.create_verification(
        model_id="model_abc123",
        user_fingerprint="session_xyz",
    )
    return verification.verification_url
```
Generates short-lived verification URLs (15 min expiry). No model modification required.
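The detector's internals aren't documented here, but a keyword heuristic along these lines captures the idea. This is a hand-rolled approximation for illustration, not the SDK's actual `IdentityQuestionDetector`:

```python
# Hypothetical phrase list; the real detector's patterns are unknown
IDENTITY_PATTERNS = (
    "who are you",
    "what model are you",
    "are you an ai",
    "report a bug",
    "report issues",
)

def is_identity_question(text: str) -> bool:
    """Rough keyword check for identity/feedback questions."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in IDENTITY_PATTERNS)

print(is_identity_question("Hey, what model are you exactly?"))  # True
print(is_identity_question("Summarize this article."))           # False
```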
Advanced Usage
Programmatic Incident Reporting
```python
from modelsignature import IncidentCategory, IncidentSeverity

# Report incidents programmatically
incident = client.report_incident(
    model_id="model_abc123",
    category=IncidentCategory.TECHNICAL_ERROR.value,
    title="Incorrect math calculations",
    description="Model consistently returns wrong answers for basic arithmetic",
    severity=IncidentSeverity.MEDIUM.value,
)
```
Model Versioning
```python
# Create new version (same identifier)
model_v2 = client.register_model(
    api_model_identifier="my-assistant",  # Same as v1
    version="2.0.0",
    force_new_version=True,  # Required
    # ...
)

# Get version history
history = client.get_model_history(model_v2.model_id)
```
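Since versions follow the familiar MAJOR.MINOR.PATCH shape, a version history can be ordered numerically rather than lexically (lexical sort would put "1.10.0" before "1.2.0"). A stdlib-only sketch:

```python
def version_key(version: str) -> tuple:
    """Turn '2.0.0' into (2, 0, 0) so versions sort numerically."""
    return tuple(int(part) for part in version.split("."))

versions = ["1.0.0", "2.0.0", "1.10.0", "1.2.0"]
print(sorted(versions, key=version_key))  # ['1.0.0', '1.2.0', '1.10.0', '2.0.0']
```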
Community Statistics
```python
# Get community stats for your model
stats = client.get_model_community_stats("model_abc123")

print(f"Total feedback reports: {stats['total_verifications']}")
print(f"Open incidents: {stats['unresolved_incidents']}")
print(f"Trust level: {stats['provider_trust_level']}")
```
API Key Management
```python
# List API keys
keys = client.list_api_keys()

# Create new key
new_key = client.create_api_key("Production Key")
print(f"Key: {new_key.api_key}")  # Only shown once

# Revoke key
client.revoke_api_key(key_id="key_123")
```
Configuration
```python
client = ModelSignatureClient(
    api_key="your_key",
    base_url="https://api.modelsignature.com",
    timeout=30,
    max_retries=3,
    debug=True,
)
```
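The `max_retries` setting implies retry-on-failure behavior inside the client. As an illustration of the general pattern only (not the client's actual implementation), an exponential-backoff retry loop looks like this:

```python
import time

def with_retries(call, max_retries=3, base_delay=0.01):
    """Retry `call` up to max_retries extra times, doubling the delay."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retries exhausted; surface the error
            time.sleep(base_delay * (2 ** attempt))

# A fake endpoint that fails twice, then succeeds
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```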
Error Handling
```python
from modelsignature import ConflictError, ValidationError, AuthenticationError

try:
    model = client.register_model(...)
except ConflictError as e:
    # Model already exists - create new version
    print(f"Conflict: {e.existing_resource}")
except ValidationError as e:
    # Invalid parameters
    print(f"Validation error: {e.errors}")
except AuthenticationError as e:
    # Invalid API key
    print(f"Auth failed: {e}")
```
Available exceptions: AuthenticationError, PermissionError, NotFoundError, ConflictError, ValidationError, RateLimitError, ServerError
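SDK exception families like this are commonly rooted in a shared base class, so callers can catch everything from the library in one clause. A plain-Python sketch of that pattern (the `ModelSignatureError` base name and hierarchy are assumptions, not the SDK's documented layout):

```python
class ModelSignatureError(Exception):
    """Hypothetical common base for the SDK's exceptions."""

class AuthenticationError(ModelSignatureError): ...
class ConflictError(ModelSignatureError): ...
class RateLimitError(ModelSignatureError): ...

def call_api():
    # Stand-in for any SDK call that can fail
    raise RateLimitError("slow down")

try:
    call_api()
except ModelSignatureError as e:
    # One clause catches every SDK-specific error
    caught = type(e).__name__

print(caught)  # RateLimitError
```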
Examples
Check the examples/ directory for integration patterns:
- Embedding Example - LoRA fine-tuning
- Incident Reporting - User feedback workflow
- OpenAI Integration - Function calling
- Anthropic Integration - Tool integration
- Middleware Example - Request interception
Documentation
Contributing
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Run tests: `python -m pytest`
- Submit a pull request
Support
- Documentation: docs.modelsignature.com
- Issues: GitHub Issues
- Email: support@modelsignature.com
License
MIT License - see LICENSE file for details.
File details

Details for the file modelsignature-0.3.0.tar.gz.

File metadata
- Download URL: modelsignature-0.3.0.tar.gz
- Size: 50.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.8

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 80d90cb4fe42bb4e4b3b6a63755e29598299400a96e53d7242a0dde3157cf175 |
| MD5 | c3c0e441b7f3643eb6bccd11bbd226c7 |
| BLAKE2b-256 | abaad2aae699e7944616da535f0d70a7684f7704686afcb49f342d17d1cc1911 |

Details for the file modelsignature-0.3.0-py3-none-any.whl.

File metadata
- Download URL: modelsignature-0.3.0-py3-none-any.whl
- Size: 44.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.8

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4e8e028d8e3bcbb4fe3d4f7cb38aaa3148b9123739eb85fc5fcc90a3881b9a32 |
| MD5 | 66d6aa0d3d66111d86211abca3769ea5 |
| BLAKE2b-256 | 02264fbcce32a9e2b4f3d6a541b66038460425d4541aab0489553cfe41f3f984 |