
Variably Python SDK

Official Python SDK for Variably — feature flags, LLM experimentation, and prompt optimization.

Installation

pip install variably-sdk

Quick Start

from variably import VariablyClient

# Initialize the client
client = VariablyClient({
    "api_key": "your-api-key",
    "base_url": "https://api.variably.com",  # optional, defaults to localhost:8080
    "environment": "production"  # optional
})

# Evaluate a boolean feature flag
user_context = {
    "user_id": "user-123",
    "email": "user@example.com",
    "country": "US"
}

is_feature_enabled = client.evaluate_flag_bool(
    "new-checkout-flow",
    False,  # default value
    user_context
)

if is_feature_enabled:
    # Show new checkout flow
    pass

# Evaluate a feature gate
has_access = client.evaluate_gate("premium-features", user_context)

# Track events
client.track({
    "name": "button_clicked",
    "user_id": "user-123",
    "properties": {
        "button_name": "checkout",
        "page": "product-detail"
    }
})

# Clean up resources
client.close()

Prompt Experimentation

Variably provides two modes for LLM prompt experimentation:

BYOR (Bring Your Own Runtime)

You call your own LLM. Variably handles variant allocation and 41-dimensional evaluation.

from variably import VariablyClient
import time

client = VariablyClient({"api_key": "your-api-key"})

user_context = {"user_id": "user-123"}
input_variables = {"query": "What are the symptoms of Type 2 diabetes?"}

# Step 1: Get the allocated variant
variant = client.get_variant("rag-prompt-experiment", user_context, input_variables)
print(f"Variant: {variant.variant_key}, Model: {variant.model}")

# Step 2: Call your LLM with the variant's prompt template
prompt = variant.prompt_template.format(**input_variables)
start = time.time()
llm_response = call_your_llm(prompt, model=variant.model)  # your LLM call
latency = int((time.time() - start) * 1000)

# Step 3: Submit the response for 41-dimensional evaluation
result = client.submit_response(
    experiment_key="rag-prompt-experiment",
    variant_key=variant.variant_key,
    executed_prompt=prompt,
    response=llm_response,
    user_context=user_context,
    input_variables=input_variables,
    provider=variant.provider,
    model=variant.model,
    latency_ms=latency,
)
print(f"Submitted: {result.status}")

Managed Execution

Variably selects the variant, calls the LLM, and evaluates — all in one call.

response = client.evaluate_prompt(
    experiment_key="rag-prompt-experiment",
    user_context={"user_id": "user-123"},
    input_variables={"query": "What are the symptoms of Type 2 diabetes?"},
    evaluation_mode="full",  # "full" | "fast"
)

print(f"Content: {response.content}")
print(f"Model: {response.model}, Latency: {response.latency_ms}ms")
print(f"Tokens: {response.token_usage}")
print(f"Quality Score: {response.quality_score}")

Configuration

from variably import VariablyConfig, VariablyClient

config = VariablyConfig(
    api_key="your-api-key",
    base_url="https://api.variably.com",  # default: http://localhost:8080
    environment="production",  # default: development
    timeout=5000,  # timeout in milliseconds, default: 5000
    retry_attempts=3,  # default: 3
    enable_analytics=True,  # default: True
    cache={
        "ttl": 300,  # TTL in seconds, default: 300 (5 minutes)
        "max_size": 1000,  # default: 1000
        "enabled": True  # default: True
    },
    log_level="INFO"  # DEBUG, INFO, WARNING, ERROR
)

client = VariablyClient(config)

Advanced Usage

Environment Variables

You can create a client using environment variables:

from variably import create_client_from_env

# Uses these environment variables:
# VARIABLY_API_KEY (required)
# VARIABLY_BASE_URL
# VARIABLY_ENVIRONMENT
# VARIABLY_TIMEOUT
# VARIABLY_RETRY_ATTEMPTS
# VARIABLY_ENABLE_ANALYTICS
# VARIABLY_LOG_LEVEL

client = create_client_from_env()
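For example, in a deployment environment you might export these variables before starting the application. The values below are placeholders; defaults are listed in the Configuration section:

```shell
# Required: API key for the Variably backend
export VARIABLY_API_KEY="your-api-key"

# Optional overrides
export VARIABLY_BASE_URL="https://api.variably.com"
export VARIABLY_ENVIRONMENT="production"
export VARIABLY_TIMEOUT=5000
export VARIABLY_RETRY_ATTEMPTS=3
export VARIABLY_ENABLE_ANALYTICS=true
export VARIABLY_LOG_LEVEL=INFO
```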

Different Flag Types

# Boolean flags
bool_value = client.evaluate_flag_bool("feature-enabled", False, user_context)

# String flags
string_value = client.evaluate_flag_string("theme", "light", user_context)

# Number flags
number_value = client.evaluate_flag_number("max-items", 10, user_context)

# JSON flags
json_value = client.evaluate_flag_json("config", {"timeout": 5000}, user_context)

# Get full evaluation details
result = client.evaluate_flag("feature-flag", "default", user_context)
print(f"Value: {result.value}, Reason: {result.reason}, Cache Hit: {result.cache_hit}")

Batch Evaluation

flags = client.evaluate_flags([
    "feature-a",
    "feature-b", 
    "feature-c"
], user_context)

print(flags["feature-a"].value)

Event Tracking

from datetime import datetime, timezone

# Single event
client.track({
    "name": "purchase_completed",
    "user_id": "user-123",
    "properties": {
        "amount": 99.99,
        "currency": "USD",
        "items": ["item-1", "item-2"]
    },
    "timestamp": datetime.now(timezone.utc)  # optional, auto-generated if not provided
})

# Batch events
client.track_batch([
    {"name": "page_view", "user_id": "user-123", "properties": {"page": "/home"}},
    {"name": "button_click", "user_id": "user-123", "properties": {"button": "cta"}}
])

Cache Management

# Clear cache
client.clear_cache()

# Get cache stats
stats = client.cache.get_stats()
print(stats)  # {"size": 10, "max_size": 1000, "enabled": True, "ttl": 300}

Metrics

# Get SDK metrics
metrics = client.get_metrics()
print(metrics)
# {
#     "api_calls": 25,
#     "cache_hits": 15,
#     "cache_misses": 10,
#     "errors": 1,
#     "average_latency": 45.2,
#     "cache_hit_rate": 0.6,
#     "error_rate": 0.04,
#     "flags_evaluated": 20,
#     "gates_evaluated": 5,
#     "events_tracked": 12,
#     "start_time": "2023-10-01T12:00:00Z",
#     "uptime_seconds": 3600
# }

Context Manager

# Use with context manager for automatic cleanup
with VariablyClient({"api_key": "your-api-key"}) as client:
    result = client.evaluate_flag_bool("feature", False, user_context)
    # client.close() is called automatically

Custom Logger

from variably import VariablyClient, create_logger

# Create custom logger
logger = create_logger(
    name="my-app",
    level="DEBUG",
    structured=True,  # JSON logging
    silent=False
)

# Configure the client to log at the same level
client = VariablyClient({
    "api_key": "your-api-key",
    "log_level": "DEBUG"
})

Error Handling

from variably import (
    VariablyError,
    NetworkError,
    AuthenticationError,
    ValidationError,
    RateLimitError,
    TimeoutError,
    ConfigurationError
)

try:
    result = client.evaluate_flag("my-flag", False, user_context)
except AuthenticationError:
    print("Invalid API key")
except NetworkError as e:
    print(f"Network error: {e.status_code}")
except ValidationError as e:
    print(f"Validation error in field: {e.field}")
except RateLimitError as e:
    print(f"Rate limited, retry after {e.retry_after} seconds")
except TimeoutError:
    print("Request timed out")
except ConfigurationError as e:
    print(f"Configuration error in parameter: {e.parameter}")
except VariablyError as e:
    print(f"Variably SDK error: {e}")
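Since RateLimitError carries a retry_after hint, a simple wait-and-retry wrapper is often enough. The helper below is a sketch, not part of the SDK: it works with any callable whose rate-limit exception exposes a numeric `retry_after` attribute, and lets other exceptions propagate immediately:

```python
import time

def call_with_rate_limit_retry(fn, max_attempts=3, sleep=time.sleep):
    """Retry fn when it raises an exception carrying a retry_after hint.

    Generic helper (not shipped with the SDK): any exception exposing a
    numeric `retry_after` attribute triggers a wait-and-retry; exceptions
    without that attribute, and the final failed attempt, are re-raised.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:
            retry_after = getattr(exc, "retry_after", None)
            if retry_after is None or attempt == max_attempts:
                raise
            sleep(retry_after)
```

Usage: `call_with_rate_limit_retry(lambda: client.evaluate_flag("my-flag", False, user_context))`.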

Type Hints

The SDK includes full type hints for better IDE support:

from typing import Dict, Any
from variably import VariablyClient, UserContext, FlagResult

user_context: UserContext = {
    "user_id": "user-123",
    "email": "user@example.com",
    "attributes": {
        "plan": "premium",
        "signup_date": "2023-01-01"
    }
}

result: FlagResult = client.evaluate_flag("feature", False, user_context)

Async Support

For async applications, you can wrap the synchronous client:

import asyncio
from concurrent.futures import ThreadPoolExecutor
from variably import VariablyClient

class AsyncVariablyClient:
    def __init__(self, config):
        self.client = VariablyClient(config)
        self.executor = ThreadPoolExecutor(max_workers=4)
    
    async def evaluate_flag_bool(self, flag_key, default_value, user_context):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            self.executor,
            self.client.evaluate_flag_bool,
            flag_key, default_value, user_context
        )
    
    async def close(self):
        self.client.close()
        self.executor.shutdown(wait=True)

# Usage
async def main():
    client = AsyncVariablyClient({"api_key": "your-api-key"})
    
    result = await client.evaluate_flag_bool("feature", False, {
        "user_id": "user-123"
    })
    
    await client.close()

asyncio.run(main())

Development

Setup

# Install development dependencies
pip install -e ".[dev]"

Testing

pytest

Code Quality

# Format code
black src/ tests/

# Sort imports
isort src/ tests/

# Lint
flake8 src/ tests/

# Type check
mypy src/

Publishing to PyPI

Prerequisites

  1. Create a PyPI account at https://pypi.org/account/register/
  2. Generate an API token at https://pypi.org/manage/account/token/
    • Scope: select "Entire account" for first upload, or project-specific after that
  3. Install build tools:
    pip install build twine
    

Configure PyPI credentials

Create ~/.pypirc:

[distutils]
index-servers = pypi

[pypi]
username = __token__
password = pypi-YOUR_API_TOKEN_HERE

Secure the file:

chmod 600 ~/.pypirc

Build and publish

# 1. Clean previous builds
rm -rf dist/ build/ src/*.egg-info

# 2. Build sdist and wheel
python -m build

# 3. Verify the package (optional but recommended)
twine check dist/*

# 4. Upload to TestPyPI first (optional, for dry-run)
twine upload --repository testpypi dist/*

# 5. Upload to PyPI
twine upload dist/*

Verify the published package

pip install variably-sdk==2.0.0
python -c "from variably import VariablyClient, PromptVariant; print('OK')"

Version bumping checklist

When releasing a new version, update these files:

  • src/variably/version.py: __version__
  • pyproject.toml: version
  • src/variably/http_client.py: User-Agent header
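A small consistency check can catch a missed bump before publishing. The helper below is hypothetical, not shipped with the SDK; the regexes assume the conventional `__version__ = "X.Y.Z"` and `version = "X.Y.Z"` forms and should be adjusted to the project's actual files:

```python
import re

def extract_versions(version_py: str, pyproject: str):
    """Pull the version string out of each file's text content."""
    v1 = re.search(r'__version__\s*=\s*"([^"]+)"', version_py).group(1)
    v2 = re.search(r'(?m)^version\s*=\s*"([^"]+)"', pyproject).group(1)
    return v1, v2

def versions_match(version_py: str, pyproject: str) -> bool:
    """True when version.py and pyproject.toml agree on the version."""
    v1, v2 = extract_versions(version_py, pyproject)
    return v1 == v2
```

Run it against the real file contents (e.g. via `pathlib.Path(...).read_text()`) as a pre-release check.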

Requirements

  • Python 3.7+
  • requests >= 2.25.0

License

MIT License - see LICENSE file for details.
