
Human state to LLM context translation - Python bindings for the Attuned framework


Attuned

Declare human state. Get appropriate AI behavior.

Attuned is the behavioral layer for LLM applications. Set user context, get conditioned responses. Works with any LLM.

pip install attuned

Quick Start

import anthropic
import openai

from attuned import Attuned

# Declare user state - set what you need, rest defaults to neutral
state = Attuned(
    verbosity_preference=0.2,  # Brief responses
    warmth=0.9,                # Warm and friendly
)

# Get prompt context - works with ANY LLM
system_prompt = f"You are an assistant.\n\n{state.prompt()}"

# Use with OpenAI
response = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I learn Python?"}
    ]
)

# Or Anthropic
anthropic_client = anthropic.Anthropic()
response = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "How do I learn Python?"}]
)

# Or Ollama, Mistral, Gemini, or any LLM that accepts a system prompt

Why Attuned?

Without Attuned:

# Hand-crafted prompts. Does "be concise" work? Who knows.
system = "You are a helpful assistant. Be concise. Be friendly."

With Attuned:

# Statistically validated. 68% shorter responses with brief. Proven.
state = Attuned(verbosity_preference=0.2, warmth=0.9)
system = f"You are a helpful assistant.\n\n{state.prompt()}"

Validation results:

  • verbosity_preference=0.2 → 68% shorter responses (p<0.0001, d=2.5)
  • warmth=0.9 → 5x more warm language (p<0.0001, d=1.3)
  • cognitive_load=0.9 → 81% fewer multi-step plans (p<0.0001, d=2.0)

Axes (23 Available)

Set any axes you care about. Unset axes default to 0.5 (neutral, no effect).

  • Cognitive: cognitive_load, decision_fatigue, tolerance_for_complexity, urgency_sensitivity
  • Emotional: emotional_openness, emotional_stability, anxiety_level, need_for_reassurance
  • Social: warmth, formality, boundary_strength, assertiveness, reciprocity_expectation
  • Preferences: ritual_need, transactional_preference, verbosity_preference, directness_preference
  • Control: autonomy_preference, suggestion_tolerance, interruption_tolerance, reflection_vs_action_bias
  • Safety: stakes_awareness, privacy_sensitivity
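The neutral-default behavior can be pictured with a small sketch. This is a hypothetical illustration, not Attuned's actual internals: every axis starts at 0.5, and only axes pushed away from neutral influence the generated prompt.

```python
# Hypothetical sketch of neutral defaults -- NOT Attuned's real internals.
# Every axis starts at 0.5; only axes moved away from neutral matter.

AXES = [
    "cognitive_load", "warmth", "formality",
    "verbosity_preference", "directness_preference",
]

def build_state(**overrides):
    """Return a full axis map: unset axes stay at the neutral 0.5."""
    state = {axis: 0.5 for axis in AXES}
    for axis, value in overrides.items():
        if axis not in state:
            raise KeyError(f"unknown axis: {axis}")
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{axis} must be in [0, 1]")
        state[axis] = value
    return state

def active_axes(state):
    """Axes that differ from neutral and will influence the prompt."""
    return {a: v for a, v in state.items() if v != 0.5}

state = build_state(verbosity_preference=0.2, warmth=0.9)
print(active_axes(state))  # -> {'warmth': 0.9, 'verbosity_preference': 0.2}
```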

Presets

Common patterns out of the box:

from attuned import Attuned

# Anxious user - warm, reassuring, not overwhelming
state = Attuned.presets.anxious_user()

# Busy executive - brief, formal, direct
state = Attuned.presets.busy_executive()

# Learning student - detailed, patient, educational
state = Attuned.presets.learning_student()

# Casual chat - warm, casual, balanced
state = Attuned.presets.casual_chat()

# High stakes - careful, thorough, formal
state = Attuned.presets.high_stakes()

# Overwhelmed - minimal, supportive, no pressure
state = Attuned.presets.overwhelmed()
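Conceptually, a preset is just a named bundle of axis values. The sketch below illustrates that idea with hypothetical values; the actual numbers inside Attuned's presets may differ.

```python
# Hypothetical sketch of presets as preconfigured axis bundles --
# the real values inside Attuned's presets may differ.

PRESETS = {
    # Brief, formal, direct -- mirrors the intent of busy_executive()
    "busy_executive": {
        "verbosity_preference": 0.2,
        "formality": 0.8,
        "directness_preference": 0.9,
    },
    # Warm, reassuring, not overwhelming -- mirrors anxious_user()
    "anxious_user": {
        "warmth": 0.9,
        "need_for_reassurance": 0.8,
        "cognitive_load": 0.8,
    },
}

def preset(name, **overrides):
    """Start from a named preset, then apply per-user overrides."""
    values = dict(PRESETS[name])
    values.update(overrides)
    return values

# A preset is a starting point, not a straitjacket:
state = preset("busy_executive", warmth=0.7)
```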

Integrations (Optional)

Thin wrappers for less boilerplate. The core state.prompt() works with anything.

OpenAI

from attuned import Attuned
from attuned.integrations.openai import AttunedOpenAI

state = Attuned(verbosity_preference=0.2, warmth=0.9)
client = AttunedOpenAI(state=state)
response = client.chat("How do I learn Python?")

Anthropic

from attuned import Attuned
from attuned.integrations.anthropic import AttunedAnthropic

state = Attuned(verbosity_preference=0.2, warmth=0.9)
client = AttunedAnthropic(state=state)
response = client.message("How do I learn Python?")

LiteLLM (100+ providers)

from attuned import Attuned
from attuned.integrations.litellm import AttunedLiteLLM

state = Attuned(verbosity_preference=0.2, warmth=0.9)
client = AttunedLiteLLM(state=state)

# Same code, any provider
response = client.chat("gpt-4o-mini", "Hello")
response = client.chat("claude-3-sonnet-20240229", "Hello")
response = client.chat("ollama/llama2", "Hello")
response = client.chat("gemini/gemini-pro", "Hello")

What Attuned Produces

When you call state.prompt(), you get text like this:

## Interaction Guidelines
- Offer suggestions, not actions
- Drafts require explicit user approval
- Silence is acceptable if no action is required
- Use warm, friendly language. Include encouraging phrases like 'Great question!'
- Keep responses brief and to the point.

Tone: warm-casual
Verbosity: brief

This is injected into the LLM's system prompt. That's it. No magic. Just validated prompt engineering.
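The rule-based idea can be sketched in a few lines. The thresholds and wording below are illustrative assumptions, not Attuned's actual rules; the real RuleTranslator covers all 23 axes.

```python
# Illustrative sketch of rule-based translation. The thresholds and
# guideline wording are assumptions, not Attuned's actual rules.

def to_guidelines(state):
    """Map axis values to prompt guideline lines via simple thresholds."""
    lines = []
    if state.get("warmth", 0.5) > 0.7:
        lines.append("- Use warm, friendly language.")
    if state.get("verbosity_preference", 0.5) < 0.3:
        lines.append("- Keep responses brief and to the point.")
    if state.get("cognitive_load", 0.5) > 0.7:
        lines.append("- Avoid multi-step plans; offer one next step at a time.")
    if not lines:
        return ""  # all axes neutral: nothing to inject
    return "## Interaction Guidelines\n" + "\n".join(lines)

print(to_guidelines({"warmth": 0.9, "verbosity_preference": 0.2}))
```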

Advanced Usage

For full control, use the underlying types:

from attuned import StateSnapshot, RuleTranslator, Source

snapshot = StateSnapshot.builder() \
    .user_id("user_123") \
    .source(Source.SelfReport) \
    .axis("warmth", 0.7) \
    .axis("cognitive_load", 0.9) \
    .build()

translator = RuleTranslator()
context = translator.to_prompt_context(snapshot)
print(context.format_for_prompt())

HTTP Client (Server Mode)

For distributed deployments:

from attuned import AttunedClient, StateSnapshot

client = AttunedClient("http://localhost:8080")
client.upsert_state(snapshot)
context = client.get_context("user_123")

Governance

Every axis has governance metadata:

from attuned import get_axis

axis = get_axis("cognitive_load")
print(axis.intent)         # What this axis is FOR
print(axis.forbidden_uses) # What it must NEVER be used for

Attuned is designed to respect users:

  • Never optimizes for engagement/conversion
  • Never executes actions (produces context, not commands)
  • Self-report always overrides inference
  • Full transparency into translation rules

License

MIT
