
Human state to LLM context translation - Python bindings for the Attuned framework

Project description

Attuned

Declare human state. Get appropriate AI behavior.

Attuned is the behavioral layer for LLM applications. Set user context, get conditioned responses. Works with any LLM.

pip install attuned

Quick Start

from attuned import Attuned

# Declare user state - set what you need, rest defaults to neutral
state = Attuned(
    verbosity_preference=0.2,  # Brief responses
    warmth=0.9,                # Warm and friendly
)

# Get prompt context - works with ANY LLM
system_prompt = f"You are an assistant.\n\n{state.prompt()}"

# Use with OpenAI
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I learn Python?"}
    ]
)

# Or Anthropic
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "How do I learn Python?"}]
)

# Or Ollama, Mistral, Gemini, or any LLM that accepts a system prompt

Why Attuned?

Without Attuned:

# Hand-crafted prompts. Does "be concise" work? Who knows.
system = "You are a helpful assistant. Be concise. Be friendly."

With Attuned:

# Statistically validated: the brief setting yields 68% shorter responses.
state = Attuned(verbosity_preference=0.2, warmth=0.9)
system = f"You are a helpful assistant.\n\n{state.prompt()}"

Validation results:

  • verbosity_preference=0.2 → 68% shorter responses (p<0.0001, d=2.5)
  • warmth=0.9 → 5x more warm language (p<0.0001, d=1.3)
  • cognitive_load=0.9 → 81% fewer multi-step plans (p<0.0001, d=2.0)

Axes (23 Available)

Set any axes you care about. Unset axes default to 0.5 (neutral, no effect).

Category     Axes
Cognitive    cognitive_load, decision_fatigue, tolerance_for_complexity, urgency_sensitivity
Emotional    emotional_openness, emotional_stability, anxiety_level, need_for_reassurance
Social       warmth, formality, boundary_strength, assertiveness, reciprocity_expectation
Preferences  ritual_need, transactional_preference, verbosity_preference, directness_preference
Control      autonomy_preference, suggestion_tolerance, interruption_tolerance, reflection_vs_action_bias
Safety       stakes_awareness, privacy_sensitivity
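For a concrete picture of the defaulting behavior, here is a stdlib-only sketch (illustrative, not Attuned's internals) that builds a full 23-axis map in which unset axes stay neutral:

```python
# Illustrative sketch of the defaulting behavior (not Attuned's internals):
# every axis starts at the neutral 0.5 and only explicit settings deviate.
NEUTRAL = 0.5

ALL_AXES = [
    # Cognitive
    "cognitive_load", "decision_fatigue", "tolerance_for_complexity",
    "urgency_sensitivity",
    # Emotional
    "emotional_openness", "emotional_stability", "anxiety_level",
    "need_for_reassurance",
    # Social
    "warmth", "formality", "boundary_strength", "assertiveness",
    "reciprocity_expectation",
    # Preferences
    "ritual_need", "transactional_preference", "verbosity_preference",
    "directness_preference",
    # Control
    "autonomy_preference", "suggestion_tolerance", "interruption_tolerance",
    "reflection_vs_action_bias",
    # Safety
    "stakes_awareness", "privacy_sensitivity",
]

def make_state(**overrides):
    """Full axis map: everything neutral, explicit overrides win."""
    axes = dict.fromkeys(ALL_AXES, NEUTRAL)
    for name, value in overrides.items():
        if name not in axes:
            raise KeyError(f"unknown axis: {name}")
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1]")
        axes[name] = value
    return axes

state = make_state(verbosity_preference=0.2, warmth=0.9)
print(len(state))          # 23
print(state["warmth"])     # 0.9 (explicitly set)
print(state["formality"])  # 0.5 (neutral default)
```

Passing an unknown axis name or a value outside [0, 1] raises immediately, which mirrors the "set only what you care about" contract described above.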

Presets

Common patterns out of the box:

from attuned import Attuned

# Anxious user - warm, reassuring, not overwhelming
state = Attuned.presets.anxious_user()

# Busy executive - brief, formal, direct
state = Attuned.presets.busy_executive()

# Learning student - detailed, patient, educational
state = Attuned.presets.learning_student()

# Casual chat - warm, casual, balanced
state = Attuned.presets.casual_chat()

# High stakes - careful, thorough, formal
state = Attuned.presets.high_stakes()

# Overwhelmed - minimal, supportive, no pressure
state = Attuned.presets.overwhelmed()
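Conceptually, a preset is just a named bundle of axis values. The sketch below illustrates the idea; the numbers are invented for the example, not Attuned's actual preset definitions:

```python
# Conceptual sketch: a preset is a named bundle of axis values.
# The numbers below are illustrative guesses, not Attuned's actual presets.
def busy_executive():
    return {
        "verbosity_preference": 0.1,   # brief
        "formality": 0.8,              # formal
        "directness_preference": 0.9,  # direct
    }

def anxious_user():
    return {
        "warmth": 0.9,                 # warm
        "need_for_reassurance": 0.9,   # reassuring
        "cognitive_load": 0.8,         # don't overwhelm
    }

# A preset can seed a state, with individual overrides layered on top
overrides = {**busy_executive(), "warmth": 0.7}
```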

Integrations (Optional)

Thin wrappers for less boilerplate. The core state.prompt() works with anything.

OpenAI

from attuned import Attuned
from attuned.integrations.openai import AttunedOpenAI

state = Attuned(verbosity_preference=0.2, warmth=0.9)
client = AttunedOpenAI(state=state)
response = client.chat("How do I learn Python?")

Anthropic

from attuned import Attuned
from attuned.integrations.anthropic import AttunedAnthropic

state = Attuned(verbosity_preference=0.2, warmth=0.9)
client = AttunedAnthropic(state=state)
response = client.message("How do I learn Python?")

LiteLLM (100+ providers)

from attuned import Attuned
from attuned.integrations.litellm import AttunedLiteLLM

state = Attuned(verbosity_preference=0.2, warmth=0.9)
client = AttunedLiteLLM(state=state)

# Same code, any provider
response = client.chat("gpt-4o-mini", "Hello")
response = client.chat("claude-3-sonnet-20240229", "Hello")
response = client.chat("ollama/llama2", "Hello")
response = client.chat("gemini/gemini-pro", "Hello")

What Attuned Produces

When you call state.prompt(), you get text like this:

## Interaction Guidelines
- Offer suggestions, not actions
- Drafts require explicit user approval
- Silence is acceptable if no action is required
- Use warm, friendly language. Include encouraging phrases like 'Great question!'
- Keep responses brief and to the point.

Tone: warm-casual
Verbosity: brief

This text is injected into the LLM's system prompt. That's it. No magic — just validated prompt engineering.
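The translation can be pictured as threshold rules over axis values. Here is a toy sketch of that idea (illustrative only — these are not Attuned's actual rules):

```python
# Toy, threshold-style translator sketching the idea behind state.prompt()
# (illustrative only -- not Attuned's actual rules).
def to_guidelines(axes):
    """Map axis values in [0, 1] to guideline lines; neutral (~0.5) adds nothing."""
    lines = []
    verbosity = axes.get("verbosity_preference", 0.5)
    if verbosity < 0.35:
        lines.append("Keep responses brief and to the point.")
    elif verbosity > 0.65:
        lines.append("Give thorough, detailed explanations.")
    warmth = axes.get("warmth", 0.5)
    if warmth > 0.65:
        lines.append("Use warm, friendly language.")
    return "## Interaction Guidelines\n" + "\n".join(f"- {line}" for line in lines)

print(to_guidelines({"verbosity_preference": 0.2, "warmth": 0.9}))
```

Neutral axes contribute nothing, so the generated guidelines stay short and only reflect the state you actually declared.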

Advanced Usage

For full control, use the underlying types:

from attuned import StateSnapshot, RuleTranslator, Source

snapshot = (
    StateSnapshot.builder()
    .user_id("user_123")
    .source(Source.SelfReport)
    .axis("warmth", 0.7)
    .axis("cognitive_load", 0.9)
    .build()
)

translator = RuleTranslator()
context = translator.to_prompt_context(snapshot)
print(context.format_for_prompt())

HTTP Client (Server Mode)

For distributed deployments:

from attuned import AttunedClient, StateSnapshot

client = AttunedClient("http://localhost:8080")

# Build a snapshot as in Advanced Usage, then sync it to the server
client.upsert_state(snapshot)
context = client.get_context("user_123")

Governance

Every axis has governance metadata:

from attuned import get_axis

axis = get_axis("cognitive_load")
print(axis.intent)         # What this axis is FOR
print(axis.forbidden_uses) # What it must NEVER be used for

Attuned is designed to respect users:

  • Never optimizes for engagement/conversion
  • Never executes actions (produces context, not commands)
  • Self-report always overrides inference
  • Full transparency into translation rules
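One way to picture the governance metadata is as a per-axis registry consulted before an axis is put to use. The sketch below follows the field names from the get_axis() example above, but its contents are invented for illustration:

```python
# Illustrative registry of governance metadata (field names follow the
# get_axis() example; the contents here are invented for the sketch).
GOVERNANCE = {
    "cognitive_load": {
        "intent": "Adapt pacing and complexity to the user's mental bandwidth.",
        "forbidden_uses": ["engagement optimization", "conversion pressure"],
    },
}

def check_use(axis, purpose):
    """Return False when governance explicitly forbids this use of the axis."""
    return purpose not in GOVERNANCE[axis]["forbidden_uses"]

print(check_use("cognitive_load", "engagement optimization"))  # False
print(check_use("cognitive_load", "pacing explanations"))      # True
```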

License

MIT

Download files

Download the file for your platform.

Source Distribution

attuned-1.0.2.tar.gz (103.0 kB)

Uploaded: Source

Built Distribution


attuned-1.0.2-cp39-abi3-manylinux_2_34_x86_64.whl (1.8 MB)

Uploaded: CPython 3.9+, manylinux: glibc 2.34+, x86-64

File details

Details for the file attuned-1.0.2.tar.gz.

File metadata

  • Download URL: attuned-1.0.2.tar.gz
  • Upload date:
  • Size: 103.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: maturin/1.10.2

File hashes

Hashes for attuned-1.0.2.tar.gz
  • SHA256: 9cd2df24e067292dfb3b2624a5a0d0899f8357849e69e346184143fd2e7744e5
  • MD5: 6eebf3fd3976bba26a958bb66daf7232
  • BLAKE2b-256: d401068aa4a884762107e4b7945f033eec838820152c05ac01f2fe03ea799c49


File details

Details for the file attuned-1.0.2-cp39-abi3-manylinux_2_34_x86_64.whl.

File hashes

Hashes for attuned-1.0.2-cp39-abi3-manylinux_2_34_x86_64.whl
  • SHA256: 7de07cbaeb166c3dc00fc50c62fc1d83ab7be6e3f72669acf4868570da3d3725
  • MD5: f4d34b8b5111923284f66c22fb421d08
  • BLAKE2b-256: 25af3410cdd2699698a32c61b885a2d7c214702dd9b1a7735680a748daed408c

