interview_kit
Adaptive voice-interview engine. The operator defines a Conversation — a persona for the interviewer, a purpose, and a list of Goals (each with a "what good looks like" standard). The engine runs the conversation as a voice agent (LiveKit + Deepgram STT + Anthropic LLM + Cartesia TTS), adapts mid-call (clarifying, drilling, skipping redundant goals), and produces a structured Extract mapping every claim back to who said it and when. This package is the engine — storage, web layer, link domain, and UI are the consumer's responsibility.
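The Extract itself ships with the package; purely to illustrate the attribution idea (the field names below are hypothetical, not the library's API), a single attributed claim could be modeled like this:

```python
from dataclasses import dataclass

# Hypothetical illustration only: the real Extract model is defined by
# interview_kit; these field names are assumptions, NOT the library's API.
@dataclass(frozen=True)
class Claim:
    goal_id: str        # which Goal this evidence supports
    text: str           # the claim itself
    speaker: str        # who said it ("interviewer" / "respondent")
    timestamp_s: float  # offset into the call, in seconds

claim = Claim(
    goal_id="routine",
    text="I start every day with a 20-minute run.",
    speaker="respondent",
    timestamp_s=142.5,
)
print(claim.goal_id, claim.speaker)
```

The point is only that every claim carries a goal, a speaker, and a timestamp, so downstream consumers can audit who said what and when.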
Install
Requires Python 3.11+.
pip install interview_kit
The voice extra pulls in LiveKit and the audio plugins:
pip install "interview_kit[voice]"
Smoke test (no API key)
interview_kit demo
Runs the full agent loop against a synthetic respondent and a deterministic fake LLM. Prints the transcript and the structured Extract.
Quickstart
Save the following as interview.yaml:
persona:
  system_prompt: You are running a discovery interview about morning routines.
  style: neutral
  voice_id: demo-voice
purpose: Understand the interviewee's morning routine.
background:
  interviewee_role: staff engineer
  interviewee_expertise: end-to-end pipeline ownership
goals:
  - id: routine
    intent: Map the morning routine
    standard: At least two rituals named with timing.
  - id: exceptions
    intent: Find common exception paths
    standard: At least one exception flow named.
Then, with ANTHROPIC_API_KEY set:
import asyncio

from interview_kit import Conversation, Engine
from interview_kit.testing.simulators import RamblyKnowledgeableSimulator


async def main() -> None:
    engine = Engine.with_defaults()
    template = Conversation.from_yaml("interview.yaml")
    conv = await engine.create_conversation(**template.model_dump(exclude={"id"}))
    extract = await engine.simulate_session(conv.id, RamblyKnowledgeableSimulator())
    print(extract.model_dump_json(indent=2))


asyncio.run(main())
Production / voice integration
See docs/integration.md for the FastAPI + LiveKit
AgentServer wiring, ConversationStore and EventSink implementation
guidance, and the operational gaps the consumer must close.
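The actual ConversationStore interface is specified in docs/integration.md; as a rough sketch of the shape such an implementation takes (method names here are assumptions for illustration, not the package's contract), a dict-backed store suitable for tests might look like:

```python
from typing import Any, Dict, Optional

# Hypothetical sketch: the real ConversationStore contract lives in
# docs/integration.md; the method names here are assumed for illustration.
class InMemoryConversationStore:
    """Dict-backed store; fine for local tests, not for production."""

    def __init__(self) -> None:
        self._conversations: Dict[str, Any] = {}

    def save(self, conversation_id: str, conversation: Any) -> None:
        self._conversations[conversation_id] = conversation

    def load(self, conversation_id: str) -> Optional[Any]:
        # Returns None for unknown ids rather than raising.
        return self._conversations.get(conversation_id)

store = InMemoryConversationStore()
store.save("conv-1", {"purpose": "demo"})
print(store.load("conv-1"))
```

A production store would back the same interface with durable storage and handle concurrent sessions; the in-memory version exists only to make the wiring concrete.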
Development
See CONTRIBUTING.md for the source checkout, test, and local-voice workflows.