Persona Interview & Consistency Evaluation Framework
PICON — Persona Interrogation framework for Consistency evaluation
PICON is the official Python package for evaluating LLM-based persona agents. It runs a multi-turn interview and fact-checking pipeline to measure how consistently and accurately a persona agent behaves.
PICON evaluates persona agents across three dimensions:
- Internal Consistency: freedom from self-contradiction across answers.
- External Consistency: alignment of claims with real-world facts (via web search).
- Retest Stability: consistency of answers when the same questions are repeated within and across sessions.
Recent updates
- March 2026 (v0.1.0): Initial release with interview pipeline, evaluation, and CLI.
Installation
pip install picon-eval
Verify the installation:
import picon
print(picon.__version__)
For development or full extras (CharacterAI, Google GenAI, etc.):
git clone https://github.com/willystumblr/picon.git
cd picon
pip install -e ".[all]"
Quick Start
[!NOTE] Before using PICON, you must provide API keys either directly or in a .env file.
- OpenAI models (gpt-*): Set OPENAI_API_KEY in your .env file.
- Gemini models (gemini/*): Set GEMINI_API_KEY in your .env file.
- Web search (external verification): Set SERPER_API_KEY in your .env file. Get one at serper.dev.
Environment Variables
Create a .env file in your working directory:
# LLM API Keys (at least one required)
OPENAI_API_KEY="YOUR_OPENAI_KEY"
GEMINI_API_KEY="YOUR_GEMINI_KEY"
# Web Search (required for external verification)
SERPER_API_KEY="YOUR_SERPER_KEY"
# Address validation (required for external verification)
GOOGLE_GEOCODE="YOUR_GOOGLE_GEOCODE_KEY"
# Optional
ANTHROPIC_API_KEY="YOUR_ANTHROPIC_KEY"
GOOGLE_CLAIM_SEARCH="YOUR_GOOGLE_API_KEY" # Fact-check search
GOOGLE_CX_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID" # Custom Search Engine ID
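PICON reads these keys from a .env file in your working directory (see the note above). If you want to check them in your own process first, here is a minimal sketch using python-dotenv (a separate dependency, installed with pip install python-dotenv):
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # copies .env entries from the working directory into os.environ
for key in ("OPENAI_API_KEY", "GEMINI_API_KEY", "SERPER_API_KEY"):
    print(key, "set" if os.getenv(key) else "MISSING")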
Component-Based Usage
Import individual components and compose your own simulation pipeline:
from picon import Questioner, EntityExtractor, Evaluator, Interviewee
from picon import InterrogationSimulation
# Set up agents
questioner = Questioner(model="gpt-5")
extractor = EntityExtractor(model="gpt-5.1")
evaluator = Evaluator(model="gemini/gemini-2.5-flash")
# Set up the persona to evaluate
interviewee = Interviewee(
    model="gpt-5",
    persona="You are a 35-year-old software engineer living in Seoul.",
    name="John",
)
# Run the full interview + evaluation pipeline
sim = InterrogationSimulation(
    interviewee=interviewee,
    questioner=questioner,
    extractor=extractor,
    evaluator=evaluator,
    num_turns=20,
    num_sessions=2,
)
result = sim.run(do_eval=True)
print(result.eval_scores)
result.save("results/john.json")
# Example output:
# {
# "internal_harmonic_mean": 0.82,
# "external_ec": 0.75,
# "inter_session_stability": 0.68,
# "intra_session_stability": 0.91,
# }
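Since eval_scores prints as a plain mapping (see above), you can post-process it directly. A small sketch that flags low-scoring dimensions, assuming eval_scores is dict-like and using an arbitrary cutoff:
# Assumes result.eval_scores is dict-like, as the example output above suggests.
THRESHOLD = 0.7  # arbitrary illustrative cutoff
for metric, score in result.eval_scores.items():
    flag = "ok" if score >= THRESHOLD else "below threshold"
    print(f"{metric}: {score:.2f} ({flag})")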
With default agent models, you can omit agent setup entirely:
from picon import Interviewee, InterrogationSimulation
interviewee = Interviewee(model="gpt-5", persona="You are ...", name="John")
result = InterrogationSimulation(interviewee=interviewee, num_turns=20).run()
Evaluate an LLM Persona (Simple API)
For quick evaluations, use the picon.run() shortcut:
import picon
result = picon.run(
    model="gpt-5",
    persona="You are a 35-year-old software engineer living in Seoul.",
    name="John",
    num_turns=20,
    num_sessions=2,
    do_eval=True,
)
print(result.eval_scores)
result.save("results/john.json")
# Equivalent CLI command
picon --agent_model gpt-5 \
    --agent_persona "You are a 35-year-old software engineer living in Seoul." \
    --agent_name "John" \
    --num_turns 20 --num_sessions 2 --do_eval
Evaluate an External Agent Endpoint
If you already have a persona agent running (e.g. behind a wrapper server, a fine-tuned model, or a RAG agent), provide its OpenAI-compatible endpoint URL (/v1/chat/completions).
from picon import Interviewee, InterrogationSimulation
interviewee = Interviewee(api_base="http://localhost:8000/v1", name="MyAgent")
result = InterrogationSimulation(interviewee=interviewee, num_turns=20).run()
# Equivalent CLI command
picon --agent_api_base http://localhost:8000/v1 \
    --agent_name "MyAgent" \
    --num_turns 20 --num_sessions 2 --do_eval
Self-hosted Models (vLLM)
For self-hosted models, provide both api_base and model:
from picon import Interviewee, InterrogationSimulation
interviewee = Interviewee(
    api_base="http://localhost:8000/v1",
    model="meta-llama/Llama-3-8B",
    persona="You are a 30-year-old teacher named Jane...",
    name="Jane",
)
result = InterrogationSimulation(interviewee=interviewee).run()
picon --agent_api_base http://localhost:8000/v1 \
    --agent_model meta-llama/Llama-3-8B \
    --agent_persona "You are a 30-year-old teacher named Jane..." \
    --agent_name "Jane" --do_eval
Separate Interview and Evaluation
import picon
# Step 1: Interview only
interview_result = picon.run_interview(
    name="John",
    model="gpt-5",
    persona="You are a 35-year-old software engineer...",
    num_turns=20,
    num_sessions=2,
)
# Step 2: Evaluate
persona_stats = picon.run_evaluation(interview_result, eval_factors=["internal", "external"])
print(persona_stats)
Evaluate an Existing Result File
scores = picon.evaluate("results/john.json", eval_factors=["internal", "external"])
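The same call also works in a loop over previously saved runs. A sketch, assuming each JSON file under results/ is a PICON result:
import glob
import picon

# Sweep over all saved interview results and print their scores.
for path in sorted(glob.glob("results/*.json")):
    scores = picon.evaluate(path, eval_factors=["internal", "external"])
    print(path, scores)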
Connecting an External Agent
PICON can evaluate any persona agent that exposes an OpenAI-compatible chat completions endpoint (POST /v1/chat/completions).
If your agent already serves this endpoint (e.g. vLLM or any OpenAI-compatible server), just pass the URL directly — no wrapping needed.
Case 1: Your agent already has an OpenAI-compatible endpoint
If you're serving a model via vLLM or any server that implements /v1/chat/completions:
import picon
result = picon.run(
    api_base="http://<your-server-ip>:8000/v1",
    name="Alice",
    do_eval=True,
)
picon --agent_api_base http://<your-server-ip>:8000/v1 \
    --agent_name "Alice" --do_eval
Case 2: Your agent has custom logic (RAG, API calls, etc.)
If your agent doesn't have an OpenAI-compatible endpoint, wrap it with a simple server.
You only need to implement one endpoint that accepts messages and returns a response:
import time

from fastapi import FastAPI, Request
import uvicorn

app = FastAPI()

def generate_response(messages: list) -> str:
    """Replace this with your own agent logic."""
    user_message = messages[-1]["content"]
    # ... your custom logic (RAG retrieval, API call, etc.) ...
    return "This is my response."

@app.post("/v1/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()
    messages = body.get("messages", [])
    content = generate_response(messages)
    return {
        "id": f"chatcmpl-{int(time.time())}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": "my-agent",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": content},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8001)
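Before pointing PICON at the wrapper, you can smoke-test the endpoint with a raw request from the standard library (this sketch assumes the server above is running on localhost:8001):
import json
import urllib.request

# Minimal client-side check that the wrapper speaks the OpenAI chat format.
payload = {"model": "my-agent", "messages": [{"role": "user", "content": "Who are you?"}]}
req = urllib.request.Request(
    "http://localhost:8001/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])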
Then evaluate with PICON:
picon --agent_api_base http://<your-server-ip>:8001/v1 \
    --agent_name "MyAgent" --do_eval
See examples/ for full end-to-end scripts with vLLM + LoRA and a HumanSimulacra RAG agent.
How It Works
1. Get-to-Know         Ask predefined demographic questions (WVS dataset)
        |
2. Main Interrogation  Each turn runs this agent chain:
        |
        |-- Questioner    Generate the next question based on conversation history
        |-- Interviewee   The persona under evaluation answers the question
        |-- Extractor     Pull out entities and verifiable claims from the answer
        |-- Web Search    Fact-check extracted claims against the web
        '-- Evaluator     Compare this answer with previous answers for consistency
        |
3. Repeat Phase        Re-ask the get-to-know questions to measure stability
        |
4. Finalize            Compute all evaluation scores and save results
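In code terms, one main-interrogation session corresponds roughly to the loop below. This is an illustrative sketch, not PICON's actual internals; the method names (ask, answer, extract, verify, compare) are hypothetical.
# Illustrative sketch of one interrogation session (method names are hypothetical).
history = []
for turn in range(num_turns):
    question = questioner.ask(history)              # 2a. next question from history
    answer = interviewee.answer(question, history)  # 2b. persona under evaluation responds
    claims = extractor.extract(answer)              # 2c. entities + verifiable claims
    evidence = web_search.verify(claims)            # 2d. fact-check against the web
    verdict = evaluator.compare(answer, history)    # 2e. consistency vs. earlier answers
    history.append({"q": question, "a": answer, "claims": claims,
                    "evidence": evidence, "verdict": verdict})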
API Reference
Component Classes
Interviewee(model, persona, name, api_base, api_key):
- model (str): LLM model name. Required if api_base is not provided.
- persona (str): System prompt or path to a .txt file. Default: "".
- name (str): Interviewee display name. Default: "Agent".
- api_base (str): OpenAI-compatible endpoint URL. Required if model is not provided.
- api_key (str): API key for the endpoint. Default: None.
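For example, persona can point to a prompt file instead of an inline string (the path below is hypothetical):
from picon import Interviewee

# personas/john.txt is a hypothetical file holding the persona system prompt.
interviewee = Interviewee(model="gpt-5", persona="personas/john.txt", name="John")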
Questioner(model, prompt_path) / EntityExtractor(model, prompt_path) / Evaluator(model, prompt_path) / WebSearch(model, prompt_path):
- model (str): LLM model name. Each agent has its own default (see below).
- prompt_path (str): Custom system prompt file. None uses the built-in prompt.
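For example, to swap in a custom questioner prompt (the file path below is hypothetical):
from picon import Questioner

# prompts/my_questioner.txt is a hypothetical local file; omitting prompt_path
# (or passing None) falls back to the built-in prompt.
questioner = Questioner(model="gpt-5", prompt_path="prompts/my_questioner.txt")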
InterrogationSimulation(interviewee, questioner, extractor, web_search, evaluator, ...):
- interviewee (Interviewee): The persona agent to evaluate. Required.
- questioner (Questioner): Questioner agent. None creates one with the default model.
- extractor (EntityExtractor): Extractor agent. None creates one with the default model.
- web_search (WebSearch): Web search agent. None creates one with the default model.
- evaluator (Evaluator): Evaluator agent. None creates one with the default model.
- num_turns (int): Interview turns per session. Default: 30.
- num_sessions (int): Number of repeated sessions. Default: 2.
- nhd_model (str): Model for AI detection. Default: "gpt-5-nano".
- output_dir (str): Output directory. Default: "data/results".
- question_seed (int): Random seed for question selection. Default: 42.
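As a usage sketch, here is a run that overrides a few of these defaults (the values below are arbitrary examples):
from picon import Interviewee, InterrogationSimulation

sim = InterrogationSimulation(
    interviewee=Interviewee(model="gpt-5", persona="You are ...", name="John"),
    num_turns=10,            # default: 30
    question_seed=7,         # default: 42
    output_dir="runs/john",  # default: "data/results"
)
result = sim.run(do_eval=True)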
Simple API
picon.run() / picon.run_interview() parameters:
- name (str): Interviewee name. Default: "Agent".
- model (str): LLM model name (e.g. "gpt-5", "gemini/gemini-2.5-flash"). Required if api_base is not provided.
- persona (str): System prompt or path to a .txt file. Default: "".
- api_base (str): OpenAI-compatible API endpoint URL. Required if model is not provided.
- api_key (str): API key for the persona endpoint. Default: None.
- num_turns (int): Number of interview turns. Default: 30.
- num_sessions (int): Number of repeated sessions. Default: 2.
- do_eval (bool): Run evaluation after the interview. Default: True.
- eval_factors (list): Evaluation factors to run: "internal", "external", "intra", "inter". Default: None (all).
- questioner_model (str): Model for the questioner agent. Default: "gpt-5".
- extractor_model (str): Model for the entity extractor agent. Default: "gpt-5.1".
- web_search_model (str): Model for the web search agent. Default: "gpt-5".
- evaluator_model (str): Model for the evaluator agent. Default: "gemini/gemini-2.5-flash".
- nhd_model (str): Model for AI detection. Default: "gpt-5-nano".
- output_dir (str): Output directory for results. Default: "data/results".
- question_seed (int): Random seed for question selection. Default: 42.
Evaluation Metrics
| Metric | Description |
|---|---|
| Internal Responsiveness | Relevance of answers to questions |
| Internal Consistency | Consistency of answers to repeated questions |
| Internal Harmonic Mean | Harmonic mean of Responsiveness and Consistency |
| External Coverage | Fraction of turns containing at least one verifiable claim |
| External Non-refutation Rate | Per-turn rate of claims not refuted by web evidence |
| External Consistency (EC) | Harmonic mean of Coverage and Non-refutation Rate |
| Inter-session Stability | Answer stability across sessions |
| Intra-session Stability | Answer stability within a session |
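Both composite scores (Internal Harmonic Mean and External Consistency) are harmonic means of their two components, so a weak component drags the composite down sharply. A quick worked example with made-up component scores:
def harmonic_mean(a: float, b: float) -> float:
    """Harmonic mean of two scores; 0.0 if either component is 0."""
    return 2 * a * b / (a + b) if a > 0 and b > 0 else 0.0

# e.g. Coverage 0.75 and Non-refutation 0.90 give EC ~0.82 (illustrative numbers).
print(round(harmonic_mean(0.75, 0.90), 2))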
Examples
End-to-end scripts in examples/:
# OpenCharacter (vLLM + LoRA)
python examples/test_opencharacter_vllm.py
# HumanSimulacra (RAG agent)
python examples/test_human_simulacra.py
python examples/test_human_simulacra.py --character "Kevin Kelly" --model "gpt-5"
Citation
If you use PICON in your research, please cite:
@article{kim2026picon,
  title={PICON: A Multi-Turn Interrogation Framework for Evaluating Persona Agent Consistency},
  author={Kim, Minseo and Im, Sujeong and Choi, Junseong and Lee, Junhee and Shim, Chaeeun and Choi, Edward},
  journal={arXiv preprint arXiv:2603.25620},
  year={2026}
}