persona-data

Shared dataset loading, prompt formatting, and environment utilities for the implicit-personalization projects.

Overview

persona-data provides the common dataset and prompt helpers used across the persona projects:

  • SynthPersonaDataset for persona profiles plus QA pairs
  • PersonaGuessDataset for turn-based persona games
  • NemotronPersonasFranceDataset for French persona profiles from NVIDIA
  • NemotronPersonasUSADataset for US persona profiles from NVIDIA
  • Prompt helpers for roleplay and multiple-choice evaluation
  • Environment helpers for seeds, devices, and artifact paths

Installation

Add as a uv git source in your project's pyproject.toml:

[project]
dependencies = ["persona-data"]

[tool.uv.sources]
persona-data = { git = "ssh://git@github.com/implicit-personalization/persona-data.git" }

Then run uv sync.

For local development alongside other repos, use an editable path source:

[tool.uv.sources]
persona-data = { path = "../persona-data", editable = true }

Testing

uv run --with pytest pytest tests/test_datasets.py

The release workflow also runs tests/smoke_test.py against the built wheel and source distribution.

Package layout

src/persona_data/
├── __init__.py
├── synth_persona.py       # SynthPersonaDataset, PersonaDataset, PersonaData, QAPair, BiographySection, Statement
├── persona_guess.py       # PersonaGuessDataset, GameRecord, Turn
├── nemotron_personas.py   # NemotronPersonasFranceDataset, NemotronPersonasUSADataset
├── prompts.py             # format_roleplay_prompt, system_prompt_for_variant, format_mc_question, format_messages
└── environment.py         # load_env, set_seed, get_device, get_artifacts_dir

Datasets

Each dataset lives in its own module with its own record types and a loader that downloads from Hugging Face, caching files under HF_HOME.
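To share downloads across projects, HF_HOME can be set in the process before the first loader runs; the cache path below is illustrative:

```python
import os

# Illustrative shared cache location; set before the first dataset download.
# setdefault keeps an HF_HOME already configured in the environment.
os.environ.setdefault("HF_HOME", "/data/hf-cache")
```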

SynthPersona

from persona_data.synth_persona import SynthPersonaDataset

dataset = SynthPersonaDataset()

persona = dataset[0]
persona.name              # "Ethan Robinson"
persona.templated_view    # short attribute-based system prompt
persona.biography_view    # full biography text
persona.sections          # list of BiographySection

qa_pairs = dataset.get_qa(persona.id, type="implicit", difficulty=[1, 2])
questions = dataset.questions(persona.id, type="explicit")
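The get_qa filters above can be read as a predicate over each QA pair's type and difficulty. A minimal pure-Python sketch of that semantics (the QA stand-in and its fields are assumptions for illustration; the real QAPair type lives in synth_persona.py and may differ):

```python
from dataclasses import dataclass

# Hypothetical stand-in for QAPair, for illustration only.
@dataclass
class QA:
    question: str
    answer: str
    type: str        # "implicit" or "explicit"
    difficulty: int

def filter_qa(pairs, type=None, difficulty=None):
    """Keep pairs matching the requested type and any of the listed difficulties."""
    out = []
    for qa in pairs:
        if type is not None and qa.type != type:
            continue
        if difficulty is not None and qa.difficulty not in difficulty:
            continue
        out.append(qa)
    return out

pairs = [
    QA("Where do you live?", "Little Rock", "explicit", 1),
    QA("What does your commute look like?", "A short drive", "implicit", 2),
    QA("What keeps you up at night?", "Deadlines", "implicit", 3),
]
easy_implicit = filter_qa(pairs, type="implicit", difficulty=[1, 2])
```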

PersonaGuess

from persona_data.persona_guess import PersonaGuessDataset

games = PersonaGuessDataset()
game = games[0]
turns = games.get_qa(game.game_id, player="A")
questions = games.questions(game.game_id, player="B")
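A game's turns typically need flattening into chat messages before being fed to a model. A sketch of that step (the Turn fields here are assumptions; check the real Turn type in persona_guess.py):

```python
from dataclasses import dataclass

# Hypothetical Turn shape for illustration; see persona_guess.Turn for the real fields.
@dataclass
class Turn:
    player: str      # "A" or "B"
    question: str
    answer: str

def turns_to_messages(turns):
    """Flatten question/answer turns into a chat-style message list."""
    messages = []
    for turn in turns:
        messages.append({"role": "user", "content": turn.question})
        messages.append({"role": "assistant", "content": turn.answer})
    return messages

msgs = turns_to_messages([Turn("A", "Do you work outdoors?", "Yes, most days.")])
```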

Prompt formatting

from persona_data.prompts import format_messages, format_roleplay_prompt

system_prompt = format_roleplay_prompt(persona.biography_view)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Where did you grow up?"},
    {"role": "assistant", "content": "I grew up in Little Rock, Arkansas."},
]
full_prompt, response_start_idx = format_messages(messages, tokenizer)  # tokenizer: a Hugging Face tokenizer

format_roleplay_prompt supports mode="roleplay" (default) and mode="conversational".

Use system_prompt_for_variant(persona, variant) when iterating over persona variants. It reads <variant>_view by default, and accepts persona_option="baseline" when you need the persona-less Assistant prompt. Downstream artifact code can use BASELINE_PERSONA_ID and BASELINE_PERSONA_NAME from persona_data.prompts for the shared baseline identity.
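The <variant>_view lookup amounts to an attribute read on the persona object. A sketch of how that dispatch could work (this is not the package's implementation, and the baseline prompt text below is invented; the real baseline constants live in persona_data.prompts):

```python
from dataclasses import dataclass

@dataclass
class Persona:
    templated_view: str
    biography_view: str

# Invented baseline prompt text, for illustration only.
BASELINE_PROMPT = "You are a helpful assistant."

def system_prompt_for_variant(persona, variant, persona_option="persona"):
    """Resolve the system prompt for a variant, or the persona-less baseline."""
    if persona_option == "baseline":
        return BASELINE_PROMPT
    # Reads the <variant>_view attribute, e.g. "templated" -> templated_view.
    return getattr(persona, f"{variant}_view")

p = Persona(templated_view="Short attribute prompt...",
            biography_view="Full biography text...")
```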

For multiple-choice prompts, use format_mc_question(qa) to render the question, choices, and trailing answer-only instruction. Use mc_answer_only_instruction(n_choices) if you need just the instruction text, and mc_correct_letter(qa) to get the gold label.
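A plausible rendering of those three helpers, assuming a QA object with a question, a choice list, and a correct-answer index (the exact output format and field names in the package may differ):

```python
import string
from dataclasses import dataclass

# Hypothetical MC question shape; the real QAPair lives in persona_data.synth_persona.
@dataclass
class MCQuestion:
    question: str
    choices: list
    correct_idx: int

def mc_answer_only_instruction(n_choices):
    """Instruction asking for a bare letter answer."""
    last = string.ascii_uppercase[n_choices - 1]
    return f"Answer with a single letter (A-{last}) and nothing else."

def format_mc_question(qa):
    """Render the question, lettered choices, and trailing answer-only instruction."""
    lines = [qa.question]
    for letter, choice in zip(string.ascii_uppercase, qa.choices):
        lines.append(f"{letter}. {choice}")
    lines.append(mc_answer_only_instruction(len(qa.choices)))
    return "\n".join(lines)

def mc_correct_letter(qa):
    """Letter of the gold answer."""
    return string.ascii_uppercase[qa.correct_idx]

qa = MCQuestion("Where did you grow up?", ["Paris", "Little Rock", "Austin"], 1)
```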

format_messages handles tokenizers that do not support the "system" role (for example Gemma 2) by merging system content into the first user message.
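That merge behavior can be sketched as follows; this is a simplified reimplementation of the idea, not the package's code:

```python
def merge_system_into_user(messages):
    """Fold a leading system message into the first user message, for chat
    templates (e.g. Gemma 2) that reject the "system" role."""
    if not messages or messages[0]["role"] != "system":
        return list(messages)
    system, rest = messages[0], messages[1:]
    merged = []
    folded = False
    for msg in rest:
        if not folded and msg["role"] == "user":
            merged.append({"role": "user",
                           "content": system["content"] + "\n\n" + msg["content"]})
            folded = True
        else:
            merged.append(dict(msg))
    return merged

chat = [
    {"role": "system", "content": "You are Ethan Robinson."},
    {"role": "user", "content": "Where did you grow up?"},
]
merged = merge_system_into_user(chat)
```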

Environment helpers

from persona_data.environment import load_env, set_seed, get_device, get_artifacts_dir

load_env()            # loads .env from cwd (searches parent dirs)
set_seed(1337)        # sets random, numpy, and torch seeds
device = get_device() # cuda > mps > cpu
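The cuda > mps > cpu priority is a small fallback chain. A pure-Python sketch with the availability checks stubbed out as booleans (the real helper presumably queries torch.cuda and torch.backends.mps):

```python
def pick_device(cuda_available, mps_available):
    """Return the preferred device name, following the cuda > mps > cpu priority."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```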
