Match recall segments with story segments.

Project description

rMatch

Automatic recall & story matching tool.

Tooling: Ruff · uv · pre-commit

Quick start

Command line

pip install rmatch

# single recall file
rmatch story.txt recall.txt --matcher anthropic

# directory of recall files (one per subject)
rmatch story.txt recalls/ --matcher anthropic

# estimate API cost without sending requests
rmatch story.txt recalls/ --matcher openai --dry-run

Python API

from rmatch import Matcher

matcher = Matcher(matcher_name="anthropic", api_key="your_api_key")
matches = matcher.match(
    story_segments=["The cat sat on the mat.", "It purred softly."],
    recall_segments=["A cat was on a mat."],
)
# [(0, [0])]  — recall segment 0 matched story segment 0

Or use run_matching to load files, run matching, and save results in one call:

from rmatch.match import run_matching

results = run_matching(
    story_file="story.txt",
    recall_file="recalls/",
    matcher_name="anthropic",
    api_key="your_api_key",
)

Set up API keys

API keys are resolved in this order (first match wins):

  1. api_key argument passed directly in Python
  2. .env file in the current working directory
  3. Environment variables already set in your shell

Set them as environment variables:

export ANTHROPIC_API_KEY="your_api_key"   # for --matcher anthropic (default)
export OPENAI_API_KEY="your_api_key"      # for --matcher openai
export HF_TOKEN="your_hf_token"           # for --matcher huggingface

Or put a .env file in your working directory:

ANTHROPIC_API_KEY="your_api_key"
OPENAI_API_KEY="your_api_key"
HF_TOKEN="your_hf_token"
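The resolution order above can be sketched with nothing but the standard library. Note this is an illustrative re-implementation of the documented lookup order, not rmatch's actual code, and the simple .env parsing here is an assumption:

```python
import os
from pathlib import Path


def resolve_api_key(explicit_key=None, env_var="ANTHROPIC_API_KEY"):
    """Illustrative key lookup: argument first, then .env, then shell env."""
    if explicit_key:                      # 1. api_key passed directly
        return explicit_key
    env_file = Path.cwd() / ".env"
    if env_file.exists():                 # 2. .env in the working directory
        for line in env_file.read_text().splitlines():
            name, sep, value = line.partition("=")
            if sep and name.strip() == env_var:
                return value.strip().strip('"')
    return os.environ.get(env_var)        # 3. shell environment variable
```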

Output format

A JSON file with:

{
  "matcher_name": "anthropic",
  "story_name": "story",
  "story_segmentation": "lines",
  "recall_segmentation": "lines",
  "matches": {
    "sub-001": [[0, [3, 7]], [1, [12]]],
    "sub-002": [[0, [1]], [1, [5, 6]]]
  }
}

Each entry in matches maps a subject ID to a list of [recall_segment_id, [matched_story_segment_ids...]] pairs.
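Because the output is plain JSON, it can be post-processed with the standard library alone. As a sketch, this collects the distinct story segments each subject's recall covered, using the example output above:

```python
import json

output = json.loads("""
{
  "matcher_name": "anthropic",
  "matches": {
    "sub-001": [[0, [3, 7]], [1, [12]]],
    "sub-002": [[0, [1]], [1, [5, 6]]]
  }
}
""")

# For each subject, the sorted set of story segment IDs their recall hit.
coverage = {
    subject: sorted({sid for _, story_ids in pairs for sid in story_ids})
    for subject, pairs in output["matches"].items()
}
# coverage == {"sub-001": [3, 7, 12], "sub-002": [1, 5, 6]}
```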

Benchmarking

Requires rBench:

# outside of this dir
git clone git@github.com:GabrielKP/rBench.git

Add to .env or environment:

BENCHMARK_ROOT="path/to/rBench"

Run:

uv run src/rmatch/evaluate.py {alice,monthiversary,memsearch}

API / Documentation

Input formats

Story file — a .txt or .json file containing the story segments to match against.

  • .txt: one segment per line (blank lines are ignored).
  • .json: must contain a "segments" array of strings. Optionally includes "segmentation_method".
{
  "segmentation_method": "sentences",
  "segments": [
    "The cat sat on the mat.",
    "It purred softly."
  ]
}

Recall file — a .txt file, a .json file, or a directory of either.

  • .txt file: one recall segment per line. The filename stem is used as the subject ID.
  • .json file: must contain a "recalls" object mapping subject IDs to segment arrays.
  • Directory: all .txt or all .json files inside are loaded (mixing formats is not allowed). Each .txt file becomes one subject; .json files are merged.
{
  "segmentation_method": "clauses",
  "recalls": {
    "sub-001": ["A cat was on a mat.", "It was purring."],
    "sub-002": ["There was a cat on something."]
  }
}
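The .txt directory convention can be reproduced by hand when preparing data. This is a sketch of the described loading behavior (one subject per file, filename stem as subject ID, blank lines ignored), not rmatch's actual loader:

```python
from pathlib import Path


def load_txt_recalls(directory):
    """Build the recalls mapping from a directory of .txt files."""
    recalls = {}
    for path in sorted(Path(directory).glob("*.txt")):
        lines = path.read_text().splitlines()
        # One recall segment per non-blank line; stem is the subject ID.
        recalls[path.stem] = [ln.strip() for ln in lines if ln.strip()]
    return recalls
```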

CLI reference

rmatch STORY_FILE RECALL_FILE [options]

General options

  • STORY_FILE (positional, required) — Path to the story .txt or .json file.
  • RECALL_FILE (positional, required) — Path to a recall .txt/.json file or a directory of them.
  • -M, --matcher (str) — Which matcher backend to use. One of: anthropic, openai, reranker, huggingface. Default: anthropic.
  • -m, --model-name (str) — Override the matcher's default model (see defaults below).
  • --track-emissions — Enable CodeCarbon carbon-emissions tracking. Results are saved beside the output file.
  • -f, --overwrite — Overwrite the output file if it already exists.

LLM matcher options (anthropic, openai, huggingface)

  • --window-size (int) — Number of surrounding recall segments (before and after) to include as context for each target segment. Set to 0 to disable context. Default: 5.
  • --dry-run — anthropic & openai only. Estimate token usage and cost without making API calls.
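The --window-size context amounts to a slice around the target recall segment. A minimal sketch of that documented behavior (the actual prompt construction is not shown here):

```python
def context_window(recall_segments, target_idx, window_size=5):
    """Return (before, target, after): up to window_size segments per side."""
    before = recall_segments[max(0, target_idx - window_size):target_idx]
    after = recall_segments[target_idx + 1:target_idx + 1 + window_size]
    return before, recall_segments[target_idx], after
```

With window_size=0 both context slices are empty, matching the "disable context" behavior.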

Self-hosted / HuggingFace options

  • -q, --quantization (str) — Load the model in reduced precision: 4bit (NF4) or 8bit. Requires bitsandbytes.
  • -bs, --batch-size (int) — Number of prompts to process in parallel. Default: 4.
  • --max-new-tokens (int) — Maximum tokens the model may generate per prompt. Default: 64.
  • --verbose-errors — Print the raw model output when parsing fails. Useful for debugging prompt issues.

Reranker options

  • --device (str) — PyTorch device for the reranker model (e.g. cpu, cuda, mps). Default: auto.
  • --threshold (float) — Minimum similarity score for a story segment to be considered a match. Default: 0.09.
  • --top-k (int) — Number of top-scoring story candidates to evaluate per recall segment. Default: 5.
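Together, --top-k and --threshold act as a two-stage filter over similarity scores. A sketch of that interaction with hypothetical scores (the reranker's actual scoring model is not shown):

```python
def select_matches(scores, top_k=5, threshold=0.09):
    """scores: {story_segment_id: similarity}. Keep the top_k
    highest-scoring candidates, then drop any below threshold."""
    candidates = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return sorted(s for s in candidates if scores[s] >= threshold)
```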

Default models

  • anthropic — claude-opus-4-6
  • openai — gpt-4.1
  • reranker — BAAI/bge-reranker-v2-m3
  • huggingface — meta-llama/Llama-3.2-1B-Instruct

Python API

Matcher (main entry point)

from rmatch import Matcher

matcher = Matcher(matcher_name="anthropic", model_name=None, **kwargs)
matches  = matcher.match(story_segments, recall_segments)

Matcher(matcher_name, **kwargs) is a factory — it returns the appropriate subclass based on matcher_name. All keyword arguments are forwarded to the subclass constructor.

Constructor arguments:

  • model_name (str) — Override the default model. Applies to all matchers.
  • window_size (int) — Context window radius around the target recall segment. Default: 5. Applies to: anthropic, openai, huggingface.
  • dry_run (bool) — Estimate cost without calling the API. Applies to: anthropic, openai.
  • api_key (str) — API key. Falls back to .env, then environment variables. Applies to: anthropic, openai, huggingface.
  • device (str) — PyTorch device string. Applies to: reranker.
  • threshold (float) — Score threshold for matches. Default: 0.09. Applies to: reranker.
  • top_k (int) — Top-k candidates per recall segment. Default: 5. Applies to: reranker.
  • quantization (str) — "4bit" or "8bit". Applies to: huggingface.
  • batch_size (int) — Batch size for inference. Default: 4. Applies to: huggingface.
  • max_new_tokens (int) — Max generated tokens. Default: 64. Applies to: huggingface.
  • verbose_errors (bool) — Log raw output on parse failures. Applies to: huggingface.

matcher.match(story_segments, recall_segments)

  • story_segments (list[str]) — Ordered list of story segments (the ground-truth story elements).
  • recall_segments (list[str]) — Ordered list of a single participant's recall segments.

Returns list[tuple[int, list[int]]] — one entry per recall segment:

[
    (0, [2, 5]),   # recall segment 0 matched story segments 2 and 5
    (1, []),       # recall segment 1 had no matches
    (2, [0]),      # recall segment 2 matched story segment 0
]
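A common follow-up is to summarize a participant's result, e.g. what fraction of their recall segments found at least one story match. Using the example return value above:

```python
matches = [
    (0, [2, 5]),   # recall segment 0 matched story segments 2 and 5
    (1, []),       # recall segment 1 had no matches
    (2, [0]),      # recall segment 2 matched story segment 0
]

# Recall segments with at least one match, and the story segments covered.
matched = sum(1 for _, story_ids in matches if story_ids)
match_rate = matched / len(matches)
covered = sorted({sid for _, ids in matches for sid in ids})
# matched == 2; covered == [0, 2, 5]
```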

run_matching (file-level convenience)

from rmatch.match import run_matching

results = run_matching(
    story_file,            # Path — story .txt or .json
    recall_file,           # Path — recall file or directory
    matcher_name,          # str  — "anthropic", "openai", "reranker", "huggingface"
    track_emissions,       # bool — enable CodeCarbon tracking
    story_name=None,       # str | None — override auto-detected story name
    story_segmentation=None,   # str | None — override detected segmentation method
    recall_segmentation=None,  # str | None — override detected segmentation method
    overwrite=False,       # bool — overwrite existing output file
    **kwargs,              # forwarded to the Matcher constructor (model_name, window_size, etc.)
)

Loads story and recall files, runs matching for every subject, and saves a JSON results file. Returns the output dictionary.

Download files


Source Distribution

rmatch-0.2.0.tar.gz (155.0 kB)

Built Distribution


rmatch-0.2.0-py3-none-any.whl (26.6 kB)

File details

Details for the file rmatch-0.2.0.tar.gz.

File metadata

  • Download URL: rmatch-0.2.0.tar.gz
  • Upload date:
  • Size: 155.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.5

File hashes

Hashes for rmatch-0.2.0.tar.gz:

  • SHA256: 6cd8c539a0abcd98be17874d6ba16054c09fd2fd0c2dedd7bd15193628430eb5
  • MD5: 123928bc6a3588c003087e3bb9ea602d
  • BLAKE2b-256: c9532ee1e3d65529cdd39b175f1b0f27de36a8cf226322b21cc07cdf9dac50f4

File details

Details for the file rmatch-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: rmatch-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 26.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.5

File hashes

Hashes for rmatch-0.2.0-py3-none-any.whl:

  • SHA256: e355d1d45d29a517e29ed06da7fb314af21468c0af5a8402e833798f3284f883
  • MD5: a97f0e623e902d9acbc7a4e4621e5e4c
  • BLAKE2b-256: af546d3de3be27efc3810647da31374817f7926761f4c0f2ff475753cb5973a8
