strata-match

Status: Pre-Alpha · Python 3.11+ · License: MIT

Two-stage job-to-profile matching engine. Fast vector similarity as a first pass, then LLM-powered nuance scoring for the matches that matter. Get a score, a rationale, strengths, gaps, and a confidence tier — not just a number.

Features

  • Two-stage scoring — Vector cosine similarity (Stage 1) gates expensive LLM nuance scoring (Stage 2); batch flows skip LLM work below vector_threshold.
  • Multiple embedding providers — Built-in OpenAI, Google Gemini, and Ollama embedding backends via create_embedding_provider / create_matcher.
  • Prompt caching–friendly layout — build_score_prompt_parts splits static profile text from per-job text for provider-level prompt caches (e.g. Anthropic ephemeral cache).
  • Token tracking — MatchResult.tokens_used and BatchMatchResult.total_tokens for cost visibility.
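The caching-friendly split in the feature list can be sketched in a few lines. This is an illustrative stand-in for build_score_prompt_parts, not its real signature; the prompt wording here is invented:

```python
def build_prompt_parts(profile_text: str, job_text: str) -> tuple[str, str]:
    """Split the scoring prompt into a static block and a per-job block.

    The static block (instructions + candidate profile) is byte-identical
    across an entire batch, so a provider-side prompt cache can hit on it;
    only the small per-job block changes between calls.
    """
    static = f"You score job fit from 0 to 100.\n\nCANDIDATE PROFILE:\n{profile_text}"
    dynamic = f"JOB DESCRIPTION:\n{job_text}\n\nReturn score, rationale, strengths, gaps."
    return static, dynamic

# Same profile, two different jobs: the cacheable prefix does not change.
static_a, _ = build_prompt_parts("8y Python, distributed systems", "Staff Engineer, backend")
static_b, _ = build_prompt_parts("8y Python, distributed systems", "Frontend Developer")
print(static_a == static_b)  # True
```

The design point is simply that anything constant across the batch goes first, so the provider's cache key covers the large profile block.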

Why This Exists

Keyword matching for jobs is broken. A senior Python developer doesn't match "Staff Engineer — Backend Platform" because the words don't overlap, even though the fit is obvious. Pure embedding similarity gets closer but can't reason about career trajectory, transferable skills, or the difference between "nice to have" and "must have."

strata-match combines both approaches:

  1. Vector gate (fast, cheap) — Cosine similarity on embeddings catches obvious non-matches before you spend money on LLM calls. Below the threshold? Skip it. This filters out 70-80% of listings at near-zero cost.

  2. LLM scoring (slow, rich) — For candidates that pass the vector gate, a structured prompt sends the full profile and job description to an LLM. You get back a 0-100 score, a written rationale, specific strengths and gaps, and a confidence tier (HIGH / MEDIUM / LOW).

The result: high-quality matching at a fraction of what it would cost to run every listing through an LLM.
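The Stage 1 gate is just cosine similarity plus a threshold check. A minimal sketch with toy hand-made vectors (real embeddings come from a provider; the function names and threshold here are illustrative, not the library's internals):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def should_score_with_llm(profile_vec: list[float],
                          job_vec: list[float],
                          vector_threshold: float = 0.5) -> bool:
    """Stage 1 gate: only jobs at or above the threshold reach the LLM."""
    return cosine_similarity(profile_vec, job_vec) >= vector_threshold

# Toy vectors: the backend job points the same way as the profile, the frontend job doesn't.
profile = [1.0, 0.8, 0.1]
backend_job = [0.9, 0.7, 0.2]   # passes the gate, goes on to LLM scoring
frontend_job = [0.1, 0.0, 1.0]  # filtered out at near-zero cost

print(should_score_with_llm(profile, backend_job))   # True
print(should_score_with_llm(profile, frontend_job))  # False
```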

Use Cases

  • Job search platforms — Score thousands of listings against a candidate profile, surface the top matches with explanations of why they match
  • Recruiting and talent matching — Flip the model: score candidates against a job description, rank by fit, use gap analysis for interview prep
  • Career coaching tools — Show candidates where they're strong, where they have gaps, and what skills would unlock the next tier of opportunities
  • Internal mobility — Match employees to open internal roles, identify skill adjacencies, recommend lateral moves
  • Market positioning — Score your profile against 100 job descriptions in your target space to understand where you're competitive and where you need to grow

Installation

pip install strata-match

Requires Python 3.11+.

With OpenAI embedding and scoring support:

pip install strata-match[openai]

With all optional embedding + LLM backends (OpenAI, Gemini, Ollama, LiteLLM):

pip install strata-match[all]

Quick start

import asyncio
from strata_match import (
    CandidateProfile,
    JobDescription,
    create_matcher,
    match_job,
    match_batch,
)

async def main():
    matcher = create_matcher("openai", vector_threshold=0.5)

    profile = CandidateProfile(
        title="Senior Software Engineer",
        skills=["Python", "FastAPI", "PostgreSQL", "AWS", "System Design"],
        years_of_experience=8,
        experience_summary="Full-stack engineer specializing in distributed systems, "
            "data pipelines, and API design. Led migration from monolith to "
            "microservices serving 2M requests/day.",
    )

    jobs = [
        JobDescription(
            title="Staff Engineer — Backend Platform",
            company="Acme Corp",
            requirements=["Python", "System Design", "Technical Leadership"],
            description="Lead the backend platform team. Own the API layer, "
                "drive architecture decisions, mentor senior engineers.",
        ),
        JobDescription(
            title="Frontend Developer",
            company="Widget Inc",
            requirements=["React", "TypeScript", "CSS"],
            description="Build pixel-perfect UIs for our consumer product.",
        ),
    ]

    # Single match with full rationale
    result = await match_job(matcher, profile, jobs[0])
    print(f"Score: {result.score}/100 ({result.confidence_tier})")
    print(f"Rationale: {result.rationale}")
    print(f"Strengths: {result.strengths}")
    print(f"Gaps: {result.gaps}")

    # Batch matching — vector gate skips obvious mismatches
    batch = await match_batch(matcher, profile, jobs)
    for r in batch.results:
        print(f"{r.job_title}: {r.score} ({r.confidence_tier})")

asyncio.run(main())

Example output:

Score: 82/100 (HIGH)
Rationale: Strong backend systems experience directly maps to platform team needs.
  8 years of Python + system design + API architecture align well with staff-level
  expectations. Migration leadership demonstrates the technical leadership requirement.
  Gap: no explicit mention of mentoring experience, though team lead implies it.
Strengths: ['Python expertise', 'System design', 'API architecture', 'Migration leadership']
Gaps: ['Explicit mentoring/coaching experience', 'Staff-level scope communication']

Staff Engineer — Backend Platform: 82 (HIGH)
Frontend Developer: 0 (skipped by vector gate)

Documentation

How it works

Profile + Job
     │
     ▼
┌─────────────────────────────────┐
│  Stage 1: Vector Similarity     │  Cost: ~$0.0001/comparison
│  Embed profile + job → cosine   │  Speed: <100ms
│  similarity score [0, 1]        │
│                                 │
│  Below threshold? → SKIP        │  Filters 70-80% of listings
└────────────┬────────────────────┘
             │ passes gate
             ▼
┌─────────────────────────────────┐
│  Stage 2: LLM Nuance Scoring    │  Cost: ~$0.01/comparison
│  Structured prompt with full    │  Speed: 2-5 seconds
│  profile + job description      │
│                                 │
│  Returns: score (0-100),        │
│  rationale, strengths, gaps,    │
│  confidence tier                │
└─────────────────────────────────┘

Why Two Stages?

Economics. LLM calls cost 100x more than embedding comparisons. If you're scoring a profile against 500 job listings, running all 500 through an LLM costs ~$5 and takes 20 minutes. With the vector gate filtering at 0.5 threshold, you send maybe 100 to the LLM — $1 and 4 minutes. Same quality matches, 80% cost reduction.
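The arithmetic behind those numbers, using the per-comparison figures quoted earlier (a back-of-envelope model, not a billing calculator):

```python
# Approximate per-comparison costs from the pipeline description above.
EMBED_COST = 0.0001   # ~$ per vector comparison (Stage 1)
LLM_COST = 0.01       # ~$ per LLM scoring call (Stage 2)

def batch_cost(n_listings: int, gate_pass_rate: float) -> float:
    """Every listing is embedded; only gate survivors incur an LLM call."""
    return n_listings * EMBED_COST + n_listings * gate_pass_rate * LLM_COST

naive = batch_cost(500, 1.0)   # no gate: all 500 listings hit the LLM
gated = batch_cost(500, 0.2)   # gate passes ~20% at a 0.5 threshold
print(f"${naive:.2f} vs ${gated:.2f}")  # $5.05 vs $1.05
```

The embedding cost is a rounding error either way; the savings come entirely from the fraction of listings that never reach Stage 2.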

Confidence Tiers

Tier     Meaning                                                Typical score range
HIGH     Strong match — profile clearly fits the role           70-100
MEDIUM   Partial match — transferable skills, some gaps         40-69
LOW      Weak match — significant gaps or career pivot needed   0-39
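In strata-match the tier is returned by the LLM itself, but the typical ranges in the table translate to a simple mapping, shown here purely as an illustration of how the tiers partition the score space:

```python
def tier_for_score(score: int) -> str:
    """Map a 0-100 score onto the typical tier ranges from the table above.

    Illustrative only: the library's confidence tier comes from the LLM's
    structured response, not from thresholding the numeric score.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be in [0, 100]")
    if score >= 70:
        return "HIGH"
    if score >= 40:
        return "MEDIUM"
    return "LOW"

print(tier_for_score(82))  # HIGH
print(tier_for_score(55))  # MEDIUM
```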

Pluggable Providers

Embedding and LLM scoring use a provider abstraction. Swap models without changing your matching logic:

# OpenAI (default, highest quality)
matcher = create_matcher("openai")

# Google Gemini (good quality, lower cost)
matcher = create_matcher("gemini")

# Local Ollama (free, private, slower)
matcher = create_matcher("ollama", model="nomic-embed-text")

# LiteLLM for Stage 2 (any supported chat model)
matcher = create_matcher(
    "openai",
    scoring_provider="litellm",
    scoring_model="anthropic/claude-3-haiku",
)

Custom providers implement the EmbeddingProvider and LLMProvider interfaces; the built-in factories are exposed alongside them:

from strata_match.providers import create_embedding_provider
from strata_match.llm_providers import create_llm_provider
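A custom embedding provider might look like the sketch below. The method name and signature are assumptions for illustration, not taken from the strata-match source; check the library's EmbeddingProvider definition for the real interface. The toy provider hashes characters into a fixed-size vector, which is handy for deterministic offline tests:

```python
import asyncio
from typing import Protocol

class EmbeddingProvider(Protocol):
    """Assumed shape of the embedding interface (illustrative only)."""
    async def embed(self, text: str) -> list[float]: ...

class HashEmbedding:
    """Toy provider: deterministic pseudo-embeddings, no network calls."""
    async def embed(self, text: str) -> list[float]:
        # Fold each character into one of 8 buckets by position.
        vec = [0.0] * 8
        for i, ch in enumerate(text):
            vec[i % 8] += ord(ch) / 1000.0
        return vec

provider: EmbeddingProvider = HashEmbedding()
vector = asyncio.run(provider.embed("Senior Software Engineer"))
print(len(vector))  # 8
```

Because Protocol uses structural subtyping, any class with a matching async embed method satisfies the interface without inheriting from it.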

Part of the Strata Ecosystem

strata-match is the scoring engine for Strata — an autonomous AI job search platform where specialized agents collaborate to discover, evaluate, and match job opportunities. In that context, the Match Agent runs strata-match against every new listing that passes deduplication, stores results with confidence tiers, and routes high-confidence matches to the Apply Agent for resume tailoring.

But strata-match is fully standalone. It has no dependency on the Strata platform and works anywhere you need intelligent job-to-profile matching.

Development

Requires Python 3.11+ and uv (or pip).

git clone https://github.com/andrewcrenshaw/strata-match.git
cd strata-match

# Install with dev dependencies
uv sync --all-extras

# Run tests
uv run pytest

# Lint
uv run ruff check .

# Type check
uv run mypy src/ tests/

# Regenerate API docs (HTML under docs/api/)
uv run python scripts/generate_api_docs.py

License

MIT
