# strata-match
Two-stage job-to-profile matching engine. Fast vector similarity as a first pass, then LLM-powered nuance scoring for the matches that matter. Get a score, a rationale, strengths, gaps, and a confidence tier — not just a number.
## Features

- **Two-stage scoring** — Vector cosine similarity (Stage 1) gates expensive LLM nuance scoring (Stage 2); batch flows skip LLM work below `vector_threshold`.
- **Multiple embedding providers** — Built-in OpenAI, Google Gemini, and Ollama embedding backends via `create_embedding_provider`/`create_matcher`.
- **Prompt caching–friendly layout** — `build_score_prompt_parts` splits static profile text from per-job text for provider-level prompt caches (e.g. Anthropic ephemeral cache).
- **Token tracking** — `MatchResult.tokens_used` and `BatchMatchResult.total_tokens` for cost visibility.
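The caching-friendly split can be pictured with a small sketch. This is a hypothetical stand-in, not the real `build_score_prompt_parts` (its actual signature and prompt text may differ); the point is that the large profile block stays byte-identical across jobs so a provider-side prompt cache can reuse it:

```python
def build_prompt_parts(profile_text: str, job_text: str) -> tuple[str, str]:
    """Hypothetical sketch of a cache-friendly prompt split.

    The static part (profile + rubric) is identical for every job in a batch,
    so providers with prompt caching only re-process the per-job part.
    """
    static_part = f"CANDIDATE PROFILE:\n{profile_text}\n\nSCORING RUBRIC: ..."
    per_job_part = f"JOB DESCRIPTION:\n{job_text}"
    return static_part, per_job_part
```

Scoring 500 jobs this way sends 500 different per-job parts but one repeated static part, which is exactly the shape ephemeral caches reward.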
## Why This Exists
Keyword matching for jobs is broken. A senior Python developer doesn't match "Staff Engineer — Backend Platform" because the words don't overlap, even though the fit is obvious. Pure embedding similarity gets closer but can't reason about career trajectory, transferable skills, or the difference between "nice to have" and "must have."
strata-match combines both approaches:

1. **Vector gate (fast, cheap)** — Cosine similarity on embeddings catches obvious non-matches before you spend money on LLM calls. Below the threshold? Skip it. This filters out 70-80% of listings at near-zero cost.
2. **LLM scoring (slow, rich)** — For candidates that pass the vector gate, a structured prompt sends the full profile and job description to an LLM. You get back a 0-100 score, a written rationale, specific strengths and gaps, and a confidence tier (HIGH / MEDIUM / LOW).
The result: high-quality matching at a fraction of what it would cost to run every listing through an LLM.
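Conceptually, the vector gate reduces to a cosine similarity check against a threshold. This is a minimal sketch of the idea, not strata-match's internal code (function names here are illustrative):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def passes_gate(profile_vec: list[float], job_vec: list[float],
                vector_threshold: float = 0.5) -> bool:
    """Stage 1: only jobs at or above the threshold reach the LLM."""
    return cosine_similarity(profile_vec, job_vec) >= vector_threshold
```

Everything below the threshold is scored 0 and never incurs an LLM call; everything above proceeds to Stage 2.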
## Use Cases
- Job search platforms — Score thousands of listings against a candidate profile, surface the top matches with explanations of why they match
- Recruiting and talent matching — Flip the model: score candidates against a job description, rank by fit, use gap analysis for interview prep
- Career coaching tools — Show candidates where they're strong, where they have gaps, and what skills would unlock the next tier of opportunities
- Internal mobility — Match employees to open internal roles, identify skill adjacencies, recommend lateral moves
- Market positioning — Score your profile against 100 job descriptions in your target space to understand where you're competitive and where you need to grow
## Installation

```bash
pip install strata-match
```

Requires Python 3.11+.

With OpenAI embedding and scoring support:

```bash
pip install strata-match[openai]
```

With all optional embedding + LLM backends (OpenAI, Gemini, Ollama, LiteLLM):

```bash
pip install strata-match[all]
```
## Quick start

```python
import asyncio

from strata_match import (
    CandidateProfile,
    JobDescription,
    create_matcher,
    match_job,
    match_batch,
)


async def main():
    matcher = create_matcher("openai", vector_threshold=0.5)

    profile = CandidateProfile(
        title="Senior Software Engineer",
        skills=["Python", "FastAPI", "PostgreSQL", "AWS", "System Design"],
        years_of_experience=8,
        experience_summary="Full-stack engineer specializing in distributed systems, "
        "data pipelines, and API design. Led migration from monolith to "
        "microservices serving 2M requests/day.",
    )

    jobs = [
        JobDescription(
            title="Staff Engineer — Backend Platform",
            company="Acme Corp",
            requirements=["Python", "System Design", "Technical Leadership"],
            description="Lead the backend platform team. Own the API layer, "
            "drive architecture decisions, mentor senior engineers.",
        ),
        JobDescription(
            title="Frontend Developer",
            company="Widget Inc",
            requirements=["React", "TypeScript", "CSS"],
            description="Build pixel-perfect UIs for our consumer product.",
        ),
    ]

    # Single match with full rationale
    result = await match_job(matcher, profile, jobs[0])
    print(f"Score: {result.score}/100 ({result.confidence_tier})")
    print(f"Rationale: {result.rationale}")
    print(f"Strengths: {result.strengths}")
    print(f"Gaps: {result.gaps}")

    # Batch matching — vector gate skips obvious mismatches
    batch = await match_batch(matcher, profile, jobs)
    for r in batch.results:
        print(f"{r.job_title}: {r.score} ({r.confidence_tier})")


asyncio.run(main())
```
Example output:

```text
Score: 82/100 (HIGH)
Rationale: Strong backend systems experience directly maps to platform team needs.
8 years of Python + system design + API architecture align well with staff-level
expectations. Migration leadership demonstrates the technical leadership requirement.
Gap: no explicit mention of mentoring experience, though team lead implies it.
Strengths: ['Python expertise', 'System design', 'API architecture', 'Migration leadership']
Gaps: ['Explicit mentoring/coaching experience', 'Staff-level scope communication']

Staff Engineer — Backend Platform: 82 (HIGH)
Frontend Developer: 0 (skipped by vector gate)
```
## Documentation

- Documentation index — guides and API reference
- Custom scoring — thresholds, tiers, extending behavior
- Embedding providers — OpenAI, Gemini, Ollama
- Prompt customization — scoring prompts and caching
- API reference — HTML generated from docstrings in `docs/api/` (run `uv run python scripts/generate_api_docs.py` after `uv sync --all-extras` to regenerate)
## How it works

```text
Profile + Job
      │
      ▼
┌─────────────────────────────────┐
│ Stage 1: Vector Similarity      │  Cost: ~$0.0001/comparison
│ Embed profile + job → cosine    │  Speed: <100ms
│ similarity score [0, 1]         │
│                                 │
│ Below threshold? → SKIP         │  Filters 70-80% of listings
└────────────┬────────────────────┘
             │ passes gate
             ▼
┌─────────────────────────────────┐
│ Stage 2: LLM Nuance Scoring     │  Cost: ~$0.01/comparison
│ Structured prompt with full     │  Speed: 2-5 seconds
│ profile + job description       │
│                                 │
│ Returns: score (0-100),         │
│ rationale, strengths, gaps,     │
│ confidence tier                 │
└─────────────────────────────────┘
```
## Why Two Stages?
Economics. LLM calls cost 100x more than embedding comparisons. If you're scoring a profile against 500 job listings, running all 500 through an LLM costs ~$5 and takes 20 minutes. With the vector gate filtering at 0.5 threshold, you send maybe 100 to the LLM — $1 and 4 minutes. Same quality matches, 80% cost reduction.
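The arithmetic from the paragraph above, using the per-comparison cost figures from the diagram, can be checked with a few lines (a back-of-envelope estimator, not part of the library's API):

```python
def batch_cost(n_jobs: int, pass_rate: float,
               embed_cost: float = 0.0001, llm_cost: float = 0.01) -> dict:
    """Estimate batch cost: every job is embedded, only gate-passers hit the LLM."""
    llm_calls = round(n_jobs * pass_rate)
    return {
        "embedding": n_jobs * embed_cost,
        "llm": llm_calls * llm_cost,
        "total": n_jobs * embed_cost + llm_calls * llm_cost,
    }


# 500 listings with ~20% passing the gate: ~$0.05 embedding + ~$1 LLM,
# versus ~$5 if all 500 went straight to the LLM.
estimate = batch_cost(500, pass_rate=0.2)
```

The embedding cost is essentially noise next to the LLM cost, which is why the gate's pass rate dominates total spend.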
## Confidence Tiers
| Tier | Meaning | Typical Score Range |
|---|---|---|
| HIGH | Strong match — profile clearly fits the role | 70-100 |
| MEDIUM | Partial match — transferable skills, some gaps | 40-69 |
| LOW | Weak match — significant gaps or career pivot needed | 0-39 |
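If you need to bucket scores yourself, the table's typical ranges translate directly into a lookup. Note this is a sketch of the documented bands, assuming the tiers follow the score exactly; the library's actual tier may come from the LLM and need not match these cutoffs:

```python
def confidence_tier(score: int) -> str:
    """Map a 0-100 match score onto the documented tier bands."""
    if score >= 70:
        return "HIGH"
    if score >= 40:
        return "MEDIUM"
    return "LOW"
```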
## Pluggable Providers
Embedding and LLM scoring use a provider abstraction. Swap models without changing your matching logic:
```python
# OpenAI (default, highest quality)
matcher = create_matcher("openai")

# Google Gemini (good quality, lower cost)
matcher = create_matcher("gemini")

# Local Ollama (free, private, slower)
matcher = create_matcher("ollama", model="nomic-embed-text")

# LiteLLM for Stage 2 (any supported chat model)
matcher = create_matcher(
    "openai",
    scoring_provider="litellm",
    scoring_model="anthropic/claude-3-haiku",
)
```
Custom providers implement `EmbeddingProvider` and `LLMProvider`:

```python
from strata_match.providers import create_embedding_provider
from strata_match.llm_providers import create_llm_provider
```
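To make the idea concrete, here is a toy custom embedding provider. Everything here is hypothetical: the real `EmbeddingProvider` protocol lives in `strata_match.providers` and its method names and signatures may differ, so treat this only as the general shape of a duck-typed provider:

```python
import asyncio
import hashlib
from typing import Protocol


class EmbeddingProviderShape(Protocol):
    """Hypothetical stand-in for strata-match's EmbeddingProvider protocol."""
    async def embed(self, text: str) -> list[float]: ...


class HashEmbedding:
    """Toy deterministic 'embedding': useful for offline tests,
    useless for real semantic matching."""

    async def embed(self, text: str) -> list[float]:
        digest = hashlib.sha256(text.encode()).digest()
        # First 16 bytes, scaled into [0, 1]
        return [b / 255 for b in digest[:16]]


vec = asyncio.run(HashEmbedding().embed("Senior Software Engineer"))
```

A deterministic fake like this lets you exercise the Stage 1 gate and batch plumbing in CI without network calls or API keys.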
## Part of the Strata Ecosystem
strata-match is the scoring engine for Strata — an autonomous AI job search platform where specialized agents collaborate to discover, evaluate, and match job opportunities. In that context, the Match Agent runs strata-match against every new listing that passes deduplication, stores results with confidence tiers, and routes high-confidence matches to the Apply Agent for resume tailoring.
But strata-match is fully standalone. It has no dependency on the Strata platform and works anywhere you need intelligent job-to-profile matching.
## Development
Requires Python 3.11+ and uv (or pip).
```bash
git clone https://github.com/andrewcrenshaw/strata-match.git
cd strata-match

# Install with dev dependencies
uv sync --all-extras

# Run tests
uv run pytest

# Lint
uv run ruff check .

# Type check
uv run mypy src/ tests/

# Regenerate API docs (HTML under docs/api/)
uv run python scripts/generate_api_docs.py
```
## License
MIT
## File details

Details for the file `strata_match-0.2.2.tar.gz`.

- Download URL: strata_match-0.2.2.tar.gz
- Size: 309.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `11fbbdeb4ca4d02ad30ff235fdec73d241b20afc20cf5f8575710a07560ab916` |
| MD5 | `95fcc2ed0e2df70cfa3692321a0d22c2` |
| BLAKE2b-256 | `54c2b2987ea99796c339c3bfe65df26c77864bfab18fe2102af9e3e924aa285e` |
## File details

Details for the file `strata_match-0.2.2-py3-none-any.whl`.

- Download URL: strata_match-0.2.2-py3-none-any.whl
- Size: 34.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `ccb77485064416cdcb4684819c5e8b582cdb440995e34716c52485261a98dbd8` |
| MD5 | `998e115dc1a324069fa4f58df4b72a90` |
| BLAKE2b-256 | `c45480a884b9b44f255f09c1f38ac38b7060146d5d695510e6d522a2f3c8815e` |