# assert-review
LLM-based compliance note evaluation for financial services.
Evaluates adviser suitability notes against regulatory framework definitions (FCA, MiFID II, etc.), returning structured gap reports with per-element scores, evidence quotes, and actionable remediation suggestions. No PyTorch, no BERT, no heavy dependencies.
> ⚠️ **Experimental — do not use in live or production systems.**
>
> Outputs are non-deterministic (LLM-based) and have not been validated against real regulatory decisions. This package is intended for research, prototyping, and internal tooling only. It is not a substitute for qualified compliance review and must not be used to make or support live regulatory or client-facing decisions.
## Installation

```bash
pip install assert-review
```
## Quick Start

```python
from assert_review import evaluate_note, LLMConfig

config = LLMConfig(
    provider="bedrock",
    model_id="us.amazon.nova-pro-v1:0",
    region="us-east-1",
)

report = evaluate_note(
    note_text="Client meeting note text goes here...",
    framework="fca_suitability_v1",
    llm_config=config,
)

print(report.overall_rating)  # "Compliant" / "Minor Gaps" / "Requires Attention" / "Non-Compliant"
print(report.overall_score)   # 0.0–1.0
print(report.passed)          # True / False

for item in report.items:
    print(f"{item.element_id}: {item.status} (score: {item.score:.2f})")
    if item.suggestions:
        for s in item.suggestions:
            print(f"  → {s}")
```
## evaluate_note()

Full parameter reference:

```python
from assert_review import evaluate_note, LLMConfig, PassPolicy

report = evaluate_note(
    note_text=note,
    framework="fca_suitability_v1",  # built-in ID or path to a custom YAML
    llm_config=config,
    verbose=False,                   # include LLM reasoning in GapItem.notes
    custom_instruction=None,         # additional instruction appended to all element prompts
    pass_policy=None,                # custom PassPolicy (see below)
    metadata={"note_id": "N-001"},   # arbitrary key/value pairs, passed through to GapReport
)
```
## GapReport

| Field | Type | Description |
|---|---|---|
| `framework_id` | `str` | Framework used for evaluation |
| `framework_version` | `str` | Framework version |
| `passed` | `bool` | Whether the note passes the framework's policy thresholds |
| `overall_score` | `float` | Weighted mean element score, 0.0–1.0 |
| `overall_rating` | `str` | Human-readable compliance rating (see below) |
| `items` | `List[GapItem]` | Per-element evaluation results |
| `summary` | `str` | LLM-generated narrative summary of the evaluation |
| `stats` | `GapReportStats` | Counts by status and severity |
| `metadata` | `dict` | Caller-supplied metadata, passed through unchanged |
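`overall_score` is described above as a weighted mean of the per-element scores. A minimal sketch of how such a score could be computed — the per-element weights are an assumption here, not a documented part of the API:

```python
def weighted_mean(scores, weights):
    """Weighted mean of per-element scores; returns 0.0 for empty input."""
    total_weight = sum(weights)
    if total_weight == 0:
        return 0.0
    return sum(s * w for s, w in zip(scores, weights)) / total_weight

# Three hypothetical elements: a heavily weighted critical element and two others.
scores = [1.0, 0.5, 0.8]
weights = [3.0, 1.0, 1.0]
print(round(weighted_mean(scores, weights), 2))  # 0.86
```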
Overall rating values:

| Rating | Meaning |
|---|---|
| `Compliant` | Passed — all elements fully present |
| `Minor Gaps` | Passed — but some elements are partial, or optional elements are missing |
| `Requires Attention` | Failed — high/medium gaps, no critical blockers |
| `Non-Compliant` | Failed — one or more critical required elements missing or below threshold |
## GapItem

| Field | Type | Description |
|---|---|---|
| `element_id` | `str` | Element identifier from the framework |
| `status` | `str` | `"present"`, `"partial"`, or `"missing"` |
| `score` | `float` | 0.0–1.0 quality score for this element |
| `evidence` | `Optional[str]` | Quote or paraphrase from the note supporting the assessment; `None` when the element is missing |
| `severity` | `str` | `"critical"`, `"high"`, `"medium"`, or `"low"` |
| `required` | `bool` | Whether this element is required by the framework |
| `suggestions` | `List[str]` | Actionable remediation suggestions (empty when `status == "present"`) |
| `notes` | `Optional[str]` | LLM reasoning (only populated when `verbose=True`) |
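The fields above are enough to build simple triage views over a report. A sketch using a stand-in dataclass rather than the real `GapItem` (the real class may differ in detail):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:  # illustrative stand-in for assert_review.GapItem
    element_id: str
    status: str        # "present", "partial", or "missing"
    score: float
    severity: str      # "critical", "high", "medium", or "low"
    required: bool
    suggestions: List[str] = field(default_factory=list)

def blocking_items(items):
    """Required elements that are missing or partial, worst severity first."""
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    gaps = [i for i in items if i.required and i.status != "present"]
    return sorted(gaps, key=lambda i: (order[i.severity], i.score))

items = [
    Item("risk_profile", "missing", 0.0, "critical", True),
    Item("objectives", "partial", 0.55, "high", True),
    Item("costs_disclosure", "present", 0.9, "medium", True),
]
print([i.element_id for i in blocking_items(items)])
# ['risk_profile', 'objectives']
```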
## Verbose Output

Pass `verbose=True` to include per-element LLM reasoning in `GapItem.notes`:

```python
report = evaluate_note(
    note_text=note,
    framework="fca_suitability_v1",
    llm_config=config,
    verbose=True,
)

for item in report.items:
    if item.notes:
        print(f"{item.element_id}: {item.notes}")
```
## Custom Evaluation Instructions

Append additional instructions to all element prompts for domain-specific guidance:

```python
report = evaluate_note(
    note_text=note,
    framework="fca_suitability_v1",
    llm_config=config,
    custom_instruction=(
        "This note relates to a high-net-worth client with complex tax "
        "considerations. Apply stricter standards for risk and objectives "
        "documentation."
    ),
)
```
## Configurable Pass Policy

Override the default pass/fail thresholds:

```python
from assert_review import PassPolicy

policy = PassPolicy(
    critical_partial_threshold=0.5,  # partial critical element treated as a blocker if score < this
    required_pass_threshold=0.6,     # required element must score >= this to pass
    score_correction_missing_cutoff=0.2,
    score_correction_present_min=0.5,
    score_correction_present_floor=0.7,
)

report = evaluate_note(
    note_text=note,
    framework="fca_suitability_v1",
    llm_config=config,
    pass_policy=policy,
)
```
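The comments above hint at the thresholds' semantics. An illustrative sketch of how such a policy could be applied — assuming (not confirmed by the library) that any critical element below `critical_partial_threshold`, or any required element below `required_pass_threshold`, fails the note:

```python
def passes(items, critical_partial_threshold=0.5, required_pass_threshold=0.6):
    """items: (score, severity, required) tuples. Illustrative only; not the
    library's actual implementation."""
    for score, severity, required in items:
        if severity == "critical" and score < critical_partial_threshold:
            return False  # critical element too weak: blocker
        if required and score < required_pass_threshold:
            return False  # required element below the pass bar
    return True

print(passes([(0.9, "medium", True), (0.7, "critical", True)]))  # True
print(passes([(0.9, "medium", True), (0.4, "critical", True)]))  # False
```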
## Bundled Frameworks

| Framework ID | Description |
|---|---|
| `fca_suitability_v1` | FCA suitability note requirements under COBS 9.2 / PS13/1 (9 elements) |
## Custom Frameworks

Pass a path to your own YAML file:

```python
report = evaluate_note(
    note_text=note,
    framework="/path/to/my_framework.yaml",
    llm_config=config,
)
```

The YAML schema mirrors the built-in frameworks. See `packages/assert-review/assert_review/frameworks/fca_suitability_v1.yaml` in the source repo for a reference example.
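For orientation, a custom framework might look roughly like the sketch below. Every key and element name here is purely illustrative; consult the bundled `fca_suitability_v1.yaml` for the authoritative schema:

```yaml
# my_framework.yaml: illustrative shape only, not the real schema.
id: my_framework_v1
version: "1.0"
elements:
  - id: client_objectives
    description: The note records the client's investment objectives.
    severity: critical
    required: true
  - id: fee_disclosure
    description: Fees and charges are disclosed and acknowledged.
    severity: medium
    required: false
```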
CLI
# Evaluate a single note
assert-review evaluate note.txt --framework fca_suitability_v1
# Output as JSON
assert-review evaluate note.txt --framework fca_suitability_v1 --output json
# Batch evaluate from CSV
assert-review batch notes.csv --framework fca_suitability_v1 --note-column text
# Use OpenAI instead of Bedrock
assert-review evaluate note.txt --framework fca_suitability_v1 \
--provider openai --model gpt-4o --api-key $OPENAI_API_KEY
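The batch command reads note text from the column named by `--note-column`. A minimal illustrative `notes.csv` (only the `text` column is implied by the flag above; `note_id` is an assumed extra column):

```csv
note_id,text
N-001,"Client meeting note text goes here..."
N-002,"Second client note..."
```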
## LLM Configuration

```python
from assert_review import LLMConfig

# AWS Bedrock (uses ~/.aws credentials by default)
config = LLMConfig(
    provider="bedrock",
    model_id="us.amazon.nova-pro-v1:0",
    region="us-east-1",
)

# AWS Bedrock with explicit credentials
config = LLMConfig(
    provider="bedrock",
    model_id="us.amazon.nova-pro-v1:0",
    region="us-east-1",
    api_key="your-aws-access-key-id",
    api_secret="your-aws-secret-access-key",
    aws_session_token="your-session-token",  # optional
)

# OpenAI
config = LLMConfig(
    provider="openai",
    model_id="gpt-4o",
    api_key="your-openai-api-key",
)
```
## Supported Bedrock Model Families

| Model Family | Example Model IDs |
|---|---|
| Amazon Nova | `us.amazon.nova-pro-v1:0`, `amazon.nova-lite-v1:0` |
| Anthropic Claude | `anthropic.claude-3-sonnet-20240229-v1:0` |
| Meta Llama | `meta.llama3-70b-instruct-v1:0` |
| Mistral AI | `mistral.mistral-large-2402-v1:0` |
| Cohere Command | `cohere.command-r-plus-v1:0` |
| AI21 Labs | `ai21.jamba-1-5-large-v1:0` |
Proxy Configuration
# Single proxy
config = LLMConfig(
provider="bedrock", model_id="us.amazon.nova-pro-v1:0", region="us-east-1",
proxy_url="http://proxy.example.com:8080",
)
# Protocol-specific proxies
config = LLMConfig(
provider="bedrock", model_id="us.amazon.nova-pro-v1:0", region="us-east-1",
http_proxy="http://proxy.example.com:8080",
https_proxy="http://proxy.example.com:8443",
)
# Authenticated proxy
config = LLMConfig(
provider="bedrock", model_id="us.amazon.nova-pro-v1:0", region="us-east-1",
proxy_url="http://username:password@proxy.example.com:8080",
)
Standard HTTP_PROXY / HTTPS_PROXY environment variables are also respected.
## Public API

```python
from assert_review import (
    evaluate_note,   # main entry point
    NoteEvaluator,   # evaluator class for advanced use
    GapReport,       # full evaluation result
    GapItem,         # per-element result
    GapReportStats,  # summary statistics
    PassPolicy,      # pass/fail threshold configuration
    LLMConfig,       # re-exported from assert-core
)
```
## Dependencies

- **assert-core** — shared LLM provider layer (AWS Bedrock, OpenAI)
- **PyYAML** — framework loading
## Migrating from assert_llm_tools

`assert-review` replaces the compliance note evaluation functionality of `assert_llm_tools`, which is now deprecated. Swap the imports:

```python
# Before
from assert_llm_tools import evaluate_note, LLMConfig
from assert_llm_tools.metrics.note.models import PassPolicy, GapReport, GapItem

# After
from assert_review import evaluate_note, LLMConfig, PassPolicy, GapReport, GapItem
```
## License

MIT