LLM-based compliance note evaluation for financial services

Project description

assert-review

Evaluates adviser suitability notes against regulatory framework definitions (FCA, MiFID II, etc.), returning structured gap reports with per-element scores, evidence quotes, and actionable remediation suggestions. No PyTorch, no BERT, no heavy dependencies.

⚠️ Experimental — do not use in live or production systems.

Outputs are non-deterministic (LLM-based) and have not been validated against real regulatory decisions. This package is intended for research, prototyping, and internal tooling only. It is not a substitute for qualified compliance review and must not be used to make or support live regulatory or client-facing decisions.

Installation

pip install assert-review

Quick Start

from assert_review import evaluate_note, LLMConfig

config = LLMConfig(
    provider="bedrock",
    model_id="us.amazon.nova-pro-v1:0",
    region="us-east-1",
)

report = evaluate_note(
    note_text="Client meeting note text goes here...",
    framework="fca_suitability_v1",
    llm_config=config,
)

print(report.overall_rating)   # "Compliant" / "Minor Gaps" / "Requires Attention" / "Non-Compliant"
print(report.overall_score)    # 0.0–1.0
print(report.passed)           # True / False

for item in report.items:
    print(f"{item.element_id}: {item.status} (score: {item.score:.2f})")
    if item.suggestions:
        for s in item.suggestions:
            print(f"  → {s}")

evaluate_note()

Full parameter reference:

from assert_review import evaluate_note, LLMConfig, PassPolicy

report = evaluate_note(
    note_text=note,
    framework="fca_suitability_v1",   # built-in ID or path to a custom YAML
    llm_config=config,
    verbose=False,                     # include LLM reasoning in GapItem.notes
    custom_instruction=None,           # additional instruction appended to all element prompts
    pass_policy=None,                  # custom PassPolicy (see below)
    metadata={"note_id": "N-001"},     # arbitrary key/value pairs, passed through to GapReport
)

GapReport

Field              Type            Description
framework_id       str             Framework used for evaluation
framework_version  str             Framework version
passed             bool            Whether the note passes the framework's policy thresholds
overall_score      float           Weighted mean element score, 0.0–1.0
overall_rating     str             Human-readable compliance rating (see below)
items              List[GapItem]   Per-element evaluation results
summary            str             LLM-generated narrative summary of the evaluation
stats              GapReportStats  Counts by status and severity
metadata           dict            Caller-supplied metadata, passed through unchanged
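overall_score is documented as a weighted mean of the per-element scores. A minimal sketch of that aggregation, assuming hypothetical per-element weights (the real weighting is internal to the package):

```python
# Illustrative weighted mean, matching the description of overall_score.
# The weights below are hypothetical; the package defines its own
# weighting internally.
def weighted_mean(scores, weights):
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Three elements scored 1.0, 0.5, 0.0 with hypothetical weights 2, 1, 1:
print(weighted_mean([1.0, 0.5, 0.0], [2, 1, 1]))  # 0.625
```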

Overall rating values:

Rating              Meaning
Compliant           Passed: all elements fully present
Minor Gaps          Passed: some elements are partial, or optional elements are missing
Requires Attention  Failed: high/medium gaps, but no critical blockers
Non-Compliant       Failed: one or more critical required elements missing or below threshold
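The table above reads as a deterministic mapping. A hypothetical reconstruction of that mapping in plain Python (the package's actual decision logic may differ in detail):

```python
# Hypothetical reconstruction of the rating table; illustrative only.
def overall_rating(passed, items):
    """items: list of (status, severity, required) tuples."""
    if passed:
        if all(status == "present" for status, _, _ in items):
            return "Compliant"
        return "Minor Gaps"
    # Failed: a critical required element that is not fully present
    # is treated as a blocker.
    blocker = any(
        severity == "critical" and required and status != "present"
        for status, severity, required in items
    )
    return "Non-Compliant" if blocker else "Requires Attention"

print(overall_rating(True, [("present", "high", True)]))       # Compliant
print(overall_rating(False, [("missing", "critical", True)]))  # Non-Compliant
```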

GapItem

Field        Type           Description
element_id   str            Element identifier from the framework
status       str            "present", "partial", or "missing"
score        float          0.0–1.0 quality score for this element
evidence     Optional[str]  Quote or paraphrase from the note supporting the assessment; None when the element is missing
severity     str            "critical", "high", "medium", or "low"
required     bool           Whether this element is required by the framework
suggestions  List[str]      Actionable remediation suggestions (empty when status == "present")
notes        Optional[str]  LLM reasoning (only populated when verbose=True)

Verbose Output

Pass verbose=True to include per-element LLM reasoning in GapItem.notes:

report = evaluate_note(
    note_text=note,
    framework="fca_suitability_v1",
    llm_config=config,
    verbose=True,
)

for item in report.items:
    if item.notes:
        print(f"{item.element_id}: {item.notes}")

Custom Evaluation Instructions

Append additional instructions to all element prompts for domain-specific guidance:

report = evaluate_note(
    note_text=note,
    framework="fca_suitability_v1",
    llm_config=config,
    custom_instruction="This note relates to a high-net-worth client with complex tax considerations. Apply stricter standards for risk and objectives documentation.",
)

Configurable Pass Policy

Override the default pass/fail thresholds:

from assert_review import PassPolicy

policy = PassPolicy(
    critical_partial_threshold=0.5,      # partial critical element treated as blocker if score < this
    required_pass_threshold=0.6,         # required element must score >= this to pass
    score_correction_missing_cutoff=0.2,
    score_correction_present_min=0.5,
    score_correction_present_floor=0.7,
)

report = evaluate_note(
    note_text=note,
    framework="fca_suitability_v1",
    llm_config=config,
    pass_policy=policy,
)
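The two documented thresholds suggest how a pass/fail gate could work. A rough sketch under that assumption (illustrative only: the package's real algorithm also involves the score_correction_* fields, which are omitted here):

```python
# Approximate sketch of a pass/fail gate using the two documented
# PassPolicy thresholds. Not the package's exact algorithm.
def passes(items, critical_partial_threshold=0.5, required_pass_threshold=0.6):
    """items: list of dicts with keys status, score, severity, required."""
    for item in items:
        # A partial critical element scoring below the threshold blocks a pass.
        if (item["severity"] == "critical"
                and item["status"] == "partial"
                and item["score"] < critical_partial_threshold):
            return False
        # Every required element must meet the pass threshold.
        if item["required"] and item["score"] < required_pass_threshold:
            return False
    return True

ok = {"status": "present", "score": 0.9, "severity": "high", "required": True}
weak = {"status": "partial", "score": 0.3, "severity": "critical", "required": True}
print(passes([ok]))        # True
print(passes([ok, weak]))  # False
```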

Bundled Frameworks

Framework ID        Description
fca_suitability_v1  FCA suitability note requirements under COBS 9.2 / PS13/1 (9 elements)

Custom Frameworks

Pass a path to your own YAML file:

report = evaluate_note(
    note_text=note,
    framework="/path/to/my_framework.yaml",
    llm_config=config,
)

The YAML schema mirrors the built-in frameworks. See packages/assert-review/assert_review/frameworks/fca_suitability_v1.yaml in the source repo for a reference example.
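The exact schema is not reproduced here, but a custom framework file might look roughly like the following. All field names below are guesses inferred from the per-element fields reported in GapItem (element id, severity, required); consult the bundled fca_suitability_v1.yaml for the authoritative schema:

```yaml
# Hypothetical framework file -- field names are inferred, not confirmed.
# Check the bundled fca_suitability_v1.yaml for the real schema.
id: my_framework_v1
version: "1.0"
elements:
  - id: client_objectives
    description: The note records the client's investment objectives.
    severity: critical
    required: true
  - id: fee_disclosure
    description: Fees and charges are disclosed and acknowledged.
    severity: medium
    required: false
```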

CLI

# Evaluate a single note
assert-review evaluate note.txt --framework fca_suitability_v1

# Output as JSON
assert-review evaluate note.txt --framework fca_suitability_v1 --output json

# Batch evaluate from CSV
assert-review batch notes.csv --framework fca_suitability_v1 --note-column text

# Use OpenAI instead of Bedrock
assert-review evaluate note.txt --framework fca_suitability_v1 \
  --provider openai --model gpt-4o --api-key $OPENAI_API_KEY

LLM Configuration

from assert_review import LLMConfig

# AWS Bedrock (uses ~/.aws credentials by default)
config = LLMConfig(
    provider="bedrock",
    model_id="us.amazon.nova-pro-v1:0",
    region="us-east-1",
)

# AWS Bedrock with explicit credentials
config = LLMConfig(
    provider="bedrock",
    model_id="us.amazon.nova-pro-v1:0",
    region="us-east-1",
    api_key="your-aws-access-key-id",
    api_secret="your-aws-secret-access-key",
    aws_session_token="your-session-token",  # optional
)

# OpenAI
config = LLMConfig(
    provider="openai",
    model_id="gpt-4o",
    api_key="your-openai-api-key",
)

Supported Bedrock Model Families

Model Family      Example Model IDs
Amazon Nova       us.amazon.nova-pro-v1:0, amazon.nova-lite-v1:0
Anthropic Claude  anthropic.claude-3-sonnet-20240229-v1:0
Meta Llama        meta.llama3-70b-instruct-v1:0
Mistral AI        mistral.mistral-large-2402-v1:0
Cohere Command    cohere.command-r-plus-v1:0
AI21 Labs         ai21.jamba-1-5-large-v1:0

Proxy Configuration

# Single proxy
config = LLMConfig(
    provider="bedrock", model_id="us.amazon.nova-pro-v1:0", region="us-east-1",
    proxy_url="http://proxy.example.com:8080",
)

# Protocol-specific proxies
config = LLMConfig(
    provider="bedrock", model_id="us.amazon.nova-pro-v1:0", region="us-east-1",
    http_proxy="http://proxy.example.com:8080",
    https_proxy="http://proxy.example.com:8443",
)

# Authenticated proxy
config = LLMConfig(
    provider="bedrock", model_id="us.amazon.nova-pro-v1:0", region="us-east-1",
    proxy_url="http://username:password@proxy.example.com:8080",
)

Standard HTTP_PROXY / HTTPS_PROXY environment variables are also respected.
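Since the standard environment variables are respected, proxy settings can also be applied process-wide before constructing LLMConfig. The proxy hosts below are placeholders:

```python
# Setting the standard proxy environment variables instead of passing
# proxy fields on LLMConfig. Hosts are placeholder values.
import os

os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8443"

print(os.environ["HTTPS_PROXY"])  # http://proxy.example.com:8443
```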

Public API

from assert_review import (
    evaluate_note,     # main entry point
    NoteEvaluator,     # evaluator class for advanced use
    GapReport,         # full evaluation result
    GapItem,           # per-element result
    GapReportStats,    # summary statistics
    PassPolicy,        # pass/fail threshold configuration
    LLMConfig,         # re-exported from assert-core
)

Dependencies

  • assert-core — shared LLM provider layer (AWS Bedrock, OpenAI)
  • PyYAML — framework loading

Migrating from assert_llm_tools

assert-review replaces the compliance note evaluation functionality of assert_llm_tools, which is now deprecated. Swap the imports:

# Before
from assert_llm_tools import evaluate_note, LLMConfig
from assert_llm_tools.metrics.note.models import PassPolicy, GapReport, GapItem

# After
from assert_review import evaluate_note, LLMConfig, PassPolicy, GapReport, GapItem

License

MIT

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

assert_review-0.1.3.tar.gz (34.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

assert_review-0.1.3-py3-none-any.whl (29.7 kB)

Uploaded Python 3

File details

Details for the file assert_review-0.1.3.tar.gz.

File metadata

  • Download URL: assert_review-0.1.3.tar.gz
  • Upload date:
  • Size: 34.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for assert_review-0.1.3.tar.gz
Algorithm Hash digest
SHA256 7b5315a59a3db8bf2e9944fd071270f9d460da1d505bf442b41934673a5d1f50
MD5 56ecb4ca67557e46136fa4da475e0001
BLAKE2b-256 88c7186037de040c112f03b3945659176beb977cc9790cd38945d949e96787f2

See more details on using hashes here.

Provenance

The following attestation bundles were made for assert_review-0.1.3.tar.gz:

Publisher: publish-assert-review.yml on charliedouglas/assert_llm_tools

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file assert_review-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: assert_review-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 29.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for assert_review-0.1.3-py3-none-any.whl
Algorithm Hash digest
SHA256 912e5fae9e7cee211c36c7a03274ae5c75ef377acc34f9b41cdb96de601e668a
MD5 950a6a57357b8cdfd24da5c9ad845eb4
BLAKE2b-256 f4ded758d8776ba53041b2690b95565463a487416ac5fc7bb896a7ce3a723e1f

See more details on using hashes here.

Provenance

The following attestation bundles were made for assert_review-0.1.3-py3-none-any.whl:

Publisher: publish-assert-review.yml on charliedouglas/assert_llm_tools

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
