
Factor attribution and marketplace analytics CLI


FactorLens

FactorLens is an offline-first factor attribution assistant written in Rust.

It computes statistical factors (PCA) from price history, writes analysis artifacts, and supports explainability through a pluggable LLM backend interface (local and Bedrock).
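Conceptually, the attribution step decomposes each period's portfolio return into per-factor contributions plus an unexplained residual. A minimal Python sketch of that identity (illustrative only, not the Rust implementation; all names and numbers are made up):

```python
# Illustrative sketch of the attribution identity:
#   portfolio_return = sum_k(exposure_k * factor_return_k) + residual
def attribute(portfolio_return, exposures, factor_returns):
    """Split one period's portfolio return into per-factor
    contributions plus the residual left unexplained."""
    contribs = {name: exposures[name] * factor_returns[name]
                for name in exposures}
    residual = portfolio_return - sum(contribs.values())
    return contribs, residual

contribs, residual = attribute(
    -0.021,                                   # realized portfolio return
    {"factor_1": 1.1, "factor_2": -0.4},      # factor exposures (betas)
    {"factor_1": -0.015, "factor_2": 0.010},  # factor returns this period
)
# factor_1 contributes -0.0165, factor_2 contributes -0.0040,
# leaving a residual of about -0.0005
```

Large residuals from this decomposition are what the residual outlier detection flags.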

MVP Features

  • Price ingestion from CSV
  • PCA factor model fitting
  • Portfolio factor attribution
  • Residual outlier detection
  • Artifact outputs (JSON + CSV)
  • Markdown report generation
  • Explain command using a local llama.cpp backend (llama-cli) with a Bedrock-ready backend contract

Workspace Layout

  • crates/factor_core: Returns, PCA, attribution math
  • crates/factor_io: CSV IO and artifact writing
  • crates/factor_cli: CLI binary (factorlens)
  • crates/llm_local: LLMClient trait + local/bedrock backends
  • crates/report: Markdown report generation

Build Instructions

Build Rust CLI (local)

cargo build -p factor_cli

Release binary:

cargo build -p factor_cli --release

Build Python wheel (local)

python -m pip install --upgrade maturin
maturin build --release --manifest-path crates/factor_cli/Cargo.toml

Install built wheel:

python -m pip install target/wheels/factorlens-*.whl

Build + publish wheels via GitHub Actions (recommended for cross-platform)

# tag-based release build/publish
git tag v0.1.3
git push origin v0.1.3

# or manual workflow trigger
gh workflow run release.yml -f publish_to_pypi=true -f ref=main

Input Formats

prices.csv

  • date (YYYY-MM-DD)
  • ticker
  • close

portfolio.csv (optional)

  • ticker
  • weight

holdings.csv (optional alternative to portfolio.csv)

  • ticker
  • either market_value or both shares and price
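When holdings.csv is used, per-ticker weights have to be derived from market values. A hedged sketch of the likely derivation (normalize each position's market value, falling back to shares × price; the actual Rust logic may differ):

```python
def derive_weights(holdings):
    """Turn holdings rows into normalized portfolio weights.
    Each row needs either 'market_value' or both 'shares' and 'price'."""
    values = {}
    for row in holdings:
        mv = row.get("market_value")
        if mv is None:
            mv = row["shares"] * row["price"]  # fall back to shares x price
        values[row["ticker"]] = mv
    total = sum(values.values())
    return {ticker: v / total for ticker, v in values.items()}

weights = derive_weights([
    {"ticker": "AAA", "market_value": 600.0},
    {"ticker": "BBB", "shares": 10, "price": 40.0},
])
# weights == {"AAA": 0.6, "BBB": 0.4}
```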

factors.csv (for known-factor regression mode)

  • date (YYYY-MM-DD)
  • one or more numeric factor columns (for example: MKT, SMB, HML)
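For concreteness, minimal versions of the two most common inputs might look like this (tickers and values are made up):

```
# prices.csv
date,ticker,close
2024-01-02,AAA,100.0
2024-01-02,BBB,50.0
2024-01-03,AAA,101.2
2024-01-03,BBB,49.5

# portfolio.csv
ticker,weight
AAA,0.6
BBB,0.4
```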

Quick Start

cargo run -p factor_cli -- factors fit \
  --prices data/prices.csv \
  --k 3 \
  --out artifacts/ \
  --portfolio data/portfolio.csv

# safer residual analysis: auto-pick k (< number of assets)
cargo run -p factor_cli -- factors fit \
  --prices data/prices.csv \
  --k-auto \
  --out artifacts/ \
  --portfolio data/portfolio.csv

# alternative: derive weights automatically from holdings
cargo run -p factor_cli -- factors fit \
  --prices data/prices.csv \
  --k 3 \
  --out artifacts/ \
  --holdings data/holdings.csv

cargo run -p factor_cli -- report \
  --artifacts artifacts/ \
  --format markdown \
  --out artifacts/report.md

# known-factor regression mode (MKT/SMB/HML-style)
cargo run -p factor_cli -- factors regress \
  --prices data/prices.csv \
  --factors data/factors.csv \
  --out artifacts/ \
  --portfolio data/portfolio.csv

cargo run -p factor_cli -- explain \
  --backend local \
  --model models/llama.gguf \
  --artifacts artifacts/ \
  --question "What drove the largest drawdown?"

Notes

  • explain --backend local expects llama-cli on your PATH.
  • explain --backend bedrock uses AWS Bedrock via AWS CLI (aws bedrock-runtime converse).
  • This project is designed for explainability of computed analytics, not market prediction.

Python (pip) Package

FactorLens is published as platform-specific binary wheels via maturin.

Build/install locally:

python -m pip install --upgrade maturin
maturin build --release --manifest-path crates/factor_cli/Cargo.toml
python -m pip install target/wheels/factorlens-*.whl

Run:

factorlens factors fit --prices data/prices.csv --k 3 --out artifacts/

Explainability Notes

  • factors fit excludes weekend dates by default.
  • Pass --include-weekends if your dataset intentionally includes weekend trading.
  • explain supports focused analysis with --focus-factors.
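The weekend filter amounts to dropping rows whose date falls on a Saturday or Sunday. A small Python equivalent (illustrative, not the Rust source):

```python
from datetime import date

def is_weekend(d: date) -> bool:
    return d.weekday() >= 5  # Monday=0 ... Saturday=5, Sunday=6

dates = [date(2024, 1, 5), date(2024, 1, 6),   # Fri, Sat
         date(2024, 1, 7), date(2024, 1, 8)]   # Sun, Mon
trading_days = [d for d in dates if not is_weekend(d)]
# keeps only Friday 2024-01-05 and Monday 2024-01-08
```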

Examples:

cargo run -p factor_cli -- factors fit --prices data/prices.csv --k 3 --out artifacts/ --portfolio data/portfolio.csv
cargo run -p factor_cli -- factors fit --prices data/prices.csv --k 3 --out artifacts/ --portfolio data/portfolio.csv --include-weekends

cargo run -p factor_cli -- explain --backend local --model models/llama_instruct.gguf --artifacts artifacts/ --question "What drove the largest drawdown?" --focus-factors factor_1,factor_2

Custom Factor Names

By default, FactorLens auto-generates factor names from your dataset loadings (top positive and negative loading tickers per factor), so it works on any dataset.
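A plausible sketch of such a naming scheme (the `auto_name` helper is hypothetical, not the actual implementation): rank tickers by loading and join the extremes into a label.

```python
def auto_name(loadings, n=2):
    """Label a factor by its top positive and top negative loading tickers."""
    ranked = sorted(loadings.items(), key=lambda kv: kv[1], reverse=True)
    positive = [t for t, v in ranked[:n] if v > 0]
    negative = [t for t, v in ranked[-n:] if v < 0]
    return "+".join(positive) + " vs " + "+".join(negative)

name = auto_name({"AAA": 0.8, "BBB": 0.5, "CCC": -0.3, "DDD": -0.7})
# "AAA+BBB vs DDD+..." style label built from the extreme loadings
```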

You can still override labels with a CSV or TSV file via --factor-labels.

Example data/factor_labels.csv:

factor,label
factor_1_contrib,Broad Market Beta
factor_2_contrib,Growth vs Value Rotation
factor_3_contrib,Idiosyncratic Spread

Use in explain:

cargo run -p factor_cli -- explain --backend local --model models/llama_instruct.gguf --artifacts artifacts/ --question "What drove the largest drawdown?" --factor-labels data/factor_labels.csv

Notes:

  • Factor keys may be factor_1, factor_1_contrib, or just 1.
  • # comment lines are ignored.

Suggested Questions

  • What was the worst modeled drawdown day, and what factors drove it?
  • On the worst day, what percentage came from each factor?
  • Which factor is my largest average downside contributor over the full sample?
  • Which dates had the biggest positive factor-driven gains?
  • Which 5 days had the largest residuals (moves not explained by factors)?
  • Did my risk concentration increase in the last month?
  • Is my portfolio dominated by one factor or diversified across factors?
  • How stable are exposures across time windows?
  • Which factor changed direction most often?
  • Which factor contributed most to volatility, not just returns?
  • If I remove factor_1, how much modeled downside is left?
  • Compare drawdown drivers with and without weekends included.
  • Using only factor_1,factor_2, what drove the drawdown?
  • Which assets are most aligned with factor_1 loadings?
  • Which assets increased my exposure to downside factors most?

Generic Table Analysis

Analyze any CSV table by grouping columns and numeric metrics you choose:

cargo run -p factor_cli -- analyze \
  --input data/your_file.csv \
  --group-by region,product_line,channel \
  --out artifacts/analysis.md

# profile-based quick starts
cargo run -p factor_cli -- analyze \
  --input data/your_file.csv \
  --profile exec \
  --out artifacts/analysis_exec.md

cargo run -p factor_cli -- analyze \
  --input data/your_file.csv \
  --profile segment \
  --out artifacts/analysis_segment.md

cargo run -p factor_cli -- analyze \
  --input data/your_file.csv \
  --profile supplier \
  --out artifacts/analysis_supplier.md

# custom profile config (recommended for private/domain fields)
cargo run -p factor_cli -- analyze \
  --input data/your_file.csv \
  --profile exec_custom \
  --profile-config profiles/profiles.example.toml \
  --out artifacts/analysis.md

# filtered + ranked view
cargo run -p factor_cli -- analyze \
  --input data/your_file.csv \
  --where region=US \
  --rank-by revenue_usd \
  --top 10 \
  --min-records 20 \
  --out artifacts/analysis_filtered_ranked.md

Auto-detect useful grouping columns (if --group-by is omitted):

cargo run -p factor_cli -- analyze \
  --input data/your_file.csv \
  --out artifacts/analysis_auto.md

Or analyze directly from Postgres:

# option 1: inline query
factorlens analyze \
  --postgres-url "$DATABASE_URL" \
  --query "SELECT region, channel, revenue_usd, cost_usd FROM analytics.sales" \
  --profile exec_custom \
  --profile-config profiles/profiles.example.toml \
  --out artifacts/analysis.md

# option 2: query file
factorlens analyze \
  --postgres-url "$DATABASE_URL" \
  --query-file sql/sales_analysis.sql \
  --profile exec_custom \
  --profile-config profiles/profiles.example.toml \
  --out artifacts/analysis.md

Notes:

  • Outputs both markdown and JSON (<out>.json).
  • If --metrics is omitted, numeric metrics are auto-detected from the input file.
  • --profile built-ins (exec, segment, supplier) are generic (no hardcoded domain columns).
  • Use --profile-config <path.toml> for your own private, file-specific profile mappings.
  • Input source is exclusive: use either --input <csv> or --postgres-url + (--query or --query-file).
  • --postgres-url can be omitted if DATABASE_URL env var is set.
  • Recommended layout: commit profiles/profiles.example.toml, keep private variants as profiles/*.local.toml or profiles/*.private.toml (gitignored).
  • --where accepts comma-separated column=value filters (AND semantics).
  • --rank-by ranks groups by a chosen metric (default ranking is by count).
  • --top controls how many groups are listed in the report.
  • --min-records drops tiny segments before ranking (useful to avoid one-record outliers).
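Taken together, the filter/rank flags behave like this pure-Python sketch (an interpretation of the documented semantics, not the actual implementation):

```python
from collections import defaultdict

def analyze(rows, where, group_by, rank_by, top, min_records):
    """Apply AND-filters, group rows, drop small groups,
    then rank groups by a summed metric."""
    kept = [r for r in rows if all(r.get(k) == v for k, v in where.items())]
    groups = defaultdict(list)
    for r in kept:
        groups[tuple(r[c] for c in group_by)].append(r)
    sized = {g: rs for g, rs in groups.items() if len(rs) >= min_records}
    ranked = sorted(sized.items(),
                    key=lambda kv: sum(r[rank_by] for r in kv[1]),
                    reverse=True)
    return ranked[:top]

rows = [
    {"region": "US", "channel": "web",   "revenue_usd": 100},
    {"region": "US", "channel": "web",   "revenue_usd": 50},
    {"region": "US", "channel": "store", "revenue_usd": 500},
    {"region": "EU", "channel": "web",   "revenue_usd": 900},
]
result = analyze(rows, where={"region": "US"}, group_by=["channel"],
                 rank_by="revenue_usd", top=10, min_records=2)
# the one-row ("store",) group is dropped by min_records; only ("web",) survives
```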

Example --profile-config file:

[profiles.exec_custom]
group_by = ["region", "channel"]
metrics = ["revenue_usd"]
rank_by = "revenue_usd"
top = 12
min_records = 20
auto_group_k = 3

PyPI Publishing (Rustream-Style)

FactorLens uses the same publishing pattern as rustream: maturin + GitHub Actions to build platform wheels (Linux/macOS/Windows) and publish to PyPI.

Release from macOS via CLI

  1. Bump version in pyproject.toml.
  2. Commit and push to main.
  3. Create and push a release tag:
git tag v0.1.3
git push origin v0.1.3

This triggers .github/workflows/release.yml, which:

  • builds platform-specific wheels via maturin
  • publishes to PyPI using PYPI_API_TOKEN
  • attaches wheels to GitHub Release

To manually trigger from CLI without a tag:

gh workflow run release.yml -f publish_to_pypi=true -f ref=main
gh run list --workflow release.yml
gh run view <run-id> --log

Jupyter Usage

Install from PyPI in Jupyter:

pip install --upgrade factorlens
factorlens --help

Local model:

factorlens explain \
  --backend local \
  --model /path/to/model.gguf \
  --artifacts /path/to/artifacts \
  --question "What drove the largest drawdown?"

Bedrock:

export AWS_REGION=us-east-1
factorlens explain \
  --backend bedrock \
  --model anthropic.claude-3-5-sonnet-20240620-v1:0 \
  --artifacts /path/to/artifacts \
  --question "What drove the largest drawdown?"



Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


  • factorlens-0.1.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.4 MB) - Python 3, manylinux: glibc 2.17+, x86-64
  • factorlens-0.1.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (1.4 MB) - Python 3, manylinux: glibc 2.17+, ARM64
  • factorlens-0.1.5-py3-none-macosx_11_0_arm64.whl (1.3 MB) - Python 3, macOS 11.0+, ARM64
  • factorlens-0.1.5-py3-none-macosx_10_12_x86_64.whl (1.4 MB) - Python 3, macOS 10.12+, x86-64

File details

Details for the file factorlens-0.1.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

  • SHA256: ee1a831576ec2c90508f1f93be3bee46718cde600c45a6fce08dacfa1ee4ac3b
  • MD5: e21b1bae5d49831cbb22f22253e9004f
  • BLAKE2b-256: 3d254edd34531ba65d7287cc8d9427d599d52d24c9c1741f7eb568ba36f1219a

Provenance

The following attestation bundles were made for factorlens-0.1.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:

Publisher: release.yml on kraftaa/factorlens

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file factorlens-0.1.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.

File hashes

  • SHA256: f86a95ff3f377a451164f17bbe028131dfa0408792a94230b60a4212f2118f8c
  • MD5: 09f348571f1af30d92e4944a3867599e
  • BLAKE2b-256: cffde7ef62a2e953ceda83f9d5970581cb42cc1aadafc9e773804c43dad247dd

Provenance

The following attestation bundles were made for factorlens-0.1.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl:

Publisher: release.yml on kraftaa/factorlens

File details

Details for the file factorlens-0.1.5-py3-none-macosx_11_0_arm64.whl.

File hashes

  • SHA256: ae2785beac0d7f2e1d2b459a76c3728e3e6d615d0a76b4457f2989f95b8949be
  • MD5: 7a0e658ffb406c35d8a3c6c6b1d5ff2c
  • BLAKE2b-256: 2f764bcae6d0d0c227e217ee2ed940f6658be2dde9abe90d99d758420adf24e9

Provenance

The following attestation bundles were made for factorlens-0.1.5-py3-none-macosx_11_0_arm64.whl:

Publisher: release.yml on kraftaa/factorlens

File details

Details for the file factorlens-0.1.5-py3-none-macosx_10_12_x86_64.whl.

File hashes

  • SHA256: 00b2e27815c8d0a41dc979a8981d3166b6461684bb75cbfa4ae176e98392e767
  • MD5: 2efd17e2ec899ac52f9190102fc58ad2
  • BLAKE2b-256: e5799b591b14752cf929e4efa6a451cd9e3de95ada1b0401b50980e2398345ca

Provenance

The following attestation bundles were made for factorlens-0.1.5-py3-none-macosx_10_12_x86_64.whl:

Publisher: release.yml on kraftaa/factorlens
