AI-powered quant research knowledge base & brainstorm agent
Quant_LLM_Wiki: A Karpathy-shaped wiki-first knowledge base for quant research
Features | Architecture | Quick Start | Agent Usage | Configuration | Tests | Contributing
Quant_LLM_Wiki turns WeChat articles, web pages, and research PDFs into an LLM-built Markdown knowledge base for quantitative investment research. It follows Andrej Karpathy's LLM-built KB method: a raw/ ingest layer, an LLM-compiled wiki/ of concept articles, and a schema/ that the LLM and tools both follow. Vector RAG is preserved as a fallback substrate, not the primary retrieval path. Three durable verbs — ingest, query, lint — drive everything. A built-in Rethink Layer scores novelty and quality of brainstormed ideas before output.
The goal is research inspiration and cross-document idea combination, not producing trade-ready strategies.
Features
- Multi-source Ingestion — Ingest from single URLs, batch URL lists, or local HTML files; warns on re-ingesting previously rejected sources
- LLM Enrichment — Automatically extract structured fields: idea blocks, transfer targets, combination hooks, failure modes, and more. Concurrent processing with configurable parallelism
- Hybrid RAG Retrieval — Keyword + vector + RRF fusion retrieval across your knowledge base
- Brainstorm Mode — Generate new strategy ideas by combining insights from multiple articles
- Rethink Layer — Post-generation validation that checks idea novelty (via vector similarity) and scores quality (traceability, coherence, actionability)
- Article Quality Control — Mark articles as `rejected` to remove them from the KB and prevent re-ingestion; the review tool shows only enriched articles
- Interactive Agent — LangGraph ReAct agent with 12 tools for full pipeline management, with real-time progress streaming
- Provider-Agnostic — Works with any OpenAI-compatible LLM API (Zhipu GLM, DeepSeek, Moonshot, Qwen, OpenAI, Ollama, etc.)
- Local-First — All data stored locally as Markdown files + ChromaDB vectors
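The hybrid retrieval feature above fuses keyword and vector rankings with Reciprocal Rank Fusion (RRF). A minimal, self-contained sketch of RRF scoring; the `rrf_fuse` helper and the document IDs are illustrative, not taken from the codebase:

```python
# Reciprocal Rank Fusion: merge several ranked lists into one.
# k=60 is the damping constant from the original RRF paper.
def rrf_fuse(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc_a", "doc_b", "doc_c"]   # toy keyword ranking
vector_hits = ["doc_b", "doc_d", "doc_a"]    # toy vector ranking
fused = rrf_fuse([keyword_hits, vector_hits])
```

Documents ranked well in both lists (here `doc_b`) float to the top even when neither ranker put them first.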
Architecture
The system has three durable layers and three operational verbs. Vector RAG is preserved as supporting substrate, not the primary retrieval path.
Layout
raw/ — incoming source articles (one dir per article: article.md + source.json + images/)
wiki/ — LLM-built Markdown memory (the primary query surface)
├── INDEX.md — auto-maintained table of contents
├── state.json — content hashes, concept scores, retrieval hints
├── lint_report.json — last health audit
├── concepts/<slug>.md
├── sources/<basename>.md
├── queries/<date>_<slug>_<mode>.md — query → wiki feedback log
└── maintenance_report.md — last `qlw lint --maintain` output
schema/ — rules the LLM and tools follow:
concept-schema.md, source-schema.md, wiki-structure.md, operations.md
vector_store/ — ChromaDB substrate, used as fallback only
Articles live flat under raw/. The frontmatter status field (raw, reviewed, high_value, rejected) is the source of truth — there is no directory-as-status convention.
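Because the frontmatter `status` field is the source of truth, tools only ever parse it and never inspect directory names. A hypothetical minimal parser; the `read_status` helper is illustrative, only the field name and enum values come from this README:

```python
import re

VALID_STATUSES = {"raw", "reviewed", "high_value", "rejected"}

def read_status(article_md: str) -> str:
    """Read `status` from YAML frontmatter; default to 'raw'."""
    m = re.search(r"^---\n(.*?)\n---", article_md, re.DOTALL)
    if not m:
        return "raw"  # no frontmatter: treat as freshly ingested
    field = re.search(r"^status:\s*(\S+)", m.group(1), re.MULTILINE)
    status = field.group(1) if field else "raw"
    return status if status in VALID_STATUSES else "raw"

doc = "---\ntitle: Momentum factors\nstatus: high_value\n---\n# Body\n"
status = read_status(doc)
```

Unknown or missing values fall back to `raw` rather than failing, matching the document's graceful-degradation principle.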
Three operations
┌──> wiki/concepts/<slug>.md
├──> wiki/sources/<basename>.md
WeChat URL / Web URL / PDF / HTML ├──> wiki/INDEX.md
| ├──> wiki/state.json
v │ (hashes, scores, freshness, retrieval hints)
[qlw ingest] ──> raw/<dir>/article.md + source.json
| ▲
v │
[qlw compile] ── schema/-injected LLM ─────┘
(auto after ingest)
|
v
[qlw embed] ── ChromaDB substrate over raw/ + wiki/
(auto after compile)
|
v
[qlw ask / qlw brainstorm] ── wiki-first retrieval (INDEX → matched concepts → source summaries)
| RAG runs ONLY when wiki has no relevant concept or audit reports degradation
| (brainstorm runs Rethink Layer post-generation)
|
v
┌─ outputs/brainstorms/<date>_<slug>_<mode>.md
└─ wiki/queries/<date>_<slug>_<mode>.md ── append_query_log:
cited concepts get importance bump
+ retrieval_hints append in state.json
[qlw lint] ── schema-compliance audit (frontmatter, sections, source anchors)
[qlw lint --fix] ── LLM auto-repair of schema-noncompliant concepts
[qlw lint --maintain] ── gap analysis: unmapped source clusters, under-supported concepts,
stale concepts → suggested ingestion queries / new brainstorm prompts
(writes wiki/maintenance_report.md)
[qlw lint --maintain --apply] ── apply query-derived state updates idempotently
Wiki-first retrieval (load-bearing invariant)
brainstorm_from_kb.retrieve_blocks gates on _should_use_wiki_memory(notes) and _wiki_is_healthy_for_query(kb_root). There is no command == "brainstorm" check — both ask and brainstorm pull kb_layer=wiki_concept blocks first (Chroma-filtered → state-score reranked → lexical fallback), then fill remaining slots with complementary article chunks excluding sources already cited by the surfaced concepts. Pure-vector retrieval is the fallback, not the default.
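The complementary-fill step described above can be sketched with hypothetical data shapes; only the selection order and the exclude-already-cited-sources rule mirror the text, and `retrieve_blocks` here is a simplification, not the real function:

```python
# Wiki concept blocks first, then article chunks, skipping sources
# that the surfaced concepts already cite.
def retrieve_blocks(concepts, article_chunks, top_k=5):
    picked = concepts[:top_k]
    cited = {src for c in picked for src in c["cites"]}
    for chunk in article_chunks:
        if len(picked) >= top_k:
            break
        if chunk["source"] not in cited:
            picked.append(chunk)
    return picked

concepts = [{"id": "momentum", "cites": {"src_a"}}]
chunks = [{"id": "c1", "source": "src_a"},   # already cited -> skipped
          {"id": "c2", "source": "src_b"}]   # complementary -> filled in
result = retrieve_blocks(concepts, chunks, top_k=2)
```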
Query → wiki feedback
Query feedback filing is planned after v0.3.0; `qlw ask` and `qlw brainstorm` currently save normal outputs only.
Every qlw ask/qlw brainstorm output is written to outputs/brainstorms/. qlw lint --maintain can distill query logs into proposed concept-page improvements once feedback filing is enabled. This realizes Karpathy's "my own explorations and queries always 'add up' in the knowledge base."
Schema is enforced, not advisory
schema/concept-schema.md and schema/source-schema.md define required frontmatter fields, valid enum values, and required section headers. wiki_lint checks these on every run (severity: warning), and qlw lint --fix runs an LLM auto-repair pass via recompile_concept for schema-noncompliant concepts. The schema text is also injected into compile-time prompts so the LLM is told the source-anchor invariant.
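The source-anchor invariant lends itself to a simple mechanical check. A hypothetical lint pass; the helper name and regex are illustrative, while the rule that every bullet ends with `[<source_basename>]` comes from the schema description above:

```python
import re

# A bullet is anchored if it ends with a [<source_basename>] tag.
ANCHOR = re.compile(r"\[[\w\-.]+\]\s*$")

def unanchored_bullets(concept_md: str) -> list:
    bullets = [line for line in concept_md.splitlines()
               if line.lstrip().startswith("- ")]
    return [b for b in bullets if not ANCHOR.search(b)]

page = ("- Momentum decays after 12 months [2024_smith_momentum]\n"
        "- Unverified claim with no anchor\n")
bad = unanchored_bullets(page)
```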
Rethink Layer
A post-generation validation layer that runs automatically in brainstorm mode:
- Idea Parsing — Extracts structured ideas from LLM output (EN/CN formats)
- Novelty Check — Embeds each idea and queries ChromaDB for similar existing articles (threshold: 0.75)
- Quality Scoring — Traceability (heuristic) + Coherence & Actionability (LLM-as-judge)
- Rethink Report — Appended to output with per-idea scores and reasoning
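The novelty check can be sketched with plain cosine similarity, assuming embeddings are float vectors; the 0.75 threshold comes from the list above, while the helper names and toy vectors are illustrative:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_novel(idea_vec, kb_vecs, threshold=0.75):
    # Novel only if no existing article is too similar.
    return all(cosine(idea_vec, v) < threshold for v in kb_vecs)

kb = [[1.0, 0.0], [0.7, 0.7]]            # toy article embeddings
novel = is_novel([0.0, 1.0], kb)         # at most ~0.707 similarity
stale = is_novel([1.0, 0.0], kb)         # identical to an existing vector
```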
Agent Layer
The LangGraph ReAct agent provides 12 tools:
| Tool | Description |
|---|---|
| `ingest_article` | Ingest from URL (auto: WeChat / web / PDF), batch URLs, HTML file, PDF file, PDF URL |
| `enrich_articles` | LLM-powered structured enrichment (concurrent, with limit support) |
| `list_articles` | List articles by status (raw / reviewed / high_value); all live flat under raw/ |
| `review_articles` | Show enriched articles ready for review |
| `set_article_status` | Update article status field in frontmatter |
| `embed_knowledge` | Build/update ChromaDB vector index over raw/ + wiki/ |
| `query_knowledge_base` | Wiki-first Q&A or brainstorm; both modes pull stable wiki concepts before vectors |
| `compile_wiki` | Compile/update wiki (incremental or rebuild); auto-runs lint |
| `audit_wiki` | Wiki health report: schema violations, stale concepts, unsupported claims, duplicates |
| `list_concepts` | List wiki concepts by status (stable / proposed / deprecated) |
| `set_concept_status` | Override: approve/deprecate/delete a concept (escape hatch) |
| `read_wiki` | Read INDEX.md / a concept article / a source summary |
File Structure
Quant_LLM_Wiki/
├── pyproject.toml # Package metadata + `qlw` console_script entry point
├── requirements.txt # Python dependencies (kept for non-pip-install users)
├── llm_config.example.env # Example LLM provider config
├── README.md
├── LICENSE
├── quant_llm_wiki/ # Installable Python package (all functionality here)
│ ├── __init__.py
│ ├── cli.py # `qlw` dispatcher (9 subcommands)
│ ├── shared.py # Shared utilities, LLM HTTP client, frontmatter
│ ├── paths.py # KB root resolution (resolve_kb_root)
│ ├── enrich.py # LLM enrichment pipeline
│ ├── embed.py # ChromaDB substrate over raw/ + wiki/
│ ├── sync.py # Article status-based file sync
│ ├── ingest/
│ │ ├── source.py # Unified ingest dispatcher (WeChat / web / PDF / HTML)
│ │ ├── wechat.py # WeChat-specific ingest
│ │ ├── _wechat.py # WeChat HTML extraction internals
│ │ ├── web.py # Generic web extraction (trafilatura)
│ │ ├── pdf.py # PDF extraction (pypdf + pdfplumber)
│ │ └── code_math.py # Code/math preservation utilities
│ ├── wiki/
│ │ ├── compile.py # compile_wiki orchestrator (schema-injected, soft-error)
│ │ ├── compile_llm.py # assign_concepts + recompile_concept LLM wrappers
│ │ ├── index.py # INDEX.md generator
│ │ ├── lint.py # Schema enforcement + health checks + auto_fix
│ │ ├── maintain.py # append_query_log + run_maintenance (Steps 6 + 7)
│ │ ├── schemas.py # ConceptArticle / SourceSummary dataclasses
│ │ ├── seed.py # Seed taxonomy + bootstrap
│ │ └── state.py # Machine state manifest + scoring (freshness decay etc.)
│ ├── query/
│ │ ├── brainstorm.py # query (ask | brainstorm) — wiki-first retrieval
│ │ └── rethink.py # Post-generation novelty + quality validation
│ └── agent/ # LangGraph agent layer
│ ├── cli.py # Interactive ReAct agent CLI
│ ├── graph.py
│ ├── prompts.py
│ └── tools.py
├── raw/ # Incoming source articles, flat (one dir per article)
├── wiki/ # LLM-built Markdown memory
│ ├── INDEX.md # auto-maintained TOC
│ ├── state.json # content hashes, concept scores, retrieval hints
│ ├── lint_report.json # last health audit
│ ├── maintenance_report.md # last `qlw lint --maintain` output
│ ├── concepts/ # one .md per concept
│ ├── sources/ # one .md per raw article (mechanically derived)
│ └── queries/ # one .md per filed `qlw ask`/`qlw brainstorm` (Step 7 feedback log)
├── schema/ # Rules followed by LLM and tools
│ ├── concept-schema.md
│ ├── source-schema.md
│ ├── wiki-structure.md
│ └── operations.md
├── templates/ # Article markdown templates (research-note / strategy-note)
├── tests/ # unittest suite
│ ├── robustness/ # Edge-case tests (Layer 1–4)
│ ├── test_qlw_cli.py # qlw CLI dispatch
│ ├── test_query_wiki_first_ask.py
│ ├── test_wiki_lint_schema.py # Schema enforcement + auto_fix
│ ├── test_wiki_maintain.py # Query feedback + maintenance
│ └── test_*.py # Per-module coverage
└── docs/ # Design specs and usage guides
Repo / package / command names. Repo: `Quant_LLM_Wiki`. Package: `quant_llm_wiki`. Console command: `qlw` (installed via `pipx install quant-llm-wiki` or `pip install -e .`). All 9 subcommands — `ingest`, `enrich`, `embed`, `sync`, `ask`, `brainstorm`, `agent`, `lint`, `compile` — are unified under `qlw`. pipx-installed users have full functionality without cloning the repo.
Command Renaming (vs. previous versions)
The standalone scripts at the repo root have moved into quant_llm_wiki/ and are dispatched through a single `qlw` CLI. The unified entry points are:

- `qlw ingest --url X`
- `qlw enrich --limit 10`
- `qlw embed`
- `qlw sync`
- `qlw ask --query Q`
- `qlw brainstorm --query Q`
- `qlw agent`
Install with pip install -e . to put qlw on PATH; otherwise use python -m quant_llm_wiki.cli <subcmd>.
Quick Start
1. Install
The recommended way to install is via pipx, which gives you the qlw command globally without polluting your system Python and without requiring you to activate a venv:
# From PyPI (once published)
pipx install quant-llm-wiki
# Or directly from GitHub (always tracks main)
pipx install git+https://github.com/jackwu321/Quant_LLM_Wiki.git
After install, qlw is on your PATH from any shell. Upgrade later with pipx upgrade quant-llm-wiki.
Alternative: clone for development
If you want to hack on the code, clone and install in editable mode:
git clone https://github.com/jackwu321/Quant_LLM_Wiki.git
cd Quant_LLM_Wiki
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
2. Configure LLM Provider
Copy the example config and fill in your API key:
cp llm_config.example.env .env
# Edit .env with your API key and provider settings
Or set environment variables directly:
export LLM_API_KEY="your-api-key"
export LLM_BASE_URL="https://open.bigmodel.cn/api/paas/v4" # or any OpenAI-compatible endpoint
export LLM_MODEL="glm-4.7" # or gpt-4, deepseek-chat, etc.
See llm_config.example.env for provider-specific examples (DeepSeek, Moonshot, Qwen, OpenAI, Ollama).
3. Ingest
# Single URL (WeChat / web)
qlw ingest --url "https://mp.weixin.qq.com/s/..."
# Saved WeChat HTML
qlw ingest --html-file saved.html
# Batch from a list (one URL per line)
qlw ingest --url-list urls.txt
Each URL has a hard 120 s ceiling; on hit, ingest prints TIMEOUT <url>: exceeded 120s and (in batch mode) continues with the next URL. Override via INGEST_URL_TIMEOUT=<seconds>. Note: a timed-out URL may leave a partial raw/<date>_*/ directory behind (same as ordinary FAILED cases).
4. Enrich + Embed
qlw enrich # all raw articles (concurrent)
qlw enrich --limit 10 # first 10 only
qlw enrich --concurrency 5 # 5 parallel LLM requests
qlw embed # build/update ChromaDB vector index
Each article enrichment has a hard 360 s ceiling; on hit, the article is recorded as failed: timeout: exceeded Ns and the batch continues. Override via LLM_ARTICLE_TIMEOUT=<seconds>. Start / done / TIMEOUT / [llm-retry] events are printed to stderr (separate from the per-completion [i/N] ... ok|failed lines on stdout) so you can see what's happening even when the LLM API is slow or backing off.
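The concurrency-plus-hard-ceiling pattern can be sketched with `concurrent.futures`; `enrich_one` below is a stand-in for one LLM call, and all names and timings are illustrative (the real pipeline uses a 360 s ceiling):

```python
import concurrent.futures
import time

def enrich_one(article):
    time.sleep(article["cost"])  # simulate a slow LLM call
    return f"{article['id']}: ok"

def enrich_all(articles, concurrency=3, ceiling_s=0.2):
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futs = {pool.submit(enrich_one, a): a["id"] for a in articles}
        for fut, aid in futs.items():
            try:
                # Hard per-article ceiling: give up waiting, record failure.
                results[aid] = fut.result(timeout=ceiling_s)
            except concurrent.futures.TimeoutError:
                results[aid] = f"failed: timeout: exceeded {ceiling_s}s"
    return results

out = enrich_all([{"id": "a1", "cost": 0.0},
                  {"id": "a2", "cost": 1.0}], ceiling_s=0.2)
```

Note that `Future.result(timeout=...)` stops waiting but does not kill the worker thread; a real implementation also has to decide what to do with the still-running call.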
5. Query (wiki-first)
# Factual Q&A — wiki concepts first, RAG fallback only
qlw ask --query "What momentum factors are discussed?"
# Brainstorm new ideas (with Rethink Layer + query-feedback)
qlw brainstorm --query "Combine momentum and volatility timing for ETF rotation"
# Show retrieved context only (dry run)
qlw brainstorm --query "..." --dry-run
Wiki maintenance commands
v0.3.0 migration note. v0.3.0 unified `kb.py` into `qlw`. If you previously ran `python3 kb.py <cmd>`, run `qlw <cmd>` instead. The `kb query` mode has been split into `qlw ask` and `qlw brainstorm`. If your repo still uses the legacy `articles/raw/` layout, pass `qlw enrich --articles-root articles/raw` (or move articles into `raw/`) — v0.3.0 unified all subcommands on `<kb-root>/raw/`.
All wiki maintenance commands are available via qlw — no clone required. pipx install quant-llm-wiki gives you the full surface:
# Ingest from URL (auto-compile + auto-embed in one shot)
qlw ingest --url "https://mp.weixin.qq.com/s/..."
# Ingest from a local PDF file
qlw ingest --pdf-file paper.pdf
# Ingest from a PDF at a URL
qlw ingest --pdf-url "https://example.com/paper.pdf"
# Schema + health audit
qlw lint
qlw lint --fix # LLM auto-repair of schema-noncompliant concepts
qlw lint --maintain # gap analysis: unmapped sources, under-supported, stale
qlw lint --maintain --apply # apply query-derived state updates (idempotent)
# Manual wiki compile
qlw compile
# Query (ask mode) — outputs saved to outputs/brainstorms/
qlw ask --query "..."
# Note: automatic query log filing into `wiki/queries/` (the loop from the "Query → wiki feedback" section above) is planned after v0.3.0. `qlw ask` and `qlw brainstorm` currently save normal command outputs only.
Agent Usage
The interactive agent manages the full pipeline through natural language:
# Interactive mode
qlw agent
# Single command
qlw agent --query "ingest this article: https://mp.weixin.qq.com/s/..."
qlw agent --query "list all articles"
qlw agent --query "brainstorm: combine factor timing with risk parity"
Example Agent Workflow
You: ingest these articles: url1, url2, url3
Agent: Ingested 3/3 articles. Auto-compiled wiki and refreshed vector index.
You: enrich the first 3 raw articles
Agent: [1/3] ok [2/3] ok [3/3] ok — Enriched 3/3 articles.
You: review the new articles
Agent: [Shows enriched articles with content types and summaries]
You: set articles 1 and 3 as high_value, article 2 as rejected (low research value)
Agent: Updated 3 articles. Article 2 recorded as rejected (URL noted to prevent re-ingest).
You: ingest url2 again
Agent: WARNING — url2 was previously rejected: "文章标题" (reason: low research value).
Use force=True to re-ingest.
You: brainstorm: how to combine momentum with volatility timing
Agent: [Wiki concepts surfaced first; complementary articles fill remaining slots]
[LLM generates ideas; Rethink Layer scores novelty + quality]
[Query filed back into wiki/queries/; cited concepts gain importance]
Configuration
LLM Provider
Quant_LLM_Wiki works with any OpenAI-compatible API. Configure via .env file (auto-loaded) or environment variables:
| Variable | Default | Description |
|---|---|---|
| `LLM_API_KEY` | — | Your API key |
| `LLM_BASE_URL` | `https://open.bigmodel.cn/api/paas/v4` | API base URL |
| `LLM_MODEL` | `glm-4.7` | Chat model name |
| `LLM_EMBEDDING_MODEL` | `embedding-3` | Embedding model name |
| `LLM_CONNECT_TIMEOUT` | `10` | Connection timeout (seconds) |
| `LLM_READ_TIMEOUT` | `120` | Read timeout (seconds) |
| `LLM_MAX_RETRIES` | `2` | Max retry attempts |
| `LLM_CONCURRENCY` | `3` | Max parallel LLM requests for enrichment |
Legacy ZHIPU_* prefixed variables are also supported as fallbacks.
Content Classification
Each article is classified with exactly one content_type:
| Type | Description |
|---|---|
| `methodology` | Research frameworks, models, factor logic |
| `strategy` | Trading logic with entry/exit rules and backtest |
| `allocation` | Portfolio construction, rotation, ETF allocation |
| `risk_control` | Risk management, drawdown control, volatility targeting |
| `market_review` | Market commentary, sector reviews |
Article Status Lifecycle
All articles live flat under raw/. The frontmatter status field is the source of truth.
| Status | Description |
|---|---|
| `raw` | Ingested, pending enrichment and review |
| `reviewed` | Human-reviewed; included in wiki compilation and vector index |
| `high_value` | High research value; included in wiki compilation and vector index |
| `rejected` | Low value — removed from KB, source URL recorded to prevent re-ingestion |
Running Tests
Unit Tests
python3 -m unittest discover -s tests -p 'test_*.py' -v
Robustness Tests
The tests/robustness/ suite covers edge cases and failure modes across four layers:
| File | What it tests |
|---|---|
| `test_layer1_tool_robustness.py` | Agent tools with malformed/missing inputs |
| `test_layer2_workflow_integration.py` | End-to-end pipeline with bad data |
| `test_layer3_agent_routing.py` | Agent routing under unexpected queries |
| `test_layer4_llm_api_robustness.py` | LLM API timeouts, retries, and failures |
python3 -m unittest discover -s tests/robustness -p 'test_*.py' -v
Design Principles
- Wiki-first, RAG-as-substrate — Both `qlw ask` and `qlw brainstorm` retrieve stable wiki concepts before vectors. ChromaDB runs only as a fallback when the wiki is empty/sparse or `audit_wiki` reports degradation.
- Three durable verbs — `qlw ingest`, `qlw ask`/`qlw brainstorm`, and `qlw lint`, per Karpathy's prescription. `compile` and `embed` are internal operations auto-run by `ingest`.
- Schema is enforced — `schema/concept-schema.md` and `schema/source-schema.md` define required frontmatter fields, valid enums, and required section headers. `wiki_lint` checks these on every run; `qlw lint --fix` runs an LLM auto-repair pass.
- Inspiration over execution — The knowledge base serves idea combination, not backtested trading signals.
- Hybrid memory: Markdown + structured state — Markdown is the inspectable interface; `wiki/state.json` and ChromaDB metadata are the operational substrate (scoring, freshness decay, conflict tracking).
- Per-claim provenance — Every bullet in a concept article ends with `[<source_basename>]`; un-anchored bullets fail lint and lower confidence.
- Content-hash idempotency — `qlw compile` reruns produce zero LLM calls when source hashes are unchanged (no `mtime`, no date guessing).
- Queries compound — Every `qlw ask`/`qlw brainstorm` files into `wiki/queries/` and bumps `state.json` scoring for cited concepts. `qlw lint --maintain` distills the query log into proposed concept-page improvements.
- Complementary retrieval — Wiki concepts surface first, then complementary article chunks fill remaining slots (excluding sources already cited by concepts).
- Graceful degradation — Every component handles missing dependencies without crashing; `audit_wiki` errors push the wiki-first path to article-only fallback.
- Self-healing vector store — Automatic SQLite integrity check before each ChromaDB operation; corrupted stores are cleaned up and rebuilt transparently.
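The self-healing vector-store principle maps directly onto SQLite's built-in `PRAGMA integrity_check`. A sketch with the rebuild step reduced to nothing; the helper is hypothetical, and only the idea of a pre-flight integrity check comes from the principle above:

```python
import os
import sqlite3
import tempfile

def store_is_healthy(db_path: str) -> bool:
    """True only if SQLite's own integrity check reports 'ok'."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute("PRAGMA integrity_check").fetchone()
        return row is not None and row[0] == "ok"
    except sqlite3.DatabaseError:
        return False  # unreadable header, corrupted pages, etc.
    finally:
        conn.close()

path = os.path.join(tempfile.mkdtemp(), "chroma.sqlite3")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE embeddings (id TEXT, vec BLOB)")
conn.commit()
conn.close()
healthy = store_is_healthy(path)          # freshly created store

with open(path, "wb") as f:               # simulate on-disk corruption
    f.write(b"this is not a sqlite file")
broken = store_is_healthy(path)
```

A real implementation would delete and rebuild the store when the check fails, instead of just reporting it.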
Releasing (maintainers)
This repo publishes to PyPI automatically when a v*.*.* tag is pushed. The workflow is defined in .github/workflows/publish.yml and uses PyPI Trusted Publishing (OIDC) — no API token is stored in GitHub secrets.
One-time PyPI setup
Before the first release, configure a "pending publisher" on PyPI:
1. Log in to https://pypi.org/manage/account/publishing/
2. Add a pending publisher with:
   - PyPI Project Name: `quant-llm-wiki`
   - Owner: `jackwu321`
   - Repository name: `Quant_LLM_Wiki`
   - Workflow filename: `publish.yml`
   - Environment name: `pypi`
3. In GitHub repo settings → Environments, create an environment named `pypi` (no secrets needed; OIDC handles auth).
Cutting a release
# 1. Bump version in pyproject.toml (e.g. 0.2.0 -> 0.2.1)
# 2. Commit
git commit -am "release: v0.2.1"
# 3. Tag and push
git tag v0.2.1
git push origin main --tags
The workflow will:
- Verify the tag matches `project.version` in `pyproject.toml`
- Build sdist + wheel
- Upload to PyPI via Trusted Publishing

Users then upgrade with `pipx upgrade quant-llm-wiki`.

Versioning. Follow SemVer: bump patch for fixes, minor for new features, major for breaking changes. The tag `v0.2.1` must match `version = "0.2.1"` in `pyproject.toml` exactly, or the workflow aborts before publishing.
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Write tests for new functionality
- Ensure all tests pass (`python3 -m unittest discover -s tests -p 'test_*.py'`)
- Commit your changes
- Open a Pull Request
License
This project is licensed under the MIT License — see the LICENSE file for details.
Disclaimer
Quant_LLM_Wiki is a research tool for generating investment strategy ideas. It does not produce trade-ready strategies or financial advice. All generated ideas require independent validation, backtesting, and risk assessment before any real-world application. Use at your own risk.