[RAGFlow](https://ragflow.io/) is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations drawn from complex, variously formatted data.
# Deepdoc
## Installation

CPU-only (default):

```shell
pip install deepdoc-lib
```

GPU (Linux x86_64 only):

```shell
pip install "deepdoc-lib[gpu]"
```

Note: onnxruntime (CPU) and onnxruntime-gpu must not be installed together. If you are switching an existing environment to GPU, uninstall the CPU build first:

```shell
pip uninstall -y onnxruntime
pip install onnxruntime-gpu==1.19.2
```
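The conflict above can also be detected programmatically before it bites at import time. A minimal stdlib-only sketch (not part of deepdoc; the package names are simply the two ORT builds on PyPI):

```python
from importlib import metadata

def installed_ort_builds():
    """Return which onnxruntime builds are installed in this environment."""
    found = []
    for name in ("onnxruntime", "onnxruntime-gpu"):
        try:
            metadata.version(name)
            found.append(name)
        except metadata.PackageNotFoundError:
            pass
    return found

builds = installed_ort_builds()
if len(builds) > 1:
    # Both builds present: the documented fix is to remove the CPU build first.
    print("Conflict: run `pip uninstall -y onnxruntime` before using the GPU build.")
```

Running this in a fresh environment before installing `deepdoc-lib[gpu]` makes the failure mode explicit instead of surfacing as an opaque runtime error.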
## Parser Usage

```python
from deepdoc import (
    DocxParser,
    ExcelParser,
    HtmlParser,
    PdfModelConfig,
    PdfParser,
    TokenizerConfig,
)

# Build configs
# Method 1: Explicit configuration (offline mode)
tokenizer_cfg = TokenizerConfig(
    offline=True,
    nltk_data_dir="/path/to/nltk_data",
)
pdf_model_cfg = PdfModelConfig(
    vision_model_dir="/path/to/models/vision",
    xgb_model_dir="/path/to/models/xgb",
    model_provider="local",
)

# Method 2: Empty configuration (auto-download models and nltk_data)
# tokenizer_cfg = TokenizerConfig()
# pdf_model_cfg = PdfModelConfig()

# Parse PDF
pdf_parser = PdfParser(model_cfg=pdf_model_cfg, tokenizer_cfg=tokenizer_cfg)
result = pdf_parser("document.pdf")

# Parse DOCX / HTML (tokenizer only)
docx_parser = DocxParser(tokenizer_cfg=tokenizer_cfg)
html_parser = HtmlParser(tokenizer_cfg=tokenizer_cfg)

# Parse Excel (no model/tokenizer dependency)
excel_parser = ExcelParser()
with open("data.xlsx", "rb") as f:
    result = excel_parser(f.read())
```
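When an application handles mixed inputs, it helps to keep the choice of parser in one place. The following dispatch helper is a sketch, not part of the deepdoc API; the suffix-to-class mapping simply mirrors the four parsers shown above:

```python
from pathlib import Path

# Hypothetical mapping from file suffix to deepdoc parser class name.
PARSER_BY_SUFFIX = {
    ".pdf": "PdfParser",
    ".docx": "DocxParser",
    ".html": "HtmlParser",
    ".xlsx": "ExcelParser",
}

def parser_name_for(path: str) -> str:
    """Pick a parser class name by file suffix; raise for unsupported types."""
    suffix = Path(path).suffix.lower()
    if suffix not in PARSER_BY_SUFFIX:
        raise ValueError(f"unsupported file type: {suffix!r}")
    return PARSER_BY_SUFFIX[suffix]
```

In real code you would map suffixes to constructed parser instances instead of names, so the (potentially expensive) model-backed `PdfParser` is built once and reused.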
Or use explicit env factories:

```python
tokenizer_cfg = TokenizerConfig.from_env()
pdf_model_cfg = PdfModelConfig.from_env()
pdf_parser = PdfParser(model_cfg=pdf_model_cfg, tokenizer_cfg=tokenizer_cfg)
```
Or rely on the defaults (environment variables plus the cache). Deepdoc looks for cached bundles under `$DEEPDOC_MODEL_HOME` (or `~/.cache/deepdoc`) and only downloads missing files when the provider allows remote access:

```python
pdf_parser = PdfParser()
```
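The documented lookup order for the cache root can be mirrored in a few lines. This is an illustrative sketch of the behavior described above (`$DEEPDOC_MODEL_HOME` first, then `~/.cache/deepdoc`), not the library's internal code:

```python
import os
from pathlib import Path

def deepdoc_model_home() -> Path:
    """Resolve the model cache root: $DEEPDOC_MODEL_HOME, else ~/.cache/deepdoc."""
    override = os.environ.get("DEEPDOC_MODEL_HOME")
    if override:
        return Path(override).expanduser()
    return Path.home() / ".cache" / "deepdoc"

print(deepdoc_model_home())
```

Knowing this resolution order is useful when pre-seeding the cache on air-gapped machines: copy a downloaded bundle into the directory this function returns and offline parsing works without any network access.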
Environment variable definitions:

```shell
# provider: auto | local | modelscope
export DEEPDOC_MODEL_PROVIDER=auto
# shared model cache root (default: ~/.cache/deepdoc)
export DEEPDOC_MODEL_HOME=/path/to/deepdoc-models
# optional bundle-specific local directories
export DEEPDOC_VISION_MODEL_DIR=/path/to/vision
export DEEPDOC_XGB_MODEL_DIR=/path/to/xgb
# single combined ModelScope repo (all bundles in one repo; default: Xorbits/deepdoc)
export DEEPDOC_MODELSCOPE_REPO=Xorbits/deepdoc
# optional shared revision (default: master)
export DEEPDOC_MODELSCOPE_REVISION=master
# offline mode for tokenizer NLTK auto-download
export DEEPDOC_OFFLINE=0
# optional NLTK data directory for the tokenizer
export DEEPDOC_NLTK_DATA_DIR=/path/to/nltk_data
```
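A fail-fast startup check over these settings catches typos before any model loading begins. A hedged sketch — the accepted values come from the comment above, but the validation helper itself is not a deepdoc function:

```python
import os

VALID_PROVIDERS = {"auto", "local", "modelscope"}

def resolve_model_provider() -> str:
    """Read DEEPDOC_MODEL_PROVIDER (default 'auto') and reject unknown values."""
    value = os.environ.get("DEEPDOC_MODEL_PROVIDER", "auto").strip().lower()
    if value not in VALID_PROVIDERS:
        raise ValueError(
            f"DEEPDOC_MODEL_PROVIDER={value!r}; expected one of {sorted(VALID_PROVIDERS)}"
        )
    return value
```

A misspelled provider then fails immediately with a clear message instead of silently falling back to a download attempt.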
## Download model artifacts

To pre-download all model bundles (vision/xgb/tokenizer) into the default cache directory (`~/.cache/deepdoc`), run:

```shell
deepdoc-download-models
# or (from a source checkout)
python -m deepdoc.download_models
```

To override the cache location, set `DEEPDOC_MODEL_HOME`:

```shell
export DEEPDOC_MODEL_HOME=./models
deepdoc-download-models
```
By default this also downloads the required NLTK resources into `~/.cache/deepdoc/nltk_data` (or `$DEEPDOC_MODEL_HOME/nltk_data`) and the cached `cl100k_base` tiktoken file into `~/.cache/deepdoc/tiktoken_cache` (or `$DEEPDOC_MODEL_HOME/tiktoken_cache`). `deepdoc.common.token_utils` automatically points `TIKTOKEN_CACHE_DIR` at the same location unless you override it with `DEEPDOC_TIKTOKEN_CACHE_DIR` or `TIKTOKEN_CACHE_DIR`.
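The tiktoken cache resolution described above can be sketched as follows. This is an illustration, not the library's code, and the relative precedence between the two override variables is an assumption here (the text only says either one overrides the default):

```python
import os

def tiktoken_cache_dir(model_home: str = "~/.cache/deepdoc") -> str:
    """Resolve the tiktoken cache directory.

    Assumed order: DEEPDOC_TIKTOKEN_CACHE_DIR, then TIKTOKEN_CACHE_DIR,
    then <model_home>/tiktoken_cache.
    """
    for var in ("DEEPDOC_TIKTOKEN_CACHE_DIR", "TIKTOKEN_CACHE_DIR"):
        value = os.environ.get(var)
        if value:
            return value
    return os.path.join(os.path.expanduser(model_home), "tiktoken_cache")
```

This matters mostly in containers: bake the cache into the image at a fixed path and point one of the override variables at it so no network fetch happens at runtime.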
To skip either optional offline asset, use:

```shell
deepdoc-download-models --no-nltk --no-tiktoken
```
## Vision Model Usage

```python
from deepdoc import create_vision_model
```

- Use environment variables:

```shell
# Vision model configs
export DEEPDOC_VISION_PROVIDER="qwen"
export DEEPDOC_VISION_API_KEY="your-api-key"
export DEEPDOC_VISION_MODEL="qwen-vl-max"
export DEEPDOC_VISION_LANG="Chinese"
export DEEPDOC_VISION_BASE_URL="http://your_base_url"
# Other configs
export DEEPDOC_LIGHTEN=0  # whether to use lighten mode
```

```python
vision_model = create_vision_model()
```

- Use a default provider:

```shell
export DEEPDOC_VISION_API_KEY="your-api-key"
```

```python
vision_model = create_vision_model("qwen")
```

Supported providers: `openai`, `qwen`, `zhipu`, `ollama`, `gemini`, `anthropic`.
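The provider list above can back a small fail-fast guard before any API call is made. A sketch under stated assumptions — `check_vision_provider` is not a deepdoc function, only the supported names come from the source:

```python
# Provider names as listed in the documentation above.
SUPPORTED_VISION_PROVIDERS = ("openai", "qwen", "zhipu", "ollama", "gemini", "anthropic")

def check_vision_provider(provider: str) -> str:
    """Normalize a vision provider name and validate it against the supported list."""
    normalized = provider.strip().lower()
    if normalized not in SUPPORTED_VISION_PROVIDERS:
        raise ValueError(
            f"unknown vision provider {provider!r}; "
            f"supported: {', '.join(SUPPORTED_VISION_PROVIDERS)}"
        )
    return normalized
```

Validating up front gives a readable error at configuration time rather than an authentication or routing failure on the first image request.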
- Use a configuration file:

Create `deepdoc_config.yaml`:

```yaml
vision_model:
  provider: "qwen"
  model_name: "qwen-vl-max"
  api_key: "your-api-key"
  lang: "Chinese"
  base_url: "http://your-base-url"
```

```python
vision_model = create_vision_model("/path/to/deepdoc_config.yaml")
```
Then run:

```python
with open("image.jpg", "rb") as f:
    result = vision_model.describe_with_prompt(f.read())
```
File details

Details for the file `deepdoc_lib-0.2.0.tar.gz`.

File metadata

- Download URL: deepdoc_lib-0.2.0.tar.gz
- Size: 4.6 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b6e23cb38b0d42ec619a60479bf0db24590d8c78611d11c26822c773dd03f568` |
| MD5 | `20a450cfb43ffe82b090cd0addb6707d` |
| BLAKE2b-256 | `00b323c0cd1026c7b0b8e1a411e927dc241fa54cbeea3f2f13c49734424eadd1` |
Provenance

The following attestation bundle was made for `deepdoc_lib-0.2.0.tar.gz`:

Publisher: `publish.yml` on xorbitsai/deepdoc-lib

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: deepdoc_lib-0.2.0.tar.gz
- Subject digest: `b6e23cb38b0d42ec619a60479bf0db24590d8c78611d11c26822c773dd03f568`
- Sigstore transparency entry: 1144477433
- Permalink: xorbitsai/deepdoc-lib@d0634288e68bdb43ae73743f08bc85c0ee28731a
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/xorbitsai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@d0634288e68bdb43ae73743f08bc85c0ee28731a
- Trigger Event: workflow_dispatch
File details

Details for the file `deepdoc_lib-0.2.0-py3-none-any.whl`.

File metadata

- Download URL: deepdoc_lib-0.2.0-py3-none-any.whl
- Size: 4.7 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `76bbbbccee1bdb7decaf02450c75995b802053b4de030acf895604f622fa77a0` |
| MD5 | `6a3871caf9705bb666e2c72268fb1801` |
| BLAKE2b-256 | `a9901c1e8553997cc74a9e07939c37b6c6f7ab74707910d49ba7b0ba0e60b9f7` |
Provenance

The following attestation bundle was made for `deepdoc_lib-0.2.0-py3-none-any.whl`:

Publisher: `publish.yml` on xorbitsai/deepdoc-lib

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: deepdoc_lib-0.2.0-py3-none-any.whl
- Subject digest: `76bbbbccee1bdb7decaf02450c75995b802053b4de030acf895604f622fa77a0`
- Sigstore transparency entry: 1144477515
- Permalink: xorbitsai/deepdoc-lib@d0634288e68bdb43ae73743f08bc85c0ee28731a
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/xorbitsai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@d0634288e68bdb43ae73743f08bc85c0ee28731a
- Trigger Event: workflow_dispatch