A tool to convert documents into knowledge graphs using Docling.
Docling Graph
Docling-Graph turns documents into validated Pydantic objects, then builds a directed knowledge graph with explicit semantic relationships.
This transformation enables high-precision use cases in chemistry, finance, and legal domains, where AI must capture exact entity connections (compounds and reactions, instruments and dependencies, properties and measurements) rather than rely on approximate text embeddings.
This toolkit supports two extraction paths: local VLM extraction via Docling, and LLM-based extraction routed through LiteLLM for local runtimes (vLLM, Ollama) and API providers (Mistral, OpenAI, Gemini, IBM WatsonX), all orchestrated through a flexible, config-driven pipeline.
Key Capabilities
- ✍🏻 Input Formats: All inputs go through Docling for conversion (PDF, Office, HTML, images, Markdown, etc.); DoclingDocument JSON skips conversion.
- 🧠 Data Extraction: Extract structured data using a VLM or LLM, with intelligent chunking and flexible processing modes.
- 💎 Graph Construction: Convert validated Pydantic models into NetworkX directed graphs with semantic relationships, stable node IDs, and rich edge metadata.
- 📦 Export: Save graphs in formats compatible with knowledge-graph databases, such as CSV and Cypher for bulk import.
- 🔍 Visualization: Explore graphs with interactive HTML pages and detailed Markdown reports.
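As a rough, hypothetical illustration of the bulk-import shape a graph database might consume (the file names, columns, and IDs below are assumptions, not docling-graph's actual export schema), nodes and edges can be written as separate CSV files:

```python
import csv
from pathlib import Path

# Hypothetical node and edge records; docling-graph's real export columns may differ.
nodes = [
    {"id": "person:doe|1980-01-01", "label": "Person", "name": "Jane Doe"},
    {"id": "org:acme", "label": "Organization", "name": "Acme Corp"},
]
edges = [
    {"source": "org:acme", "target": "person:doe|1980-01-01", "type": "EMPLOYS"},
]

out = Path("graph_export")
out.mkdir(exist_ok=True)

# One file per record kind, with a header row, as most bulk importers expect.
with open(out / "nodes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "label", "name"])
    writer.writeheader()
    writer.writerows(nodes)

with open(out / "edges.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["source", "target", "type"])
    writer.writeheader()
    writer.writerows(edges)
```

Keeping node IDs in their own column lets the edge file reference them without duplicating node attributes.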
Latest Changes
- 🪜 Multi-pass Extraction (Experimental): Use extraction_contract="delta" or extraction_contract="staged".
  - Delta extraction for long documents: chunk → token-batched LLM calls → normalize → merge → project to template.
  - Staged extraction for complex nested templates: Catalog → ID pass (skeleton) → Fill pass (bottom-up) → Merge.
- 📐 Structured Extraction: LLM extraction now uses API schema-enforced output by default (response_format=json_schema via LiteLLM). Disable with structured_output=False (API) or --no-schema-enforced-llm (CLI) to fall back to the legacy prompt-schema mode if your LLM provider doesn't support it.
- ✨ LiteLLM abstraction: Unified interface to local and remote LLM providers (vLLM, Mistral, OpenAI, WatsonX, etc.) via LiteLLM, offering improved support and greater flexibility.
- 🐛 Trace Capture: Comprehensive debug data via event-based trace_data exports, with diagnostics for extraction, staged passes, fallback behavior, and more.
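The merge step of delta extraction must reconcile partial results produced from many chunks. A minimal sketch of such a merge (illustrative only, not the library's implementation) deep-merges dicts, concatenates lists, and lets later chunks fill gaps left by earlier ones:

```python
def merge_partial(base, delta):
    """Recursively merge one chunk's partial extraction into an accumulator.

    Dicts are merged key-by-key, lists are concatenated, and scalar
    values from later chunks fill in gaps left by earlier ones.
    """
    if isinstance(base, dict) and isinstance(delta, dict):
        merged = dict(base)
        for key, value in delta.items():
            merged[key] = merge_partial(merged[key], value) if key in merged else value
        return merged
    if isinstance(base, list) and isinstance(delta, list):
        return base + delta
    return base if delta is None else delta

# Two chunks of the same document contribute complementary fields.
chunk_1 = {"title": "Rheology of Gels", "authors": [{"name": "A. Smith"}]}
chunk_2 = {"authors": [{"name": "B. Jones"}], "year": 2022}
merged = merge_partial(chunk_1, chunk_2)
```

A real merge would also need entity deduplication (e.g. via stable IDs) so the same author mentioned in two chunks does not appear twice.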
Coming Soon
- 🧩 Interactive Template Builder: Guided workflows for building Pydantic templates.
- 🧲 Ontology-Based Templates: Match content to the best Pydantic template using semantic similarity.
- 💾 Graph Database Integration: Export data straight into Neo4j, ArangoDB, and similar databases.
Quick Start
Requirements
- Python 3.10 or higher
- uv package manager
Installation
# Clone the repository
git clone https://github.com/IBM/docling-graph
cd docling-graph
# Install with uv
uv sync # Core + LiteLLM + VLM
For detailed installation instructions, see Installation Guide.
API Key Setup (Remote Inference)
export OPENAI_API_KEY="..." # OpenAI
export MISTRAL_API_KEY="..." # Mistral
export GEMINI_API_KEY="..." # Google Gemini
# IBM WatsonX
export WATSONX_API_KEY="..." # IBM WatsonX API Key
export WATSONX_PROJECT_ID="..." # IBM WatsonX Project ID
export WATSONX_URL="..." # IBM WatsonX URL (optional)
Basic Usage
CLI
# Initialize configuration
uv run docling-graph init
# Convert a document from a URL
uv run docling-graph convert "https://arxiv.org/pdf/2207.02720" \
--template "docs.examples.templates.rheology_research.ScholarlyRheologyPaper" \
--processing-mode "many-to-one" \
--extraction-contract "staged" \
--debug
# Visualize results
uv run docling-graph inspect outputs
Python API - Default Behavior
```python
from docling_graph import run_pipeline, PipelineContext
from docs.examples.templates.rheology_research import ScholarlyRheologyPaper

# Create configuration
config = {
    "source": "https://arxiv.org/pdf/2207.02720",
    "template": ScholarlyRheologyPaper,
    "backend": "llm",
    "inference": "remote",
    "processing_mode": "many-to-one",
    "extraction_contract": "staged",  # robust for smaller models
    "provider_override": "mistral",
    "model_override": "mistral-medium-latest",
    "structured_output": True,  # default
    "use_chunking": True,
}

# Run pipeline - returns data directly, no files written to disk
context: PipelineContext = run_pipeline(config)

# Access results
graph = context.knowledge_graph
models = context.extracted_models
metadata = context.graph_metadata

print(f"Extracted {len(models)} model(s)")
print(f"Graph: {graph.number_of_nodes()} nodes, {graph.number_of_edges()} edges")
```
For debugging, use --debug with the CLI to save intermediate artifacts to disk; see Trace Data & Debugging. For more examples, see Examples.
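The use_chunking option splits long inputs into pieces that fit the model's context window. A naive token-budgeted chunker is sketched below (illustrative only; it approximates tokens by whitespace-separated words and is not the library's actual chunking logic, which is structure-aware):

```python
def chunk_by_token_budget(text: str, max_tokens: int = 512) -> list[str]:
    """Split text into chunks that stay under a rough token budget.

    Uses a crude word count as a token proxy; real chunkers use model
    tokenizers and respect document structure (sections, tables, etc.).
    """
    words = text.split()
    chunks, current = [], []
    for word in words:
        current.append(word)
        if len(current) >= max_tokens:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

# A 1200-word document under a 512-token budget yields three chunks.
chunks = chunk_by_token_budget("lorem " * 1200, max_tokens=512)
```

Each chunk is then extracted independently, and the per-chunk results are merged back into a single template instance.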
Pydantic Templates
Templates define both the extraction schema and the resulting graph structure.
```python
from pydantic import BaseModel, Field
from docling_graph.utils import edge

class Person(BaseModel):
    """Person entity with stable ID."""
    model_config = {
        'is_entity': True,
        'graph_id_fields': ['last_name', 'date_of_birth']
    }
    first_name: str = Field(description="Person's first name")
    last_name: str = Field(description="Person's last name")
    date_of_birth: str = Field(description="Date of birth (YYYY-MM-DD)")

class Organization(BaseModel):
    """Organization entity."""
    model_config = {'is_entity': True}
    name: str = Field(description="Organization name")
    employees: list[Person] = edge("EMPLOYS", description="List of employees")
```
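The graph_id_fields setting keeps repeated mentions of the same entity mapped to one node. A minimal sketch of how such stable IDs could be derived (the hashing scheme below is an assumption for illustration, not docling-graph's actual ID algorithm):

```python
import hashlib

def stable_node_id(entity_type: str, graph_id_fields: list[str], data: dict) -> str:
    """Derive a deterministic node ID from an entity's identifying fields."""
    key = "|".join([entity_type] + [str(data[field]) for field in graph_id_fields])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()[:16]

# Two mentions with different first-name spellings still collapse to one node,
# because only last_name and date_of_birth participate in the ID.
a = stable_node_id("Person", ["last_name", "date_of_birth"],
                   {"first_name": "Jane", "last_name": "Doe", "date_of_birth": "1980-01-01"})
b = stable_node_id("Person", ["last_name", "date_of_birth"],
                   {"first_name": "J.", "last_name": "Doe", "date_of_birth": "1980-01-01"})
```

Choosing ID fields that are both stable and discriminating (here, surname plus birth date rather than a freeform first name) is what makes entity deduplication across chunks reliable.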
For complete guidance, see the Schema Definition documentation.
Documentation
Comprehensive documentation is available on the Docling Graph documentation site.
Documentation Structure
The documentation follows the docling-graph pipeline stages:
- Introduction - Overview and core concepts
- Installation - Setup and environment configuration
- Schema Definition - Creating Pydantic templates
- Pipeline Configuration - Configuring the extraction pipeline
- Extraction Process - Document conversion and extraction
- Graph Management - Exporting and visualizing graphs
- CLI Reference - Command-line interface guide
- Python API - Programmatic usage
- Examples - Working code examples
- Advanced Topics - Performance, testing, error handling
- API Reference - Detailed API documentation
- Community - Contributing and development guide
Contributing
We welcome contributions! Please see:
- Contributing Guidelines - How to contribute
- Development Guide - Development setup
- GitHub Workflow - Branch strategy and CI/CD
Development Setup
# Clone and setup
git clone https://github.com/IBM/docling-graph
cd docling-graph
# Install with dev dependencies
uv sync --extra dev
# Run pre-commit checks
uv run pre-commit run --all-files
License
MIT License - see LICENSE for details.
Acknowledgments
- Powered by Docling for advanced document processing
- Uses Pydantic for data validation
- Graph generation powered by NetworkX
- Visualizations powered by Cytoscape.js
- CLI powered by Typer and Rich
IBM ❤️ Open Source AI
Docling Graph has been brought to you by IBM.
File details
Details for the file docling_graph-1.3.0.tar.gz.
File metadata
- Download URL: docling_graph-1.3.0.tar.gz
- Upload date:
- Size: 177.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | eea074a77de0ebfc335f7014d70c0b081e98216fa0ee4ca3e03ddb9f4cb3d5a2 |
| MD5 | 01d78d89d1cc2707dfbf0988cefc4a51 |
| BLAKE2b-256 | 7ce9598f15eccd25c0ce48720dc064c9fb068f95a417e55ea1567d883953b344 |
Provenance
The following attestation bundles were made for docling_graph-1.3.0.tar.gz:
Publisher: release.yml on IBM/docling-graph
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: docling_graph-1.3.0.tar.gz
- Subject digest: eea074a77de0ebfc335f7014d70c0b081e98216fa0ee4ca3e03ddb9f4cb3d5a2
- Sigstore transparency entry: 953614811
- Permalink: IBM/docling-graph@8ef5ce5e70b939ad849eb3d850d9d4409f7ae79c
- Branch / Tag: refs/tags/v1.3.0
- Owner: https://github.com/IBM
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@8ef5ce5e70b939ad849eb3d850d9d4409f7ae79c
- Trigger Event: push
File details
Details for the file docling_graph-1.3.0-py3-none-any.whl.
File metadata
- Download URL: docling_graph-1.3.0-py3-none-any.whl
- Upload date:
- Size: 207.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c27791fbebd5ef969ff2f327532fd3fbb8d735df1ca03f757fa815e41c5a2494 |
| MD5 | f8be5d5a76ff18517c42294d12c32875 |
| BLAKE2b-256 | 3827bcf5671c15eaa7531ba4049bc50af3e27b1cb9c9c31ea50ac37ce59eb75e |
Provenance
The following attestation bundles were made for docling_graph-1.3.0-py3-none-any.whl:
Publisher: release.yml on IBM/docling-graph
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: docling_graph-1.3.0-py3-none-any.whl
- Subject digest: c27791fbebd5ef969ff2f327532fd3fbb8d735df1ca03f757fa815e41c5a2494
- Sigstore transparency entry: 953614814
- Permalink: IBM/docling-graph@8ef5ce5e70b939ad849eb3d850d9d4409f7ae79c
- Branch / Tag: refs/tags/v1.3.0
- Owner: https://github.com/IBM
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@8ef5ce5e70b939ad849eb3d850d9d4409f7ae79c
- Trigger Event: push