
Docling PDF conversion package

Project description

Docling


Docling bundles PDF document conversion to JSON and Markdown in an easy, self-contained package.

Features

  • ⚡ Converts any PDF document to JSON or Markdown format, stable and lightning fast
  • 📑 Understands detailed page layout, reading order and recovers table structures
  • 📝 Extracts metadata from the document, such as title, authors, references and language
  • 🔍 Optionally applies OCR (use with scanned PDFs)

For RAG, check out Quackling to get the most out of your docs, whether you use LlamaIndex, LangChain, or your own pipeline.

Installation

To use Docling, simply install docling from your package manager, e.g. pip:

pip install docling

[!NOTE] Works on macOS and Linux environments. Windows platforms are currently not tested.

Use alternative PyTorch distributions

The Docling models depend on the PyTorch library. Depending on your architecture, you might want to use a different distribution of torch, for example to get support for a different accelerator or for a CPU-only version. All the ways to install torch are listed on its website: https://pytorch.org/.

One common situation is installation on Linux systems with CPU-only support. In this case, we suggest installing Docling with the following options:

# Example: install the CPU-only version on Linux
pip install docling --extra-index-url https://download.pytorch.org/whl/cpu

Development setup

To develop for Docling, you need Python 3.10 / 3.11 / 3.12 and Poetry. You can then install from your local clone's root dir:

poetry install --all-extras

Usage

Convert a single document

To convert individual PDF documents, use convert_single(), for example:

from docling.document_converter import DocumentConverter

source = "https://arxiv.org/pdf/2408.09869"  # PDF path or URL
converter = DocumentConverter()
result = converter.convert_single(source)
print(result.render_as_markdown())  # output: "## Docling Technical Report[...]"

Convert a batch of documents

For an example of batch-converting documents, see batch_convert.py.

From a local repo clone, you can run it with:

python examples/batch_convert.py

The output of the above command will be written to ./scratch.
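
If you want to drive batch conversion from your own code instead, a minimal sketch along the following lines should work; it reuses the DocumentConversionInput API shown further below, and the input file names, output naming, and the DocumentConversionInput import path are assumptions (see batch_convert.py for the canonical version):

from pathlib import Path

from docling.document_converter import DocumentConverter
# Assumption: the import location of DocumentConversionInput may differ between versions.
from docling.datamodel.document import DocumentConversionInput

converter = DocumentConverter()

# Placeholder input paths; point these at your own PDFs.
conv_input = DocumentConversionInput.from_paths(
    paths=[Path("./docs/paper_a.pdf"), Path("./docs/paper_b.pdf")],
)

out_dir = Path("./scratch")
out_dir.mkdir(parents=True, exist_ok=True)

# convert() processes the batch and yields one result per input document.
for i, result in enumerate(converter.convert(conv_input)):
    (out_dir / f"doc_{i}.md").write_text(result.render_as_markdown())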

Adjust pipeline features

The example file custom_convert.py contains multiple ways one can adjust the conversion pipeline and features.

Control pipeline options

You can control whether table structure recognition or OCR should be performed by passing arguments to DocumentConverter:

doc_converter = DocumentConverter(
    artifacts_path=artifacts_path,  # local path holding the model artifacts
    pipeline_options=PipelineOptions(
        do_table_structure=False,  # controls if table structure is recovered
        do_ocr=True,  # controls if OCR is applied (ignores programmatic content)
    ),
)

Control table extraction options

You can control if table structure recognition should map the recognized structure back to PDF cells (default) or use text cells from the structure prediction itself. This can improve output quality if you find that multiple columns in extracted tables are erroneously merged into one.

pipeline_options = PipelineOptions(do_table_structure=True)
pipeline_options.table_structure_options.do_cell_matching = False  # uses text cells predicted from table structure model

doc_converter = DocumentConverter(
    artifacts_path=artifacts_path,
    pipeline_options=pipeline_options,
)

Impose limits on the document size

You can limit the file size and the number of pages that are processed per document:

conv_input = DocumentConversionInput.from_paths(
    paths=[Path("./test/data/2206.01062.pdf")],
    limits=DocumentLimits(max_num_pages=100, max_file_size=20971520)
)

Convert from binary PDF streams

You can convert PDFs from a binary stream instead of from the filesystem as follows:

from io import BytesIO

buf = BytesIO(your_binary_stream)  # your_binary_stream: raw PDF bytes
docs = [DocumentStream(filename="my_doc.pdf", stream=buf)]
conv_input = DocumentConversionInput.from_streams(docs)
results = doc_converter.convert(conv_input)

Limit resource usage

You can limit the number of CPU threads used by Docling by setting the environment variable OMP_NUM_THREADS accordingly. By default, 4 CPU threads are used.
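
For example, to restrict Docling to two threads you can export OMP_NUM_THREADS=2 in your shell before running your script, or set it from Python; a minimal sketch, assuming the variable is read when Docling initializes its models (so it must be set before the import):

import os

# Assumption: OMP_NUM_THREADS is picked up when the models are loaded,
# so it has to be set before docling is imported.
os.environ["OMP_NUM_THREADS"] = "2"

from docling.document_converter import DocumentConverter

converter = DocumentConverter()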

Technical report

For more details on Docling's inner workings, check out the Docling Technical Report.

Contributing

Please read Contributing to Docling for details.

References

If you use Docling in your projects, please consider citing the following:

@techreport{Docling,
  author = {Deep Search Team},
  month = {8},
  title = {Docling Technical Report},
  url = {https://arxiv.org/abs/2408.09869},
  eprint = {2408.09869},
  doi = {10.48550/arXiv.2408.09869},
  version = {1.0.0},
  year = {2024}
}

License

The Docling codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

docling-1.10.0.tar.gz (36.6 kB)

Uploaded Source

Built Distribution

docling-1.10.0-py3-none-any.whl (42.7 kB)

Uploaded Python 3

File details

Details for the file docling-1.10.0.tar.gz.

File metadata

  • Download URL: docling-1.10.0.tar.gz
  • Upload date:
  • Size: 36.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.5.0-1025-azure

File hashes

Hashes for docling-1.10.0.tar.gz

  • SHA256: 83eeebaadeef6aec2429f34c89fced753561eb2ab063f8b77786e2b538dbc01c
  • MD5: c2b0e06c0ffca0fb11d0efc086a310a9
  • BLAKE2b-256: ba50704d15a9eab11e9c1c94deaef7aef8ee731ced2913ff0deaf77718bac67a
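
As a quick check, you can compare the SHA256 digest of a downloaded archive against the value above; a minimal sketch using the standard library (the local file name is an assumption):

import hashlib

# Expected digest for docling-1.10.0.tar.gz, as listed above.
expected = "83eeebaadeef6aec2429f34c89fced753561eb2ab063f8b77786e2b538dbc01c"

with open("docling-1.10.0.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

assert actual == expected, "SHA256 mismatch: the download may be corrupted"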



File details

Details for the file docling-1.10.0-py3-none-any.whl.

File metadata

  • Download URL: docling-1.10.0-py3-none-any.whl
  • Upload date:
  • Size: 42.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.5.0-1025-azure

File hashes

Hashes for docling-1.10.0-py3-none-any.whl

  • SHA256: 1b5851a4675dc0debeb22c41674f8c3bf3726f97b15d4ab7072b5b72f9069985
  • MD5: d4ab7d35a0712b3de88c25eb83ac46c6
  • BLAKE2b-256: 440f8dc93844fc1875482b7e36be8ce5ad2af9b380fe1e0668f3ef0174df4582


