

Project description

A basic document parsing and loading utility - Database Interfaces


Overview

The docp-* project suite is designed as a comprehensive (doc)ument (p)arsing library. Built in CPython, it consolidates the capabilities of various lower-level libraries, offering a unified solution for parsing binary document structures.

The suite is extended by several sister projects, each providing unique functionality:

  • docp-core: Centralized core objects, functionality and settings.
  • docp-parsers: Parse binary documents (e.g. PDF, PPTX) into Python objects.
  • docp-loaders: Load a parsed document's embeddings into a Chroma vector database for RAG-enabled LLM use.
  • docp-docling: Convert a PDF into Markdown format via wrappers to the docling libraries.
  • docp-dbi: Interfaces to document databases such as ChromaDB and Neo4j (coming soon).

This library (docp-dbi) extends the document parsing capability by adding access to a ChromaDB vector database for storing text embeddings. This is particularly useful for implementing RAG-enabled pipelines.

The Toolset (Interfaces)

As of this release, the following database interfaces are supported:

  • ChromaDB (via langchain_chroma)
  • Neo4j (coming soon)

Quickstart

Installation

To install docp-dbi, first activate your target virtual environment, then use pip:

pip install docp-dbi

For older releases, visit PyPI or the GitHub Releases page.

Example Usage

For convenience, here are a couple of examples showing how to create and interact with a database interface for your project.

Create an interface to ChromaDB:

    >>> from docp_dbi import ChromaDB

    # Create a database interface.
    >>> db = ChromaDB(path='/path/to/chromadb', collection='test-collection')

    # Display a list of all collections in the database.
    >>> db.client.list_collections()

    # Debug: Retrieve records from the database.
    >>> records = db.show_all()

Load a new PDF document into the database and query against it:

    >>> from docp_dbi import ChromaDB
    >>> from docp_parsers import PDFParser
    >>> from langchain_text_splitters import RecursiveCharacterTextSplitter

    # Parse the PDF document.
    >>> pdf = PDFParser(path='/path/to/documents/rag-pipelines-how-to.pdf')
    >>> pdf.extract_text()

    # Set up a text splitter (for chunking the document).
    >>> splitter = RecursiveCharacterTextSplitter(
    ...     separators=['\n\n\n', '\n\n', '\n', '.'],
    ...     chunk_size=512,
    ...     chunk_overlap=128
    ... )
    # Split the document for storage.
    >>> docs = splitter.split_documents(pdf.doc.documents)

    # Create a database interface, using an offline, local user-defined embedding model.
    >>> db = ChromaDB(path='/path/to/databases/chroma/', 
    ...               collection='test', 
    ...               embedding_model_path='/path/to/models/sentence-transformers/all-MiniLM-L6-v2', 
    ...               offline=True)
    # Embed and store the document chunks.
    >>> db.add_documents(documents=docs)

    # Run your first query.
    >>> result = db.collection.query(query_texts=['How do I implement a RAG pipeline?'])
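The chromadb client's `query` call returns a dict of parallel lists, with one inner list per query text. The field names below follow the chromadb API, but the values are invented for illustration; a minimal sketch of unpacking the closest hit:

```python
# Illustrative shape of a ChromaDB query result. The keys are the ones the
# chromadb client returns; the values here are made up for the example.
result = {
    "ids": [["chunk-007", "chunk-012"]],
    "documents": [["RAG pipelines combine retrieval with generation.",
                   "Embeddings map text to vectors."]],
    "distances": [[0.21, 0.34]],
}

def top_hit(result: dict) -> tuple[str, float]:
    """Return the closest chunk and its distance for the first query text."""
    return result["documents"][0][0], result["distances"][0][0]

text, distance = top_hit(result)
```

Lower distances indicate closer matches; the hits are already sorted, so the first entry of each inner list is the best one.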

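As an aside on the splitter configuration above: `chunk_size` and `chunk_overlap` describe a sliding window over the text. The sketch below is library-agnostic and deliberately naive (`RecursiveCharacterTextSplitter` is smarter, preferring to split at the listed separators), but it shows what the two parameters control:

```python
def chunk(text: str, size: int, overlap: int) -> list[str]:
    """Naive fixed-width chunking with overlap; illustrates the
    size/overlap idea only, not the real splitter's behaviour."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# Adjacent chunks share `overlap` characters of context:
chunk("abcdefghij", size=4, overlap=2)
# -> ['abcd', 'cdef', 'efgh', 'ghij']
```

The overlap ensures a sentence falling on a chunk boundary still appears intact in at least one chunk, which generally improves retrieval quality.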
Using the Library

The documentation suite provides detailed explanations and usage examples for each importable module. For in-depth documentation, code examples, and source links, refer to the Library API page.

A search field is available in the left navigation bar to help you quickly locate specific modules or methods.

Troubleshooting

No troubleshooting guidance is available at this time.

For questions not covered here, or to report bugs, issues, or suggestions, please open an issue on GitHub.

Download files

Download the file for your platform.

Source Distribution

docp_dbi-1.0.0.tar.gz (8.8 MB)

Built Distribution

docp_dbi-1.0.0-py3-none-any.whl (21.0 kB)

File details

Details for the file docp_dbi-1.0.0.tar.gz.

File metadata

  • Download URL: docp_dbi-1.0.0.tar.gz
  • Upload date:
  • Size: 8.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.7

File hashes

Hashes for docp_dbi-1.0.0.tar.gz:

  • SHA256: 203126276e8c72157aec2f822b9762c22d98cc387221f832616518fb1b5efff8
  • MD5: 7c7ddc0f5cd914d8ea2186750647eac7
  • BLAKE2b-256: eede5c7f4e5e79c8a46059c48166c64e35d9a722a1a62c4271ea342c1988a315

See more details on using hashes here.
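To check a download against the digests above, you can hash the file locally with the standard library; a minimal sketch (the file path is illustrative):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA256 hex digest of a file, read in chunks so large
    archives don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Compare the returned hex string against the SHA256 digest listed above:
# sha256_of("docp_dbi-1.0.0.tar.gz")
```

If the digests match, the file you downloaded is byte-for-byte the one published on PyPI.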

File details

Details for the file docp_dbi-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: docp_dbi-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 21.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.7

File hashes

Hashes for docp_dbi-1.0.0-py3-none-any.whl:

  • SHA256: 79bc652e5d0ec5281221f106b1b4e3f1d07738dfd2d2597e3c01eb7135fc548a
  • MD5: 6e532f379224a1276a4791bf4e26554d
  • BLAKE2b-256: 6884cbb3bef27d5442c0056b8c32166bb789e0a623e9dcf800df0b3adef515c4

