easyparser

A fast and lightweight Python library for intelligent document parsing, chunking, and analysis.

📊 Library Responsibility

flowchart LR
    Files([Documents]) -->|Input| Parse
    Parse -->|Structured Data| Chunk
    Chunk -->|Semantic Units| Index
    Index -->|Indexed Data| Search
    Search -->|Retrieved Chunks| Context[Represent Context]
    Context -->|Context + Query| LLM([Language Model])

    classDef primary fill:#4CAF50,stroke:#388E3C,color:black,stroke-width:2px,font-weight:bold
    classDef secondary fill:#64B5F6,stroke:#1976D2,color:black,stroke-width:1px
    classDef document fill:#E1BEE7,stroke:#9C27B0,color:black,stroke-width:1px,stroke-dasharray: 5 5

    class Parse,Chunk,Context primary
    class Index,Search secondary
    class Files,LLM document

    subgraph easyparserLibrary["easyparser Library Focus"]
        Parse
        Chunk
        Context
    end

The easyparser library focuses on the critical first steps of the document processing pipeline: Parsing documents into a unified format, Chunking content intelligently, and Representing retrieval context for LLMs.

🌟 Features

  • Universal Document Processing: Parse PDFs, Word documents, PowerPoint, Excel, Markdown, HTML, images, audio, video, and more with a unified API
  • Intelligent Structure Preservation: Keeps document hierarchy, tables, lists, and formatting intact
  • Modular Architecture: Choose parsers and processing steps based on your needs
  • Multimodal Support: Extract and process text, images, tables, and other elements
  • Smart Chunking Strategies: Split content based on headers, semantic meaning, or custom rules
  • LLM-Ready: Output chunks ready for embedding or use with language models
  • Extensible: Easy to add custom parsers and processors

🚀 Quick Start

from easyparser import parse

# Parse a file or directory into chunks
chunks = parse("path/to/document.pdf")

# Print the text
print(chunks.render())

# Or get a hierarchical structure
for depth, chunk in chunks.walk():
    print("  " * depth + f"{chunk.ctype}: {chunk.content[:30]}...")
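The `walk()` traversal above can be pictured as a simple depth-first generator over a chunk tree. The following standalone sketch is not easyparser's implementation; `Node` is a hypothetical stand-in for a chunk, used only to illustrate the `(depth, chunk)` iteration order:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a chunk: a type label, some content, and children.
@dataclass
class Node:
    ctype: str
    content: str
    children: list["Node"] = field(default_factory=list)

    def walk(self, depth: int = 0):
        """Yield (depth, node) pairs in document order, depth-first."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

doc = Node("root", "document.pdf", [
    Node("heading", "Introduction", [Node("para", "Some text...")]),
    Node("heading", "Methods"),
])

for depth, node in doc.walk():
    print("  " * depth + f"{node.ctype}: {node.content[:30]}")
```

Each node is yielded before its children, so indenting by `depth` reproduces the document outline.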

📦 Installation

pip install easyparser

For specialized parsers, install with extras:

pip install easyparser[pdf,ocr,audio]  # Install with PDF, OCR, and audio support

External Dependencies

Some parsers require external tools to be installed on your system:

  • pandoc: Required for parsing markup languages (EPUB, HTML, RTF, RST, DOCX, etc.)
    # Ubuntu/Debian
    sudo apt-get install pandoc
    
    # macOS
    brew install pandoc
    
  • libreoffice: Required for converting legacy Office documents (DOC → DOCX, PPT → PPTX, PPTX → PDF, etc.)

    # Ubuntu/Debian
    sudo apt-get install libreoffice
    
    # macOS
    brew install --cask libreoffice
    

🔍 Supported File Types

  • Documents: PDF, DOCX, PPTX, XLSX, EPUB
  • Markup: HTML, Markdown
  • Data: CSV, JSON, YAML, TOML
  • Media: Images (JPEG, PNG), Audio (MP3, WAV), Video (MP4)
  • Code: Various programming languages
  • Directories: Process entire folders of mixed documents
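When processing a directory of mixed documents, each file must be routed to an appropriate parser. The sketch below illustrates extension-based dispatch in plain Python; the mapping and the `TableParser`/`PlainText` names are illustrative assumptions, not easyparser's actual registry:

```python
from pathlib import Path

# Hypothetical extension-to-parser mapping (illustrative only).
PARSER_BY_EXT = {
    ".pdf": "FastPDF",
    ".md": "Markdown",
    ".png": "RapidOCRImageText",
    ".csv": "TableParser",
}

def pick_parser(path: str, default: str = "PlainText") -> str:
    """Choose a parser name by file extension, case-insensitively."""
    return PARSER_BY_EXT.get(Path(path).suffix.lower(), default)

print(pick_parser("reports/q1.PDF"))   # FastPDF
print(pick_parser("notes/todo.txt"))   # PlainText
```

In practice a library-provided controller handles this dispatch for you; the point is only that mixed folders reduce to a per-file parser lookup.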

🧩 Components

Parsers

Parsers read different file formats and convert them into a unified chunk structure:

from easyparser.controller import get_controller
from easyparser.parser import FastPDF, Markdown, RapidOCRImageText

# Get a controller and parse a document
ctrl = get_controller()
chunk = ctrl.as_root_chunk("file.path")

# Parse a PDF with a faster parser
pdf_chunks = FastPDF.run(chunk)

# Parse Markdown into a structured tree
md_chunks = Markdown.run(chunk)

# Extract text from images using OCR
img_chunks = RapidOCRImageText.run(chunk)

Chunking Strategies

Split content into meaningful chunks with different strategies:

from easyparser.split import ChunkByCharacters, FlattenToMarkdown, LumberChunker

# Split by character count or word count
chunks = ChunkByCharacters.run(doc, chunk_size=1000)

# Flatten hierarchical content while preserving structure
chunks = FlattenToMarkdown.run(doc, max_size=500)

# Use an LLM for semantic chunking (requires LLM setup)
chunks = LumberChunker.run(doc, chunk_size=800)
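To make the simplest strategy concrete, here is a minimal character-count chunker that splits on word boundaries. This is a standalone sketch of what a strategy like ChunkByCharacters does conceptually, not easyparser's algorithm:

```python
# Greedily pack whole words into chunks of at most `chunk_size` characters.
def chunk_by_characters(text: str, chunk_size: int = 1000) -> list[str]:
    words = text.split()
    chunks, current = [], ""
    for word in words:
        candidate = f"{current} {word}".strip()
        if current and len(candidate) > chunk_size:
            # Adding this word would overflow the chunk; start a new one.
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

parts = chunk_by_characters("the quick brown fox jumps over the lazy dog", chunk_size=15)
print(parts)  # ['the quick brown', 'fox jumps over', 'the lazy dog']
```

Splitting on word boundaries rather than raw character offsets keeps chunks readable and avoids cutting tokens in half.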

Processing Pipeline

Build custom processing pipelines:

from easyparser.controller import get_controller
from easyparser.parser.pdf import FastPDF
from easyparser.split import MarkdownSplitByHeading, Propositionizer

# Get a controller and parse a document
ctrl = get_controller()
chunk = ctrl.as_root_chunk("document.pdf")
FastPDF.run(chunk)

# Process the parsed document
sections = MarkdownSplitByHeading.run(chunk, min_chunk_size=200)
propositions = Propositionizer.run(sections)
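The heading-based split used above can be sketched in a few lines of plain Python. This is analogous in spirit to MarkdownSplitByHeading (a standalone illustration, not easyparser's code, and it ignores refinements such as minimum chunk size):

```python
# Start a new section whenever a markdown heading line ("#", "##", ...) appears.
def split_by_heading(markdown: str) -> list[str]:
    sections, current = [], []
    for line in markdown.splitlines():
        if line.startswith("#") and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))
    return sections

doc_md = "# Intro\ntext one\n## Details\ntext two"
for section in split_by_heading(doc_md):
    print(section)
    print("---")
```

Because each section keeps its heading line, downstream steps (such as propositionizing) retain the structural context of every chunk.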

🔧 Advanced Usage

Custom Parsers

Create your own parsers for specialized formats:

from easyparser.base import BaseOperation, Chunk, ChunkGroup, CType

class MyCustomParser(BaseOperation):
    @classmethod
    @classmethod
    def run(cls, chunks: Chunk | ChunkGroup, **kwargs) -> ChunkGroup:
        # Custom parsing logic here: populate `processed_chunks`
        # with the chunks your parser produces.
        return processed_chunks

LLM Integration

Add LLM support for semantic splitting and processing:

By default, easyparser uses the llm CLI tool (see its repository) under the alias easyparser-llm to interact with language models. Set up your desired LLM provider according to that provider's docs, then point the easyparser-llm alias at the chosen model. For example, using a Gemini model (as of April 2025):

# Install the LLM gemini
$ llm install llm-gemini
# Set the Gemini API key
$ llm keys set gemini
# Alias LLM to 'easyparser-llm' (you can see other model ids by running `llm models`)
$ llm aliases set easyparser-llm gemini-2.5-flash-preview-04-17
# Check the LLM is working correctly
$ llm -m easyparser-llm "Explain quantum mechanics in 100 words"

Once LLM is set up, you can use LLM-based chunkers:

from easyparser.split import LumberChunker, AgenticChunker, Propositionizer

chunks = LumberChunker.run(doc)  # Semantically split content
chunks = Propositionizer.run(doc)  # Convert to atomic propositions
chunks = AgenticChunker.run(doc)  # Group chunks by topic

📊 Example Applications

  • Create knowledge bases from document collections
  • Build RAG (Retrieval-Augmented Generation) systems
  • Extract structured data from unstructured documents
  • Generate document summaries with preserved structure
  • Create question-answering systems over documents

🤝 Contributing

Contributions are welcome! Make sure you have git and git-lfs installed: git is used for version control, and git-lfs for the test data.

# Clone the repository
git clone git@github.com:easyparser/easyparser.git
cd easyparser

# Fetch the test data
git submodule update --init --recursive

# Install development dependencies
pip install -e ".[dev]"

# Initialize pre-commit hooks
pre-commit install

📄 License

Apache 2.0 License. See the LICENSE file for details.

Download files

Source Distribution

easyparser-0.0.2.tar.gz (89.6 kB)

Uploaded Source

Built Distribution

easyparser-0.0.2-py3-none-any.whl (102.9 kB)

Uploaded Python 3

File details

Details for the file easyparser-0.0.2.tar.gz.

File metadata

  • Download URL: easyparser-0.0.2.tar.gz
  • Upload date:
  • Size: 89.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.10

File hashes

Hashes for easyparser-0.0.2.tar.gz:

  • SHA256: ebdbed97f82a12632bd72c1a0e2d8866d255fe754065564cccfd9fee08551102
  • MD5: 7a0ca9cdd2392be11ee182f9a4a6c92e
  • BLAKE2b-256: 070bde7611b76ddf4db4a78c211e80a558820ac3a798d4241b398dc800df98ea


File details

Details for the file easyparser-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: easyparser-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 102.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.10

File hashes

Hashes for easyparser-0.0.2-py3-none-any.whl:

  • SHA256: 2539d3e8115efb53f38e9046a3a4b75408e7c39408137315d8e87fc3eed9fcb5
  • MD5: 92b2a1ffd436bb68f5c8de10eef68ac2
  • BLAKE2b-256: 80c401cc10d780f7409b680848d44c025a08c446122bfbd1d776690197723542

