NLWeb Data Loading

Data loading tools for NLWeb - load schema.org JSON files and RSS feeds into vector databases with automatic embedding generation.

Overview

nlweb-dataload provides a simple interface for loading structured data into vector databases. It:

  • Loads schema.org JSON files or RSS/Atom feeds
  • Automatically computes embeddings for all documents
  • Uploads to vector databases in batches
  • Supports deletion by site

Installation

# Install from PyPI (when published)
pip install nlweb-dataload

# Or install from source
pip install -e packages/dataload

Quick Start

import asyncio
import nlweb_core
from nlweb_dataload import load_to_db, delete_site

# Initialize NLWeb with config
nlweb_core.init(config_path="config.yaml")

# Load schema.org JSON file
async def main():
    result = await load_to_db(
        file_path="recipes.json",
        site="seriouseats"
    )
    print(f"Loaded {result['total_loaded']} documents")

asyncio.run(main())

Configuration

Add writer configuration to your config.yaml:

# config.yaml
retrieval_endpoints:
  azure_search_prod:
    db_type: azure_ai_search
    api_endpoint: https://your-search.search.windows.net
    api_key_env: AZURE_SEARCH_KEY
    index_name: embeddings1536
    auth_method: api_key  # or azure_ad for managed identity

    # Add writer configuration
    writer:
      enabled: true
      import_path: nlweb_azure_vectordb.azure_search_writer
      class_name: AzureSearchWriter

# Set as write endpoint
write_endpoint: azure_search_prod

Usage

Load JSON File

Load a schema.org JSON file:

from nlweb_dataload import load_to_db

# Single schema.org object or array of objects
result = await load_to_db(
    file_path="data/recipes.json",
    site="seriouseats"
)

Example JSON file:

[
  {
    "@context": "http://schema.org",
    "@type": "Recipe",
    "url": "https://www.seriouseats.com/best-pasta-recipe",
    "name": "Best Pasta Ever",
    "description": "The best pasta recipe you'll ever make",
    "author": {"@type": "Person", "name": "Chef Mario"}
  }
]

Load RSS Feed

Load an RSS or Atom feed (entries are automatically converted to schema.org Article objects):

from nlweb_dataload import load_to_db

# Load from URL
result = await load_to_db(
    file_path="https://example.com/feed.xml",
    site="example",
    file_type="rss"  # Optional, auto-detected
)

# Load from local file
result = await load_to_db(
    file_path="feeds/blog.xml",
    site="myblog",
    file_type="rss"
)

Delete Site Data

Remove all documents for a site:

from nlweb_dataload import delete_site

result = await delete_site(site="old-site.com")
print(f"Deleted {result['deleted_count']} documents")

Batch Upload

Control batch size for large datasets:

result = await load_to_db(
    file_path="large_dataset.json",
    site="example",
    batch_size=50  # Upload 50 documents at a time (default: 100)
)
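The batching behavior above amounts to chunking the document list before upload. A minimal sketch of that chunking (the `batches` helper is hypothetical, not part of the nlweb_dataload API):

```python
from typing import Iterator, List


def batches(documents: List[dict], batch_size: int = 100) -> Iterator[List[dict]]:
    """Yield successive batches of at most batch_size documents."""
    for start in range(0, len(documents), batch_size):
        yield documents[start:start + batch_size]


# 230 documents at batch_size=100 -> batches of 100, 100, 30
docs = [{"url": f"https://example.com/{i}"} for i in range(230)]
print([len(b) for b in batches(docs, batch_size=100)])  # → [100, 100, 30]
```

Smaller batch sizes trade throughput for lower per-request payloads, which can help with rate limits or request-size caps.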

Specify Endpoint

Use a specific endpoint instead of the default write_endpoint:

result = await load_to_db(
    file_path="data.json",
    site="example",
    endpoint_name="azure_search_staging"  # Override default
)

Data Format

Schema.org JSON

Documents must include these fields:

  • url (required): Unique document URL
  • name or headline (required): Document name/title
  • description (optional): Used for embedding if present

Any valid schema.org type is supported (Recipe, Article, Product, Event, etc.).
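The field rules above can be checked before loading. A sketch of such a pre-flight check (the `validate_document` helper is hypothetical, not part of the nlweb_dataload API):

```python
def validate_document(doc: dict) -> list:
    """Return a list of problems; an empty list means the document
    satisfies the required-field rules above."""
    problems = []
    if not doc.get("url"):
        problems.append("missing required field: url")
    if not (doc.get("name") or doc.get("headline")):
        problems.append("missing required field: name or headline")
    return problems


recipe = {
    "@context": "http://schema.org",
    "@type": "Recipe",
    "url": "https://www.seriouseats.com/best-pasta-recipe",
    "name": "Best Pasta Ever",
}
print(validate_document(recipe))            # → []
print(validate_document({"@type": "Article"}))
```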

RSS/Atom Feeds

RSS/Atom feeds are automatically converted to schema.org Article format with:

  • url: Entry link
  • name/headline: Entry title
  • description: Entry summary/content
  • datePublished: Publication date
  • author: Entry author
  • publisher: Feed title/link
  • keywords: Entry tags/categories
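The field mapping above can be sketched as a plain function over a parsed feed entry, using the attribute names feedparser exposes. This is illustrative only; the package's own converter may differ in detail:

```python
def entry_to_article(entry: dict, feed: dict) -> dict:
    """Map a parsed RSS/Atom entry (feedparser-style keys) to a
    schema.org Article, following the field mapping above."""
    return {
        "@context": "http://schema.org",
        "@type": "Article",
        "url": entry.get("link"),
        "name": entry.get("title"),
        "headline": entry.get("title"),
        "description": entry.get("summary"),
        "datePublished": entry.get("published"),
        "author": entry.get("author"),
        "publisher": feed.get("title"),
        "keywords": [t["term"] for t in entry.get("tags", [])],
    }


article = entry_to_article(
    {"link": "https://example.com/post", "title": "A Post",
     "summary": "Short summary", "published": "2025-01-01",
     "author": "Jane", "tags": [{"term": "python"}]},
    {"title": "Example Blog"},
)
print(article["@type"], article["keywords"])  # → Article ['python']
```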

Architecture

Write Interface Separation

NLWeb maintains a clean separation between read and write operations:

  • nlweb_core.retriever: Read-only search interface
  • nlweb_dataload.writer: Write interface (upload/delete)

This prevents accidental writes during queries and allows different access patterns.

Writer Interface

Each vector database provider implements VectorDBWriterInterface:

from nlweb_dataload.writer import VectorDBWriterInterface

class MyDatabaseWriter(VectorDBWriterInterface):
    async def upload_documents(self, documents, **kwargs):
        # Upload documents to database
        pass

    async def delete_documents(self, filter_criteria, **kwargs):
        # Delete documents matching criteria
        pass

    async def delete_site(self, site, **kwargs):
        # Delete all documents for site
        pass

Supported Vector Databases

Azure AI Search

Built-in support via nlweb-azure-vectordb:

pip install nlweb-azure-vectordb

Configuration:

retrieval_endpoints:
  azure_search:
    db_type: azure_ai_search
    writer:
      import_path: nlweb_azure_vectordb.azure_search_writer
      class_name: AzureSearchWriter

Other Databases

Create a writer class for your database:

  1. Implement VectorDBWriterInterface
  2. Add to config with import_path and class_name
  3. Install provider package

See nlweb_azure_vectordb.azure_search_writer for reference implementation.
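The import_path/class_name pair in the config is the usual dynamic-plugin pattern: import the module, then look up the class by name. A sketch of that resolution, demonstrated on a stdlib class since provider packages may not be installed (the actual loading code in nlweb_dataload may differ):

```python
import importlib


def load_writer_class(import_path: str, class_name: str):
    """Resolve a writer class from a config's import_path and
    class_name fields."""
    module = importlib.import_module(import_path)
    return getattr(module, class_name)


# Stand-in demonstration using the standard library:
cls = load_writer_class("collections", "OrderedDict")
print(cls.__name__)  # → OrderedDict
```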

Command Line Usage

# Load JSON file
python -m nlweb_dataload.db_load \
  --file data/recipes.json \
  --site seriouseats \
  --config config.yaml

# Load RSS feed
python -m nlweb_dataload.db_load \
  --file https://example.com/feed.xml \
  --site example \
  --type rss \
  --config config.yaml

# Delete site
python -m nlweb_dataload.db_load \
  --delete-site old-site.com \
  --config config.yaml

Dependencies

  • nlweb-core>=0.5.0 - Core NLWeb functionality
  • feedparser>=6.0.0 - RSS/Atom feed parsing
  • aiohttp>=3.8.0 - Async HTTP for URL loading

Development

# Install in editable mode with dev dependencies
pip install -e "packages/dataload[dev]"

# Run tests
pytest packages/dataload/tests

License

MIT License - Copyright (c) 2025 Microsoft Corporation

Project details

Download files

Source Distribution

  • nlweb_dataload-0.5.5.tar.gz (16.5 kB)

Built Distribution

  • nlweb_dataload-0.5.5-py3-none-any.whl (17.6 kB)

File details: nlweb_dataload-0.5.5.tar.gz

  • Size: 16.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7
  • SHA256: dba746aa95ea93f79a31ebf8dd8f60937b7b8ad8feee5f9a70f0cf9acbc6ac6a
  • MD5: 3d468f42131d5f46da15a8e29703ca3f
  • BLAKE2b-256: 736f6c4f195c99b164f9e1518000de97efb96d5e3ecdafa5665d0cbb10a32c8a

File details: nlweb_dataload-0.5.5-py3-none-any.whl

  • Size: 17.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7
  • SHA256: 8b836b133649ee030015900b8711efaff6ce2dae0d5d4e8dd6ab10f8f176ffe8
  • MD5: 1f9112fe3fc1b8262a28c759504dea12
  • BLAKE2b-256: 73e3d35954c1a778b5635c8a789562950aad656268e3dda3a9c3eca2af2b1951
