

Search RDF

A Rust library with a restricted Python interface for building and querying search indices, primarily intended for use with RDF query engines.

Getting Started

Installation

Python

Install the Python package using maturin (requires a Rust toolchain and Python >= 3.12):

pip install maturin
maturin develop --release

This builds the Rust extension and installs the search_rdf package into your current environment.

Rust (CLI)

Build and install using Cargo:

cargo install --path .

Alternatively, build without installing:

cargo build --release

The binary will be available at target/release/search-rdf.

Note that some CLI features (e.g. embedding generation) require the search_rdf Python package to be installed. If the search-rdf binary cannot find the shared library from the Python package, set LD_LIBRARY_PATH accordingly. For example, in a conda environment:

export LD_LIBRARY_PATH="$CONDA_PREFIX/lib:$LD_LIBRARY_PATH"

CLI Overview

The search-rdf CLI provides commands to build and serve search indices. All commands require a YAML configuration file.

search-rdf [OPTIONS] [CONFIG] [COMMAND]

Commands:
  data    Download and prepare data
  embed   Generate embeddings for data
  index   Build search indices
  serve   Serve indices via HTTP

Options:
      --force    Force rebuild even if output exists
  -v, --verbose  Enable verbose/debug logging
  -q, --quiet    Suppress info messages (errors and warnings only)
  -h, --help     Print help
  -V, --version  Print version

Running All Steps

To run the complete pipeline (data → embed → index → serve):

search-rdf config.yaml

Running Individual Steps

# Step 1: Download/prepare data
search-rdf data config.yaml

# Step 2: Generate embeddings
search-rdf embed config.yaml

# Step 3: Build indices
search-rdf index config.yaml

# Step 4: Start HTTP server
search-rdf serve config.yaml

Use --force to rebuild outputs even if they already exist:

search-rdf index config.yaml --force

Configuration File Format

The configuration file is written in YAML and has five main sections: datasets, models, embeddings, indices, and server.

Datasets

Defines data sources to be indexed. Each dataset produces a data directory used by indices. A dataset has exactly one source; the three options below are alternatives, shown together for reference.

datasets:
  - name: my-dataset           # Unique identifier
    output: data/              # Output directory for processed data
    source:
      # Option 1: SPARQL query against an endpoint
      type: sparql-query
      endpoint: https://query.wikidata.org/sparql
      query: |
        SELECT ?item ?label WHERE {
          ?item rdfs:label ?label .
        }
        LIMIT 1000
      format: json             # json, xml, or tsv
      default_field_type: text # text, image, or image-inline
      headers:                 # Optional HTTP headers
        User-Agent: MyApp/1.0

      # Option 2: Local SPARQL results file
      type: sparql
      path: results.json
      format: json
      default_field_type: text

      # Option 3: JSONL file
      type: jsonl
      path: data.jsonl

SPARQL queries must return exactly two columns: an identifier (first column) and a field value (second column). Multiple rows with the same identifier add multiple fields to that item.
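As an illustration of this mapping (a hypothetical sketch with made-up rows, not the library's actual ingestion code), grouping two-column results by identifier looks like:

```python
from collections import defaultdict

# Two-column SPARQL result rows: (identifier, field value).
rows = [
    ("http://www.wikidata.org/entity/Q42", "Douglas Adams"),
    ("http://www.wikidata.org/entity/Q42", "English writer and humorist"),
    ("http://www.wikidata.org/entity/Q937", "Albert Einstein"),
]

# Rows sharing an identifier become multiple fields of one item.
items = defaultdict(list)
for identifier, field in rows:
    items[identifier].append(field)
```

Here Q42 ends up as one item with two fields, while Q937 has a single field.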

Models

Defines embedding models used to generate vector representations.

models:
  # vLLM server (recommended for large-scale embedding)
  - name: my-vllm-model
    type: vllm
    endpoint: http://localhost:8000
    model_name: mixedbread-ai/mxbai-embed-large-v1

  # Sentence Transformers (local inference)
  - name: my-local-model
    type: sentence-transformer
    model_name: sentence-transformers/all-MiniLM-L6-v2
    device: cuda                # cpu, cuda, or mps (default: cpu)
    batch_size: 16              # Inference batch size (default: 16)

  # HuggingFace image models
  - name: my-image-model
    type: huggingface-image
    model_name: openai/clip-vit-base-patch32
    device: cuda
    batch_size: 16

  # OpenCLIP multimodal models (text + image in shared space)
  - name: my-clip-model
    type: open-clip
    model: hf-hub:timm/ViT-B-16-SigLIP2
    device: cuda
    batch_size: 32

Optional embedding parameters can be added to any model:

models:
  - name: my-model
    type: vllm
    endpoint: http://localhost:8000
    model_name: mixedbread-ai/mxbai-embed-large-v1
    params:
      num_dimensions: 512      # Truncate embeddings (for MRL models)
      normalize: true          # L2 normalize embeddings (default: true)
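Conceptually (a plain-Python sketch of the math, not the library's implementation), truncating an MRL embedding to num_dimensions and then L2-normalizing it amounts to:

```python
import math

def truncate_and_normalize(embedding, num_dimensions, normalize=True):
    """Keep the first num_dimensions values, then optionally L2-normalize."""
    v = embedding[:num_dimensions]
    if normalize:
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 0:
            v = [x / norm for x in v]
    return v

# Truncating [3, 4, 1, 2] to 2 dimensions and normalizing yields a unit vector.
vec = truncate_and_normalize([3.0, 4.0, 1.0, 2.0], num_dimensions=2)
```

Normalizing after truncation matters: the truncated prefix of a unit vector is generally no longer unit length.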

Embeddings

Defines embedding generation jobs that use models to embed dataset fields.

embeddings:
  - name: my-embeddings
    model: my-vllm-model       # Reference to model name
    data: data/                # Input data directory
    output: data/embeddings.safetensors
    batch_size: 64             # Processing batch size (default: 64)

Indices

Defines search indices to build from data and embeddings.

indices:
  # Keyword index (exact token matching with BM25 scoring)
  - name: keyword-index
    type: keyword
    data: data/
    output: index/keyword/

  # Full-text index (Tantivy-based with stemming/tokenization)
  - name: fulltext-index
    type: full-text
    data: data/
    output: index/fulltext/

  # Embedding index with data (semantic search)
  - name: embedding-index
    type: embedding-with-data
    data: data/
    embedding_data: data/embeddings.safetensors
    output: index/embedding/
    model: my-vllm-model       # For query embedding at search time

  # Embedding-only index (no associated text data)
  - name: embedding-only
    type: embedding
    embedding_data: data/embeddings.safetensors
    output: index/embedding-only/

Embedding index parameters:

indices:
  - name: embedding-index
    type: embedding-with-data
    data: data/
    embedding_data: data/embeddings.safetensors
    output: index/embedding/
    model: my-model
    params:
      metric: cosine-normalized  # cosine-normalized, cosine, inner-product, l2, hamming
      precision: bfloat16        # float32, float16, bfloat16, int8, binary
      connectivity: 16           # HNSW M parameter (default: 16)
      expansion_add: 128         # HNSW efConstruction (default: 128)
      expansion_search: 64       # HNSW ef (default: 64)
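To illustrate the metric choice (a sketch of the math, not the index's internals): with cosine-normalized, vectors are assumed to already be unit length, so similarity reduces to a plain inner product, while cosine divides by both norms explicitly:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Full cosine similarity: inner product divided by both norms.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [3.0, 4.0], [6.0, 8.0]
a_unit = [x / 5.0 for x in a]   # pre-normalized (norm of a is 5)
b_unit = [x / 10.0 for x in b]  # pre-normalized (norm of b is 10)

# On pre-normalized vectors, the inner product equals the cosine similarity.
assert math.isclose(cosine(a, b), dot(a_unit, b_unit))
```

This is why cosine-normalized pairs naturally with normalize: true in the model params: it skips the per-comparison norm computation.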

Server

Configures the HTTP server for serving indices.

server:
  host: 0.0.0.0                 # Bind address (default: 127.0.0.1)
  port: 8080                    # Port (default: 8080)
  cors: true                    # Enable CORS (default: false)
  max_input_size: 100MB         # Max request body size (default: 100MB)
  indices:                      # Indices to serve
    - keyword-index
    - embedding-index
  sparql:                       # Optional: Enable SPARQL service endpoints
    prefix: "http://example.org/"

HTTP API

When the server is running, the following endpoints are available:

Health Check

GET /health

Returns 200 OK if the server is running.

List Indices

GET /indices

Returns a list of available index names.

Search

POST /search/{index_name}
Content-Type: application/json

The request body contains a queries array and search parameters.

Value queries (text, image URL, or base64 image):

{
  "queries": [{"type": "value", "value": "search query"}],
  "k": 10
}

An optional modality field controls how the value is interpreted:

  • "text" — embed as text (default for text-only models)
  • "image" — load as image from URL and embed with vision encoder
  • "image-base64" — decode base64 image data and embed with vision encoder
  • "iri" — treat as an identifier for neighbor search

When modality is omitted, it is inferred from the model and value content:

  • Text-only models (vLLM, sentence-transformer): always text
  • Image-only models (huggingface-image): image URL or base64
  • Multimodal models (open-clip): image if value looks like a URL, otherwise text

Example with an explicit modality:

{
  "queries": [{"type": "value", "value": "https://example.com/image.jpg", "modality": "image"}],
  "k": 10
}

Identifier queries (neighbor search by known IRI):

{
  "queries": [{"type": "identifier", "value": "http://www.wikidata.org/entity/Q42"}],
  "k": 10
}

Pre-computed embedding queries:

{
  "queries": [{"type": "embedding", "value": [0.1, 0.2, 0.3]}],
  "k": 10
}

Search parameters vary by index type:

Keyword/Full-text indices:

  • k - Number of results (default: 10)

Embedding indices:

  • k - Number of results (default: 10)
  • min-score - Minimum similarity score filter
  • exact - Use exact search instead of approximate (default: false)
  • rerank - Reranking factor (retrieves k*rerank candidates, then reranks)

Response format (matches contains one result list per query, in request order):

{
  "matches": [
    [
      {"id": 42, "score": 0.95},
      {"id": 17, "score": 0.87}
    ]
  ]
}
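A minimal Python client for this endpoint might look as follows (a sketch using only the standard library; the base URL and index name are assumptions for illustration):

```python
import json
from urllib import request

def search(index_name, queries, k=10, base_url="http://localhost:8080"):
    """POST a search request and return one result list per query."""
    body = json.dumps({"queries": queries, "k": k}).encode()
    req = request.Request(
        f"{base_url}/search/{index_name}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["matches"]

# Parsing the documented response shape into (id, score) pairs:
response = {"matches": [[{"id": 42, "score": 0.95}, {"id": 17, "score": 0.87}]]}
top = [(m["id"], m["score"]) for m in response["matches"][0]]
```

With a running server this would be called as, e.g., search("keyword-index", [{"type": "value", "value": "search query"}], k=10).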

SPARQL Service (optional)

When sparql is configured in the server section:

POST /sparql/{index_name}
POST /sparql/qlproxy/{index_name}

These endpoints enable integration with SPARQL engines that support federated queries.

Example Configuration

Here's a complete example that sets up keyword and semantic search over Wikidata human labels:

datasets:
  - name: wikidata-humans
    output: data/
    source:
      type: sparql-query
      endpoint: https://query.wikidata.org/sparql
      query: |
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX wd: <http://www.wikidata.org/entity/>
        PREFIX wdt: <http://www.wikidata.org/prop/direct/>
        SELECT ?item ?label WHERE {
          ?item wdt:P31 wd:Q5 .
          ?item rdfs:label ?label .
          FILTER(LANG(?label) = "en")
        }
        LIMIT 10000
      format: json
      default_field_type: text

models:
  - name: text-embedding
    type: vllm
    endpoint: http://localhost:8000
    model_name: mixedbread-ai/mxbai-embed-xsmall-v1

embeddings:
  - name: wikidata-embeddings
    model: text-embedding
    data: data/
    output: data/embeddings.safetensors
    batch_size: 128

indices:
  - name: keyword
    type: keyword
    data: data/
    output: index/keyword/

  - name: semantic
    type: embedding-with-data
    data: data/
    embedding_data: data/embeddings.safetensors
    output: index/semantic/
    model: text-embedding
    params:
      metric: cosine-normalized
      precision: bfloat16

server:
  host: 0.0.0.0
  port: 8080
  cors: true
  indices:
    - keyword
    - semantic

Run with:

# Build everything and start serving
search-rdf config.yaml

# Or run steps individually
search-rdf data config.yaml
search-rdf embed config.yaml
search-rdf index config.yaml
search-rdf serve config.yaml

Test with curl:

# Keyword search
curl -X POST http://localhost:8080/search/keyword \
  -H "Content-Type: application/json" \
  -d '{"queries": [{"type": "value", "value": "Albert Einstein"}], "k": 5}'

# Semantic search
curl -X POST http://localhost:8080/search/semantic \
  -H "Content-Type: application/json" \
  -d '{"queries": [{"type": "value", "value": "famous physicist"}], "k": 5}'
