Python SDK for semantic search with on-device AI capabilities

Moss client library for Python

inferedge-moss enables private, on-device semantic search in your Python applications with cloud storage capabilities.

Built for developers who want instant, memory-efficient, privacy-first AI features with seamless cloud integration.

✨ Features

  • On-Device Vector Search - Sub-millisecond retrieval with zero network latency
  • 🔍 Semantic, Keyword & Hybrid Search - Embedding search blended with keyword matching
  • ☁️ Cloud Storage Integration - Automatic index synchronization with cloud storage
  • 📦 Multi-Index Support - Manage multiple isolated search spaces
  • 🛡️ Privacy-First by Design - Computation happens locally, only indexes sync to cloud
  • 🚀 High-Performance Rust Core - Built on optimized Rust bindings for maximum speed
  • 🧠 Custom Embedding Overrides - Provide your own document and query vectors when you need full control

📦 Installation

pip install inferedge-moss

🚀 Quick Start

import asyncio
from inferedge_moss import MossClient, DocumentInfo, QueryOptions

async def main():
    # Initialize search client with project credentials
    client = MossClient("your-project-id", "your-project-key")

    # Prepare documents to index
    documents = [
        DocumentInfo(
            id="doc1",
            text="How do I track my order? You can track your order by logging into your account.",
            metadata={"category": "shipping"}
        ),
        DocumentInfo(
            id="doc2", 
            text="What is your return policy? We offer a 30-day return policy for most items.",
            metadata={"category": "returns"}
        ),
        DocumentInfo(
            id="doc3",
            text="How can I change my shipping address? Contact our customer service team.",
            metadata={"category": "support"}
        )
    ]

    # Create an index with documents (syncs to cloud)
    index_name = "faqs"
    await client.create_index(index_name, documents)  # Defaults to moss-minilm
    print("Index created and synced to cloud!")

    # Load the index (from cloud or local cache)
    await client.load_index(index_name)

    # Search the index
    result = await client.query(
        index_name,
        "How do I return a damaged product?",
        QueryOptions(top_k=3, alpha=0.6),
    )

    # Display results
    print(f"Query: {result.query}")
    for doc in result.docs:
        print(f"Score: {doc.score:.4f}")
        print(f"ID: {doc.id}")
        print(f"Text: {doc.text}")
        print("---")

asyncio.run(main())

🔥 Example Use Cases

  • Smart knowledge base search with cloud backup
  • Real-time voice AI agents with persistent indexes
  • Personal note-taking search with sync across devices
  • Private in-app AI features with cloud storage
  • Local semantic search in edge devices with cloud fallback

Available Models

  • moss-minilm: Lightweight model optimized for speed and efficiency
  • moss-mediumlm: Balanced model offering higher accuracy with reasonable performance

🔧 Getting Started

Prerequisites

  • Python 3.8 or higher
  • Valid InferEdge project credentials

Environment Setup

  1. Install the package:

pip install inferedge-moss

  2. Get your credentials:

Sign up at InferEdge Platform to get your project_id and project_key.

  3. Set up environment variables (optional):

export MOSS_PROJECT_ID="your-project-id"
export MOSS_PROJECT_KEY="your-project-key"
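If you set the environment variables, you can read them back with os.environ when constructing the client. A minimal sketch (the resolve_credentials helper is hypothetical, not part of the SDK):

```python
import os


def resolve_credentials(project_id=None, project_key=None):
    """Hypothetical helper: prefer explicit arguments, otherwise fall back
    to the MOSS_PROJECT_ID / MOSS_PROJECT_KEY environment variables."""
    pid = project_id or os.environ.get("MOSS_PROJECT_ID")
    key = project_key or os.environ.get("MOSS_PROJECT_KEY")
    if not pid or not key:
        raise ValueError(
            "Set MOSS_PROJECT_ID and MOSS_PROJECT_KEY, or pass credentials explicitly"
        )
    return pid, key


# Usage: client = MossClient(*resolve_credentials())
```

This keeps credentials out of source code while still allowing explicit overrides in tests or scripts.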

Basic Usage

import asyncio
from inferedge_moss import MossClient, DocumentInfo, QueryOptions

async def main():
    # Initialize client
    client = MossClient("your-project-id", "your-project-key")
    
    # Create and populate an index
    documents = [
        DocumentInfo(id="1", text="Python is a programming language"),
        DocumentInfo(id="2", text="Machine learning with Python is popular"),
    ]
    
    await client.create_index("my-docs", documents)
    await client.load_index("my-docs")
    
    # Search
    results = await client.query(
        "my-docs",
        "programming language",
        QueryOptions(alpha=1.0),
    )
    for doc in results.docs:
        print(f"{doc.id}: {doc.text} (score: {doc.score:.3f})")

asyncio.run(main())

Hybrid Search Controls

alpha lets you decide how much weight to give semantic similarity versus keyword relevance when running query():

# Pure keyword search
await client.query("my-docs", "programming language", QueryOptions(alpha=0.0))

# Mixed results (default 0.8 => semantic heavy)
await client.query("my-docs", "programming language")

# Pure embedding search
await client.query("my-docs", "programming language", QueryOptions(alpha=1.0))

Pick any value between 0.0 and 1.0 to tune the blend for your use case.
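Conceptually, alpha behaves like a linear blend of the two relevance signals. The toy function below illustrates that weighting; the SDK's actual scoring internals are not documented here, so treat this as an illustration only:

```python
def blended_score(semantic_score: float, keyword_score: float, alpha: float) -> float:
    """Illustration only: weight semantic vs. keyword relevance by alpha."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be between 0.0 and 1.0")
    # alpha=1.0 -> pure embedding score; alpha=0.0 -> pure keyword score
    return alpha * semantic_score + (1.0 - alpha) * keyword_score
```

A document that scores well on only one signal is progressively discounted as alpha moves toward the other extreme.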

Metadata filtering

You can pass a metadata filter directly to query() after loading an index locally:

results = await client.query(
    "my-docs",
    "running shoes",
    QueryOptions(top_k=5, alpha=0.6),
    filter={
        "$and": [
            {"field": "category", "condition": {"$eq": "shoes"}},
            {"field": "price", "condition": {"$lt": "100"}},
        ]
    },
)

For a complete runnable example, see python/user-facing-sdk/samples/metadata_filtering.py.
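To make the $and / $eq / $lt operators concrete, here is a hypothetical local evaluator that mirrors the semantics of the filter shape shown above (the SDK evaluates filters internally; this sketch is not its implementation):

```python
def matches(metadata: dict, flt: dict) -> bool:
    """Hypothetical evaluator for the filter shape used above."""
    if "$and" in flt:
        # All clauses must hold for the document to pass.
        return all(matches(metadata, clause) for clause in flt["$and"])
    field, cond = flt["field"], flt["condition"]
    value = metadata.get(field)
    if "$eq" in cond:
        return value == cond["$eq"]
    if "$lt" in cond:
        return value is not None and value < cond["$lt"]
    raise ValueError(f"Unsupported condition: {cond}")
```

Only documents whose metadata satisfies every clause are scored and returned.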

🧠 Providing custom embeddings

Already using your own embedding model? Supply vectors directly when managing indexes and queries:

import asyncio
from typing import List

from inferedge_moss import DocumentInfo, MossClient, QueryOptions


def my_embedding_model(text: str) -> List[float]:
    """Placeholder for your custom embedding generator."""
    ...


async def main() -> None:
    client = MossClient("your-project-id", "your-project-key")

    documents = [
        DocumentInfo(
            id="doc-1",
            text="Attach a caller-provided embedding.",
            embedding=my_embedding_model("Attach a caller-provided embedding."),
        ),
        DocumentInfo(
            id="doc-2",
            # No embedding supplied: the built-in model embeds this text.
            text="Fallback to the built-in model when the field is omitted.",
        ),
    ]

    await client.create_index("custom-embeddings", documents)  # Defaults to moss-minilm
    await client.load_index("custom-embeddings")

    results = await client.query(
        "custom-embeddings",
        "<query text>",
        QueryOptions(embedding=my_embedding_model("<query text>"), top_k=10),
    )

    print(results.docs[0].id, results.docs[0].score)


asyncio.run(main())

Leaving the model argument undefined defaults to moss-minilm. Pass QueryOptions to reuse your own embeddings or to override top_k on a per-query basis.

📄 License

This package is licensed under the PolyForm Shield License 1.0.0.

  • ✅ Free for testing, evaluation, internal use, and modifications.
  • ❌ Not permitted for production or competing commercial use.
  • 📩 For commercial licenses, contact: contact@usemoss.dev

📬 Contact

For support, commercial licensing, or partnership inquiries, contact us: contact@usemoss.dev
