Python SDK for semantic search with on-device AI capabilities
Moss client library for Python
inferedge-moss enables private, on-device semantic search in your Python applications, with automatic index synchronization to cloud storage.
Built for developers who want instant, memory-efficient, privacy-first AI features.
✨ Features
- ⚡ On-Device Vector Search - Sub-millisecond retrieval with zero network latency
- 🔍 Semantic, Keyword & Hybrid Search - Embedding-based search blended with keyword matching
- ☁️ Cloud Storage Integration - Automatic index synchronization with cloud storage
- 📦 Multi-Index Support - Manage multiple isolated search spaces
- 🛡️ Privacy-First by Design - Computation happens locally, only indexes sync to cloud
- 🚀 High-Performance Rust Core - Built on optimized Rust bindings for maximum speed
- 🧠 Custom Embedding Overrides - Provide your own document and query vectors when you need full control
📦 Installation
```shell
pip install inferedge-moss
```
🚀 Quick Start
```python
import asyncio

from inferedge_moss import MossClient, DocumentInfo, QueryOptions


async def main():
    # Initialize search client with project credentials
    client = MossClient("your-project-id", "your-project-key")

    # Prepare documents to index
    documents = [
        DocumentInfo(
            id="doc1",
            text="How do I track my order? You can track your order by logging into your account.",
            metadata={"category": "shipping"},
        ),
        DocumentInfo(
            id="doc2",
            text="What is your return policy? We offer a 30-day return policy for most items.",
            metadata={"category": "returns"},
        ),
        DocumentInfo(
            id="doc3",
            text="How can I change my shipping address? Contact our customer service team.",
            metadata={"category": "support"},
        ),
    ]

    # Create an index with documents (syncs to cloud)
    index_name = "faqs"
    await client.create_index(index_name, documents)  # Defaults to moss-minilm
    print("Index created and synced to cloud!")

    # Load the index (from cloud or local cache)
    await client.load_index(index_name)

    # Search the index
    result = await client.query(
        index_name,
        "How do I return a damaged product?",
        QueryOptions(top_k=3, alpha=0.6),
    )

    # Display results
    print(f"Query: {result.query}")
    for doc in result.docs:
        print(f"Score: {doc.score:.4f}")
        print(f"ID: {doc.id}")
        print(f"Text: {doc.text}")
        print("---")


asyncio.run(main())
```
🔥 Example Use Cases
- Smart knowledge base search with cloud backup
- Realtime Voice AI agents with persistent indexes
- Personal note-taking search with sync across devices
- Private in-app AI features with cloud storage
- Local semantic search in edge devices with cloud fallback
Available Models
- moss-minilm: Lightweight model optimized for speed and efficiency
- moss-mediumlm: Balanced model offering higher accuracy with reasonable performance
🔧 Getting Started
Prerequisites
- Python 3.8 or higher
- Valid InferEdge project credentials
Environment Setup
- Install the package:
```shell
pip install inferedge-moss
```
- Get your credentials:
Sign up at InferEdge Platform to get your project_id and project_key.
- Set up environment variables (optional):
```shell
export MOSS_PROJECT_ID="your-project-id"
export MOSS_PROJECT_KEY="your-project-key"
```
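With those variables exported, you can read credentials from the environment instead of hard-coding them. This sketch uses only the standard-library `os.environ` lookup; the placeholder fallbacks are illustrative:

```python
import os

# Read credentials from the environment, falling back to placeholders
# so local experiments still run without configuration.
project_id = os.environ.get("MOSS_PROJECT_ID", "your-project-id")
project_key = os.environ.get("MOSS_PROJECT_KEY", "your-project-key")

print(project_id, project_key)
```

You can then construct the client with `MossClient(project_id, project_key)`.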
Basic Usage
```python
import asyncio

from inferedge_moss import MossClient, DocumentInfo, QueryOptions


async def main():
    # Initialize client
    client = MossClient("your-project-id", "your-project-key")

    # Create and populate an index
    documents = [
        DocumentInfo(id="1", text="Python is a programming language"),
        DocumentInfo(id="2", text="Machine learning with Python is popular"),
    ]
    await client.create_index("my-docs", documents)
    await client.load_index("my-docs")

    # Search
    results = await client.query(
        "my-docs",
        "programming language",
        QueryOptions(alpha=1.0),
    )
    for doc in results.docs:
        print(f"{doc.id}: {doc.text} (score: {doc.score:.3f})")


asyncio.run(main())
```
Hybrid Search Controls
alpha lets you decide how much weight to give semantic similarity versus keyword relevance when running query():
```python
# Pure keyword search
await client.query("my-docs", "programming language", QueryOptions(alpha=0.0))

# Mixed results (default alpha=0.8 => semantic-heavy)
await client.query("my-docs", "programming language")

# Pure embedding search
await client.query("my-docs", "programming language", QueryOptions(alpha=1.0))
```
Pick any value between 0.0 and 1.0 to tune the blend for your use case.
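The package does not document its exact fusion formula, but an alpha parameter like this is conventionally a linear interpolation between the two relevance scores. A sketch of that interpretation (`blend_scores` is a hypothetical helper, not part of the SDK):

```python
def blend_scores(semantic: float, keyword: float, alpha: float) -> float:
    # alpha=1.0 keeps only the semantic score; alpha=0.0 only the keyword score.
    return alpha * semantic + (1.0 - alpha) * keyword


# A document that matches semantically but shares few keywords:
print(blend_scores(0.9, 0.2, 1.0))  # pure semantic -> 0.9
print(blend_scores(0.9, 0.2, 0.0))  # pure keyword  -> 0.2
print(blend_scores(0.9, 0.2, 0.8))  # default-style blend, favors semantic
```

Under this reading, raising alpha rewards documents that are close in meaning even when they share no exact terms with the query.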
🧠 Providing custom embeddings
Already using your own embedding model? Supply vectors directly when managing indexes and queries:
```python
import asyncio

from inferedge_moss import DocumentInfo, MossClient, QueryOptions


def my_embedding_model(text: str) -> list[float]:
    """Placeholder for your custom embedding generator."""
    ...


async def main() -> None:
    client = MossClient("your-project-id", "your-project-key")

    documents = [
        DocumentInfo(
            id="doc-1",
            text="Attach a caller-provided embedding.",
            embedding=my_embedding_model("Attach a caller-provided embedding."),
        ),
        DocumentInfo(
            id="doc-2",
            text="Fall back to the built-in model when the field is omitted.",
            embedding=my_embedding_model("Fall back to the built-in model when the field is omitted."),
        ),
    ]

    await client.create_index("custom-embeddings", documents)  # Defaults to moss-minilm
    await client.load_index("custom-embeddings")

    results = await client.query(
        "custom-embeddings",
        "<query text>",
        QueryOptions(embedding=my_embedding_model("<query text>"), top_k=10),
    )
    print(results.docs[0].id, results.docs[0].score)


asyncio.run(main())
```
If you omit the model argument, create_index defaults to moss-minilm.
Pass QueryOptions to reuse your own embeddings or to override top_k on a per-query basis.
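As a concrete stand-in for `my_embedding_model` above, here is a toy deterministic embedder (SHA-256 bytes scaled into [0, 1) — purely illustrative, not a real semantic model). The contract that matters is that document and query vectors come from the same model and share a dimension:

```python
import hashlib


def my_embedding_model(text: str, dim: int = 8) -> list[float]:
    # Toy embedding: derive `dim` floats in [0, 1) from a SHA-256 digest.
    # Swap in a real model (e.g. a sentence-transformers encoder) for
    # actual semantic search; this version carries no meaning.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [byte / 256.0 for byte in digest[:dim]]


vec = my_embedding_model("How do I return a damaged product?")
print(len(vec))  # 8
```

Because the function is deterministic, the same text always maps to the same vector, which keeps indexing and querying consistent.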
📄 License
This package is licensed under the PolyForm Shield License 1.0.0.
- ✅ Free for testing, evaluation, internal use, and modifications.
- ❌ Not permitted for production or competing commercial use.
- 📩 For commercial licenses, contact: contact@usemoss.dev
📬 Contact
For support, commercial licensing, or partnership inquiries, contact us: contact@usemoss.dev
File details
Details for the file inferedge_moss-1.0.0b12.tar.gz.
File metadata
- Download URL: inferedge_moss-1.0.0b12.tar.gz
- Upload date:
- Size: 30.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f904e7bdb2363977cbcae5fbb54eb111c07aadeede6a842bb938d1f18b13f5fe |
| MD5 | b18179e187234461d43098afe94ff2e0 |
| BLAKE2b-256 | 217a12e05a4d96e0c194851a88be1f75e805b1789954d2d91f65bd0052ea4619 |
File details
Details for the file inferedge_moss-1.0.0b12-py3-none-any.whl.
File metadata
- Download URL: inferedge_moss-1.0.0b12-py3-none-any.whl
- Upload date:
- Size: 21.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f7c9fed5141d21b5a6d7bf659a9cf6c4db3c31377fdcfacebed51af5a658f7f4 |
| MD5 | e6cde142aeb9db257c0e8aab786d92b3 |
| BLAKE2b-256 | 8f6955aa84ecbdf18d01415bc4958b198732675726ba6123238ab864ec1899f1 |