
NeuroMesh AI

Open-Source Universal Recommendation Infrastructure

Production-grade recommendation engine SDK for any application.


Think Stripe for payments or Supabase for backends — NeuroMesh aims to be that layer for recommendations.


What is NeuroMesh AI?

NeuroMesh AI is an open-source, modular, production-ready recommendation infrastructure platform. It provides intelligent recommendation capabilities to any application with minimal integration effort — whether you're building an ecommerce store, a video platform, a music app, an ERP system, or a learning platform.

Most recommendation systems you find online are incomplete tutorial code, not production-ready software. NeuroMesh AI solves this by providing:

  • A universal recommendation SDK that works with any data schema
  • A scalable REST API you can launch with one line of code
  • Multiple recommendation engines (TF-IDF, Semantic Embeddings, Collaborative Filtering, Hybrid)
  • Explainable AI — know why each recommendation was made
  • Real-time incremental learning — update recommendations without full retraining
  • Vector database integration — FAISS, Qdrant, ChromaDB
  • Enterprise integrations — Odoo ERP/POS, Shopify, custom connectors

Architecture

                    ┌───────────────────────┐
                    │      Client Apps      │
                    │  Ecommerce / Video /  │
                    │  Music / ERP / POS    │
                    └──────────┬────────────┘
                               │
                               ▼
                ┌────────────────────────────┐
                │       NeuroMesh SDK        │
                 │    from neuromesh import   │
                │        Recommender         │
                └──────────┬─────────────────┘
                           │
     ┌─────────────────────┼──────────────────────┐
     ▼                     ▼                      ▼
┌──────────────┐  ┌─────────────────┐  ┌──────────────────┐
│ API Gateway  │  │ Training Engine │  │  Recommendation  │
│  (FastAPI)   │  │                 │  │     Engine       │
└──────┬───────┘  └────────┬────────┘  └────────┬─────────┘
       │                   │                    │
       ▼                   ▼                    ▼
┌──────────────┐  ┌─────────────────┐  ┌──────────────────┐
│  REST APIs   │  │ Feature Builder │  │  Hybrid Ranker   │
│  Auth + RL   │  │ Vectorization   │  │ TF-IDF + Embed + │
│              │  │                 │  │  Collab + Trend  │
└──────┬───────┘  └────────┬────────┘  └────────┬─────────┘
       │                   │                    │
       ▼                   ▼                    ▼
┌──────────────┐  ┌─────────────────┐  ┌──────────────────┐
│ Redis Cache  │  │ Embedding Layer │  │  Vector Database │
│              │  │ sentence-transf │  │  FAISS / Qdrant  │
└──────────────┘  └─────────────────┘  └──────────────────┘

Quick Start

Install

pip install neuromesh-ai

30-Second Example

from neuromesh import Recommender

# Your items — any schema, only "id" is required
items = [
    {"id": "1", "title": "Gaming Laptop", "description": "RTX 4080, 32GB RAM", "category": "electronics", "tags": ["gaming", "laptop"]},
    {"id": "2", "title": "Gaming Mouse", "description": "High DPI wireless gaming mouse", "category": "electronics", "tags": ["gaming", "peripherals"]},
    {"id": "3", "title": "Mechanical Keyboard", "description": "RGB mechanical gaming keyboard", "category": "electronics", "tags": ["gaming", "keyboard"]},
    {"id": "4", "title": "Python Cookbook", "description": "Advanced Python programming recipes", "category": "books", "tags": ["python", "programming"]},
    {"id": "5", "title": "JavaScript Guide", "description": "Modern JavaScript for developers", "category": "books", "tags": ["javascript", "programming"]},
]

# Initialize and train
rec = Recommender(engine="tfidf")
rec.train(items)

# Get recommendations
results = rec.recommend(item_id="1", top_k=3)
for r in results:
    print(f"Rank {r.rank}: {r.item_id} (score: {r.score:.3f})")

# Output:
# Rank 1: 2 (score: 0.821)
# Rank 2: 3 (score: 0.743)
# Rank 3: 4 (score: 0.102)

Launch as REST API

# One line to serve
rec.serve(host="0.0.0.0", port=8000)

# Now call it from anywhere
# POST http://localhost:8000/recommend
# {"item_id": "1", "top_k": 3}

Recommendation Engines

NeuroMesh supports multiple recommendation strategies — pick the one that fits your use case, or combine them all with the hybrid engine.

TF-IDF Engine (Content-Based)

Uses term frequency–inverse document frequency to find items with similar text features (title, description, tags, category).

rec = Recommender(engine="tfidf")
rec.train(items)
results = rec.recommend(item_id="item123", top_k=10)

Best for: cold-start scenarios, catalogs with rich text descriptions.
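The underlying mechanics can be sketched in a few lines with scikit-learn (an illustration of the technique, not NeuroMesh's internal code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each item's text fields concatenated into a single document
docs = [
    "Gaming Laptop RTX 4080 32GB RAM electronics gaming laptop",
    "Gaming Mouse high DPI wireless electronics gaming peripherals",
    "Python Cookbook advanced Python programming recipes books",
]

vectors = TfidfVectorizer().fit_transform(docs)          # sparse TF-IDF matrix
scores = cosine_similarity(vectors[0], vectors).ravel()  # similarity to item 0

# Rank all items by similarity to item 0, highest first
ranked = scores.argsort()[::-1]
```

Items sharing distinctive terms ("gaming", "electronics") score high; items with no term overlap score near zero, which is why the gaming mouse outranks the Python book here.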


Embedding Engine (Semantic)

Uses sentence transformers to understand meaning, not just keywords. "Running shoes" and "athletic footwear" are recognized as similar even without shared words.

rec = Recommender(engine="embedding", model_name="all-MiniLM-L6-v2")
rec.train(items)

# Also supports free-text search
results = rec.encode_query("wireless gaming accessories")

Best for: semantic similarity, multilingual content, short descriptions.


Collaborative Filtering Engine

Learns from user behavior — views, clicks, purchases. Finds items that users with similar tastes also liked.

rec = Recommender(engine="collaborative")
rec.train(items, interactions=interaction_history)
results = rec.recommend_for_user(user_id="user456", top_k=10)

Best for: personalization when you have user interaction data.
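The intuition can be shown with a tiny user–item interaction matrix and item–item cosine similarity (NeuroMesh's collaborative engine uses ALS via the implicit library; this sketch only illustrates the co-interaction idea):

```python
import numpy as np

# Rows = users, columns = items; 1 means the user interacted with the item
interactions = np.array([
    [1, 1, 0, 0],  # user 0 likes items 0 and 1
    [1, 1, 1, 0],  # user 1 likes items 0, 1, 2
    [0, 0, 1, 1],  # user 2 likes items 2 and 3
])

# Item-item cosine similarity computed from shared users
norms = np.linalg.norm(interactions, axis=0)
sim = (interactions.T @ interactions) / np.outer(norms, norms)

# Items 0 and 1 are liked by exactly the same users, so sim[0, 1] == 1.0
```

ALS goes further by factorizing this matrix into low-dimensional user and item vectors, but the signal it exploits is the same co-interaction structure.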


Trending Engine

Ranks items by popularity and recency. Combines view velocity, purchase count, and time decay.

rec = Recommender(engine="trending")
rec.train(items)
results = rec.trending(top_k=10)

Best for: homepage recommendations, discovery feeds.
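The exact weighting is internal to the library, but a popularity-plus-time-decay score of this general shape (illustrative weights and half-life, not NeuroMesh's actual formula) captures the idea:

```python
import math

def trending_score(views, purchases, age_hours, half_life_hours=48.0):
    """Popularity signal with exponential time decay (illustrative weights)."""
    popularity = views + 5.0 * purchases  # purchases weighted above views
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return popularity * decay

fresh = trending_score(views=100, purchases=10, age_hours=2)
stale = trending_score(views=100, purchases=10, age_hours=96)
# A fresh item outranks an identical item posted four days ago
```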


Hybrid Engine (Recommended for Production)

Combines all signals with configurable weights for the best of every strategy.

rec = Recommender(
    engine="hybrid",
    weights={
        "semantic": 0.40,
        "collaborative": 0.30,
        "popularity": 0.20,
        "freshness": 0.10,
    }
)
rec.train(items, interactions=interaction_history)
results = rec.recommend(item_id="item123", top_k=10)
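Conceptually, the hybrid ranker blends normalized per-engine scores using the configured weights (a minimal sketch of the weighted-sum idea; the weight names match the config above):

```python
def hybrid_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted blend of per-engine scores, each assumed normalized to [0, 1]."""
    return sum(weights[k] * scores.get(k, 0.0) for k in weights)

weights = {"semantic": 0.40, "collaborative": 0.30, "popularity": 0.20, "freshness": 0.10}
item_a = {"semantic": 0.9, "collaborative": 0.2, "popularity": 0.5, "freshness": 1.0}
item_b = {"semantic": 0.3, "collaborative": 0.9, "popularity": 0.8, "freshness": 0.2}

score_a = hybrid_score(item_a, weights)  # 0.36 + 0.06 + 0.10 + 0.10 = 0.62
score_b = hybrid_score(item_b, weights)  # 0.12 + 0.27 + 0.16 + 0.02 = 0.57
```

With these weights, strong semantic similarity (item A) edges out strong collaborative signal (item B); shifting weight toward "collaborative" would flip the ranking.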

Explainable AI

Know exactly why each recommendation was made. Critical for trust, debugging, and enterprise adoption.

explanation = rec.explain(item_id1="item123", item_id2="item456")

# Returns:
{
  "item_1": "Gaming Laptop RTX 4080",
  "item_2": "Gaming Desktop RTX 4070",
  "similarity_score": 0.87,
  "reasons": [
    {"factor": "embedding_similarity", "weight": 0.84, "detail": "84% semantic similarity"},
    {"factor": "category_match", "weight": 1.0, "detail": "Both in 'electronics' category"},
    {"factor": "tag_overlap", "weight": 0.6, "detail": "Shared tags: gaming, RTX"},
    {"factor": "co_interaction", "weight": 0.72, "detail": "Frequently viewed together by 1,240 users"}
  ]
}

Real-Time Incremental Learning

Add items and interactions without retraining the entire model.

# Add a new item — live indexed into the vector store
rec.add_item({
    "id": "new_item",
    "title": "New Product",
    "description": "...",
    "category": "electronics"
})

# Record a user interaction — updates collaborative model
rec.add_interaction(
    user_id="user456",
    item_id="item123",
    action="purchase",
    score=1.0
)

REST API

Launch the full REST API instantly:

rec.serve(host="0.0.0.0", port=8000)

Method  Endpoint          Description
POST    /train            Train the recommendation model
POST    /recommend        Item-based recommendations
POST    /recommend/user   User-personalized recommendations
POST    /similar          Find similar items
POST    /trending         Get trending items
POST    /explain          Explain why two items are similar
POST    /add-item         Add item (incremental)
POST    /add-interaction  Record user interaction
GET     /health           Health check
GET     /metrics          Prometheus metrics

Interactive API docs available at http://localhost:8000/docs
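Calling the API from Python is a plain HTTP POST (this sketch assumes the endpoint and payload shapes listed above, with rec.serve() already running on port 8000):

```python
import json
import urllib.request

# Build an item-recommendation request against a local NeuroMesh server
payload = {"item_id": "1", "top_k": 3}
req = urllib.request.Request(
    "http://localhost:8000/recommend",
    data=json.dumps(payload).encode(),
    headers={"X-API-Key": "your-api-key", "Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```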

Authentication

All endpoints (except /health and /metrics) require an API key:

curl -X POST http://localhost:8000/recommend \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"item_id": "item123", "top_k": 5}'

Schema-Agnostic Design

NeuroMesh works with any item schema. Only the id field is required — everything else is used as features automatically.

# Ecommerce products
{"id": "p1", "title": "Gaming Laptop", "category": "electronics", "price": 1299.99, "tags": ["gaming"]}

# Videos
{"id": "v1", "title": "Python Tutorial", "duration": 1800, "channel": "TechChannel", "tags": ["python"]}

# Music tracks
{"id": "m1", "title": "Blinding Lights", "artist": "The Weeknd", "genre": "pop", "bpm": 171}

# Blog articles
{"id": "a1", "title": "Getting Started with Docker", "author": "John", "tags": ["devops", "docker"]}

# ERP products (Odoo)
{"id": "o1", "name": "Office Chair", "category": "furniture", "qty_on_hand": 50, "sales_count": 120}

Vector Database Support

Database  Mode           Best For
FAISS     In-memory      Local dev, single-machine prod
Qdrant    Client-server  Scalable production deployments
ChromaDB  Embedded/HTTP  Developer-friendly, easy setup
pgvector  PostgreSQL     If you already use Postgres

Configure via environment variable:

VECTOR_STORE=faiss   # or qdrant, chroma

Integrations

Odoo ERP/POS

from neuromesh.integrations.odoo import OdooConnector

connector = OdooConnector()
connector.connect(url="http://odoo.company.com", db="mydb", username="admin", password="...")

# Fetch product catalog from Odoo
products = connector.fetch_products()

# Train on Odoo data
rec.train(products, interactions=connector.fetch_sale_orders())

# Cross-sell at POS
suggestions = connector.recommend_cross_sell(product_id="product.product,42", top_k=5)

# Dead stock prediction
dead_stock = connector.dead_stock_prediction()

Shopify

from neuromesh.integrations.shopify import ShopifyConnector

connector = ShopifyConnector(shop_url="mystore.myshopify.com", access_token="...")
products = connector.fetch_products()
rec.train(products, interactions=connector.fetch_orders())

Custom Data Sources

from neuromesh.integrations.custom import CustomConnector

connector = CustomConnector()
items = connector.load_items_from_csv("products.csv")     # from CSV
items = connector.load_items_from_json("products.json")   # or JSON
items = connector.load_items_from_dataframe(df)           # or a pandas DataFrame

Docker

Run the full stack with Docker Compose:

git clone https://github.com/TheAmitChandra/NeuroMesh-AI.git
cd NeuroMesh-AI

cp .env.example .env
# Edit .env with your config

docker compose up

Services started:

  • api — NeuroMesh FastAPI server on port 8000
  • redis — Recommendation cache
  • postgres — Persistent storage with pgvector
  • qdrant — Vector database on port 6333

Configuration

All configuration is via environment variables. Copy .env.example to .env:

# Core
NEUROMESH_ENV=development
NEUROMESH_LOG_LEVEL=INFO

# API
NEUROMESH_HOST=0.0.0.0
NEUROMESH_PORT=8000
NEUROMESH_API_KEY=your-secret-api-key

# Database
DATABASE_URL=postgresql+asyncpg://user:pass@localhost:5432/neuromesh

# Redis
REDIS_URL=redis://localhost:6379/0

# Vector Store
VECTOR_STORE=faiss

# Embeddings
EMBEDDING_MODEL=all-MiniLM-L6-v2

# Rate Limiting
RATE_LIMIT_PER_MINUTE=60

Installation from Source

git clone https://github.com/TheAmitChandra/NeuroMesh-AI.git
cd NeuroMesh-AI

python -m venv venv
venv\Scripts\activate      # Windows
# source venv/bin/activate  # macOS/Linux

pip install -e ".[dev]"

Run tests:

pytest --cov=neuromesh --cov-report=term-missing

Tech Stack

Layer           Technology
Language        Python 3.11+
API Framework   FastAPI + Uvicorn
ML — Content    scikit-learn (TF-IDF)
ML — Semantic   sentence-transformers
ML — Collab     implicit (ALS)
Vector DB       FAISS / Qdrant / ChromaDB
Cache           Redis
Database        PostgreSQL + pgvector
ORM             SQLAlchemy 2.x (async)
Validation      Pydantic v2
Auth            API Key (X-API-Key header)
Rate Limiting   slowapi
Frontend        React 18 + Vite + Tailwind
Charts          Recharts
Testing         pytest + pytest-asyncio
CI/CD           GitHub Actions
Containers      Docker + Docker Compose
Docs            MkDocs + Material theme
Packaging       PyPI (pyproject.toml + twine)

Benchmarks

Performance measured on CPU (no GPU), Python 3.11, all engines using top_k=10. Results from benchmarks/results/.

Fit Time

Catalogue Size  TF-IDF (ms)  Embedding (ms)  Hybrid (ms)
100             8.67         23.4            37.48
1,000           79.31        84.61           119.14
10,000          665.34       1,100.85        1,581.55

Recommend Latency & Throughput

Catalogue Size  TF-IDF (ms)  TF-IDF (rps)  Embedding (ms)  Embedding (rps)  Hybrid (ms)  Hybrid (rps)
100             1.02         983           8.10            123              10.22        98
1,000           1.50         667           0.49            2,049            3.23         310
10,000          13.10        76            0.42            2,409            17.92        56

Key insight: FAISS ANN search makes the Embedding engine faster at scale (sub-ms at 10k items), while TF-IDF degrades linearly with corpus size. The Hybrid engine balances quality vs. throughput.

Hybrid Weight Presets (1,000 items)

Preset           Recommend Latency (ms)
content-only     2.50
semantic-only    0.87
balanced 50/50   2.78
default weights  4.11

Run benchmarks yourself:

python benchmarks/bench_tfidf.py
python benchmarks/bench_embedding.py
python benchmarks/bench_hybrid.py

Roadmap

Phase 1 — Foundation + TF-IDF SDK ✅ Complete

  • Project planning and architecture
  • Project scaffolding and package setup
  • Configuration and logging layer
  • Data preprocessing and feature builder
  • TF-IDF content-based engine
  • Trending engine
  • Ranker and scorer
  • Main Recommender class
  • FastAPI serving layer (all endpoints)
  • Unit + integration tests (≥ 80% coverage)
  • PyPI packaging + GitHub Actions CI

Phase 2 — Embeddings + Vector DB ✅ Complete

  • Sentence transformer encoder
  • FAISS / Qdrant / ChromaDB vector stores
  • Semantic embedding engine
  • Collaborative filtering engine (ALS)
  • Hybrid recommendation engine
  • Explainability layer
  • Real-time incremental learning

Phase 3 — Production Hardening ✅ Complete

  • Redis caching layer
  • PostgreSQL persistence layer
  • Docker + Docker Compose
  • GitHub Actions Docker CI/CD
  • MkDocs documentation site

Phase 4 — Dashboard + Integrations ✅ Complete

  • React analytics dashboard (Overview, Recs Log, Item Catalog, Users, Trending, Engines)
  • Odoo ERP/POS integration
  • Shopify integration
  • Custom data connectors (CSV, JSON, DataFrame)
  • Performance benchmarking suite
  • Example scripts (ecommerce, video, music, Odoo)

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/your-feature
  3. Make your changes with conventional commits: feat: add your feature
  4. Push and open a pull request to main

Please ensure:

  • All tests pass (pytest)
  • Code is formatted (black .)
  • Imports are sorted (isort .)
  • Type hints are valid (mypy neuromesh/)

License

MIT License — see LICENSE for details.


Author

Built by Amit Chandra


NeuroMesh AI — Recommendation infrastructure for the modern web.

Documentation · Issues · Discussions
