
A micro-framework for building with LLMs, inspired by LangChain.

Project description

Mini-Chain

Mini-Chain is a micro-framework for building applications with Large Language Models, inspired by LangChain. Its core principle is transparency and modularity, providing a "glass-box" design for engineers who value control and clarity.

Core Features

  • Modular Components: Swappable classes for Chat Models, Embeddings, Memory, and more (see the sketch after this list).
  • Local & Cloud Ready: Supports both local models (via LM Studio) and cloud services (Azure).
  • Modern Tooling: Built with Pydantic for type-safety and Jinja2 for powerful templating.
  • GPU Acceleration: Optional faiss-gpu support for high-performance indexing.
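
Because components share a small surface (the Quick Start below simply calls invoke on a chat model), swapping a local model for a cloud-hosted one is intended to be a one-line change. The sketch below illustrates the idea; LocalChatModel and invoke are taken from the Quick Start example, while the commented-out Azure class name is a hypothetical stand-in rather than the confirmed Mini-Chain API.

# Hedged sketch of component swapping. LocalChatModel and .invoke() come from
# the Quick Start example below; the Azure class name is hypothetical.
from minichain.chat_models import LocalChatModel

def answer(model, question: str) -> str:
    # Any chat model that exposes .invoke() can be dropped in here.
    return model.invoke(question)

model = LocalChatModel()          # local model served by LM Studio
# model = AzureChatModel(...)     # hypothetical cloud-backed drop-in
print(answer(model, "In one sentence, what is modularity good for?"))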

Installation

pip install minichain-ai

# For Local FAISS (CPU) support:
pip install "minichain-ai[local]"

# For NVIDIA GPU FAISS support:
pip install "minichain-ai[gpu]"

# For Azure support (Azure AI Search, Azure OpenAI):
pip install "minichain-ai[azure]"

# To install everything:
pip install "minichain-ai[all]"

Quick Start

Here is the simplest possible example with Mini-Chain: connecting to a local chat model and getting a response.

# examples/01_hello_world_local.py
"""
Example 1: The absolute simplest way to use Mini-Chain.

This script demonstrates the most fundamental component: connecting to a
local language model (via LM Studio) and getting a response.
"""
import sys
import os

# Path shim so the example can run directly from the repository's examples/
# directory; it is not needed once the package is installed via pip.
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../src')))

from minichain.chat_models import LocalChatModel

# 1. Initialize the LocalChatModel
# This connects to your LM Studio server running on the default port.
try:
    local_model = LocalChatModel()
    print("✅ Successfully connected to local model server.")
except Exception as e:
    print(f"❌ Could not connect to local model server. Is LM Studio running? Error: {e}")
    sys.exit(1)

# 2. Define a prompt and get a response
prompt = "In one sentence, what is the purpose of a CPU?"
print(f"\nUser Prompt: {prompt}")

response = local_model.invoke(prompt)

print("\nAI Response:")
print(response)
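
Mini-Chain lists Jinja2 as its templating engine. The sketch below renders a prompt with plain Jinja2 and feeds it to the same LocalChatModel used above; it deliberately avoids assuming any Mini-Chain prompt-template classes, since their names are not documented here, so the only Mini-Chain call it relies on is invoke.

# Hedged sketch: plain Jinja2 templating feeding the LocalChatModel from the
# Quick Start. Mini-Chain's own prompt-template helpers (if any) are not used
# because their API is not shown in this README.
from jinja2 import Template
from minichain.chat_models import LocalChatModel

prompt_template = Template(
    "You are a concise assistant.\n"
    "Answer in {{ max_sentences }} sentence(s): {{ question }}"
)

model = LocalChatModel()
prompt = prompt_template.render(max_sentences=1, question="What is RAM for?")
print(model.invoke(prompt))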

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

minichain_ai-0.1.0.tar.gz (22.1 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

minichain_ai-0.1.0-py3-none-any.whl (29.8 kB)

Uploaded Python 3

File details

Details for the file minichain_ai-0.1.0.tar.gz.

File metadata

  • Download URL: minichain_ai-0.1.0.tar.gz
  • Upload date:
  • Size: 22.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.18

File hashes

Hashes for minichain_ai-0.1.0.tar.gz
  • SHA256: 1171e56f7991ae25e0501007c5abe2ce12927a56c4fb3500d70375a37cb021a4
  • MD5: a0b8928defd86d50faf2f87c83f83cad
  • BLAKE2b-256: 7f3dda5783b49fea1f545ccb70d56392c84aa113f128c271a56a8d4c027decb9


File details

Details for the file minichain_ai-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: minichain_ai-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 29.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.18

File hashes

Hashes for minichain_ai-0.1.0-py3-none-any.whl
  • SHA256: 9be99f9b3c918d16e1ed7dd74912da2a06f690aecd6f1c0df4e4aa384a83965e
  • MD5: 5fcc094645f14175e70078b9ce600dfa
  • BLAKE2b-256: 42ad0f8f5db879afa38e99d99ac0ba77b2554c68736aee0fe6bedcba06ba7d4f

