
Adaptera 🌌

A local-first LLM orchestration library with native support for Hugging Face, PEFT/LoRA, QLoRA, and API models — without hiding the model.


Note: This project is in its early development phase and may undergo significant changes. However, the core goal of providing local LLM processing will remain consistent. Once the agentic part of the module is stable, we will add a fine-tuner so that this library can serve as a quick way to prototype local agentic models.

Feel free to contribute, but please do not spam pull requests. Any and all help is deeply appreciated.


Features

  • Local-First: Built for running LLMs on your own hardware efficiently.
  • Native PEFT/QLoRA: Seamless integration with Hugging Face's PEFT for efficient model loading.
  • Persistent Memory: Vector-based memory using FAISS with automatic text embedding (SLM).
  • Strict ReAct Agents: Deterministic agent loops using JSON-based tool calls.
  • Model Transparency: Easy access to the underlying Hugging Face model and tokenizer.

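To make the "Persistent Memory" bullet concrete: a FAISS-backed vector memory stores embeddings alongside their texts and retrieves the most similar entries for a query. Here is a minimal, hypothetical sketch of that idea using NumPy cosine similarity in place of a real FAISS index; the class and method names are invented for illustration and are not Adaptera's API:

```python
import numpy as np

class MiniVectorStore:
    """Toy stand-in for a FAISS-backed store: keeps (embedding, text)
    pairs and returns the most similar stored texts for a query."""

    def __init__(self, dim: int):
        self.dim = dim
        self.vectors: list = []
        self.texts: list = []

    def add(self, embedding, text: str) -> None:
        v = np.asarray(embedding, dtype=np.float32)
        # Normalize so a dot product equals cosine similarity
        self.vectors.append(v / np.linalg.norm(v))
        self.texts.append(text)

    def search(self, query, k: int = 3) -> list:
        q = np.asarray(query, dtype=np.float32)
        q = q / np.linalg.norm(q)
        scores = np.array([v @ q for v in self.vectors])
        # Highest-scoring entries first
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]
```

A real FAISS index would replace the linear scan with something like `IndexFlatIP` and persist to disk, which is what the `index_file` argument in the Quick Start below suggests.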
Installation


As of now

  • The package has not been released on PyPI yet; install it from GitHub as follows:

Using pip

# Clone the repository
git clone https://github.com/Sylo3285/Adaptera
cd Adaptera

# Install the package and its dependencies
pip install .

Using Anaconda/Miniforge

# Activate your conda environment
conda activate <env-name>

# Clone the repository
git clone https://github.com/Sylo3285/Adaptera
cd Adaptera

# Install the package and its dependencies
pip install .

(Note: Requires Python 3.12+)

Quick Start

from adaptera import Agent, AdapteraModel, VectorDB, Tool

# 1. Initialize Vector Memory
db = VectorDB(index_file="memory.index")

# 2. Load a Model (with 4-bit quantization)
model = AdapteraModel(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    quantization="4bit",
    vector_db=db
)

# 3. Define Tools
def add(a, b):
    """Adds two numbers together"""
    return a + b

tools = [
    Tool(name="add", func=add, description="Adds two numbers together. Input: 'a,b'")
]

# 4. Create and Run Agent
agent = Agent(model, tools=tools)
print(agent.run("What is 15 + 27?"))
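A "strict ReAct" loop means the model emits tool calls as JSON rather than free text, so they can be parsed and dispatched deterministically. As an illustration only (this schema is hypothetical, not Adaptera's actual wire format), dispatching such a call to the `add` tool might look like:

```python
import json

def add(a, b):
    """Adds two numbers together"""
    return a + b

def dispatch(model_output: str, tools: dict):
    """Parse a JSON tool call such as
    {"tool": "add", "args": {"a": 15, "b": 27}}
    and invoke the matching function. Schema is illustrative only."""
    call = json.loads(model_output)
    func = tools[call["tool"]]
    return func(**call["args"])

result = dispatch('{"tool": "add", "args": {"a": 15, "b": 27}}', {"add": add})
print(result)  # 42
```

Because the call is structured JSON, malformed output can be detected with a failed parse instead of fragile regex matching on free-form text.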

Project Structure

  • adaptera/chains/: Agentic workflows and ReAct implementations.
  • adaptera/model/: Hugging Face model loading and generation wrappers.
  • adaptera/memory/: FAISS-backed persistent vector storage.
  • adaptera/tools/: Tool registry and definition system.
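As a rough sketch of what a tool registry like the one in adaptera/tools/ might look like (all names here are hypothetical, not the library's actual API): tools pair a callable with a description, and the registry can render those descriptions into the agent's prompt.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolSpec:
    """One registered tool: a name, the callable, and a prompt description."""
    name: str
    func: Callable
    description: str

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, tool: ToolSpec) -> None:
        self._tools[tool.name] = tool

    def get(self, name: str) -> ToolSpec:
        return self._tools[name]

    def render_prompt(self) -> str:
        # One line per tool, suitable for inclusion in a system prompt
        return "\n".join(f"- {t.name}: {t.description}"
                         for t in self._tools.values())
```

The agent would call `render_prompt()` when building its system prompt and `get()` when executing a parsed tool call.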

Non-goals

This library does not aim to be a full ML framework or replace existing tools like LangChain. It focuses on providing a clean, minimal interface for local-first LLM orchestration.

Project details


Download files

Source Distribution: adaptera-0.1.0.tar.gz (12.0 kB)
Built Distribution: adaptera-0.1.0-py3-none-any.whl (12.3 kB)

File details: adaptera-0.1.0.tar.gz

  • Size: 12.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

Hashes for adaptera-0.1.0.tar.gz

Algorithm   Hash digest
SHA256      0ff4ea36ff107bd6089d051b57afeaa3737f22c53071e74ffcd433a055b4dfe9
MD5         866fc60d03df1ac493a6c8a089bef191
BLAKE2b-256 d5b37a7469a2f93ba8257702154a98d1cf63bafd65480b3889fbe7f6ee368925

File details: adaptera-0.1.0-py3-none-any.whl

  • Size: 12.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

Hashes for adaptera-0.1.0-py3-none-any.whl

Algorithm   Hash digest
SHA256      b9114e2391549df0996f7fd57bfa162d553df67c04065a95a0e429e284e1cded
MD5         f163ffd442afdf9c1e9c708eb44fa13c
BLAKE2b-256 98f5d67dae5b96850c823306b838524c14d7768b6f2bccccb91ac037db96b36c
