Adaptera 🌌
A local-first LLM orchestration library with native support for Hugging Face, PEFT/LoRA, QLoRA, and API models — without hiding the model.
Note: This project is in early development and may change significantly; however, the core goal of local LLM processing will remain consistent. Once the agentic part of the module is stable, we plan to add a fine-tuner so that this library can be used as a quick way to prototype local agentic models.
Contributions are welcome; please avoid spamming pull requests. Any and all help is deeply appreciated.
Features
- Local-First: Built for running LLMs on your own hardware efficiently.
- Native PEFT/QLoRA: Seamless integration with Hugging Face's PEFT for efficient model loading.
- Persistent Memory: Vector-based memory using FAISS with automatic text embedding (SLM).
- Strict ReAct Agents: Deterministic agent loops using JSON-based tool calls.
- Model Transparency: Easy access to the underlying Hugging Face model and tokenizer.
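To illustrate what persistent vector memory does conceptually, here is a minimal, dependency-free sketch of similarity search over stored texts. This is not Adaptera's implementation: the real `VectorDB` uses FAISS and a small language model for embeddings, while this toy uses a character-frequency "embedding" and brute-force cosine similarity purely to show the retrieval pattern.

```python
import math

def embed(text):
    # Toy embedding: character-frequency vector over a-z.
    # A real setup would use a learned embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

class ToyVectorDB:
    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def add(self, text):
        self.entries.append((text, embed(text)))

    def search(self, query, k=1):
        # Rank stored texts by cosine similarity to the query embedding.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

A FAISS index replaces the brute-force sort here with an approximate nearest-neighbor structure, which is what makes the memory scale.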
Installation
Using pip
pip install adaptera
Using Anaconda/Miniforge
conda activate <env_name>
pip install adaptera
(Note: Requires Python 3.12+)
Quick Start
from adaptera import Agent, AdapteraModel, VectorDB, Tool
# 1. Initialize Vector Memory
db = VectorDB(index_file="memory.index")
# 2. Load a Model (with 4-bit quantization)
model = AdapteraModel(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    quantization="4bit",
    vector_db=db
)

# 3. Define Tools
def add(a, b):
    """Adds two numbers together."""
    return a + b

tools = [
    Tool(name="add", func=add, description="Adds two numbers together. Input: 'a,b'")
]
# 4. Create and Run Agent
agent = Agent(model, tools=tools)
print(agent.run("What is 15 + 27?"))
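The "strict ReAct" loop above works by having the model emit tool calls as JSON, which the agent parses and executes deterministically. The sketch below shows that dispatch pattern in isolation; the JSON wire format and function names here are assumptions for illustration, not Adaptera's documented internals.

```python
import json

# Hypothetical tool registry mirroring the Quick Start's `add` tool.
TOOLS = {"add": lambda a, b: a + b}

def dispatch(model_output):
    """Parse a strict JSON tool call and execute it, rejecting anything else."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        raise ValueError("model output is not valid JSON")
    name = call.get("tool")
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name!r}")
    return TOOLS[name](**call.get("args", {}))

# A model emitting {"tool": "add", "args": {"a": 15, "b": 27}} yields 42.
```

Requiring valid JSON (rather than free-form text) is what makes the loop deterministic: malformed output is rejected instead of being guessed at.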
Project Structure
- adaptera/chains/: Agentic workflows and ReAct implementations.
- adaptera/model/: Hugging Face model loading and generation wrappers.
- adaptera/memory/: FAISS-backed persistent vector storage.
- adaptera/tools/: Tool registry and definition system.
Non-goals
This library does not aim to be a full ML framework or replace existing tools like LangChain. It focuses on providing a clean, minimal interface for local-first LLM orchestration.
File details
Details for the file adaptera-0.1.1.tar.gz.
File metadata
- Download URL: adaptera-0.1.1.tar.gz
- Upload date:
- Size: 11.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3adb84a4c3433bf9a68110f28e3cd1dc99d7bdf9eb122a3d932958e30a55dfc1 |
| MD5 | 3b1dd02ced36d638f75a08c31319fdf0 |
| BLAKE2b-256 | bba5b2f3da70e4bc99e2b95c3e648c39631f41238b74c172189dca35de93572e |
File details
Details for the file adaptera-0.1.1-py3-none-any.whl.
File metadata
- Download URL: adaptera-0.1.1-py3-none-any.whl
- Upload date:
- Size: 12.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 472154e8f4a2ae2ad307a6ec2a150b3d385cb9189ec12114a13bc253e0d7ddb7 |
| MD5 | d56432c05101008e87fd84de2b2c8cd8 |
| BLAKE2b-256 | 95bb117f4964895f1210befbd33ca86fc34acb6f37537d0d9639c8b66a398d29 |