
Adaptera 🌌

A local-first LLM orchestration library with native support for Hugging Face, PEFT/LoRA, and QLoRA that never hides the model, giving advanced users full control.

Adaptera is a thin orchestration layer — not an abstraction barrier.

Why Adaptera?

Use Adaptera if you:

  • Want full control over your model (no hidden layers)
  • Prefer explicit agent logic over auto-magic chains
  • Are working locally with Hugging Face / quantized models

Avoid Adaptera if you:

  • Want plug-and-play integrations (use LangChain)
  • Need production-ready pipelines out of the box

⚠️ Status

  • Early development — APIs may change
  • Currently focusing on agentic systems (v0.1.3)

⚠️ LM Studio Support

  • Inference only
  • Agent integration coming soon

🤝 Contributing

  • Contributions welcome
  • Please avoid low-quality / spam PRs

Features

  • Local-First: Built for running LLMs on your own hardware efficiently.
  • Native PEFT/QLoRA: Seamless integration with Hugging Face's PEFT for efficient model loading.
  • Persistent Memory: Vector-based memory using FAISS with automatic text embedding (SLM).
  • Strict ReAct Agents: Deterministic agent loops using JSON-based tool calls.
  • Model Transparency: Easy access to the underlying Hugging Face model and tokenizer.
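The "Strict ReAct Agents" feature above relies on JSON-based tool calls rather than free-form text. As a rough illustration of what a strict JSON tool-call step can look like, here is a minimal sketch; the function names and the exact wire format (`{"tool": ..., "input": ...}`) are assumptions for illustration, not Adaptera's actual internal protocol:

```python
import json

def parse_tool_call(model_output: str) -> tuple[str, str]:
    """Parse a strict JSON tool call of the assumed form
    {"tool": "<name>", "input": "<args>"} and reject anything else."""
    call = json.loads(model_output)  # raises on malformed JSON
    if set(call) != {"tool", "input"}:
        raise ValueError(f"unexpected keys: {sorted(call)}")
    return call["tool"], call["input"]

def run_step(model_output: str, tools: dict) -> str:
    """Execute one deterministic agent step: parse, dispatch, return result."""
    name, raw_input = parse_tool_call(model_output)
    if name not in tools:
        raise KeyError(f"unknown tool: {name}")
    return str(tools[name](raw_input))

tools = {"add": lambda s: sum(float(x) for x in s.split(","))}
print(run_step('{"tool": "add", "input": "1,2"}', tools))  # 3.0
```

Because the call must be valid JSON with exactly the expected keys, malformed model output fails loudly instead of being silently misinterpreted, which is what makes the loop deterministic.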

Installation

Using pip

pip install adaptera

Using Anaconda/Miniforge

conda activate <env-name>
pip install adaptera

(Note: Requires Python 3.12+)
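Since the 3.12+ requirement is a hard floor, a quick interpreter check before installing can save a failed install. A small sketch (the helper name is illustrative, not part of Adaptera):

```python
import sys

def meets_minimum(version_info=sys.version_info, minimum=(3, 12)) -> bool:
    """Return True if the interpreter satisfies the stated Python 3.12+ floor."""
    return tuple(version_info[:2]) >= minimum

if meets_minimum():
    print("Python version OK")
else:
    print("Adaptera requires Python 3.12 or newer")
```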

Quick Start

# Set up imports and an optional vector DB
from adaptera import Agent, AdapteraHFModel, AdapteraLMSModel, VectorDB, Tool

# Optional: persistent vector memory
db = VectorDB()
db.add("Information added to the vector DB is automatically retrieved by the model when generating responses.")

model = AdapteraHFModel(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    quantization="4bit",
    vector_db=db  # optional
)

model.generate("What is an apple?")

# OR, via LM Studio (inference only)

model = AdapteraLMSModel()
model.generate("What is an apple?")
# Define functions for the agent
def add(a, b):
    """Adds two numbers together."""
    print(f"Adding {a} and {b} via tool call")
    return a + b

def subtract(a, b):
    """Subtracts b from a."""
    print(f"Subtracting {b} from {a} via tool call")
    return a - b

tools = [
    Tool(name="add", func=add, description="Adds two numbers together. Input should be in the format: 'a,b' where a and b are numbers."),
    Tool(name="subtract", func=subtract, description="Subtracts b from a. Input should be in the format: 'a,b' where a and b are numbers."),
]

# Set up and run the agent
agent = Agent(
    "AddSubtract_Agent",
    model,
    tools=tools,
    description="An agent for only addition and subtraction tasks."
)

agent.run("What is 1+1?")
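The tool descriptions above tell the model to pass inputs as a single "a,b" string, so each tool (or whatever glue sits in front of it) must split and convert that string. A minimal sketch of that parsing step; the helper name is illustrative and not part of Adaptera's API:

```python
def parse_pair(raw: str) -> tuple[float, float]:
    """Split a tool input like '15, 5' into two numbers, with a clear
    error if the model emitted something other than the 'a,b' format."""
    parts = [p.strip() for p in raw.split(",")]
    if len(parts) != 2:
        raise ValueError(f"expected 'a,b', got {raw!r}")
    return float(parts[0]), float(parts[1])

print(parse_pair("15, 5"))  # (15.0, 5.0)
```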
# MULTI-AGENT setup
# The coordinator model routes tasks between agents based on their descriptions.
from adaptera import MultiAgent

def add(a, b):
    """Adds two numbers together."""
    print(f"Adding {a} and {b} via tool call")
    return a + b

def subtract(a, b):
    """Subtracts b from a."""
    print(f"Subtracting {b} from {a} via tool call")
    return a - b

def multiply(a, b):
    """Multiplies two numbers together."""
    print(f"Multiplying {a} and {b} via tool call")
    return a * b

def divide(a, b):
    """Divides a by b."""
    print(f"Dividing {a} by {b} via tool call")
    if b == 0:
        return "Error: Division by zero"
    return a / b

tools1 = [
    Tool(name="add", func=add, description="Adds two numbers together. Input should be in the format: 'a,b' where a and b are numbers."),
    Tool(name="subtract", func=subtract, description="Subtracts b from a. Input should be in the format: 'a,b' where a and b are numbers."),
]

tools2 = [
    Tool(name="multiply", func=multiply, description="Multiplies two numbers together. Input should be in the format: 'a,b' where a and b are numbers."),
    Tool(name="divide", func=divide, description="Divides a by b. Input should be in the format: 'a,b' where a and b are numbers."),
]

agent1 = Agent(
    "AddSubtract_Agent",
    model,
    tools=tools1,
    description="An agent for only addition and subtraction tasks."
)

agent2 = Agent(
    "MultiplyDivide_Agent",
    model,
    tools=tools2,
    description="An agent only for multiplication and division tasks."
)

# Create a multi-agent system
moa = MultiAgent(agents=[agent1, agent2], coordinator_model=model)
response = moa.run("What is the result of (15 + 5) * (10 - 2)?")
print("Final response from the multi-agent system:", response)
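The coordinator model decides which agent handles each sub-task by reading the agents' descriptions. As a toy illustration of description-based routing, here is a keyword-overlap sketch; in Adaptera the coordinator is an LLM, so this stand-in only conveys the idea, not the implementation:

```python
def route(task: str, agents: dict[str, str]) -> str:
    """Pick the agent whose description shares the most words with the task.
    A crude stand-in for the LLM-driven routing the coordinator performs."""
    task_words = set(task.lower().split())

    def overlap(description: str) -> int:
        return len(task_words & set(description.lower().split()))

    return max(agents, key=lambda name: overlap(agents[name]))

agents = {
    "AddSubtract_Agent": "An agent for only addition and subtraction tasks.",
    "MultiplyDivide_Agent": "An agent only for multiplication and division tasks.",
}
print(route("perform the addition of two numbers", agents))  # AddSubtract_Agent
```

This is why precise, distinctive agent descriptions matter: they are the routing signal, whether matched by keywords or by a coordinator LLM.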

Project Structure

  • adaptera/chains/: Agentic workflows and ReAct implementations.
  • adaptera/model/: Hugging Face model loading and generation wrappers.
  • adaptera/memory/: FAISS-backed persistent vector storage.
  • adaptera/tools/: Tool registry and definition system.
  • adaptera/experimental/: Experimental features. Do not use these in production; they may be changed or removed later.

Non-goals

This library does not aim to be a full ML framework or replace existing tools like LangChain. It focuses on providing a clean, minimal interface for local-first LLM orchestration.

Notes for developers

To build adaptera into a Python package, run:

python -m build

Download files

Source distribution

adaptera-0.1.3.tar.gz (22.4 kB)

  • Uploaded: Source, via twine/6.2.0 on CPython/3.12.12 (Trusted Publishing: No)
  • SHA256: 9df87d5a6b9a7e848365797bf23994d4c10675c5a9439bc74743330ad13b858e
  • MD5: af5934c9fa0aec510f5de5e64a72e5cf
  • BLAKE2b-256: 63216170a946826b44ab33b8bc1eeb5e39dfe77b1e9570028267b306ec858efe

Built distribution

adaptera-0.1.3-py3-none-any.whl (24.7 kB)

  • Uploaded: Python 3, via twine/6.2.0 on CPython/3.12.12 (Trusted Publishing: No)
  • SHA256: 9672e65119dab2b07da430537a9ef21e6f4174a798bb825f5376cde010661a36
  • MD5: 4a4f48238c18ecb737e78792a848e9be
  • BLAKE2b-256: f106d415210c107c9ba5ad432a2f0791206859ba2028e90a5f21405af9343ae0
