The Simplest LLM Application Framework

LangDict

LangDict is a framework for building Compound AI Systems using only Python dictionaries. It provides an intuitive interface for developing production LLM applications.

Developing an LLM application largely comes down to adding API calls, so LangDict was created with the design philosophy that LLM applications should be constructed from specifications rather than complex functionality.

Create your own LLM application with minimal knowledge of other libraries and frameworks.

LangDict adopts the intuitive interface, modularity, extensibility, and reusability of PyTorch's nn.Module. Agents and Compound AI Systems can be developed easily as compositions of these modules.

Features

LLM Application framework for simple, intuitive, dictionary-based development
chitchat = LangDict.from_dict({
    "messages": [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ],
    "llm": {
        "model": "gpt-4o-mini",
        "max_tokens": 200
    },
    "output": {
        "type": "string"
    }
})
# each {placeholder} in the messages is filled from the matching key of the input dictionary
chitchat({
    "name": "LangDict",
    "user_input": "What is your name?"
})
Stream / Batch / Async compatibility
rag = RAG(docs=[...])  # RAG module is defined in the Modularity example below

single_inputs = {
    "conversation": [{"role": "user", "content": "How old is Obama?"}]
}
# invoke
rag(single_inputs)

# stream
rag(single_inputs, stream=True)

# batch
batch_inputs = [{ ... }, { ... }, ...]
rag(batch_inputs, batch=True)
Modularity: Extensibility, Modifiability, Reusability
class RAG(Module):

    def __init__(self, docs: List[str]):
        super().__init__()
        self.query_rewrite = LangDictModule.from_dict({ ... })  # Module
        self.search = SimpleKeywordSearch(docs=docs)  # Module
        self.answer = LangDictModule.from_dict({ ... })  # Module

    def forward(self, inputs: Dict):
        query_rewrite_result = self.query_rewrite({
            "conversation": inputs["conversation"],
        })
        doc = self.search(query_rewrite_result)
        return self.answer({
            "conversation": inputs["conversation"],
            "context": doc,
        })
Easy to change trace options (Console, Langfuse)
# Apply Trace option to all modules
rag = RAG(docs=[...])

# Console Trace
rag.trace(backend="console")

# Langfuse
rag.trace(backend="langfuse")

Quick Start

Install LangDict:

$ pip install langdict

Examples

Chitchat (LangDict)

  • Create LLM functions based on your specification.
from langdict import LangDict


chitchat_spec = {
    "messages": [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ],
    "llm": {
        "model": "gpt-4o-mini",
        "max_tokens": 200
    },
    "output": {
        "type": "string"
    }
}
chitchat = LangDict.from_dict(chitchat_spec)
chitchat({
    "name": "LangDict",
    "user_input": "What is your name?"
})
>>> 'My name is LangDict. How can I assist you today?'

Module (+ LangDictModule, stream, trace)

  • Develop a Compound AI System from composable modules and get observability with a single line of code.
from typing import Any, Dict, List

from langdict import Module, LangDictModule


class RAG(Module):

    def __init__(self, docs: List[str]):
        super().__init__()
        self.query_rewrite = LangDictModule.from_dict(query_rewrite_spec)
        self.search = SimpleRetriever(docs=docs)  # Module
        self.answer = LangDictModule.from_dict(answer_spec)

    def forward(self, inputs: Dict[str, Any]):
        query_rewrite_result = self.query_rewrite({
            "conversation": inputs["conversation"],
        })
        doc = self.search(query_rewrite_result)
        return self.answer({
            "conversation": inputs["conversation"],
            "context": doc,
        })

rag = RAG(docs=[...])  # list of documents to search over
inputs = {
    "conversation": [{"role": "user", "content": "How old is Obama?"}]
}

rag(inputs)
>>> 'Barack Obama was born on August 4, 1961. As of now, in October 2023, he is 62 years old.'

# Stream
for token in rag(inputs, stream=True):
    print(f"token > {token}")
>>>
token > Bar
token > ack
token >  Obama
token >  was
token >  born
token >  on
token >  August
token >  
token > 4
...

# Trace
rag.trace(backend="langfuse")

Dependencies

LangDict requires the following:

  • LangChain - A LangDict spec is executed as a PromptTemplate + LLM + Output Parser pipeline (see the sketch after this list).
    • langchain
    • langchain-core
  • LiteLLM - Call 100+ LLM APIs in OpenAI format.
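
Conceptually, a LangDict spec is shorthand for the prompt-format, LLM-call, and output-parse steps you would otherwise write by hand. A simplified sketch of what the chitchat spec above amounts to, using LiteLLM directly (illustrative only, not LangDict internals):

import litellm

inputs = {"name": "LangDict", "user_input": "What is your name?"}

# "messages": each {placeholder} is formatted from the input dictionary
messages = [
    {"role": "system", "content": "You are a helpful AI bot. Your name is {name}.".format(name=inputs["name"])},
    {"role": "user", "content": inputs["user_input"]},
]

# "llm": the model call goes through LiteLLM's OpenAI-compatible interface
response = litellm.completion(model="gpt-4o-mini", messages=messages, max_tokens=200)

# "output": {"type": "string"} - parse the completion down to a plain string
answer = response.choices[0].message.content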

Optional

  • Langfuse - If you use the langfuse trace backend, you need to install it separately:
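
$ pip install langfuse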
