
LLMFlex


A python package for developing AI applications with local LLMs

LLMFlex is a Python package that lets developers work with different large language models (LLMs) and do prompt engineering through a simple interface. It favours free, local resources over paid APIs so you can build truly local and private AI-powered solutions.

It provides classes to load LLMs, embedding models, and vector databases, so you can build LLM-powered applications with your own prompt engineering and RAG techniques. With a one-liner command, you can launch a chatbot interface to chat with the LLM or serve a model through an OpenAI-compatible API.

Installing LLMFlex

Creating a virtual environment before installing the package is highly recommended. Also make sure you have installed PyTorch and llama-cpp-python with the installation method appropriate for your hardware before installing LLMFlex. Please visit the respective packages' documentation for detailed installation guides.

After you have done the above steps, you can install LLMFlex with pip:

pip install llmflex

If you have CUDA devices, you can also install ExLlamaV2, AutoAWQ, and AutoGPTQ; again, see the respective packages' documentation for detailed installation guides.

Features

1. Multiple LLMs with different generation configurations from one model

Unlike LangChain, the LlmFactory class lets you create multiple LLMs with different temperatures, maximum new tokens, stop words, etc. from the same underlying model, without loading the model several times. This is useful when you build an agent whose different LLM tasks require different configurations.
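The idea can be illustrated with a minimal sketch in plain Python. Note that the class and method names below are illustrative stand-ins, not the real LLMFlex API: one expensive model object is loaded once, and each "LLM" is just a lightweight wrapper carrying its own generation settings.

```python
# Illustrative sketch only -- these names are NOT the real LLMFlex API.
class SharedModel:
    """Stands in for one expensive-to-load model."""
    def generate(self, prompt, temperature, max_new_tokens):
        # A real model would run inference; here we just echo the settings.
        return f"[T={temperature}, max={max_new_tokens}] {prompt}"

class Llm:
    """Lightweight wrapper holding per-task generation settings."""
    def __init__(self, model, temperature, max_new_tokens):
        self.model = model
        self.temperature = temperature
        self.max_new_tokens = max_new_tokens

    def invoke(self, prompt):
        return self.model.generate(prompt, self.temperature, self.max_new_tokens)

class Factory:
    """Loads the model once and hands out configured wrappers."""
    def __init__(self):
        self.model = SharedModel()  # loaded a single time

    def __call__(self, temperature=0.7, max_new_tokens=256):
        return Llm(self.model, temperature, max_new_tokens)

factory = Factory()
creative = factory(temperature=0.9, max_new_tokens=512)
precise = factory(temperature=0.0, max_new_tokens=64)
assert creative.model is precise.model  # same underlying model, no reload
```

The wrappers are cheap to create, so each task in an agent can get its own settings while sharing one copy of the weights.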

2. Langchain compatibility with enhanced performance

All LLMs created with LlmFactory are LangChain-compatible and can be seamlessly integrated into your existing LangChain code. The LLM classes are re-implementations of some LangChain LLM classes that support more efficient streaming and stop-word management, all behind a unified interface.

3. Multiple model formats support

Multiple model formats are supported, and the loading process is handled entirely by the LlmFactory class, so it is plug and play. Supported formats:

  • PyTorch, AWQ, GPTQ (via transformers)
  • GGUF (via llama-cpp-python)
  • OpenAI API (works with any local server that serves models in the OpenAI API format)
  • EXL2 (via exllamav2)

4. Custom tools

A base class, BaseTool, is provided for creating LLM-powered tools. A BrowserTool powered by DuckDuckGo is implemented as an example.
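The pattern can be sketched as follows. This is an illustration of the concept only; the real BaseTool interface in LLMFlex may differ. A tool is a named callable with a description that the LLM can read when deciding what to invoke.

```python
# Illustrative only -- method names here are assumptions, not the real BaseTool API.
class BaseTool:
    name: str = "base_tool"
    description: str = "Override with a description the LLM can read."

    def run(self, **kwargs):
        raise NotImplementedError

    def __call__(self, **kwargs):
        return self.run(**kwargs)

class EchoSearchTool(BaseTool):
    """Toy stand-in for a browser/search tool."""
    name = "echo_search"
    description = "Pretends to search the web and returns canned results."

    def run(self, search_query: str):
        return [f"Result for: {search_query}"]

tool = EchoSearchTool()
print(tool(search_query="Install python"))  # ['Result for: Install python']
```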

5. LLM Agents

An Agent class is provided. Pass your tools and an LLM to initialise the agent; once you give it a task, it works out the magic for you with the given tools.
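The one-shot loop such an agent runs can be sketched generically. This is not the real llmflex Agent implementation, just the shape of the control flow: ask the LLM which tool to use, run it, and feed the observation back until the LLM answers.

```python
# Illustrative ReAct-style loop -- not the real llmflex Agent class.
def run_agent(llm_decide, tools, task, max_steps=3):
    """llm_decide(task, history) -> ('use', tool_name, arg) or ('answer', text)."""
    history = []
    for _ in range(max_steps):
        action = llm_decide(task, history)
        if action[0] == "answer":
            return action[1]
        _, name, arg = action
        observation = tools[name](arg)  # run the chosen tool
        history.append((name, arg, observation))
    return "Gave up after max steps."

# A scripted "LLM" that searches once, then answers from the observation.
def scripted_llm(task, history):
    if not history:
        return ("use", "search", task)
    return ("answer", f"Based on search: {history[-1][2]}")

tools = {"search": lambda q: f"top result for '{q}'"}
print(run_agent(scripted_llm, tools, "latest Generative AI trends"))
# Based on search: top result for 'latest Generative AI trends'
```

In the real package the deciding function is an actual LLM and the tools are BaseTool instances, but the loop structure is the same idea.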

6. Embedding Toolkits

Bundled classes for using embedding models, each containing the embedding model and a token-count-based text splitter that uses it.
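The token-count-based splitting idea can be sketched with a whitespace tokenizer standing in for the model's real tokenizer (illustration only): pack whole tokens greedily into chunks no larger than a budget.

```python
# Toy splitter: whitespace "tokens" stand in for real tokenizer tokens.
def split_by_token_count(text, max_tokens=5, tokenize=str.split):
    """Greedy splitter: pack whole tokens into chunks of at most max_tokens."""
    tokens = tokenize(text)
    return [" ".join(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

text = "one two three four five six seven"
print(split_by_token_count(text))  # ['one two three four five', 'six seven']
```

Counting in tokens rather than characters keeps each chunk within the embedding model's context window regardless of word length.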

7. Vector database

Built on the embedding toolkits and FAISS, the FaissVectorDatabase class lets you store and search texts for your RAG tasks.
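The mechanics behind such a store can be sketched without FAISS: embed texts as vectors, then rank by cosine similarity. The hand-crafted 2-D "embeddings" below are purely for illustration; a real setup uses a trained embedding model and a FAISS index.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class ToyVectorDB:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self, embed):
        self.embed = embed
        self.items = []  # (text, vector) pairs

    def add(self, text):
        self.items.append((text, self.embed(text)))

    def search(self, query, k=1):
        qv = self.embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# Hand-crafted 2-D "embeddings": axis 0 ~ fruit-ness, axis 1 ~ meat-ness.
fake_embeddings = {"Apple": [1.0, 0.0], "Banana": [0.9, 0.1],
                   "Pork": [0.0, 1.0], "Beef": [0.1, 0.9]}
db = ToyVectorDB(embed=lambda t: fake_embeddings[t])
for food in ["Apple", "Banana", "Pork"]:
    db.add(food)
print(db.search("Beef"))  # ['Pork'] -- nearest neighbour by cosine similarity
```

FAISS does the same nearest-neighbour ranking, but with optimised index structures that scale to millions of vectors.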

8. Chat memories

Chat memory classes for storing chat memory on disk.

  1. BaseChatMemory
    Memory class that does not use any embedding models or vector databases.

  2. LongShortTermChatMemory
    Memory class that uses an underlying FaissVectorDatabase to maintain long-term memory alongside the most recent messages.
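The on-disk idea behind a basic chat memory can be sketched with the standard library. This is an illustration, not the real BaseChatMemory class: append each exchange to a JSON file and read back the most recent turns when building the next prompt.

```python
# Toy on-disk chat memory -- illustrative, not the real llmflex classes.
import json
import os
import tempfile

class ToyChatMemory:
    """Minimal on-disk chat memory: one JSON list of user/assistant turns."""
    def __init__(self, path):
        self.path = path
        if not os.path.exists(path):
            self._write([])

    def _read(self):
        with open(self.path) as f:
            return json.load(f)

    def _write(self, turns):
        with open(self.path, "w") as f:
            json.dump(turns, f)

    def save_interaction(self, user, assistant):
        turns = self._read()
        turns.append({"user": user, "assistant": assistant})
        self._write(turns)

    def recent(self, k=2):
        return self._read()[-k:]

path = os.path.join(tempfile.mkdtemp(), "memory.json")
mem = ToyChatMemory(path)
mem.save_interaction("Hi", "Hello!")
mem.save_interaction("What is an apple?", "A fruit.")
print(mem.recent(1))  # [{'user': 'What is an apple?', 'assistant': 'A fruit.'}]
```

A long-short-term variant adds a vector store over old turns so that semantically relevant history, not just the latest turns, can be recalled into the prompt.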

9. Prompt template

A PromptTemplate class is implemented to format your prompts with the prompt formats required by models from different sources. Presets such as Llama3, ChatML, Vicuna, and more are already implemented, but you can always add your own prompt format template.
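The role such a class plays can be sketched directly. The names below are illustrative, not the real PromptTemplate API: a preset stores the wrapper strings for each role, and formatting is just string assembly (shown here with the standard ChatML markers).

```python
# Illustrative ChatML-style prompt formatter -- not the real PromptTemplate API.
CHATML = dict(
    system="<|im_start|>system\n{}<|im_end|>\n",
    user="<|im_start|>user\n{}<|im_end|>\n",
    assistant_open="<|im_start|>assistant\n",
)

def format_prompt(system, user, preset=CHATML):
    """Wrap the system and user messages in the preset's role markers."""
    return (preset["system"].format(system)
            + preset["user"].format(user)
            + preset["assistant_open"])

prompt = format_prompt("You are a helpful assistant.", "What colour is an apple?")
print(prompt)
```

Swapping presets changes only the wrapper strings, which is why one template class can serve models with very different chat formats.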

10. Chatbot frontend interface

A Streamlit web app is provided for local AI chatbot usage. Function calling and RAG over your own documents are supported in the web app. You can also steer the LLM's response by providing the beginning of the response text.

Using LLMFlex

1. Create LLMs

Here is how to get started with any text-generation model on HuggingFace on your machine.

from llmflex import LlmFactory

# Load the model from Huggingface
model = LlmFactory("TheBloke/OpenHermes-2.5-Mistral-7B-GGUF")

# Create an LLM
llm = model(temperature=0.7, max_new_tokens=512)

# Use the LLM for your task
prompt = "Q: What is the colour of an apple? A:"
ans = llm.invoke(prompt, stop=['Q:'])
print(ans)

# Or if you prefer to generate the output with token streaming:
for token in llm.stream(prompt, stop=['Q:']):
    print(token, end="")

2. Load embeddings toolkit and create vector database

To load an embedding model and use a vector database:

from llmflex.Embeddings import HuggingfaceEmbeddingsToolkit
from llmflex.VectorDBs import FaissVectorDatabase

# Loading the embedding model toolkit
embeddings = HuggingfaceEmbeddingsToolkit(model_id="thenlper/gte-small")

# Create a vector database
food = ["Apple", "Banana", "Pork"]
vectordb = FaissVectorDatabase.from_texts(embeddings=embeddings, texts=food)

# Do semantic search on the vector database
print(vectordb.search("Beef"))

3. Use tools

A BrowserTool class is implemented as an example of building a tool with LLMFlex. The tool uses DuckDuckGo by default. Here is how you can use it:

from llmflex.Tools import BrowserTool
from llmflex.Rankers import FlashrankRanker

# Create a reranker
ranker = FlashrankRanker()

# Create a browser tool with the embeddings toolkit created earlier
tool = BrowserTool(embeddings=embeddings, llm=llm, ranker=ranker)

# Run the tool
tool(search_query='Install python')

4. Running an agent

Use the one-shot ReAct agent to go through more complicated workflows.

from llmflex.Agents import Agent

agent = Agent(llm=llm, tools=[tool], prompt_template=model.prompt_template)
agent.run("Do some research online to find out the latest trends about Generative AI.")

5. Chat with the model in a Streamlit web app

If you just want a GUI to start chatting with your LLM, with both long-term and short-term memory, run this command in the terminal:

llmflex interface

If you want to configure the LLM, embedding model, text splitter, and reranker, first create a config file and modify it:

# Create a config file for the webapp
llmflex create-app-config

After modifying the file, run the following:

llmflex interface --config_dir chatbot_config.yaml

You will see a Streamlit frontend; use it to chat with the LLM.

Now you can upload your text files to create knowledge bases and talk about your documents with your AI assistant.

For further details on how to configure your YAML file, please read the documentation provided.

Documentation

Python documentation for all the classes, methods, and functions is provided in the ./docs directory in this repository.

License

This project is licensed under the terms of the MIT license.
