
Project description

LLMChatLinker

LLMChatLinker is a Middleware SDK designed to facilitate interaction between clients and Large Language Models (LLMs). The SDK acts as an intermediary between the client and the LLM(s), allowing multiple users to communicate with the LLM(s) simultaneously through the client's front-end.

Architecture

The interaction flow follows the fetch-decode-execute-store cycle, similar to the architecture of a CPU (CISC/RISC):

  1. Fetch: The Orchestrator (acting as the CPU) fetches the instructions from the instruction queue.
  2. Decode: The Control Unit decodes the fetched instruction and, based on its type, determines which manage unit should execute it.
  3. Execute: The decoded instruction is executed by the relevant manage unit (User, Chat, LLM, or Database Manage Unit).
  4. Store: The result from the execution is stored back in the result queue, ensuring each user request is immediately linked to its corresponding result.
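
As an illustration, the cycle can be sketched as a simple loop. The snippet below is a minimal sketch with hypothetical queue and manage-unit names; the actual Orchestrator, Control Unit, and queue interfaces in the SDK may differ.

import json
import queue

# Hypothetical in-memory stand-ins for the instruction and result queues.
instruction_queue = queue.Queue()
result_queue = queue.Queue()

# Hypothetical manage units, keyed by the instruction-type prefix (USER_*, CHAT_*, LLM_*).
def user_manage_unit(data):
    return {"status": "success", "handled_by": "UserManageUnit"}

def chat_manage_unit(data):
    return {"status": "success", "handled_by": "ChatManageUnit"}

def llm_manage_unit(data):
    return {"status": "success", "handled_by": "LLMManageUnit"}

MANAGE_UNITS = {"USER": user_manage_unit, "CHAT": chat_manage_unit, "LLM": llm_manage_unit}

def orchestrator_step():
    raw = instruction_queue.get()                   # 1. Fetch
    instruction = json.loads(raw)
    prefix = instruction["type"].split("_")[0]      # 2. Decode: route by instruction type
    unit = MANAGE_UNITS[prefix]
    result = unit(instruction.get("data", {}))      # 3. Execute
    result_queue.put(json.dumps(result))            # 4. Store

# Process one USER_CREATE instruction end to end.
instruction_queue.put(json.dumps({"type": "USER_CREATE", "data": {"username": "john_doe"}}))
orchestrator_step()
print(result_queue.get())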

Components

  • Client: The front-end that interacts with users.
  • Middleware SDK (LLMChatLinker): Facilitates communication between the client and LLM(s). Composed of various units that mimic the functionality of CPU components:
    • Orchestrator: Acts as the CPU, fetching, decoding, executing, and storing instructions.
    • Control Unit: Decodes instructions fetched by the Orchestrator.
    • User Manage Unit: Manages user-related instructions.
    • Chat Manage Unit: Manages chat-related instructions.
    • LLM Manage Unit: Manages LLM-related instructions.
    • Database Manage Unit: Manages database interactions.
  • LLM(s): Large Language Models providing responses to user queries.

Features

  • Fetch-Decode-Execute-Store Cycle: Adopts a CPU-like mechanism to process instructions efficiently.
  • User Management: Create, update, delete, and list users.
  • Chat Management: Create, update, delete, load, and list chats.
  • LLM Management: Add, update, delete, and list LLM providers and LLMs.
  • Instruction Management: Enable/disable instruction recording; delete and list instruction records.

Quick Start Guide

Prerequisites

  • Docker

Setting Up the Environment

  1. Clone the repository:

    git clone https://github.com/cjlee7128/LLMChatLinker.git
    cd LLMChatLinker
    
  2. Environment Configuration:

    The repository includes a .env.example file, which contains example environment variables. Create a .env file so that Docker Compose can use these settings:

    cp .env.example .env
    

    Repeat the same process for the llmchatlinker-frontend:

    cd llmchatlinker-frontend
    cp .env.example .env
    cd ..
    
  3. Run services using Docker Compose:

    Ensure that you have a properly configured docker-compose.yml file. It should define all necessary services, such as PostgreSQL and RabbitMQ, along with any additional dependencies.

    Start all services defined in docker-compose.yml:

    docker compose up --build -d
    
  4. Pull and run the MLModelScope API agent individually:

    Execute the following command to start the MLModelScope API agent:

    docker run -d -p 15555:15555 xlabub/pytorch-agent-api:latest
    

    If you do not want the MLModelScope API agent to re-download Hugging Face models on every run, mount a local cache directory instead:

    docker run -d -e HF_HOME=/root/.cache/huggingface \
      -p 15555:15555 -v ~/.cache/huggingface:/root/.cache/huggingface xlabub/pytorch-agent-api:latest
    

    (Recommended) After starting the MLModelScope API agent, query the API once to download the model if it is not already present; even when the model is already cached, this warms it up and reduces the response time of subsequent requests. For example:

    curl http://localhost:15555/api/chat \
      -H "Content-Type: application/json" \
      -d '{
          "model": "llama_3_2_1b_instruct",
          "messages": [
              {"role": "user", "content": "What is the longest river in the world?"}
          ]
      }'
    

    The above command will download the llama_3_2_1b_instruct model if it does not exist and generate a response for the given user message.
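
    The same warm-up request can be issued from Python; the sketch below uses the requests library (a third-party package, not an LLMChatLinker dependency, so install it separately):

    import requests

    # Warm-up request to the MLModelScope API agent (assumes it listens on localhost:15555).
    response = requests.post(
        "http://localhost:15555/api/chat",
        json={
            "model": "llama_3_2_1b_instruct",
            "messages": [
                {"role": "user", "content": "What is the longest river in the world?"}
            ],
        },
        timeout=600,  # the first call may take a while if the model has to be downloaded
    )
    print(response.json())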

Accessing the Front-end

After successfully running the docker compose command, you can access the front-end application via your web browser. Open the following URL:

http://localhost:<FRONTEND_PORT>

Replace <FRONTEND_PORT> with the actual port number specified in the .env file within the LLMChatLinker directory. This value should correspond to the port binding for your front-end application.

Important Notes

  • Environment Variables: Both backend and frontend parts of the application rely on certain environment variables. Ensure your .env files have correct values for seamless deployment.

  • Docker Compose: It's crucial that your docker-compose.yml is configured correctly with all the required services. If you need additional environment-specific settings, update your .env files before running docker compose up --build -d.

Deployment without Front-end (Optional)

LLMChatLinker can also be deployed without the front-end application, directly in a Python environment.

To deploy LLMChatLinker without the front-end, follow the steps below:

  1. Clone the repository:

    git clone https://github.com/cjlee7128/LLMChatLinker.git
    cd LLMChatLinker
    
  2. Run PostgreSQL service using Docker:

    docker run --name my_postgres -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=mypassword -e POSTGRES_DB=mydatabase -p 5433:5432 -d postgres:16
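
    To verify that the container accepts connections, you can run a quick check with psycopg2 (an illustrative snippet, assuming psycopg2-binary is installed; note the host port 5433 from the command above):

    import psycopg2

    # Connect using the credentials and host port from the docker run command above.
    conn = psycopg2.connect(
        host="localhost",
        port=5433,
        user="myuser",
        password="mypassword",
        dbname="mydatabase",
    )
    print(conn.server_version)  # an integer such as 160004 for PostgreSQL 16
    conn.close()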
    
  3. Run RabbitMQ service using Docker:

    docker run -d -e RABBITMQ_DEFAULT_USER=myuser -e RABBITMQ_DEFAULT_PASS=mypassword --name rabbitmq -p 5673:5672 -p 15673:15672 rabbitmq:3-management
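
    Similarly, broker connectivity can be checked with pika (illustrative; note the non-default host port 5673 from the command above):

    import pika

    # Connect using the credentials and host port from the docker run command above.
    credentials = pika.PlainCredentials("myuser", "mypassword")
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="localhost", port=5673, credentials=credentials)
    )
    print(connection.is_open)  # True if the broker is reachable
    connection.close()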
    
  4. Run the MLModelScope API agent using Docker:

    Execute the following command to start the MLModelScope API agent:

    docker run -d -p 15555:15555 xlabub/pytorch-agent-api:latest
    

    If you do not want the MLModelScope API agent to re-download Hugging Face models on every run, mount a local cache directory instead:

    docker run -d -e HF_HOME=/root/.cache/huggingface \
      -p 15555:15555 -v ~/.cache/huggingface:/root/.cache/huggingface xlabub/pytorch-agent-api:latest
    

    (Recommended) After starting the MLModelScope API agent, query the API once to download the model if it is not already present; even when the model is already cached, this warms it up and reduces the response time of subsequent requests. For example:

    curl http://localhost:15555/api/chat \
      -H "Content-Type: application/json" \
      -d '{
          "model": "llama_3_2_1b_instruct",
          "messages": [
              {"role": "user", "content": "What is the longest river in the world?"}
          ]
      }'
    

    The above command will download the llama_3_2_1b_instruct model if it does not exist and generate a response for the given user message.

  5. Run the LLMChatLinker service without the API:

    python -m llmchatlinker.main_without_api 
    
  6. (Optional) Run Example Scripts:

    You can run the example scripts provided in the examples directory to interact with the LLMChatLinker service.

    python -m examples.1_create_user
    python -m examples.2_create_chat
    python -m examples.3_add_llm_provider
    python -m examples.4_add_llm
    python -m examples.5_generate_llm_response
    python -m examples.12_llm_response_regenerate
    

    Make sure to replace the placeholders with the actual IDs generated during the execution of the previous scripts.

Usage

Instructions

User-related Instructions

  • USER_CREATE: Create a new user.
  • USER_UPDATE: Update an existing user.
  • USER_DELETE: Delete an existing user.
  • USER_LIST: List all users.
  • USER_GET: Get a user by username or ID.
  • USER_INSTRUCTION_RECORDING_ENABLE: Enable instruction recording for a user.
  • USER_INSTRUCTION_RECORDING_DISABLE: Disable instruction recording for a user.
  • USER_INSTRUCTION_RECORDS_DELETE: Delete all instruction records for a user.
  • USER_INSTRUCTION_RECORDS_LIST: List all instruction records for a user.
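
Each instruction uses the same publish/consume pattern demonstrated in the Examples section below. For instance, a USER_LIST request might look like the following sketch (the empty data payload is an assumption; the Examples section shows confirmed payload shapes):

from llmchatlinker.message_queue import publish_message
import json

def main():
    # USER_LIST is assumed to need no data fields in this sketch.
    instruction = {"type": "USER_LIST", "data": {}}

    response = publish_message(json.dumps(instruction))
    print(f" [x] Received {response}")

if __name__ == "__main__":
    main()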

Chat-related Instructions

  • CHAT_CREATE: Create a new chat.
  • CHAT_UPDATE: Update an existing chat.
  • CHAT_DELETE: Delete an existing chat.
  • CHAT_LOAD: Load an existing chat.
  • CHAT_LIST: List all chats.
  • CHAT_LIST_BY_USER: List all chats for a user.

LLM-related Instructions

  • LLM_RESPONSE_GENERATE: Generate a response from the LLM.
  • LLM_RESPONSE_REGENERATE: Regenerate a response from the LLM.
  • LLM_PROVIDER_ADD: Add a new LLM provider.
  • LLM_PROVIDER_UPDATE: Update an existing LLM provider.
  • LLM_PROVIDER_DELETE: Delete an LLM provider.
  • LLM_PROVIDER_LIST: List all LLM providers.
  • LLM_ADD: Add a new LLM.
  • LLM_UPDATE: Update an existing LLM.
  • LLM_DELETE: Delete an LLM.
  • LLM_LIST: List all LLMs.
  • LLM_LIST_BY_PROVIDER: List all LLMs for a provider.
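
The chat- and LLM-related listing instructions follow the same shape. For example, LLM_LIST_BY_PROVIDER takes the provider ID returned by LLM_PROVIDER_ADD (a sketch; the provider_id field name follows the LLM_ADD example below, but the exact payload is an assumption):

from llmchatlinker.message_queue import publish_message
import json

def main():
    instruction = {
        "type": "LLM_LIST_BY_PROVIDER",
        "data": {
            "provider_id": "{LLM_PROVIDER_ID}"  # replace with an actual provider ID
        }
    }

    response = publish_message(json.dumps(instruction))
    print(f" [x] Received {response}")

if __name__ == "__main__":
    main()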

Examples

Below are some example usage scripts to interact with LLMChatLinker.

1. Create a User

User creation requires a username and a profile. The response will contain the user ID. Make sure to replace {USER_ID} with the actual user ID in the subsequent instructions.

from llmchatlinker.message_queue import publish_message
import json

def main():
    instruction = {
        "type": "USER_CREATE",
        "data": {
            "username": "john_doe",
            "profile": "Sample profile"
        }
    }

    response = publish_message(json.dumps(instruction))
    print(f" [x] Received {response}")
    response = json.loads(response.decode('utf-8'))
    print(f" [x] User ID: {response['data']['user']['user_id']}")

if __name__ == "__main__":
    main()

2. Create a Chat

Chat creation requires a title and a list of user_ids. The response will contain the chat ID. Make sure to replace {CHAT_ID} with the actual chat ID in the subsequent instructions.

You can get the User ID from the response of the previous instruction.

from llmchatlinker.message_queue import publish_message
import json

def main():
    instruction = {
        "type": "CHAT_CREATE",
        "data": {
            "title": "Sample Chat",
            "user_ids": ["{USER_ID}"]
        }
    }

    response = publish_message(json.dumps(instruction))
    print(f" [x] Received {response}")
    response = json.loads(response.decode('utf-8'))
    print(f" [x] Chat ID: {response['data']['chat']['chat_id']}")

if __name__ == "__main__":
    main()

3. Add an LLM Provider

LLM Provider addition requires a name and an API endpoint. The response will contain the LLM Provider ID. Make sure to replace {LLM_PROVIDER_ID} with the actual LLM Provider ID in the subsequent instructions.

from llmchatlinker.message_queue import publish_message
import json

def main():
    instruction = {
        "type": "LLM_PROVIDER_ADD",
        "data": {
            "name": "MLModelScope",
            "api_endpoint": "http://localhost:15555/api/chat"
        }
    }

    response = publish_message(json.dumps(instruction))
    print(f" [x] Received {response}")
    response = json.loads(response.decode('utf-8'))
    print(f" [x] LLM Provider ID: {response['data']['provider']['provider_id']}")

if __name__ == "__main__":
    main()

4. Add an LLM

LLM addition requires an LLM Provider ID and an LLM name. The response will contain the LLM ID. Make sure to replace {LLM_ID} with the actual LLM ID in the subsequent instructions.

You can get the LLM Provider ID from the response of the previous instruction.

from llmchatlinker.message_queue import publish_message
import json

def main():
    instruction = {
        "type": "LLM_ADD",
        "data": {
            "provider_id": "{LLM_PROVIDER_ID}",
            "llm_name": "llama_3_2_1b_instruct"
        }
    }

    response = publish_message(json.dumps(instruction))
    print(f" [x] Received {response}")
    response = json.loads(response.decode('utf-8'))
    print(f" [x] LLM ID: {response['data']['llm']['llm_id']}")

if __name__ == "__main__":
    main()

5. Generate an LLM Response

LLM response generation requires a User ID, Chat ID, LLM Provider ID, LLM ID, and user input. The response will contain the message ID. Make sure to replace {MESSAGE_ID} with the actual message ID in the subsequent instructions.

You can get the User ID, Chat ID, LLM Provider ID, and LLM ID from the responses of the previous instructions.

from llmchatlinker.message_queue import publish_message
import json

def main():
    instruction = {
        "type": "LLM_RESPONSE_GENERATE",
        "data": {
            "user_id": "{USER_ID}",
            "chat_id": "{CHAT_ID}",
            "provider_id": "{LLM_PROVIDER_ID}",
            "llm_id": "{LLM_ID}",
            "user_input": "What is the longest river in the world?"
        }
    }

    response = publish_message(json.dumps(instruction))
    print(f" [x] Received {response}")
    response = json.loads(response.decode('utf-8'))
    print(f" [x] Message ID: {response['data']['llm_response']['message_id']}")

if __name__ == "__main__":
    main()

6. Regenerate an LLM Response

LLM response regeneration requires a Message ID. The response will contain the regenerated response.

You can get the Message ID from the response of the previous instruction.

from llmchatlinker.message_queue import publish_message
import json

def main():
    instruction = {
        "type": "LLM_RESPONSE_REGENERATE",
        "data": {
            "message_id": "{MESSAGE_ID}"  # ID of the original user message to regenerate response for
        }
    }

    response = publish_message(json.dumps(instruction))
    print(f" [x] Received {response}")
    response = json.loads(response.decode('utf-8'))
    print(f" [x] Message ID: {response['data']['llm_response']['message_id']}")

if __name__ == "__main__":
    main()

Download files

Download the file for your platform.

Source Distribution

llmchatlinker-0.1.2.tar.gz (28.4 kB)


Built Distribution


llmchatlinker-0.1.2-py3-none-any.whl (26.3 kB)


File details

Details for the file llmchatlinker-0.1.2.tar.gz.

File metadata

  • Download URL: llmchatlinker-0.1.2.tar.gz
  • Size: 28.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.8.20

File hashes

Hashes for llmchatlinker-0.1.2.tar.gz
  • SHA256: 230bdf4f04e1c4a4e01bc3eec00b3704f72f526e23984aa44a5157de31155cc5
  • MD5: a28433dd4753fd2a2ae11eec43279390
  • BLAKE2b-256: 4987aa0b32e47c201e84b56397b7e9c639fdb6129218d30ade94f02ab683f99a


File details

Details for the file llmchatlinker-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: llmchatlinker-0.1.2-py3-none-any.whl
  • Size: 26.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.8.20

File hashes

Hashes for llmchatlinker-0.1.2-py3-none-any.whl
  • SHA256: 35e1e837bd1e731971dcd2e0318b8e427b80517611952176f8ebdb5461157350
  • MD5: ce2fd7d8015b9b4100886c1a4aa3a88d
  • BLAKE2b-256: 67714b0b891963f3567c25fe929cfc71c40270226b139c9a979d6d4aebce7b0d

