LLM API Documentation

This API allows interaction with a distributed LLM architecture using RabbitMQ and Redis. Requests are processed asynchronously by a worker system (LLM-core) that generates responses and saves them to Redis. The API retrieves results from Redis and sends them back to the user.


Endpoints

/generate

  • Method: POST
  • Description: Sends a prompt for single message generation.
  • Request Body:
    {
      "job_id": "string",
      "meta": {
        "temperature": 0.2,
        "tokens_limit": 8096,
        "stop_words": [
          "string"
        ],
        "model": "string"
      },
      "content": "string"
    }
    
    • job_id (string): Unique identifier for the task.
    • meta (object): Metadata for generation:
      • temperature (float): The degree of randomness in generation (default 0.2).
      • tokens_limit (integer): Maximum tokens for the response (default 8096).
      • stop_words (list of strings): Words to stop generation.
      • model (string): Model to use for generation.
    • content (string): The input text for generation.
  • Response:
    {
      "content": "string"
    }
    
    • content (string): The generated text.
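
For illustration, the same request can be sent from Python. The snippet below is a minimal sketch using the requests library and assumes the API is reachable at http://localhost:8000, as in the curl examples further down:

import requests  # sketch only; assumes the API is running on localhost:8000

payload = {
    "job_id": "12345",
    "meta": {
        "temperature": 0.2,
        "tokens_limit": 8096,
        "stop_words": ["stop"],
        "model": "gpt-model",
    },
    "content": "What is AI?",
}

response = requests.post("http://localhost:8000/generate", json=payload)
print(response.json()["content"])  # the generated text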

/chat_completion

  • Method: POST
  • Description: Sends a conversation history for chat-based completions.
  • Request Body:
    {
      "job_id": "string",
      "meta": {
        "temperature": 0.2,
        "tokens_limit": 8096,
        "stop_words": [
          "string"
        ],
        "model": "string"
      },
      "messages": [
        {
          "role": "string",
          "content": "string"
        }
      ]
    }
    
    • job_id (string): Unique identifier for the task.
    • meta (object): Metadata for chat completion:
      • temperature (float): The degree of randomness in responses (default 0.2).
      • tokens_limit (integer): Maximum tokens for the response (default 8096).
      • stop_words (list of strings): Words to stop the generation.
      • model (string): Model to use for chat completion.
    • messages (list of objects): Conversation history:
      • role (string): Role of the message sender ("user", "assistant", etc.).
      • content (string): Message content.
  • Response:
    {
      "content": "string"
    }
    
    • content (string): The generated response.
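
A chat request is built the same way. The sketch below (again assuming the requests library and a local API on port 8000) also shows how the returned content can be appended to the conversation for a follow-up turn:

import requests  # sketch only; assumes the API is running on localhost:8000

payload = {
    "job_id": "12345",
    "meta": {"temperature": 0.2, "tokens_limit": 8096, "stop_words": ["stop"], "model": "gpt-model"},
    "messages": [{"role": "user", "content": "What is AI?"}],
}

reply = requests.post("http://localhost:8000/chat_completion", json=payload).json()["content"]

# Continue the conversation by appending the assistant reply and a new user message
# (a follow-up request would typically use a new job_id).
payload["messages"].append({"role": "assistant", "content": reply})
payload["messages"].append({"role": "user", "content": "Give a concrete example."})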

Environment Variables

These variables must be configured and synchronized with the LLM-core system:

RabbitMQ Configuration

  • RABBIT_MQ_HOST: RabbitMQ server hostname or IP.
  • RABBIT_MQ_PORT: RabbitMQ server port.
  • RABBIT_MQ_LOGIN: RabbitMQ login username.
  • RABBIT_MQ_PASSWORD: RabbitMQ login password.
  • QUEUE_NAME: Name of the RabbitMQ queue to process tasks.

Redis Configuration

  • REDIS_HOST: Redis server hostname or IP.
  • REDIS_PORT: Redis server port.
  • REDIS_PREFIX: Key prefix for task results in Redis.

Internal LLM-core Configuration

  • INNER_LLM_URL: URL for the LLM-core worker service.
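
For reference, here is a minimal sketch of reading these variables from the environment. It is illustrative only; the package's own Config.read_from_env() may differ:

import os

# Illustrative only: collects the variables documented above into a dict.
def read_config_from_env() -> dict:
    return {
        "rabbit_host": os.environ["RABBIT_MQ_HOST"],
        "rabbit_port": int(os.environ["RABBIT_MQ_PORT"]),
        "rabbit_login": os.environ["RABBIT_MQ_LOGIN"],
        "rabbit_password": os.environ["RABBIT_MQ_PASSWORD"],
        "queue_name": os.environ["QUEUE_NAME"],
        "redis_host": os.environ["REDIS_HOST"],
        "redis_port": int(os.environ["REDIS_PORT"]),
        "redis_prefix": os.environ.get("REDIS_PREFIX", ""),
        "inner_llm_url": os.environ.get("INNER_LLM_URL", ""),
    }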

Example .env File

# API
CELERY_BROKER_URL=amqp://admin:admin@127.0.0.1:5672/
CELERY_RESULT_BACKEND=redis://127.0.0.1:6379/0
REDIS_HOST=redis
REDIS_PORT=6379
RABBIT_MQ_HOST=rabbitmq
RABBIT_MQ_PORT=5672
RABBIT_MQ_LOGIN=admin
RABBIT_MQ_PASSWORD=admin
WEB_RABBIT_MQ=15672
API_PORT=6672

# RabbitMQ
RABBITMQ_DEFAULT_USER=admin
RABBITMQ_DEFAULT_PASS=admin

System Architecture

Below is the architecture diagram for the interaction between API, RabbitMQ, LLM-core, and Redis:

+-------------------+       +-----------------+       +----------------+       +-------------------+
|                   |       |                 |       |                |       |                   |
|       API         +------>+    RabbitMQ     +------>+    LLM-core    +------>+      Redis        |
|                   |       |                 |       |                |       |                   |
+-------------------+       +-----------------+       +----------------+       +-------------------+
        ^                             ^                                ^
        |                             |                                |
        |      Requests are queued    |    Worker retrieves tasks      | Results are stored in Redis
        |      Results are polled     |                                |
        +-----------------------------+--------------------------------+

Flow

  1. API:

    • Receives requests via endpoints (/generate, /chat_completion).
    • Publishes tasks to RabbitMQ.
    • Polls Redis for results based on task IDs.
  2. RabbitMQ:

    • Acts as a queue for task distribution.
    • LLM-core workers subscribe to queues to process tasks.
  3. LLM-core:

    • Retrieves tasks from RabbitMQ.
    • Processes prompts or chat completions using LLM models.
    • Stores results in Redis.
  4. Redis:

    • Acts as the result storage.
    • API retrieves results from Redis when tasks are completed (a minimal code sketch of this flow follows below).
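
The sketch below illustrates this flow from the worker side. It is not the actual LLM-core implementation; it assumes the pika and redis client libraries, the environment variables listed above, and a guessed Redis key format of <REDIS_PREFIX>:<job_id>:

import json
import os

import pika
import redis

# Illustrative worker loop, NOT the real LLM-core: consume tasks from
# RabbitMQ, "process" them, and store the result in Redis for the API to poll.
redis_client = redis.Redis(host=os.environ["REDIS_HOST"], port=int(os.environ["REDIS_PORT"]))

def handle_task(channel, method, properties, body):
    task = json.loads(body)
    # A real worker would call the LLM here (e.g. via INNER_LLM_URL).
    result = {"content": f"echo: {task.get('content', '')}"}
    key = f"{os.environ.get('REDIS_PREFIX', 'llm')}:{task['job_id']}"  # assumed key format
    redis_client.set(key, json.dumps(result))
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(
    pika.ConnectionParameters(
        host=os.environ["RABBIT_MQ_HOST"],
        port=int(os.environ["RABBIT_MQ_PORT"]),
        credentials=pika.PlainCredentials(os.environ["RABBIT_MQ_LOGIN"], os.environ["RABBIT_MQ_PASSWORD"]),
    )
)
channel = connection.channel()
channel.queue_declare(queue=os.environ["QUEUE_NAME"], durable=True)
channel.basic_consume(queue=os.environ["QUEUE_NAME"], on_message_callback=handle_task)
channel.start_consuming()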

Usage

Running the API

  1. Configure environment variables in the .env file.
  2. Start the API from an entry point that creates the FastAPI application (Config and get_router are provided by the protollm_api package):
from fastapi import FastAPI

# Config and get_router are provided by the protollm_api package
app = FastAPI()

config = Config.read_from_env()

app.include_router(get_router(config))

Running the API Locally (without Docker)

To run the API locally using Uvicorn, use the following command:

uvicorn protollm_api.backend.main:app --host 127.0.0.1 --port 8000 --reload

Or use this main file:

import uvicorn
from fastapi import FastAPI

# Config and get_router are imported from the protollm_api package
app = FastAPI()

config = Config.read_from_env()

app.include_router(get_router(config))

if __name__ == "__main__":
    uvicorn.run("protollm_api.backend.main:app", host="127.0.0.1", port=8000, reload=True)

Example Request

Generate

curl -X POST "http://localhost:8000/generate" -H "Content-Type: application/json" -d '{
  "job_id": "12345",
  "meta": {
    "temperature": 0.5,
    "tokens_limit": 1000,
    "stop_words": ["stop"],
    "model": "gpt-model"
  },
  "content": "What is AI?"
}'

Chat Completion

curl -X POST "http://localhost:8000/chat_completion" -H "Content-Type: application/json" -d '{
  "job_id": "12345",
  "meta": {
    "temperature": 0.5,
    "tokens_limit": 1000,
    "stop_words": ["stop"],
    "model": "gpt-model"
  },
  "messages": [
    {"role": "user", "content": "What is AI?"},
    {"role": "assistant", "content": "Artificial Intelligence is..."}
  ]
}'

Notes

  • Ensure that RABBIT_MQ_HOST, RABBIT_MQ_PORT, REDIS_HOST, and other variables are synchronized between the API and LLM-core containers.
  • The system supports distributed scaling by adding more LLM-core workers to the RabbitMQ queue.
