
A streaming chat toolkit for pre-trained large language models (LLMs)


ChatStream

English | 日本語

ChatStream is a chat toolkit for pre-trained large language models.

It can be embedded in FastAPI/Starlette-based web applications and web APIs to perform streaming text generation with pre-trained language models while keeping the generation load under control.

Installation

pip install chatstream

Quick Start

Install required packages

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
pip install transformers
pip install "uvicorn[standard]" gunicorn 

Implementing a ChatStream server

Implement a streaming chat server for pre-trained models.

import torch
from fastapi import FastAPI, Request
from fastsession import FastSessionMiddleware, MemoryStore
from transformers import AutoTokenizer, AutoModelForCausalLM

from chatstream import ChatStream, ChatPromptTogetherRedPajamaINCITEChat as ChatPrompt

model_path = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"
device = "cuda" # "cuda" / "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
model.to(device)

chat_stream = ChatStream(
    num_of_concurrent_executions=2,  # max number of concurrent text generation tasks
    max_queue_size=5,  # max number of requests allowed to wait in the queue
    model=model,
    tokenizer=tokenizer,
    device=device,
    chat_prompt_clazz=ChatPrompt,
)

app = FastAPI()

# Specify session middleware to keep per-user ChatPrompt in the HTTP session
app.add_middleware(FastSessionMiddleware,
                   secret_key="your-session-secret-key",
                   store=MemoryStore(),
                   http_only=True,
                   secure=False,
                   )


@app.post("/chat_stream")
async def stream_api(request: Request):
    # Pass the FastAPI Request object to `handle_starlette_request`;
    # queueing and concurrency control are handled automatically
    response = await chat_stream.handle_starlette_request(request)
    return response


@app.on_event("startup")
async def startup():
    # Start the queueing system with `start_queue_worker` when the web server starts up
    await chat_stream.start_queue_worker()
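
Running the server

Start the web server with uvicorn. A minimal sketch, assuming the code above is saved as example.py (the file name and port are illustrative):

uvicorn example:app --host 0.0.0.0 --port 8000

Trying the endpoint

The sketch below streams a response from the /chat_stream endpoint with the requests library. The localhost URL and the form field name user_input are assumptions for illustration, not confirmed by this README; check the ChatStream documentation for the exact request format expected by handle_starlette_request. Reusing a requests.Session keeps the session cookie between turns, so the per-user ChatPrompt stored by the session middleware is reused.

import requests

# Assumptions (illustrative only): the server listens on http://localhost:8000
# and the endpoint reads a form field named "user_input".
session = requests.Session()  # keeps the session cookie so chat history is preserved

with session.post(
    "http://localhost:8000/chat_stream",
    data={"user_input": "Hello, who are you?"},  # hypothetical field name
    stream=True,
) as response:
    response.raise_for_status()
    # Print each chunk of the streamed generation as it arrives
    for chunk in response.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)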
