
LangChain LLM Streamer

LangChain LLM Streamer is a Python package that lets you stream and print responses from LangChain's language models asynchronously, displaying each token as it arrives instead of waiting for the full response.

Installation

You can install the package using pip:

pip install langchain-llm-streamer

Usage

Here is an example of how to use the LangChain LLM Streamer:

from langchain.schema import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

from langchain_llm_streamer import stream_print


model = ChatOpenAI(
    api_key="***",  # Replace with your OpenAI API key
    model="gpt-3.5-turbo",
)

messages = [
    SystemMessage(content="You are a friendly AI. Please respond to the user's prompt."),
    HumanMessage(content="Tell me something about yourself.")
]

# Example usage
stream_print(model, messages)
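Conceptually, a streaming printer like this consumes an asynchronous stream of tokens from the model and flushes each one to the terminal as it arrives. The following stdlib-only sketch illustrates the idea without needing an API key; `fake_token_stream` and `stream_print_sketch` are illustrative names, not part of the package:

```python
import asyncio


async def fake_token_stream():
    # Stand-in for a model's async token stream (hypothetical example data).
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # yield control, as a real network stream would
        yield token


async def stream_print_sketch(token_stream):
    # Print each token as soon as it arrives, without buffering the full reply.
    received = []
    async for token in token_stream:
        print(token, end="", flush=True)
        received.append(token)
    print()  # final newline once the stream is exhausted
    return received


tokens = asyncio.run(stream_print_sketch(fake_token_stream()))
```

With a real LangChain chat model, the token stream would come from the model's own async streaming interface rather than a hand-written generator; the printing loop stays the same.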


Source distribution: langchain-llm-streamer-0.1.1.tar.gz (1.9 kB)

Built distribution: langchain_llm_streamer-0.1.1-py3-none-any.whl (2.3 kB)
