LangChain LLM Streamer

Stream and print responses from LangChain's language models asynchronously.

LangChain LLM Streamer is a Python package that lets you stream and print responses from LangChain's language models asynchronously, so output appears incrementally as the model generates it rather than all at once.
Installation
You can install the package using pip:
pip install langchain-llm-streamer
Usage
Here is an example of how to use LangChain LLM Streamer:
from langchain.schema import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

from langchain_llm_streamer import stream_print

model = ChatOpenAI(
    api_key="***",  # Replace with your OpenAI API key
    model="gpt-3.5-turbo",
)

messages = [
    SystemMessage(content="You are a friendly AI. Please respond to the user's prompt."),
    HumanMessage(content="Tell me something about yourself."),
]

# Stream the model's response and print it as it arrives
stream_print(model, messages)
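For intuition, the streaming loop that a helper like stream_print presumably wraps can be sketched with LangChain's standard astream interface. This is a minimal, illustrative sketch, not the package's actual internals: stream_print_sketch and the stub model below are hypothetical names, and the stub stands in for ChatOpenAI so the example runs without an API key.

```python
import asyncio


# Hypothetical sketch: iterate the model's async stream (LangChain's
# `astream` interface) and print each chunk the moment it arrives.
async def stream_print_sketch(model, messages):
    parts = []
    async for chunk in model.astream(messages):
        print(chunk.content, end="", flush=True)  # token-by-token output
        parts.append(chunk.content)
    print()  # final newline after the stream ends
    return "".join(parts)


# Stub chunk and model so the sketch runs offline; a real ChatOpenAI
# instance would be used in its place.
class _FakeChunk:
    def __init__(self, content):
        self.content = content


class _FakeChatModel:
    async def astream(self, messages):
        for token in ("Hello", ", ", "world", "!"):
            yield _FakeChunk(token)


result = asyncio.run(stream_print_sketch(_FakeChatModel(), []))
```

Any model object exposing an async astream method works with this loop, which is why the package can accept the model as a plain argument.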
Hashes for langchain-llm-streamer-0.1.1.tar.gz (source distribution)

Algorithm | Hash digest
---|---
SHA256 | 025ff8ca76734891a732910f76174487b388d4a170d14a0b315fd786b47fafca
MD5 | ba176a90cc02488eac1b4c862175917b
BLAKE2b-256 | 748e1463a820acebd2303ca9245a297ed4b404d36a72f8e67363c791630d8cd2
Hashes for langchain_llm_streamer-0.1.1-py3-none-any.whl (built distribution)

Algorithm | Hash digest
---|---
SHA256 | 45b8375142084f93dc436fabbdbb8bbea82d802c9717a9292802511ebb356b33
MD5 | cce5ec51fd88ddb826201510e0dac2e5
BLAKE2b-256 | c83a902674ac16a61b477ffb4a425f6952647966f1c6752d399b8bc956dfe56a