Wrappers around LLM API models and embeddings clients.
# Langchain LLM API
A Langchain-compatible implementation that enables integration with LLM-API.
The main reason for this package is to make it possible to use Langchain with any model running locally via LLM-API.
## Usage
Until it is integrated into Langchain itself, you can install this as a Python library with:

```shell
pip install langchain-llm-api
```
To use this Langchain implementation with LLM-API:

```python
from langchain_llm_api import LLMAPI

llm = LLMAPI(
    params={"temp": 0.2},
    verbose=True,
)

llm("What is the capital of France?")
```
Or with streaming:
```python
from langchain_llm_api import LLMAPI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
# CallbackManager must be imported as well; its import path
# depends on your langchain version (langchain.callbacks.manager
# in recent releases, langchain.callbacks.base in older ones).
from langchain.callbacks.manager import CallbackManager

llm = LLMAPI(
    params={"temp": 0.2},
    verbose=True,
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)

llm("What is the capital of France?")
```
Check LLM-API for the available models and their parameters.
## Embeddings
To use the embeddings endpoint:

```python
from langchain_llm_api import APIEmbeddings

emb = APIEmbeddings(
    host_name="your api host name",
    params={"n_predict": 300, "temp": 0.2, ...},
)
```
### Hashes for langchain_llm_api-0.0.1-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | bb46a951251b448429db98ece1f6cf609103a11e38dd8d73c000244a0485462d |
| MD5 | 24ea18de31f623cb9e452e177e635ffd |
| BLAKE2b-256 | 6bcd79526ec075e366576f87d397e1cf636da88a7dc03e026e6bed21ad72d004 |