
Wrappers around LLM API models and embeddings clients.


Langchain LLM API

A Langchain-compatible implementation that enables integration with LLM-API

The main reason for implementing this package is to make Langchain usable with any locally run model.

Usage

You can install this as a Python library with the following command (until it is integrated into Langchain itself):

pip install langchain-llm-api

To use this langchain implementation with the LLM-API:

from langchain_llm_api import LLMAPI, APIEmbeddings

llm = LLMAPI(
    params={"temp": 0.2},
    verbose=True
)

llm("What is the capital of France?")

Or with streaming:

from langchain_llm_api import LLMAPI, APIEmbeddings
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = LLMAPI(
    params={"temp": 0.2},
    verbose=True,
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])
)
)

llm("What is the capital of France?")
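To make the streaming setup above more concrete, here is a minimal, self-contained sketch of the callback pattern it relies on. The `CollectingHandler` class and `fake_stream` function are hypothetical stand-ins (not real Langchain or LLM-API classes): the client calls `on_llm_new_token` for each token as it arrives, which is how `StreamingStdOutCallbackHandler` prints output incrementally.

```python
# Hypothetical sketch of the streaming-callback pattern; these names are
# illustrative, not part of langchain or langchain_llm_api.
class CollectingHandler:
    """Receives tokens one at a time, like a Langchain callback handler."""
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token):
        # A real handler might print the token immediately instead.
        self.tokens.append(token)

def fake_stream(handler, tokens):
    # Stands in for the LLM server pushing tokens over the wire.
    for t in tokens:
        handler.on_llm_new_token(t)

handler = CollectingHandler()
fake_stream(handler, ["The", " capital", " is", " Paris."])
print("".join(handler.tokens))  # → The capital is Paris.
```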

Check LLM-API for the available models and their parameters.

Embeddings

To use the embeddings endpoint:

emb = APIEmbeddings(
    host_name="your api host name",
    params={"n_predict": 300, "temp": 0.2, ...}
)
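Assuming `APIEmbeddings` follows Langchain's standard `Embeddings` interface (`embed_documents` / `embed_query` returning lists of floats), a typical use is similarity search over document vectors. The sketch below shows the pattern with the API calls left as comments (the host name is a placeholder); the cosine-similarity helper is plain Python.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical usage, assuming the Langchain Embeddings interface:
# emb = APIEmbeddings(host_name="your api host name", params={"temp": 0.2})
# doc_vectors = emb.embed_documents(["Paris is the capital of France."])
# query_vector = emb.embed_query("What is the capital of France?")
# best = max(doc_vectors, key=lambda v: cosine_similarity(query_vector, v))
```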
