LLM Proxy
A low-code solution to efficiently manage multiple large language models, reducing the cost and complexity of using multiple LLMs
View Demo · Report Bug · Request Feature
What is LLM Proxy?
LLM Proxy is a tool that sits between your application and the different LLM providers. LLM Proxy's goal is to simplify the use of multiple LLMs through a TUI while providing cost and response optimization.
Getting Started
There are two ways to get started: you can clone the repo directly into your project, or install it as a library.
Prerequisites
- Python 3.11+
Installation
With pip:
pip install proxyllm
With poetry:
poetry add proxyllm
Run the install script for the default configuration file:
config --default-config
If you prefer poetry:
poetry run config --default-config
If the installation scripts do not work, you can visit the repo to grab a copy manually.
Note:
- Ensure that you have all of your API keys for each respective provider in the .env file (you can use .env.example as a reference)
- For Google's models, you will need the path to your application credentials and the project ID inside the .env file
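Once the package is installed and your .env is in place, a quick sanity check is to import the client from Python (a trivial sketch; it makes no API calls):
from proxyllm import LLMProxy  # should import without errors if installation and setup succeeded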
Usage
Basic Usage
Currently, LLM Proxy provides two different route types: Cost and Category.
To get started, import the LLMProxy client:
from proxyllm import LLMProxy
After the setup is complete, you only need 1 line of code to get started:
llmproxy_client = LLMProxy()
Note: You will need to specify your YAML configuration file if you did not use the default name:
llmproxy_client = LLMProxy(path_to_user_configuration="llmproxy.config.yml")
To use the LLM Proxy, simply call the route function with your prompt:
output = llmproxy_client.route(prompt=prompt)
The route function will return a CompletionResponse:
print("RESPONSE MODEL: ", output.response_model)
print("RESPONSE: ", output.response)
print("ERRORS: ", output.errors)
- response_model: contains the model used for the request
- response: contains the string response from the model
- errors: contains an array of models that failed to make a request, with their respective errors
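Putting the pieces above together, a minimal end-to-end sketch looks like this (the prompt text is purely illustrative):
from proxyllm import LLMProxy

# Uses the default llmproxy.config.yml; pass path_to_user_configuration to point at a different file
llmproxy_client = LLMProxy()

# Route the prompt to a model and inspect the returned CompletionResponse
output = llmproxy_client.route(prompt="Summarize the benefits of unit testing in one sentence.")

print("RESPONSE MODEL: ", output.response_model)  # model that served the request
print("RESPONSE: ", output.response)              # text returned by that model
print("ERRORS: ", output.errors)                  # models that failed, with their respective errors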
Important Note: Although parameters can be changed programmatically, it is best to favor the YAML configuration file. Only use the constructor parameters when you must override the YAML configuration.
Chat History
The user can pass in their own custom chat history to the proxy and it will route to the correct LLM with this chat history.
The data type of the chat history is:
chat_history: List[Dict[str,str]]
The chat history supports the following roles:
- system: Helps define the LLM's behavior. You can customize its personality or specify detailed instructions on how it should interact during conversations. Providing a system message is optional; if none is given, the assistant's behavior will likely resemble that of a generic message such as "You are a helpful assistant." If you plan on using the system role, place it at the first index of chat_history rather than later in the conversation.
- user: Contains requests or comments for the LLM to address.
- assistant: The response messages from the LLMs; these can also be authored by you to showcase desired behaviors.
An example of how chat history could be formatted is as follows:
chat_history = [
{
"role": "system",
"content": "you are math bot, and you're responses must be short and sweet",
},
{
"role": "user",
"content": "what is 1 + 1",
},
{
"role": "assistant",
"content": "2",
}
]
By default, the chat history is an empty array and does not need to be instantiated. To retrieve the chat history after routing, simply set your chat history variable to output.chat_history; this chat history can then be passed into the proxy's route() function.
The following example shows the usage of chat history with the proxy library:
prompt = "what is 1 + 1"
proxy_client = LLMProxy(route_type="cost")
output = proxy_client.route(prompt=prompt)
chat_history = output.chat_history
prompt2 = "What was the first question that I asked you?"
output = proxy_client.route(prompt=prompt2, chat_history=chat_history)
chat_history = output.chat_history
Notes:
- When passing in a populated custom chat history, the first message has to be either a system or assistant message, and the last message has to be an assistant message. The user message cannot be appended to the chat history; it has to be passed as the prompt to the proxy's route function.
- You must use only the following roles, otherwise routing will not work: user, assistant, system.
- To keep track of your chat history, make sure to set it equal to output.chat_history each time you make a call to the proxy's route() function.
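As a minimal sketch of how this plays out over several turns (the prompts are purely illustrative), a simple loop just re-assigns the history after every call:
from proxyllm import LLMProxy

proxy_client = LLMProxy()
chat_history = []  # starts empty; the proxy populates it after each call

for prompt in ["what is 1 + 1", "double that", "now subtract 3"]:
    # The new user message is always passed as the prompt, never appended to chat_history directly
    output = proxy_client.route(prompt=prompt, chat_history=chat_history)
    chat_history = output.chat_history  # re-assign so the next turn sees the full conversation
    print(output.response)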
Roadmap
- Support for more providers
  - Replicate
  - Claude
- Support for multimodal models
- Custom, optimized model for the general router
- Elo Routing
- Context Injection
- Filter/Security Layer
See the open issues for a full list of proposed features (and known issues).
Contributing
LLM Proxy is open source, so we are open to, and grateful for, contributions. Open-source communities are what make software great, so feel free to fork the repo and create a pull request with the feature tag. Thanks!
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
License
Distributed under the MIT License. See LICENSE.txt for more information.
Acknowledgments