Bridge for LLMs
Project description
AIBridge 0.0.1
AIBridge is a Python package that supports multiple LLMs. Users can make use of formatters, prompts, and variables to get the most out of LLMs through AIBridge.
Requirements
- Python 3
Configuration
- With AIBridge you can save your prompts in an SQL/NoSQL database; by default, SQLite is used on local disk
- To configure the database, add the details to the config file:
from AIBridge import SetConfig
# call the config method
SetConfig.set_db_config(database="sql", database_name=None, database_uri=None)
# parameters:
# database: "sql" or "nosql"
# database_uri: URI of the database of your choice (all SQL databases supported; MongoDB for NoSQL)
- If you want to use SQLite on disk, no database configuration is needed
- Currently, AIBridge supports only the OpenAI API:
from AIBridge import SetConfig
SetConfig.set_api_key(ai_service="open_ai",key="YOUR_API_KEY",priority="high")
# priority: high/medium/low/equal
Prompt save
- The prompt save mechanism is used to store reusable, high-performing prompts that give you exceptional results from an LLM
from AIBridge import PromptInsertion
# save prompt
data = PromptInsertion.save_prompt(
prompt="your prompt:{{data}},context:{{context}}",
name="first_prompt",
prompt_data={"data": "what is purpose of the ozone here"},
variables={"context": "environment_context"},
)
print(data)
# parameters: prompt_data: used to manipulate the same prompt with different context at runtime
# variables: used to manipulate the prompt with a fixed context, as a variable refers to specific stored data
# update_prompt can change prompt_data and variables, and is used to get different output from the same prompt
data = PromptInsertion.update_prompt(
id="prompt_id",
name="updated_prompt",
prompt_data={"data": "write about plastic pollution"},
variables={"context": "ocean_pollution"},
)
print(data)
#Get prompt from id
data = PromptInsertion.get_prompt(id="prompt_id")
print(data)
# pagination support for getting all prompts
data = PromptInsertion.get_all_prompt(page=1)
print(data)
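The `{{data}}` and `{{context}}` placeholders use Jinja2 template syntax (Jinja2 is one of the package's dependencies). As a rough sketch of how `prompt_data` and `variables` could be merged into a stored template (the merge step shown here is an assumption for illustration, not AIBridge's actual internals):

```python
from jinja2 import Template

# a stored prompt template with two placeholders
template = Template("your prompt:{{data}},context:{{context}}")

# prompt_data supplies per-call values; variables supply fixed context
rendered = template.render(
    data="what is purpose of the ozone here",
    context="environment_context",
)
print(rendered)
# your prompt:what is purpose of the ozone here,context:environment_context
```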
Variables
- Why variables? Variables are specific data used to get the desired, context-aware output from LLMs.
- This is an example of self-consistency prompting. Ref: https://www.promptingguide.ai/techniques/consistency
- Variable methods:
from AIBridge import VariableInsertion
# save variables
# parameters: var_key: key for the variables
# var_value: list of strings providing the context
data = VariableInsertion.save_variables(
var_key="ocean_context",
var_value=[
"Ocean pollution is a significant environmental issue that poses a threat to marine life and ecosystems"
],
)
print(data)
# update the variables
data = VariableInsertion.update_variables(
id="variable_id",
var_key="updated_string",
var_value=["an updated sentence about the topic"],
)
print(data)
# get Variables from id
data = VariableInsertion.get_variable(id="variable_id")
# get all variables with pagination
data = VariableInsertion.get_all_variable(page=1)
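Since `var_value` is a list of context strings, one plausible use (an assumption about AIBridge's internals, in the spirit of the self-consistency reference above) is to pick one entry from the list and substitute it into a prompt's `{{context}}` placeholder:

```python
import random

# a saved variable: one key mapping to candidate context strings
variables = {
    "ocean_context": [
        "Ocean pollution is a significant environmental issue that poses a threat to marine life and ecosystems"
    ]
}

# pick one candidate context and substitute it into the prompt
context = random.choice(variables["ocean_context"])
prompt = "write about the topic, context: {{context}}".replace("{{context}}", context)
print(prompt)
```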
Get Response
- LLM: open_ai
- default model: "gpt-3.5-turbo"
- max token count: 3500
- temperature set to 0.5
from AIBridge import OpenAIService
import json
json_schema = json.dumps({"animal": ["list of animals"]})
xml_schema = "<animals><category>animal name</category></animals>"
csv = "name,category,species,age,weight,color,habitat"
data = OpenAIService.generate(
prompts=["name of the animals in the {{jungle}}"],
prompt_ids=None,
prompt_data=[{"jungle": "jungle"}],
variables=None,
output_format=["json"],
format_strcture=[json_schema],
model="gpt-3.5-turbo",
variation_count=1,
max_tokens=3500,
temperature=0.5,
message_queue=False,
)
print(data)
# Parameters
# prompts: list of prompt strings executed in one session, where each output depends on the previous one
# prompt_ids: list of prompt IDs; at a time, either prompt_ids or prompts is executed
# prompt_data: data required by each prompt
# variables: variable dict for each prompt
# output_format: ["xml"/"json"/"csv"/"sql"]
# format_strcture: output structure for each prompt
# model="gpt-3.5-turbo": model for the GPT completion API
# variation_count=1: n, the number of outputs required
# max_tokens=3500: maximum tokens per output
# temperature=0.5: controls output consistency
# message_queue=False: for scalability purposes
output = {
"items": {
"response": [
{
"data": [
'{"animal": ["lion", "tiger", "elephant", "monkey", "snake", "gorilla", "leopard", "crocodile", "jaguar", "giraffe"]}'
]
}
],
"token_used": 85,
"created_at": 1689323114.9568439,
"ai_service": "open_ai",
}
}
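Each entry in `data` is itself a JSON string matching the requested schema, so the caller still has to decode it. A minimal sketch of pulling the animal list out of a response shaped like the one above (the shortened list here is illustrative):

```python
import json

# response shaped like the example above
output = {
    "items": {
        "response": [
            {"data": ['{"animal": ["lion", "tiger", "elephant"]}']}
        ],
        "token_used": 85,
        "ai_service": "open_ai",
    }
}

# each "data" entry is a JSON string conforming to the requested json_schema
raw = output["items"]["response"][0]["data"][0]
animals = json.loads(raw)["animal"]
print(animals)  # ['lion', 'tiger', 'elephant']
```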
How To Play
- Export the repo
- Install the packages: "openai", "sqlalchemy", "redis", "pyyaml", "sqlparse", "Jinja2", "pymongo", and "jsonschema".
pip install openai sqlalchemy redis pyyaml sqlparse Jinja2 pymongo jsonschema
Message Queue
- default queue: Redis
Configure redis
from AIBridge import SetConfig
# set redis configuration
SetConfig.redis_config(
redis_host="localhost",
redis_port="port_for_redis",
group_name="consumer group name",
stream_name="redis topic",
no_of_threads=1,  # concurrent threads you want to run for your application
)
- To use the queue service, set message_queue=True
from AIBridge import OpenAIService
import json
json_schema = json.dumps({"animal": ["list of animals"]})
data = OpenAIService.generate(
prompts=["name of the animals in the {{jungle}}"],
prompt_ids=None,
prompt_data=[{"jungle": "jungle"}],
variables=None,
output_format=["json"],
format_strcture=[json_schema],
message_queue=True,  # to activate the message queue service
)
# to use the queue service, set the message_queue parameter to True
print(data)
*The response of the above function is the ID of the response stored in the database:
{'response_id': 'eaa61944-3216-4ba1-bec5-05842fb86d86'}
- The message queue is for increasing scalability
- For an application server, you have to start the consumer when the application starts:
from AIBridge import MessageQ
# to start the consumer in background
MessageQ.mq_deque()
- In a non-application environment:
- you can set message_queue=True
- run the function below to process the stream data in the consumer
from AIBridge import MessageQ
# this is for testing the redis environment locally in a single-file script
data = MessageQ.local_process()
print(data)