
Bridge for LLMs


AIBridge 0.0.1

AIBridge is a Python package that supports multiple LLMs. Users can utilize formatters, prompts, and variables to get the most out of LLMs through AIBridge.

Requirement

  • Python 3

Install the test package

pip install aibridge-test

Set aibridge_config.yaml

  • Place the aibridge_config.yaml file at a location of your choice.
  • In your .env file, set the path to the file.
Example in bash:
export AIBRIDGE_CONFIG=C:/Users/Admin/aibridge/aibridge_config.yaml
Environment variable name: AIBRIDGE_CONFIG
Starter file:
group_name: my_consumer_group
message_queue: redis
no_of_threads: 1
open_ai:
  - key: API KEY HERE
    priority: equal
palm_api:
  - key: API KEY HERE
    priority: equal
stable_diffusion:
  - key: API KEY HERE
    priority: equal
redis_host: localhost
redis_port: 6379
stream_name: my_stream
database: nosql
database_name: aibridge
database_uri: mongodb://localhost:27017
per_page: 10
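
To confirm the configuration is being picked up, you can load it manually (a minimal sketch; it assumes only the AIBRIDGE_CONFIG variable set above and the standard PyYAML package):

import os
import yaml  # pip install pyyaml

# Read the config path from the environment variable set earlier.
config_path = os.environ["AIBRIDGE_CONFIG"]

with open(config_path) as f:
    config = yaml.safe_load(f)

# Spot-check a few keys from the starter file.
print(config["database"], config["redis_host"], config["redis_port"])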

Configuration

  • AIBridge can save your prompts in a SQL/NoSQL database of your choice; by default it uses SQLite on disk.
  • To configure the database, add the details to the config file.
from AIBridge import SetConfig
# call the config method
SetConfig.set_db_config(database="sql", database_name=None, database_uri=None)
# parameters:
# database: sql/nosql
# database_uri: URI of the database of your choice (all SQL databases supported; Mongo for NoSQL)
  • If you want to use SQLite on disk, no database configuration is needed.
  • To set an API key for a service, for example OpenAI:
from AIBridge import SetConfig
SetConfig.set_api_key(ai_service="open_ai",key="YOUR_API_KEY",priority="high")
#priority:high/medium/low/equal
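
Multiple keys can be registered for one service, mirroring the key lists in the YAML above (a hedged sketch; it assumes repeated set_api_key calls register additional keys rather than overwrite the first):

from AIBridge import SetConfig

# Register two keys for the same service; priority (high/medium/low/equal)
# guides which key is preferred. Illustrative usage only.
SetConfig.set_api_key(ai_service="open_ai", key="PRIMARY_KEY", priority="high")
SetConfig.set_api_key(ai_service="open_ai", key="BACKUP_KEY", priority="low")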

Prompt save

  • The prompt-save mechanism stores reusable, noteworthy prompts that have given you exceptional results from an LLM.
from AIBridge import PromptInsertion

# save prompt
data = PromptInsertion.save_prompt(
    prompt="your prompt:{{data}},context:{{context}}",
    name="first_prompt",
    prompt_data={"data": "what is purpose of the ozone here"},
    variables={"context": "environment_context"},
)
print(data)
# parameters: prompt_data: used to manipulate the same prompt with different context at runtime
# variables: used to manipulate the prompt with fixed context, as a variable is specific, saved data


# update prompt: change the prompt_data and variables to get different output from the same prompt
data = PromptInsertion.update_prompt(
    id="prompt_id",
    name="updated_prompt",
    prompt_data={"data": "write abouts the plastic pollution"},
    variables={"context": "ocean_pollution"},
)
print(data)


#Get prompt from id
data = PromptInsertion.get_prompt(id="prompt_id")
print(data)

# pagination support for getting all prompts
data = PromptInsertion.get_all_prompt(page=1)
print(data)
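
For intuition, the {{placeholder}} syntax behaves like plain template substitution; the following is an illustrative sketch of that idea, not AIBridge's internal code:

import re

def render(prompt, values):
    # Replace each {{key}} with its value; unknown keys are left untouched.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(values.get(m.group(1), m.group(0))), prompt)

print(render("your prompt:{{data}},context:{{context}}",
             {"data": "what is purpose of the ozone here", "context": "environment_context"}))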

Variables

from AIBridge import VariableInsertion

# save variables
# parameters: var_key: key for the variables
# var_value: list of strings for the context
data = VariableInsertion.save_variables(
    var_key="ochean_context",
    var_value=[
        "Ocean pollution is a significant environmental issue that poses a threat to marine life and ecosystems"
    ],
)
print(data)

# update the variables
data = VariableInsertion.update_variables(
    id="variable_id",
    var_key="updated_string",
    var_value=["updated senetece about topics"],
)
print(data)

# get variable by id
data = VariableInsertion.get_variable(id="variable_id")

# get all variables with pagination
data = VariableInsertion.get_all_variable(page=1)
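
Saved variables are meant to plug into prompts: the prompt's variables mapping points at a var_key (an illustrative sketch, assuming the linkage works as the save_prompt example above suggests):

from AIBridge import PromptInsertion, VariableInsertion

# Save a variable, then reference its key from a prompt's variables mapping.
VariableInsertion.save_variables(
    var_key="ocean_context",
    var_value=["Ocean pollution is a significant environmental issue"],
)
data = PromptInsertion.save_prompt(
    prompt="write about {{data}}, context:{{context}}",
    name="ocean_prompt",
    prompt_data={"data": "ocean pollution"},
    variables={"context": "ocean_context"},  # assumed to resolve to the saved var_key
)
print(data)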

Get Response

  • LLM = open_ai
  • default_model = "gpt-3.5-turbo"
  • Max token count: 3500
  • Temperature set to 0.5
from AIBridge import OpenAIService
import json

json_schema = json.dumps({"animal": ["list of animals"]})
xml_schema = "<animals><category>animal name</category></animals>"
csv = "name,category,species,age,weight,color,habitat"
data = OpenAIService.generate(
    prompts=["name of the animals in the  {{jungle}}"],
    prompt_ids=None,
    prompt_data=[{"jungle": "jungle"}],
    variables=None,
    output_format=["json"],
    format_strcture=[json_schema],
    model="gpt-3.5-turbo",
    variation_count=1,
    max_tokens=3500,
    temperature=0.5,
    message_queue=False,
)
print(data)
# Parameters
# prompts: list of strings to be executed in a session where each output is dependent on the previous one
# prompt_ids: list of prompt ids; at a time either ids or prompts will execute
# prompt_data: [data required by each prompt]
# variables: [variable dict for the prompt]
# output_format: ["xml/json/csv/sql"]
# format_strcture: [output structure of the prompt]
# model="gpt-3.5-turbo", model for the GPT completion API
# variation_count = 1, number of outputs required
# max_tokens = 3500, maximum tokens per output
# temperature = 0.5, data consistency
# message_queue=False, for scalability

output = {
    "items": {
        "response": [
            {
                "data": [
                    '{"animal": ["lion", "tiger", "elephant", "monkey", "snake", "gorilla", "leopard", "crocodile", "jaguar", "giraffe"]}'
                ]
            }
        ],
        "token_used": 85,
        "created_at": 1689323114.9568439,
        "ai_service": "open_ai",
    }
}
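
Given the response shape above, the generated JSON string can be pulled out and parsed like this (a sketch that assumes exactly the shape shown):

import json

# 'output' is the response dict shown above.
raw = output["items"]["response"][0]["data"][0]
animals = json.loads(raw)["animal"]
print(animals)  # ['lion', 'tiger', 'elephant', ...]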

Message Queue

  • Default queue = redis

Configure redis

from AIBridge import SetConfig
# set redis configuration
SetConfig.redis_config(
    redis_host="localhost",
    redis_port="port _for redis",
    group_name="consumer gorup name",
    stream_name="redis topic",
    no_of_threads=1,#concurrent thread ypu want run for your application
)
  • To use the queue service, set message_queue=True
from AIBridge import OpenAIService
import json

json_schema = json.dumps({"animal": ["list of animals"]})
data = OpenAIService.generate(
    prompts=["name of the animals in the  {{jungle}}"],
    prompt_ids=None,
    prompt_data=[{"jungle": "jungle"}],
    variables=None,
    output_format=["json"],
    format_strcture=[json_schema],
    message_queue=True,  # to activate the message queue service
)
# to use the queue service, set the message_queue parameter to True
print(data)

The response for the above call is the id of the response stored in the database:

{ "response_id": "eaa61944-3216-4ba1-bec5-05842fb86d86" }
  • The message queue is for increasing scalability.
  • For an application server, turn on the consumer when the application starts.
from AIBridge import MessageQ

# to start the consumer in background
MessageQ.mq_deque()
  • In a non-application environment:
    • you can set message_queue=True
  • Run the function below to process the stream data in the consumer
from AIBridge import MessageQ
# for testing the redis environment locally in a single file
data = MessageQ.local_process()
print(data)

DALL-E Image Generation

from AIBridge.ai_services.openai_images import OpenAIImage

images = OpenAIImage.generate(
    prompts=["A sunlit indoor lounge area with a pool containing a flamingo"],
    image_data=["image loacation or image url"],
    mask_image=["image loacation or image url"],
    variation_count=1,
    process_type="edit",
)
print(images)

# prompts: list of strings, one per image to generate
# image_data: location of the image on file, or an image url
# mask_image: mask image with a transparent patch marking where to edit the image
# variation_count: number of images to generate
# process_type: create, edit, variation
# create generates new images
# edit edits the image with the mask; a mask is compulsory to edit images
# variation generates new images of the same type
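
For plain generation, the parameter notes above imply that process_type="create" needs no input images (a hedged sketch; omitting image_data and mask_image for "create" is an assumption):

from AIBridge.ai_services.openai_images import OpenAIImage

# "create" generates new images from the prompt alone
# (no image_data or mask_image, per the notes above).
images = OpenAIImage.generate(
    prompts=["A sunlit indoor lounge area with a pool containing a flamingo"],
    variation_count=2,
    process_type="create",
)
print(images)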

Palm Text API Integration

  • To set the API key, you can add it directly to the aibridge_config.yaml file in this format:
palm_api:
  - key: AIz****************************QkkA(your-api_key)
    priority: equal
Or set the key from Python:
from AIBridge import SetConfig
SetConfig.set_api_key(ai_service="palm_api",key="YOUR_API_KEY",priority="high")
#priority:high/medium/low/equal
from AIBridge import PalmText

prompt = """
write a paragraph about {{prompting}} in AI and let the user know what {{prompting}} is and how {{prompting}} works in generative AI
"""
json_format = """{"text": "paragraph here"}"""
data = PalmText.generate(
    prompts=[prompt],
    prompt_data=[{"prompting": "model training"}],
    output_format=["json"],
    format_strcture=[json_format],
    message_queue=True,
)
print(data)
# Parameters
# prompts: list of strings to be executed in a session where each output is dependent on the previous one
# prompt_ids: list of prompt ids; at a time either ids or prompts will execute
# prompt_data: [data required by each prompt]
# variables: [variable dict for the prompt]
# output_format: ["xml/json/csv/sql"]
# format_strcture: [output structure of the prompt]
# model="models/text-bison-001", model for the Palm generate API
# variation_count = 1-8, number of outputs required
# max_tokens = default 10000, maximum tokens per output
# temperature = default 0.5, data consistency
# message_queue=False, for scalability

Palm Chat API

from AIBridge import PalmChat

# An array of "ideal" interactions between the user and the model
examples = [
    (
        "What's up?",
        "What isn't up?? The sun rose another day, the world is bright, anything is possible!",
    ),
    (
        "I'm kind of bored",
        "How can you be bored when there are so many fun, exciting, beautiful experiences to be had in the world?",
    ),
]
data = PalmChat.generate(
    messages="give the prototype for a stack in C++",
    context="career or growth advice",
    examples=examples,
    variation_count=3,
    message_queue=True,
)
print(data)
# messages: text provided to the chat
# context: the basis on which you want to start the chat
# examples: demos for the LLM to understand the tone and what you really want (few-shot prompting)
# variation_count: how many response variations to generate
# message_queue: if true, the chat response is processed through the queue
# temperature = default 0.5, data consistency

StableDiffusion Image

stable_diffusion:
  - key: API Key here
    priority: equal
  • Stable Diffusion supports 3 actions: "img2img", "text2img", "inpaint"

  • The default parameters required by the API are:

    Parameter            Description
    key                  Your API key, used for request authorization.
    negative_prompt      Items you don't want in the image.
    width                Max width: 1024. Should be divisible by 8.
    height               Max height: 1024. Should be divisible by 8.
    samples              Number of images to be returned in response. The maximum value is 4.
    num_inference_steps  Number of denoising steps. Available values: 21, 31, 41, 51.
    safety_checker       A checker for NSFW images. If such an image is detected, it will be replaced by a blank image.
    enhance_prompt       Enhance prompts for better results; default: yes, options: yes/no.
    seed                 Used to reproduce results; the same seed will give you the same image again. Pass null for a random number.
    guidance_scale       Scale for classifier-free guidance (minimum: 1; maximum: 20).
    multi_lingual        Allow a multilingual prompt to generate images. Use "no" for the default English.
    panorama             Set this parameter to "yes" to generate a panorama image.
    self_attention       If you want a high-quality image, set this parameter to "yes". Image generation will take more time.
    upscale              Set to "yes" to upscale the given image resolution two times (2x). If the requested resolution is 512 x 512 px, the generated image will be 1024 x 1024 px.
    embeddings_model     Used to pass an embeddings model (embeddings_model_id).
    webhook              Set a URL to get a POST API call once the image generation is complete.
    track_id             This ID is returned in the response to the webhook API call, and is used to identify the webhook request.
  • text2img

from AIBridge import StableDiffusion

data = StableDiffusion.generate(
    prompts=["cat sitting on bench"],# prompts is the list for how many images you have to genrate per request
    action="text2img",
)
print(data)
  • Response
{
  "0": {
    "status": "success",
    "generationTime": 0.7476449012756348,
    "id": 40967676,
    "output": [
      "https://cdn2.stablediffusionapi.com/generations/a77b1e09-e7ee-480b-80ad-abbdcdb66877-0.png"
    ],
    "meta": {
      "H": 512,
      "W": 512,
      "enable_attention_slicing": "true",
      "file_prefix": "a77b1e09-e7ee-480b-80ad-abbdcdb66877",
      "guidance_scale": 7.5,
      "model": "runwayml/stable-diffusion-v1-5",
      "n_samples": 1,
      "negative_prompt": "low quality",
      "outdir": "out",
      "prompt": "cat sitting on bench",
      "revision": "fp16",
      "safetychecker": "yes",
      "seed": 2873893487,
      "steps": 20,
      "vae": "stabilityai/sd-vae-ft-mse"
    }
  }
}
  • img2img

from AIBridge import StableDiffusion

init_image = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
data = StableDiffusion.generate(
    prompts=["cat sitting on bench"],
    image_data=[init_image],  # one image per prompt; the list order follows prompts
    action="img2img",
)
print(data)
  • Response
{
  "0": {
    "status": "success",
    "generationTime": 0.7667558193206787,
    "id": 40968310,
    "output": [
      "https://cdn2.stablediffusionapi.com/generations/2f1a2b63-8569-4b94-9b86-41d142f72774-0.png"
    ],
    "meta": {
      "H": 512,
      "W": 512,
      "file_prefix": "2f1a2b63-8569-4b94-9b86-41d142f72774",
      "guidance_scale": 7.5,
      "init_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png",
      "n_samples": 1,
      "negative_prompt": "",
      "outdir": "out",
      "prompt": "cat sitting on bench",
      "safetychecker": "yes",
      "seed": 1762284371,
      "steps": 20,
      "strength": 0.7
    }
  }
}
  • inpaint

from AIBridge import StableDiffusion

mask_image = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
data = StableDiffusion.generate(
    prompts=["cat sitting on bench"],
    image_data=[init_image],
    mask_image=[mask_image],
    action="inpaint",
)
print(data)
  • Response
{
  "0": {
    "status": "success",
    "generationTime": 0.8590030670166016,
    "id": 40967907,
    "output": [
      "https://cdn2.stablediffusionapi.com/generations/59f14b2a-9ef5-4e41-83fc-fb1e5ff9b791-0.png"
    ],
    "meta": {
      "H": 512,
      "W": 512,
      "file_prefix": "59f14b2a-9ef5-4e41-83fc-fb1e5ff9b791",
      "guidance_scale": 7.5,
      "init_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png",
      "mask_image": "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png",
      "n_samples": 1,
      "negative_prompt": "",
      "outdir": "out",
      "prompt": "cat sitting on bench",
      "safetychecker": "yes",
      "seed": 242418520,
      "steps": 20,
      "strength": 0.7
    }
  }
}
  • Note

  • When generating images through the Stable Diffusion API, complex images can take time, so they are kept in a queue; to fetch the image after completion, provide the response id.
from AIBridge import StableDiffusion
data = StableDiffusion.fetch_image(id="image_id here")
print(data)
{
    "status": "success",
    "id": 12202888,
    "output": [
        "https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/e5cd86d3-7305-47fc-82c1-7d1a3b130fa4-0.png"
    ]
}
  • To get information about all queuing parameters:
from AIBridge import StableDiffusion
data = StableDiffusion.system_load(id="image_id here")
print(data)
{
  "queue_num": 0,
  "queue_time": 0,
  "status": "ok"
}
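
A simple way to wait for a queued image, built only on the fetch_image call shown above (illustrative; it assumes the response carries the "output" list from the examples):

import time
from AIBridge import StableDiffusion

def wait_for_image(image_id, tries=10, delay=3):
    # Poll fetch_image until the output URL list is populated.
    # Hypothetical helper for illustration; returns None on timeout.
    for _ in range(tries):
        result = StableDiffusion.fetch_image(id=image_id)
        if result.get("output"):
            return result
        time.sleep(delay)
    return None

print(wait_for_image("image_id here"))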

Cohere API

cohere_api:
  - key: API Key here
    priority: equal
Or set the key from Python:
from AIBridge import SetConfig
SetConfig.set_api_key(ai_service="cohere_api",key="YOUR_API_KEY",priority="high")
#priority:high/medium/low/equal
from AIBridge import CohereApi
json_str = """{"text": "text here"}"""
csv_string = "name,animal_type,prey,predators"
xml_string = """<data><animal>animal information here</animal></data>"""
data = CohereApi.generate(
    prompts=["give the me the more information about the {{animals}}"],
    prompt_data=[{"animals": "animals"}],
    format_strcture=[xml_string],
    output_format=["xml"],
)
print(data)
# Parameters
# prompts: list of strings to be executed in a session where each output is dependent on the previous one
# prompt_ids: list of prompt ids; at a time either ids or prompts will execute
# prompt_data: [data required by each prompt]
# variables: [variable dict for the prompt]
# output_format: ["xml/json/csv/sql"]
# format_strcture: [output structure of the prompt]
# model: model for the Cohere generate API
# variation_count: number of outputs required
# max_tokens = default 10000, maximum tokens per output
# temperature = default 0.5, data consistency
# message_queue=False, for scalability

AI21 API

ai21_api:
  - key: API Key here
    priority: equal
Or set the key from Python:
from AIBridge import SetConfig
SetConfig.set_api_key(ai_service="ai21_api",key="YOUR_API_KEY",priority="high")
#priority:high/medium/low/equal
from AIBridge import JurasicText
json_str = """{"text": "text here"}"""
csv_string = "name,animal_type,prey,predators"
xml_string = """<?xml version="1.0" encoding="UTF-8" ?><data><animal>animal information here</animal></data>"""
data = JurasicText.generate(
    prompts=["give the me the more information about the {{animals}}"],
    prompt_data=[{"animals": "tigers"}],
    format_strcture=[csv_string],
    output_format=["csv"],
)
print(data)
# Parameters
# prompts: list of strings to be executed in a session where each output is dependent on the previous one
# prompt_ids: list of prompt ids; at a time either ids or prompts will execute
# prompt_data: [data required by each prompt]
# variables: [variable dict for the prompt]
# output_format: ["xml/json/csv/sql"]
# format_strcture: [output structure of the prompt]
# model: model for the AI21 generate API
# variation_count: number of outputs required
# max_tokens = default 10000, maximum tokens per output
# temperature = default 0.5, data consistency
# message_queue=False, for scalability
