Bridge for LLMs

aibridgecore

aibridgecore is a Python SDK for working with multiple AI providers through a consistent interface, covering text generation, prompt management, reusable variables, structured outputs, queue-backed execution, and provider-specific image and video modules.

Overview

  • Multi-provider text generation with a shared high-level pattern
  • Structured outputs for JSON, CSV, and XML workflows
  • Stored prompts and reusable variables backed by SQL or MongoDB
  • Redis-based queue support for asynchronous processing
  • Provider-specific image generation APIs
  • Provider-specific video generation APIs

Installation

pip install aibridgecore

Python 3.9 or later is required.

Configuration

Point the AIBRIDGE_CONFIG environment variable at your configuration file:

export AIBRIDGE_CONFIG=/absolute/path/to/aibridge_config.yaml

Minimal configuration example:

open_ai:
  equal:
    - YOUR_OPENAI_API_KEY

database: sql
message_queue: redis
redis_host: localhost
redis_port: 6379
group_name: my_consumer_group
stream_name: my_stream
no_of_threads: 1

You can also configure the SDK programmatically:

from aibridgecore import SetConfig

SetConfig.set_api_key(
    ai_service="open_ai",
    key="YOUR_OPENAI_API_KEY",
    priority="equal",
)

SetConfig.set_db_confonfig(
    database="sql",
    database_name=None,
    database_uri=None,
)

SetConfig.redis_config(
    redis_host="localhost",
    redis_port=6379,
    group_name="my_consumer_group",
    stream_name="my_stream",
    no_of_threads=1,
)

Supported ai_service keys:

  • open_ai
  • stable_diffusion
  • cohere_api
  • ai21_api
  • gemini_ai
  • anthropic
  • grok
  • deepseek
  • mistral
  • alibaba
  • kimi

Text Generation

Primary text providers exported from aibridgecore:

  • OpenAIService
  • GeminiAIService
  • AnthropicService
  • CohereApi
  • AI21labsText
  • OllamaService
  • GrokService
  • DeepseekService
  • MistralService
  • AlibabaService
  • KimiService

Basic generation example:

import json

from aibridgecore import OpenAIService

schema = json.dumps(
    {
        "summary": ["short summary bullet"],
        "keywords": ["keyword"],
    }
)

response = OpenAIService.generate(
    prompts=["Summarize {{topic}} for an engineering update."],
    prompt_data=[{"topic": "queue-backed AI processing"}],
    output_format=["json"],
    format_strcture=[schema],
    model="gpt-3.5-turbo",
    max_tokens=800,
    temperature=0.3,
    context=[{"role": "system", "context": "Be concise and factual."}],
)

print(response["items"]["response"][0]["data"][0])

Streaming is also available on the text provider classes that expose generate_stream(...):

from aibridgecore import OpenAIService

stream = OpenAIService.generate_stream(
    prompts=["Write a short release note for this SDK update."],
    model="gpt-3.5-turbo",
    context=[{"role": "system", "context": "Keep the tone professional."}],
)

for chunk in stream:
    print(chunk)
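Streamed chunks can be accumulated into the full response text as they arrive. The helper below is an illustrative sketch, not part of the SDK; it assumes each chunk is a string or can be coerced to one.

```python
def collect_stream(chunks):
    """Join streamed chunks into the complete response text (illustrative helper)."""
    parts = []
    for chunk in chunks:
        parts.append(str(chunk))
    return "".join(parts)


if __name__ == "__main__":
    from aibridgecore import OpenAIService

    stream = OpenAIService.generate_stream(
        prompts=["Write a short release note for this SDK update."],
        model="gpt-3.5-turbo",
    )
    print(collect_stream(stream))
```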

Structured output notes:

  • output_format accepts json, csv, or xml
  • format_strcture should match the expected structure for each prompt
  • context entries use role and context
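The csv path follows the same call pattern as the json example above. The sketch below is hedged: it assumes generate accepts output_format=["csv"] with a header-row string as the structure hint, and the parse_csv_rows helper is illustrative, not part of the SDK — it just parses the returned text with Python's csv module.

```python
import csv
import io


def parse_csv_rows(text):
    """Parse csv text returned by a provider into a list of rows (illustrative helper)."""
    return [row for row in csv.reader(io.StringIO(text)) if row]


if __name__ == "__main__":
    from aibridgecore import OpenAIService

    response = OpenAIService.generate(
        prompts=["List three risks of {{topic}} as rows of risk,mitigation."],
        prompt_data=[{"topic": "queue-backed AI processing"}],
        output_format=["csv"],
        format_strcture=["risk,mitigation"],  # assumed header-row hint; exact shape may vary
        model="gpt-3.5-turbo",
    )
    text = response["items"]["response"][0]["data"][0]
    for row in parse_csv_rows(text):
        print(row)
```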

Prompt and Variable Management

Use prompt templates when the shape of a prompt is reusable, and use variables when part of that prompt should come from a named stored dataset.

  • prompt_data injects request-specific values directly into a prompt template
  • variables maps template placeholders to previously saved variable keys

Example:

from aibridgecore import PromptInsertion, VariableInsertion

saved_variable = VariableInsertion.save_variables(
    var_key="release_tones",
    var_value=["clear", "professional", "direct"],
)

saved_prompt = PromptInsertion.save_prompt(
    name="release_summary",
    prompt="Write a {{tone}} summary about {{topic}}.",
    prompt_data={"topic": "this release"},
    variables={"tone": "release_tones"},
)

prompt_record = PromptInsertion.get_prompt(id=saved_prompt["id"])
all_prompts = PromptInsertion.get_all_prompt(page=1)

variable_record = VariableInsertion.get_variable(id=saved_variable["id"])
all_variables = VariableInsertion.get_all_variable(page=1)

Common operations:

from aibridgecore import PromptInsertion, VariableInsertion

PromptInsertion.update_prompt(
    id="PROMPT_ID",
    name="updated_release_summary",
    prompt_data={"topic": "the latest SDK release"},
    variables={"tone": "release_tones"},
)

VariableInsertion.update_variables(
    id="VARIABLE_ID",
    var_key="release_tones",
    var_value=["clear", "concise", "technical"],
)

Message Queue Support

Queue-backed execution is available through Redis.

When message_queue=True, generation returns a response id instead of the final model output:

from aibridgecore import FetchAIResponse, MessageQ, OpenAIService

# Start dequeuing messages from the configured Redis stream.
MessageQ.mq_deque()

queued = OpenAIService.generate(
    prompts=["Generate a short deployment checklist."],
    model="gpt-3.5-turbo",
    message_queue=True,  # enqueue the request instead of waiting for the output
)

response_id = queued["response_id"]
result = FetchAIResponse.get_response(id=response_id)
print(result)
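Depending on queue depth, the response may not be ready on the first fetch. A small polling wrapper — an illustrative helper, not part of the SDK — can retry FetchAIResponse.get_response until a result arrives. It assumes get_response returns a falsy value while the job is still pending.

```python
import time


def poll_response(fetch, response_id, attempts=10, delay=1.0):
    """Call fetch(id=response_id) until it returns a truthy result or attempts run out."""
    for _ in range(attempts):
        result = fetch(id=response_id)
        if result:
            return result
        time.sleep(delay)
    raise TimeoutError(f"no response for {response_id} after {attempts} attempts")


if __name__ == "__main__":
    from aibridgecore import FetchAIResponse

    result = poll_response(FetchAIResponse.get_response, "RESPONSE_ID")
    print(result)
```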

Image Generation

Image APIs are currently imported from aibridgecore.image.*, not from the top-level package.

Available image providers:

  • aibridgecore.image.providers.openai.OpenAIImageProvider
  • aibridgecore.image.providers.stability.StabilityImageProvider
  • aibridgecore.image.providers.google_imagen.GoogleImagenProvider
  • aibridgecore.image.providers.alibaba_wan_image.AlibabaWanImageProvider

The image request contract supports:

  • text2img
  • edit
  • img2img

Mode support depends on the provider and model you use.

Example:

from aibridgecore.image.contracts import ImageGenerationRequest, ImageMode
from aibridgecore.image.providers.openai import OpenAIImageProvider

provider = OpenAIImageProvider(api_key="YOUR_OPENAI_API_KEY")

request = ImageGenerationRequest(
    prompts=["A clean product render on a studio background"],
    model="gpt-image-1",
    n=1,
    size="1024x1024",
    mode=ImageMode.TEXT_TO_IMAGE,
)

response = provider.generate(request)
artifact = response.results[0].images[0]

with open("generated_image.png", "wb") as file:
    file.write(artifact.content)

For edit and image-to-image flows, pass images=[...] and optionally masks=[...] in the request.
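As a hedged sketch of the edit flow — assuming ImageGenerationRequest accepts images and masks alongside the fields shown above, and that ImageMode exposes an edit member — the helper below assembles the request kwargs, dropping masks when no mask is supplied. The helper itself is illustrative, not SDK API.

```python
def build_edit_kwargs(prompt, model, images, masks=None, size="1024x1024"):
    """Assemble keyword arguments for an edit / img2img request (illustrative helper)."""
    kwargs = {
        "prompts": [prompt],
        "model": model,
        "images": images,
        "size": size,
    }
    if masks is not None:
        kwargs["masks"] = masks
    return kwargs


if __name__ == "__main__":
    from aibridgecore.image.contracts import ImageGenerationRequest, ImageMode
    from aibridgecore.image.providers.openai import OpenAIImageProvider

    with open("source_image.png", "rb") as file:
        source = file.read()

    provider = OpenAIImageProvider(api_key="YOUR_OPENAI_API_KEY")
    request = ImageGenerationRequest(
        mode=ImageMode.EDIT,  # assumed enum member for the edit mode
        **build_edit_kwargs(
            prompt="Replace the background with a plain white studio backdrop",
            model="gpt-image-1",
            images=[source],
        ),
    )
    response = provider.generate(request)
```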

Video Generation

Video APIs are currently imported from aibridgecore.video.*, not from the top-level package.

Available video providers:

  • aibridgecore.video.providers.openai_sora.OpenAISoraProvider
  • aibridgecore.video.providers.google_veo.GoogleVeoProvider
  • aibridgecore.video.providers.alibaba_wan.AlibabaWanProvider
  • aibridgecore.video.providers.stability.StabilityVideoProvider
  • aibridgecore.video.providers.luma.LumaVideoProvider

Video generation is asynchronous. You start a job, store the provider job id, and poll for status until the result is ready.

Current request modes:

  • text2video
  • img2video

video2video is planned but not enabled yet.

Example:

from aibridgecore.video.contracts import VideoGenerationRequest, VideoMode
from aibridgecore.video.providers.openai_sora import OpenAISoraProvider

provider = OpenAISoraProvider(api_key="YOUR_OPENAI_API_KEY")

request = VideoGenerationRequest(
    model="YOUR_VIDEO_MODEL",
    prompt="A slow cinematic drone shot over a rainforest canopy at sunrise",
    duration_seconds=5,
    aspect_ratio="16:9",
    mode=VideoMode.TEXT_TO_VIDEO.value,
)

job = provider.start_generation(request)
status = provider.check_status(job.provider_job_id)

print(job)
print(status)

For image-to-video workflows, set mode=VideoMode.IMG_TO_VIDEO.value and provide images=[...] in the request.
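The start-then-poll lifecycle described above can be sketched as a small helper — illustrative, not SDK API — that repeatedly calls check_status until the reported state reaches a terminal value. The terminal state names and the assumption that the status is (or can be reduced to) a string are assumptions; adjust the get_state accessor to whatever check_status actually returns.

```python
import time

TERMINAL_STATES = {"completed", "failed", "cancelled"}  # assumed state names


def wait_for_video(check, job_id, get_state=lambda s: s, attempts=30, delay=2.0):
    """Poll check(job_id) until get_state(status) is terminal; return the final status."""
    for _ in range(attempts):
        status = check(job_id)
        if get_state(status) in TERMINAL_STATES:
            return status
        time.sleep(delay)
    raise TimeoutError(f"job {job_id} did not finish after {attempts} checks")


if __name__ == "__main__":
    from aibridgecore.video.providers.openai_sora import OpenAISoraProvider

    provider = OpenAISoraProvider(api_key="YOUR_OPENAI_API_KEY")
    # job = provider.start_generation(request), as in the example above
    final = wait_for_video(provider.check_status, "JOB_ID")
    print(final)
```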
