
Bridge for LLMs


aibridgecore

aibridgecore is a Python SDK that provides a consistent interface to multiple AI providers: text generation, prompt management, reusable variables, structured outputs, queue-backed execution, and provider-specific image and video modules.

Overview

  • Multi-provider text generation with a shared high-level pattern
  • Structured outputs for JSON, CSV, and XML workflows
  • Stored prompts and reusable variables backed by SQL or MongoDB
  • Redis-based queue support for asynchronous processing
  • Provider-specific image generation APIs
  • Provider-specific video generation APIs

Installation

pip install aibridgecore

Python 3.9 or later is required.

Configuration

Point the AIBRIDGE_CONFIG environment variable at the absolute path of your YAML configuration file.

export AIBRIDGE_CONFIG=/absolute/path/to/aibridge_config.yaml

Minimal configuration example:

open_ai:
  equal:
    - YOUR_OPENAI_API_KEY

database: sql
message_queue: redis
redis_host: localhost
redis_port: 6379
group_name: my_consumer_group
stream_name: my_stream
no_of_threads: 1
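Once the YAML is parsed into a dict, it helps to check the expected keys up front rather than failing mid-request. A minimal sketch, using the key names from the example above (the SDK's actual validation may differ):

```python
# Keys taken from the configuration example above; illustrative only.
REQUIRED_KEYS = {"database", "message_queue", "redis_host", "redis_port",
                 "group_name", "stream_name", "no_of_threads"}

def missing_config_keys(config: dict) -> set:
    """Return the required keys absent from the parsed config mapping."""
    return REQUIRED_KEYS - config.keys()

config = {
    "open_ai": {"equal": ["YOUR_OPENAI_API_KEY"]},
    "database": "sql",
    "message_queue": "redis",
    "redis_host": "localhost",
    "redis_port": 6379,
    "group_name": "my_consumer_group",
    "stream_name": "my_stream",
    "no_of_threads": 1,
}

print(missing_config_keys(config))  # set() when nothing is missing
```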

You can also configure the SDK programmatically:

from aibridgecore import SetConfig

SetConfig.set_api_key(
    ai_service="open_ai",
    key="YOUR_OPENAI_API_KEY",
    priority="equal",
)

SetConfig.set_db_confonfig(
    database="sql",
    database_name=None,
    database_uri=None,
)

SetConfig.redis_config(
    redis_host="localhost",
    redis_port=6379,
    group_name="my_consumer_group",
    stream_name="my_stream",
    no_of_threads=1,
)

Supported ai_service keys:

  • open_ai
  • stable_diffusion
  • cohere_api
  • ai21_api
  • gemini_ai
  • anthropic
  • grok
  • deepseek
  • mistral
  • alibaba
  • kimi

Text Generation

Primary text providers exported from aibridgecore:

  • OpenAIService
  • GeminiAIService
  • AnthropicService
  • CohereApi
  • AI21labsText
  • OllamaService
  • GrokService
  • DeepseekService
  • MistralService
  • AlibabaService
  • KimiService

Basic generation example:

import json

from aibridgecore import OpenAIService

schema = json.dumps(
    {
        "summary": ["short summary bullet"],
        "keywords": ["keyword"],
    }
)

response = OpenAIService.generate(
    prompts=["Summarize {{topic}} for an engineering update."],
    prompt_data=[{"topic": "queue-backed AI processing"}],
    output_format=["json"],
    format_strcture=[schema],
    model="gpt-3.5-turbo",
    max_tokens=800,
    temperature=0.3,
    context=[{"role": "system", "context": "Be concise and factual."}],
)

print(response["items"]["response"][0]["data"][0])
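The nested lookup in the last line is easy to mistype. Assuming the response shape shown above (items → response → per-prompt entries → data), a small accessor makes a malformed response fail loudly; this is an illustrative helper, not part of the SDK:

```python
def first_result(response: dict, prompt_index: int = 0):
    """Extract the first parsed item for one prompt, assuming the shape
    response["items"]["response"][i]["data"][0] shown above."""
    try:
        return response["items"]["response"][prompt_index]["data"][0]
    except (KeyError, IndexError, TypeError) as exc:
        raise ValueError(f"unexpected response shape: {exc!r}") from exc

# Demonstration with a mocked response:
mock = {"items": {"response": [{"data": [{"summary": ["ok"]}]}]}}
print(first_result(mock))  # {'summary': ['ok']}
```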

Streaming is also available on the text provider classes that expose generate_stream(...):

from aibridgecore import OpenAIService

stream = OpenAIService.generate_stream(
    prompts=["Write a short release note for this SDK update."],
    model="gpt-3.5-turbo",
    context=[{"role": "system", "context": "Keep the tone professional."}],
)

for chunk in stream:
    print(chunk)
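When you want the full text in addition to the incremental chunks, collect them as you stream. A sketch with a stand-in generator, since the chunk type may vary by provider (plain strings are assumed here):

```python
def collect_stream(chunks):
    """Print each chunk as it arrives and return the concatenated text.
    Assumes chunks are plain strings; adapt if the provider yields objects."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)

# Stand-in for a generate_stream(...) iterator:
fake_stream = iter(["Release ", "note ", "draft."])
full_text = collect_stream(fake_stream)
```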

Structured output notes:

  • output_format accepts json, csv, or xml
  • format_strcture should match the expected structure for each prompt
  • context entries use role and context

Prompt and Variable Management

Use prompt templates when the shape of a prompt is reusable, and use variables when part of that prompt should come from a named stored dataset.

  • prompt_data injects request-specific values directly into a prompt template
  • variables maps template placeholders to previously saved variable keys

Example:

from aibridgecore import PromptInsertion, VariableInsertion

saved_variable = VariableInsertion.save_variables(
    var_key="release_tones",
    var_value=["clear", "professional", "direct"],
)

saved_prompt = PromptInsertion.save_prompt(
    name="release_summary",
    prompt="Write a {{tone}} summary about {{topic}}.",
    prompt_data={"topic": "this release"},
    variables={"tone": "release_tones"},
)

prompt_record = PromptInsertion.get_prompt(id=saved_prompt["id"])
all_prompts = PromptInsertion.get_all_prompt(page=1)

variable_record = VariableInsertion.get_variable(id=saved_variable["id"])
all_variables = VariableInsertion.get_all_variable(page=1)
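Conceptually, the {{placeholder}} substitution shown in these templates amounts to simple named replacement. An illustrative sketch of that idea (not the SDK's internal code; its actual substitution rules may differ):

```python
import re

def render_template(template: str, values: dict) -> str:
    """Replace {{name}} placeholders with the corresponding values,
    raising if a placeholder has no value."""
    def lookup(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing value for placeholder {{{{{key}}}}}")
        return str(values[key])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lookup, template)

print(render_template(
    "Write a {{tone}} summary about {{topic}}.",
    {"tone": "clear", "topic": "this release"},
))  # Write a clear summary about this release.
```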

Common operations:

from aibridgecore import PromptInsertion, VariableInsertion

PromptInsertion.update_prompt(
    id="PROMPT_ID",
    name="updated_release_summary",
    prompt_data={"topic": "the latest SDK release"},
    variables={"tone": "release_tones"},
)

VariableInsertion.update_variables(
    id="VARIABLE_ID",
    var_key="release_tones",
    var_value=["clear", "concise", "technical"],
)

Message Queue Support

Queue-backed execution is available through Redis. Start the consumer with MessageQ.mq_deque(), then pass message_queue=True so that generation returns a response id instead of the final model output:

from aibridgecore import FetchAIResponse, MessageQ, OpenAIService

MessageQ.mq_deque()

queued = OpenAIService.generate(
    prompts=["Generate a short deployment checklist."],
    model="gpt-3.5-turbo",
    message_queue=True,
)

response_id = queued["response_id"]
result = FetchAIResponse.get_response(id=response_id)
print(result)
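get_response may be called before the background worker has finished the request. A generic poll loop helps; this sketch takes the fetch function as a parameter, and the readiness check (falsy means "not ready") is an assumption to adapt to what FetchAIResponse.get_response actually returns:

```python
import time

def poll_until_ready(fetch, response_id, timeout=60.0, interval=1.0):
    """Call fetch(id=response_id) until it returns a truthy result or the
    timeout elapses. Readiness semantics are assumed, not SDK-confirmed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch(id=response_id)
        if result:  # assumed: falsy/None means "not ready yet"
            return result
        time.sleep(interval)
    raise TimeoutError(f"response {response_id} not ready after {timeout}s")

# Demonstration with a stand-in fetcher that succeeds on the third call:
attempts = iter([None, None, {"response": "done"}])
result = poll_until_ready(lambda id: next(attempts), "abc", interval=0.01)
print(result)  # {'response': 'done'}
```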

Image Generation

Image APIs are currently imported from aibridgecore.image.*, not from the top-level package.

Available image providers:

  • aibridgecore.image.providers.openai.OpenAIImageProvider
  • aibridgecore.image.providers.stability.StabilityImageProvider
  • aibridgecore.image.providers.google_imagen.GoogleImagenProvider
  • aibridgecore.image.providers.alibaba_wan_image.AlibabaWanImageProvider

The image request contract supports:

  • text2img
  • edit
  • img2img

Mode support depends on the provider and model you use.
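One way to choose a mode from the inputs you have on hand is to dispatch on which optional fields are supplied. An illustrative helper (mode names follow the contract list above; provider and model support still applies):

```python
def choose_image_mode(images=None, masks=None) -> str:
    """Pick a mode string from the contract above based on supplied inputs."""
    if images and masks:
        return "edit"      # edit an image within a masked region
    if images:
        return "img2img"   # transform a source image
    return "text2img"      # prompt-only generation

print(choose_image_mode())                                   # text2img
print(choose_image_mode(images=["a.png"]))                   # img2img
print(choose_image_mode(images=["a.png"], masks=["m.png"]))  # edit
```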

Example:

from aibridgecore.image.contracts import ImageGenerationRequest, ImageMode
from aibridgecore.image.providers.openai import OpenAIImageProvider

provider = OpenAIImageProvider(api_key="YOUR_OPENAI_API_KEY")

request = ImageGenerationRequest(
    prompts=["A clean product render on a studio background"],
    model="gpt-image-1",
    n=1,
    size="1024x1024",
    mode=ImageMode.TEXT_TO_IMAGE,
)

response = provider.generate(request)
artifact = response.results[0].images[0]

with open("generated_image.png", "wb") as file:
    file.write(artifact.content)

For edit and image-to-image flows, pass images=[...] and optionally masks=[...] in the request.

Video Generation

Video APIs are currently imported from aibridgecore.video.*, not from the top-level package.

Available video providers:

  • aibridgecore.video.providers.openai_sora.OpenAISoraProvider
  • aibridgecore.video.providers.google_veo.GoogleVeoProvider
  • aibridgecore.video.providers.alibaba_wan.AlibabaWanProvider
  • aibridgecore.video.providers.stability.StabilityVideoProvider
  • aibridgecore.video.providers.luma.LumaVideoProvider

Video generation is asynchronous. You start a job, store the provider job id, and poll for status until the result is ready.

Current request modes:

  • text2video
  • img2video
  • video2video is planned but not enabled yet

Example:

from aibridgecore.video.contracts import VideoGenerationRequest, VideoMode
from aibridgecore.video.providers.openai_sora import OpenAISoraProvider

provider = OpenAISoraProvider(api_key="YOUR_OPENAI_API_KEY")

request = VideoGenerationRequest(
    model="YOUR_VIDEO_MODEL",
    prompt="A slow cinematic drone shot over a rainforest canopy at sunrise",
    duration_seconds=5,
    aspect_ratio="16:9",
    mode=VideoMode.TEXT_TO_VIDEO.value,
)

job = provider.start_generation(request)
status = provider.check_status(job.provider_job_id)

print(job)
print(status)

For image-to-video workflows, set mode=VideoMode.IMG_TO_VIDEO.value and provide images=[...] in the request.
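Because generation is asynchronous, a production caller typically wraps check_status in a polling loop. A sketch with the status checker injected as a parameter; the "completed" and "failed" status values are assumptions, so inspect what check_status really returns before relying on them:

```python
import time

def wait_for_video(check_status, job_id, timeout=600.0, interval=5.0):
    """Poll check_status(job_id) until a terminal status is reached.
    Status names 'completed' and 'failed' are assumed, not SDK-confirmed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_status(job_id)
        state = status.get("status")
        if state == "completed":
            return status
        if state == "failed":
            raise RuntimeError(f"video job {job_id} failed: {status}")
        time.sleep(interval)
    raise TimeoutError(f"video job {job_id} still running after {timeout}s")

# Demonstration with a stand-in checker:
states = iter([{"status": "running"}, {"status": "completed"}])
final = wait_for_video(lambda job_id: next(states), "job-1", interval=0.01)
print(final)  # {'status': 'completed'}
```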


