Bridge for LLMs
aibridgecore
aibridgecore is a Python SDK that exposes multiple AI providers through a consistent interface, covering text generation, prompt management, reusable variables, structured outputs, queue-backed execution, and provider-specific image and video modules.
Overview
- Multi-provider text generation with a shared high-level pattern
- Structured outputs for JSON, CSV, and XML workflows
- Stored prompts and reusable variables backed by SQL or MongoDB
- Redis-based queue support for asynchronous processing
- Provider-specific image generation APIs
- Provider-specific video generation APIs
Installation
pip install aibridgecore
Python 3.9 or later is required.
Configuration
Point the AIBRIDGE_CONFIG environment variable at your configuration file:
export AIBRIDGE_CONFIG=/absolute/path/to/aibridge_config.yaml
Minimal configuration example:
open_ai:
  equal:
    - YOUR_OPENAI_API_KEY
database: sql
message_queue: redis
redis_host: localhost
redis_port: 6379
group_name: my_consumer_group
stream_name: my_stream
no_of_threads: 1
You can also configure the SDK programmatically:
from aibridgecore import SetConfig

SetConfig.set_api_key(
    ai_service="open_ai",
    key="YOUR_OPENAI_API_KEY",
    priority="equal",
)
SetConfig.set_db_confonfig(
    database="sql",
    database_name=None,
    database_uri=None,
)
SetConfig.redis_config(
    redis_host="localhost",
    redis_port=6379,
    group_name="my_consumer_group",
    stream_name="my_stream",
    no_of_threads=1,
)
Supported ai_service keys:
open_ai, stable_diffusion, cohere_api, ai21_api, gemini_ai, anthropic, grok, deepseek, mistral, alibaba, kimi
Text Generation
Primary text providers exported from aibridgecore:
OpenAIService, GeminiAIService, AnthropicService, CohereApi, AI21labsText, OllamaService, GrokService, DeepseekService, MistralService, AlibabaService, KimiService
Basic generation example:
import json

from aibridgecore import OpenAIService

schema = json.dumps(
    {
        "summary": ["short summary bullet"],
        "keywords": ["keyword"],
    }
)

response = OpenAIService.generate(
    prompts=["Summarize {{topic}} for an engineering update."],
    prompt_data=[{"topic": "queue-backed AI processing"}],
    output_format=["json"],
    format_strcture=[schema],
    model="gpt-3.5-turbo",
    max_tokens=800,
    temperature=0.3,
    context=[{"role": "system", "context": "Be concise and factual."}],
)
print(response["items"]["response"][0]["data"][0])
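The {{placeholder}} tokens in a prompt are filled from the matching entry in prompt_data. The SDK handles this internally; purely as an illustration of the substitution convention, a minimal sketch could look like:

```python
import re

def render_prompt(template: str, data: dict) -> str:
    """Replace each {{name}} placeholder with the matching value from data."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(data[m.group(1)]), template)

filled = render_prompt(
    "Summarize {{topic}} for an engineering update.",
    {"topic": "queue-backed AI processing"},
)
print(filled)  # Summarize queue-backed AI processing for an engineering update.
```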
Streaming is also available on the text provider classes that expose generate_stream(...):
from aibridgecore import OpenAIService

stream = OpenAIService.generate_stream(
    prompts=["Write a short release note for this SDK update."],
    model="gpt-3.5-turbo",
    context=[{"role": "system", "context": "Keep the tone professional."}],
)
for chunk in stream:
    print(chunk)
Structured output notes:
- output_format accepts json, csv, or xml
- format_strcture should match the expected structure for each prompt
- context entries use role and context
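Structured responses come back as strings, so it can be worth checking that a JSON result actually matches the shape you requested. A minimal top-level key check (a hypothetical helper, not part of the SDK):

```python
import json

def matches_schema_keys(payload: str, schema: str) -> bool:
    """Return True when the JSON payload has exactly the schema's top-level keys."""
    data = json.loads(payload)
    expected = json.loads(schema)
    return isinstance(data, dict) and set(data) == set(expected)

schema = json.dumps({"summary": ["short summary bullet"], "keywords": ["keyword"]})
print(matches_schema_keys('{"summary": ["a"], "keywords": ["b"]}', schema))  # True
```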
Prompt and Variable Management
Use prompt templates when the shape of a prompt is reusable, and use variables when part of that prompt should come from a named stored dataset.
- prompt_data injects request-specific values directly into a prompt template
- variables maps template placeholders to previously saved variable keys
Example:
from aibridgecore import PromptInsertion, VariableInsertion

saved_variable = VariableInsertion.save_variables(
    var_key="release_tones",
    var_value=["clear", "professional", "direct"],
)
saved_prompt = PromptInsertion.save_prompt(
    name="release_summary",
    prompt="Write a {{tone}} summary about {{topic}}.",
    prompt_data={"topic": "this release"},
    variables={"tone": "release_tones"},
)

prompt_record = PromptInsertion.get_prompt(id=saved_prompt["id"])
all_prompts = PromptInsertion.get_all_prompt(page=1)
variable_record = VariableInsertion.get_variable(id=saved_variable["id"])
all_variables = VariableInsertion.get_all_variable(page=1)
Common operations:
from aibridgecore import PromptInsertion, VariableInsertion

PromptInsertion.update_prompt(
    id="PROMPT_ID",
    name="updated_release_summary",
    prompt_data={"topic": "the latest SDK release"},
    variables={"tone": "release_tones"},
)
VariableInsertion.update_variables(
    id="VARIABLE_ID",
    var_key="release_tones",
    var_value=["clear", "concise", "technical"],
)
Message Queue Support
Queue-backed execution is available through Redis.
When message_queue=True, generation returns a response id instead of the final model output:
from aibridgecore import FetchAIResponse, MessageQ, OpenAIService

MessageQ.mq_deque()

queued = OpenAIService.generate(
    prompts=["Generate a short deployment checklist."],
    model="gpt-3.5-turbo",
    message_queue=True,
)
response_id = queued["response_id"]

result = FetchAIResponse.get_response(id=response_id)
print(result)
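Depending on queue depth, the result may not be ready the moment you fetch it. A generic retry wrapper around any fetch callable is one way to handle that; the helper below is an illustration, not an SDK API, and passing FetchAIResponse.get_response as the fetch argument is the intended usage:

```python
import time

def poll_for_result(fetch, response_id, attempts=10, delay=1.0):
    """Call fetch(id=...) until it returns a truthy result or attempts run out."""
    for _ in range(attempts):
        result = fetch(id=response_id)
        if result:
            return result
        time.sleep(delay)
    raise TimeoutError(f"no result for {response_id} after {attempts} attempts")
```

For example, `result = poll_for_result(FetchAIResponse.get_response, response_id)` would retry until the queued generation completes.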
Image Generation
Image APIs are currently imported from aibridgecore.image.*, not from the top-level package.
Available image providers:
- aibridgecore.image.providers.openai.OpenAIImageProvider
- aibridgecore.image.providers.stability.StabilityImageProvider
- aibridgecore.image.providers.google_imagen.GoogleImagenProvider
- aibridgecore.image.providers.alibaba_wan_image.AlibabaWanImageProvider
The image request contract supports:
- text2img
- edit
- img2img
Mode support depends on the provider and model you use.
Example:
from aibridgecore.image.contracts import ImageGenerationRequest, ImageMode
from aibridgecore.image.providers.openai import OpenAIImageProvider

provider = OpenAIImageProvider(api_key="YOUR_OPENAI_API_KEY")
request = ImageGenerationRequest(
    prompts=["A clean product render on a studio background"],
    model="gpt-image-1",
    n=1,
    size="1024x1024",
    mode=ImageMode.TEXT_TO_IMAGE,
)

response = provider.generate(request)
artifact = response.results[0].images[0]
with open("generated_image.png", "wb") as file:
    file.write(artifact.content)
For edit and image-to-image flows, pass images=[...] and optionally masks=[...] in the request.
Video Generation
Video APIs are currently imported from aibridgecore.video.*, not from the top-level package.
Available video providers:
- aibridgecore.video.providers.openai_sora.OpenAISoraProvider
- aibridgecore.video.providers.google_veo.GoogleVeoProvider
- aibridgecore.video.providers.alibaba_wan.AlibabaWanProvider
- aibridgecore.video.providers.stability.StabilityVideoProvider
- aibridgecore.video.providers.luma.LumaVideoProvider
Video generation is asynchronous. You start a job, store the provider job id, and poll for status until the result is ready.
Current request modes:
- text2video
- img2video
- video2video is planned but not enabled yet
Example:
from aibridgecore.video.contracts import VideoGenerationRequest, VideoMode
from aibridgecore.video.providers.openai_sora import OpenAISoraProvider

provider = OpenAISoraProvider(api_key="YOUR_OPENAI_API_KEY")
request = VideoGenerationRequest(
    model="YOUR_VIDEO_MODEL",
    prompt="A slow cinematic drone shot over a rainforest canopy at sunrise",
    duration_seconds=5,
    aspect_ratio="16:9",
    mode=VideoMode.TEXT_TO_VIDEO.value,
)

job = provider.start_generation(request)
status = provider.check_status(job.provider_job_id)
print(job)
print(status)
For image-to-video workflows, set mode=VideoMode.IMG_TO_VIDEO.value and provide images=[...] in the request.
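The start-then-poll pattern above can be wrapped in a loop that stops once the job reaches a terminal state. The sketch below is an illustration, not an SDK API, and the status names ("completed", "failed") are assumptions; check the values your provider actually returns:

```python
import time

TERMINAL_STATUSES = {"completed", "failed"}  # assumed names; verify per provider

def wait_for_video(check_status, job_id, interval=5.0, max_checks=60):
    """Poll check_status(job_id) until the job reaches a terminal status."""
    for _ in range(max_checks):
        status = check_status(job_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still pending after {max_checks} checks")
```

With a provider instance, `wait_for_video(provider.check_status, job.provider_job_id)` would block until the render finishes or fails.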
File details
Details for the file aibridgecore-1.5.11.tar.gz.
File metadata
- Download URL: aibridgecore-1.5.11.tar.gz
- Upload date:
- Size: 58.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.20
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e917f481e577243ee878876e23fdcc9d6c21935cbce5bd5fa4d20115901786d5 |
| MD5 | 2e701a58d10f186f18634a73c5ed78c9 |
| BLAKE2b-256 | 0fb2def1d6c12963ad57a83199e46912841846d0c313d5729645024f2f53944d |
File details
Details for the file aibridgecore-1.5.11-py3-none-any.whl.
File metadata
- Download URL: aibridgecore-1.5.11-py3-none-any.whl
- Upload date:
- Size: 93.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.20
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ef15271e11aabf28de154a341da2a293080de5a86dd998259909fcccfc1e8152 |
| MD5 | 01fecb85aa6e70a1a6f240eb0669af98 |
| BLAKE2b-256 | 4f278d42c2843a7af0f1798bb1759f0cdb06bbddcfebab660ef7b478daedf248 |