
A custom wrapper for the Langchain Python package with modified response handling

Project description

custom_openai

custom_openai is a drop-in replacement for the official OpenAI Python SDK that lets you customize, standardize, and enrich API responses for your own post-processing, logging, RAG pipelines, or monitoring needs. It supports both synchronous and asynchronous usage, including streaming, and remains fully compatible with the OpenAI API.


🚀 Features

  • Plug-and-play: Fully compatible with the OpenAI client interface. Just swap your import and go.
  • Custom response fields: Every .create() call (sync/async/streaming) includes a .flashquery attribute for easy extraction of processed or enriched response data.
  • Streaming support: Automatically injects your custom data into the final item in async generator streams.
  • No monkey-patching: Cleanly extends OpenAI’s classes without risky global side effects.

📦 Installation

pip install flashquery

🔥 Quickstart

Synchronous Example

from flashquery.client import CustomLangchainClient

client = CustomLangchainClient(
    provider="openai",
    model="gpt-4o-mini",
    temperature=0,
    api_key="sk-..."
)

response = client.generate([{"role": "user", "content": "Say hello?"}])

print(response.flashquery)  # Your custom field!

Streaming Example

import asyncio
from flashquery.client import CustomLangchainClient

async def main():
    client = CustomLangchainClient(
        provider="openai",
        model="gpt-4o-mini",
        temperature=0,
        api_key="sk-..."
    )
    stream = client.astream([{"role": "user", "content": "Hi, how are you?"}])

    async for chunk in stream:
        # .flashquery is only set on the final chunk of the stream
        print("Chunk:", getattr(chunk, "flashquery", None))

asyncio.run(main())
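Tagging only the final item of a stream requires knowing, while iterating, whether another chunk follows. A minimal self-contained sketch of one way to do this with one-item lookahead over an async generator (illustrative only; the names and the `fake_stream` stand-in are not flashquery's actual code):

```python
import asyncio
from types import SimpleNamespace


async def fake_stream():
    # Stand-in for an upstream streaming API response.
    for text in ["Hel", "lo", "!"]:
        yield SimpleNamespace(content=text)


async def with_final_tag(stream, payload):
    # One-item lookahead: hold each chunk back until we know
    # whether another follows, then tag only the last one.
    prev = None
    async for chunk in stream:
        if prev is not None:
            yield prev
        prev = chunk
    if prev is not None:
        prev.flashquery = payload  # custom field on the final chunk only
        yield prev


async def main():
    chunks = []
    async for c in with_final_tag(fake_stream(), {"tokens": 3}):
        chunks.append(c)
    return chunks


chunks = asyncio.run(main())
print([getattr(c, "flashquery", None) for c in chunks])
```

Earlier chunks pass through untouched, which is why the streaming example above reads `.flashquery` with `getattr` and a default.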

⚙️ How it Works

  • CustomOpenAIClient and CustomAsyncOpenAIClient inherit from the official OpenAI clients and override the .create() methods on the responses and chat.completions endpoints.
  • After each response is created, a new .flashquery attribute is attached.
  • In streaming mode, .flashquery is set on the last chunk yielded from the generator.
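The subclass-and-attach pattern described above can be sketched with a dummy client (the real classes wrap the OpenAI SDK; everything here, including `BaseCompletions`, is an illustrative stand-in rather than flashquery's implementation):

```python
from types import SimpleNamespace


class BaseCompletions:
    # Stand-in for an SDK's chat.completions endpoint.
    def create(self, **kwargs):
        return SimpleNamespace(choices=[SimpleNamespace(message="hi")])


class CustomCompletions(BaseCompletions):
    # Override .create() via plain inheritance: no monkey-patching,
    # the base class is untouched and only the subclass adds behavior.
    def create(self, **kwargs):
        response = super().create(**kwargs)
        response.flashquery = {"model": kwargs.get("model")}  # custom field
        return response


resp = CustomCompletions().create(model="gpt-4o-mini")
print(resp.flashquery)
```

Because the override calls `super().create()` first, the enriched response keeps every field the base client returned, which is what makes the wrapper a drop-in replacement.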

🤝 Contributing

Pull requests are welcome! For major changes, please open an issue first to discuss what you would like to change.


📄 License

MIT License


Need help? Open an issue or contribute on GitHub!



Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution


flashquery-0.0.1-py3-none-any.whl (4.5 kB)


File details

Details for the file flashquery-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: flashquery-0.0.1-py3-none-any.whl
  • Size: 4.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for flashquery-0.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 3b14c7c1cfe1f3b918457546846cd397ffcba19bcceb913e3b27298185cc9e95
MD5 6787329d3c44e4017a7b834b099ccccb
BLAKE2b-256 8d38f6f52a5d4ad1a6aa9b9803a239cf1e901ef484a3fb683d678cfa29beb895

