Python library for asynchronous interactions with the OpenAI API, enabling concurrent request handling. It simplifies building scalable, AI-powered applications by offering efficient, rate-limited access to OpenAI services. Perfect for developers seeking to integrate OpenAI's capabilities with minimal overhead.
# 🚀 Concurrent OpenAI Manager

A lightweight, preemptive rate limiter and concurrency manager for OpenAI's API.
## ✨ Features
- 🎯 Preemptive Token Estimation: Attempts to predict token usage before making API calls
- 🔄 Smart Rate Limiting: Manages requests and tokens per minute to avoid API limits
- ⚡ Concurrent Request Handling: Efficient parallel processing with semaphore control
- 💰 Built-in Cost Tracking: Real-time cost estimation for better budget management
- 🎚️ Fine-tuned Control: Adjustable parameters for optimal performance
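The preemptive-estimation idea from the first bullet can be sketched as follows. This is an illustrative heuristic (a rough characters-per-token ratio), not the library's actual estimator:

```python
def estimate_prompt_tokens(messages, chars_per_token: int = 4,
                           overhead_per_message: int = 4) -> int:
    """Crude preemptive estimate: ~4 characters per token plus a fixed
    per-message overhead. Real estimators (e.g. tiktoken) are more accurate;
    this only shows the idea of budgeting tokens before sending a request."""
    total = 0
    for msg in messages:
        total += overhead_per_message  # role/formatting overhead (approximate)
        total += max(1, len(msg.get("content", "")) // chars_per_token)
    return total

print(estimate_prompt_tokens([{"role": "user", "content": "Hello!"}]))  # -> 5
```

Estimating before sending lets the manager refuse or delay a request that would blow the tokens-per-minute budget, instead of retrying after a 429.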
## 📦 Installation

```bash
pip install concurrent-openai
```
## 🚀 Quick Start

1. Set up your environment:

```bash
echo "OPENAI_API_KEY=your_api_key" >> .env
# OR
export OPENAI_API_KEY=your_api_key
```

Note: you can also pass `api_key` directly to the `ConcurrentOpenAI` client.
2. Start making requests:

```python
from concurrent_openai import ConcurrentOpenAI

client = ConcurrentOpenAI(
    api_key="your-api-key",  # not required if the OPENAI_API_KEY env var is set
    max_concurrent_requests=5,
    requests_per_minute=200,
    tokens_per_minute=40000,
)

response = client.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="gpt-4o",
    temperature=0.7,
)
print(response.content)
```
Or pass your own `AsyncOpenAI` instance:

```python
from openai import AsyncOpenAI
from concurrent_openai import ConcurrentOpenAI

openai_client = AsyncOpenAI(api_key="your-api-key")
client = ConcurrentOpenAI(
    client=openai_client,
    max_concurrent_requests=5,
    requests_per_minute=200,
    tokens_per_minute=40000,
)
```
## 🎯 Why Concurrent OpenAI Manager?
- Preemptive Rate Limiting: unlike libraries that react to rate-limit errors after the fact, this one estimates token usage before each request is sent
- Resource Optimization: Smart throttling prevents request surges and optimizes API usage
- Cost Control: Built-in cost estimation helps manage API expenses effectively
- Lightweight: Minimal dependencies, focused functionality
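A common way to enforce a requests-per-minute budget is a sliding window of timestamps. The sketch below shows that general pattern; the class name and structure are illustrative, not the library's internals:

```python
import asyncio
import time
from collections import deque

class SlidingWindowLimiter:
    """Sliding-window requests-per-minute limiter (illustrative sketch,
    not the library's internal implementation)."""

    def __init__(self, requests_per_minute: int, window: float = 60.0):
        self.limit = requests_per_minute
        self.window = window
        self.timestamps: deque = deque()

    async def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        # At capacity: sleep until the oldest request leaves the window.
        if len(self.timestamps) >= self.limit:
            await asyncio.sleep(self.window - (now - self.timestamps[0]))
        self.timestamps.append(time.monotonic())

async def main():
    limiter = SlidingWindowLimiter(requests_per_minute=200)
    await limiter.acquire()  # returns immediately while under the limit
    print(len(limiter.timestamps))

asyncio.run(main())
```

Because the check happens before the call, bursts are smoothed out instead of triggering 429 responses that then need backoff and retry.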
## 🔧 Advanced Usage

### Azure OpenAI Integration

The library also works with Azure OpenAI services:
```python
from openai import AsyncAzureOpenAI
from concurrent_openai import ConcurrentOpenAI

azure_client = AsyncAzureOpenAI(
    azure_endpoint="your-azure-endpoint",
    api_key="your-azure-api-key",
    api_version="2024-02-01",
)

client = ConcurrentOpenAI(
    client=azure_client,
    max_concurrent_requests=5,
    requests_per_minute=60,
    tokens_per_minute=10000,
)

response = await client.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="gpt-35-turbo",  # use your deployed model name
    temperature=0.7,
)
```
### Batch Processing

```python
from concurrent_openai import ConcurrentOpenAI

messages_list = [
    [{"role": "user", "content": f"Process item {i}"}]
    for i in range(10)
]

client = ConcurrentOpenAI(api_key="your-api-key")
responses = client.create_many(
    messages_list=messages_list,
    model="gpt-4o",
    temperature=0.7,
)

for resp in responses:
    if resp.is_success:
        print(resp.content)
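Conceptually, a batch helper like this fans requests out under a semaphore so only a bounded number run at once. A minimal standalone sketch of that pattern (the `gather_with_limit` name and the dummy `work` coroutine are mine, standing in for real API calls):

```python
import asyncio

async def gather_with_limit(coros, max_concurrent: int = 5):
    """Run coroutines concurrently, at most max_concurrent at a time —
    the semaphore pattern a create_many-style helper can use internally."""
    sem = asyncio.Semaphore(max_concurrent)

    async def guarded(coro):
        async with sem:
            return await coro

    # gather preserves the input order of results.
    return await asyncio.gather(*(guarded(c) for c in coros))

# Dummy worker standing in for a real API call:
async def work(i: int) -> int:
    await asyncio.sleep(0)
    return i * 2

print(asyncio.run(gather_with_limit([work(i) for i in range(5)], max_concurrent=2)))
# -> [0, 2, 4, 6, 8]
```

Results come back in submission order, which is why the loop over `responses` above can pair each response with its original prompt by index.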
### Cost Tracking

```python
client = ConcurrentOpenAI(
    api_key="your-api-key",
    input_token_cost=2.5 / 1_000_000,  # see https://openai.com/api/pricing/ for current rates
    output_token_cost=10 / 1_000_000,
)
```
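With per-token prices configured, a request's cost is just a weighted sum of its token counts. A minimal sketch of that arithmetic (the `estimate_cost` helper is hypothetical, not part of the library; the default rates mirror the example above):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_token_cost: float = 2.5 / 1_000_000,
                  output_token_cost: float = 10 / 1_000_000) -> float:
    """USD cost of one request: input and output tokens priced separately.
    Check https://openai.com/api/pricing/ for current per-token rates."""
    return prompt_tokens * input_token_cost + completion_tokens * output_token_cost

print(f"${estimate_cost(1_000, 500):.4f}")  # -> $0.0075
```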
## 🤝 Contributing

Contributions are welcome! Please feel free to submit a pull request.

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.