OpenAI Without Pydantic
A simple Python package for calling OpenAI's API without using Pydantic or the OpenAI SDK. Perfect for environments where Pydantic is unavailable, or when you want minimal dependencies.
Features
- ✅ Simple, single-function API: `ask_ai_question(input_text, question_asked)`
- ✅ Zero Pydantic dependency - uses direct HTTP requests
- ✅ No OpenAI SDK required - bypasses all SDK dependencies
- ✅ Support for all OpenAI chat models (GPT-4, GPT-4o, GPT-3.5, etc.)
- ✅ Customizable parameters (temperature, max_tokens, model)
- ✅ Type hints for better IDE support
- ✅ Comprehensive error handling
- ✅ Only requires the `requests` library
Installation
Install from PyPI:
```bash
pip install openai-without-pydantic
```
That's it! Only `requests` is installed - no Pydantic, no OpenAI SDK!
Quick Start
Setting Your OpenAI API Key
The package needs your OpenAI API key to make requests. Set it as an environment variable:
Linux/macOS (bash/zsh):

```bash
export OPENAI_API_KEY='your-api-key-here'
```

Windows (Command Prompt):

```cmd
set OPENAI_API_KEY=your-api-key-here
```

Windows (PowerShell):

```powershell
$env:OPENAI_API_KEY='your-api-key-here'
```
Or use a `.env` file (recommended for development):

```bash
# Create a .env file in your project directory
OPENAI_API_KEY=your-api-key-here
```

Then load it in your Python code:

```python
from dotenv import load_dotenv

load_dotenv()  # This loads the .env file
```

Note: Install `python-dotenv` for `.env` file support: `pip install python-dotenv`
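To confirm the key is actually visible to your Python process before making any calls, a small standalone check (not part of the package) can help:

```python
import os

def get_api_key():
    """Return the OpenAI API key from the environment, or None if unset."""
    return os.environ.get("OPENAI_API_KEY")

key = get_api_key()
print("API key configured" if key else "OPENAI_API_KEY is not set")
```

If this prints "OPENAI_API_KEY is not set", check that you exported the variable in the same shell session, or that `load_dotenv()` ran before the check.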
Usage
Basic Example
```python
from openai_wrapper import ask_ai_question

input_text = """
Python is a high-level programming language created by Guido van Rossum
and first released in 1991.
"""

question = "Who created Python?"

response = ask_ai_question(
    input_text=input_text,
    question_asked=question
)

print(response)  # Output: Python was created by Guido van Rossum.
```
Advanced Usage
```python
from openai_wrapper import ask_ai_question

# Use custom parameters
response = ask_ai_question(
    input_text="Your context here...",
    question_asked="Your question here...",
    model="gpt-4o",          # Specify model
    temperature=0.3,         # Lower for more factual responses
    max_tokens=200,          # Limit response length
    api_key="your-key-here"  # Or use environment variable
)
```
Function Parameters
- input_text (str, required): The context/text to use for answering the question
- question_asked (str, required): The question to ask about the input_text
- api_key (str, optional): OpenAI API key (defaults to OPENAI_API_KEY env var)
- model (str, optional): The OpenAI model to use (default: "gpt-4o-mini")
- temperature (float, optional): Sampling temperature 0-2 (default: 0.7)
- max_tokens (int, optional): Maximum tokens in response (default: None)
Running the Examples
Check out the examples/ directory for various usage examples:
```bash
# Basic usage
python examples/basic_usage.py

# Advanced features
python examples/advanced_usage.py

# Batch processing
python examples/batch_processing.py
```
Or use the convenience scripts (automatically activates virtual environment):
```bash
./run_examples.sh   # Interactive menu
./run_basic.sh      # Run basic example
./run_advanced.sh   # Run advanced example
./run_batch.sh      # Run batch example
```
See examples/README.md for detailed information about each example, or SCRIPTS.md for more about the convenience scripts.
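As a rough idea of what batch processing over one context can look like, here is a minimal sketch. The helper name `batch_ask` and its signature are hypothetical, not part of the package; it simply loops a question-asking callable such as `ask_ai_question` over a list of questions:

```python
from typing import Callable

def batch_ask(input_text: str, questions: list[str],
              ask: Callable[..., str]) -> dict[str, str]:
    """Ask several questions about the same context; returns {question: answer}."""
    return {q: ask(input_text=input_text, question_asked=q) for q in questions}
```

In practice you would call `batch_ask(context, questions, ask_ai_question)`; see `examples/batch_processing.py` for the package's own version.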
Project Structure
```text
.
├── openai_wrapper/          # Main package
│   ├── __init__.py          # Package initialization
│   ├── client.py            # Core implementation
│   └── py.typed             # Type hint marker
├── examples/                # Usage examples
│   ├── basic_usage.py       # Simple examples
│   ├── advanced_usage.py    # Advanced features
│   ├── batch_processing.py  # Batch operations
│   └── README.md            # Examples documentation
├── tests/                   # Unit tests
│   ├── __init__.py
│   └── test_client.py       # Test suite
├── .github/workflows/       # CI/CD pipelines
│   ├── test.yml             # Automated testing
│   └── publish.yml          # PyPI publishing
├── run_examples.sh          # Interactive example runner
├── run_basic.sh             # Quick run basic example
├── run_advanced.sh          # Quick run advanced example
├── run_batch.sh             # Quick run batch example
├── run_tests.sh             # Quick run tests
├── CHANGELOG.md             # Version history
├── CONTRIBUTING.md          # Contribution guidelines
├── SECURITY.md              # Security policy
├── PUBLISHING.md            # Publishing guide
├── SCRIPTS.md               # Convenience scripts docs
├── LICENSE                  # MIT License
├── README.md                # This file
├── pyproject.toml           # Modern Python packaging
├── setup.py                 # Package setup
└── requirements.txt         # Dependencies
```
Error Handling
The package includes comprehensive error handling:
```python
try:
    response = ask_ai_question(input_text="...", question_asked="...")
except ValueError as e:
    print(f"Configuration error: {e}")
except Exception as e:
    print(f"API error: {e}")
```
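If transient API failures (timeouts, rate limits) matter in your setting, you can layer a retry wrapper on top. A minimal sketch; the helper `ask_with_retry` is not part of the package, and it treats `ValueError` as a non-retryable configuration error, matching the distinction above:

```python
import time

def ask_with_retry(ask, attempts=3, backoff=1.0, **kwargs):
    """Call ask(**kwargs), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return ask(**kwargs)
        except ValueError:
            raise  # configuration errors (e.g. missing API key) are not retryable
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the last error
            time.sleep(backoff * (2 ** attempt))
```

Usage: `ask_with_retry(ask_ai_question, input_text="...", question_asked="...")`.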
Why No Pydantic?
This package is specifically designed for environments where Pydantic is not available.
The Problem: The official OpenAI Python SDK (openai package) has a hard dependency on pydantic and pydantic-core, which may be unavailable in some environments.
Our Solution: We bypass the OpenAI SDK entirely and call the OpenAI API directly using HTTP requests via the requests library. This eliminates all Pydantic dependencies while maintaining full functionality.
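To make the approach concrete, here is a rough sketch of a direct Chat Completions call using only `requests`. This is an illustration of the technique, not the package's actual implementation; the function names, prompt wording, and defaults here are made up for the example:

```python
import os

OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(input_text, question_asked, model="gpt-4o-mini",
                  temperature=0.7, max_tokens=None):
    """Assemble the Chat Completions request body as a plain dict - no Pydantic."""
    payload = {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system",
             "content": "Answer the question using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{input_text}\n\nQuestion: {question_asked}"},
        ],
    }
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return payload

def ask_direct(input_text, question_asked, api_key=None, **params):
    """POST the request and return the assistant's reply text."""
    key = api_key or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise ValueError("No API key: pass api_key or set OPENAI_API_KEY")

    import requests  # the only third-party dependency

    resp = requests.post(
        OPENAI_CHAT_URL,
        headers={"Authorization": f"Bearer {key}"},
        json=build_payload(input_text, question_asked, **params),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The request body is an ordinary `dict` serialized by `requests`, and the response is parsed with `resp.json()`, so no model classes are needed anywhere.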
Dependencies
- `requests>=2.25.0` - For making HTTP requests to the OpenAI API

That's it! No Pydantic, no OpenAI SDK, no hidden dependencies.
Testing
Run the test suite:

```bash
pytest tests/ -v
```

With coverage:

```bash
pytest tests/ --cov=openai_wrapper --cov-report=term-missing
```
See CONTRIBUTING.md for development setup.
Contributing
Contributions are welcome! Please see CONTRIBUTING.md for:
- Development setup
- Coding standards
- How to submit pull requests
- Running tests
Security
For security concerns, please see SECURITY.md for:
- Reporting vulnerabilities
- Security best practices
- API key safety guidelines
Publishing
To publish this package to PyPI, see PUBLISHING.md for step-by-step instructions.
FAQ
Got questions? Check out the FAQ.md for answers to common questions!
Changelog
See CHANGELOG.md for version history and release notes.
License
MIT License - see LICENSE for details.
Code of Conduct
This project follows a Code of Conduct to ensure a welcoming environment. See CODE_OF_CONDUCT.md.