HelpingAI Python SDK

The official Python library for the HelpingAI API - Advanced AI with Emotional Intelligence

🚀 Features

  • OpenAI-Compatible API: Drop-in replacement with familiar interface
  • Emotional Intelligence: Advanced AI models with emotional understanding
  • Streaming Support: Real-time response streaming
  • Comprehensive Error Handling: Detailed error types and retry mechanisms
  • Type Safety: Full type hints and IDE support
  • Flexible Configuration: Environment variables and direct initialization

📦 Installation

pip install HelpingAI

🔑 Authentication

Get your API key from the HelpingAI Dashboard.

Environment Variable (Recommended)

export HAI_API_KEY='your-api-key'

Direct Initialization

from HelpingAI import HAI

hai = HAI(api_key='your-api-key')
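
When relying on the environment variable, it helps to fail fast with a clear message if the key is missing. A minimal sketch — the `require_api_key` helper below is illustrative, not part of the SDK:

```python
import os

def require_api_key(env=os.environ):
    # Illustrative helper (not part of the SDK): raise early with a clear
    # message when HAI_API_KEY is missing or empty.
    key = env.get("HAI_API_KEY")
    if not key:
        raise RuntimeError(
            "HAI_API_KEY is not set; export it or pass api_key=... to HAI()"
        )
    return key
```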

🎯 Quick Start

from HelpingAI import HAI

# Initialize client
hai = HAI()

# Create a chat completion
response = hai.chat.completions.create(
    model="Helpingai3-raw",
    messages=[
        {"role": "system", "content": "You are an expert in emotional intelligence."},
        {"role": "user", "content": "What makes a good leader?"}
    ]
)

print(response.choices[0].message.content)
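
Because the API is message-based, a follow-up turn just appends the assistant's reply and the next user message to the list before calling `create` again. A small sketch — `extend_conversation` is our own helper, not an SDK function:

```python
def extend_conversation(messages, assistant_reply, user_follow_up):
    # Illustrative helper (not part of the SDK): build the message list for
    # the next turn without mutating the original list.
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": user_follow_up},
    ]
```

Pass the result as `messages` in the next `hai.chat.completions.create(...)` call, e.g. `extend_conversation(messages, response.choices[0].message.content, "How can I practice that daily?")`.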

🌊 Streaming Responses

# Stream responses in real-time
for chunk in hai.chat.completions.create(
    model="Helpingai3-raw",
    messages=[{"role": "user", "content": "Tell me about empathy"}],
    stream=True
):
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
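
When you need the full reply as well as live output, the deltas can be accumulated as they arrive. A sketch, assuming the chunk shape shown above (`chunk.choices[0].delta.content`, possibly `None`); `collect_stream` is an illustrative helper, not part of the SDK:

```python
def collect_stream(chunks):
    # Accumulate streamed deltas into one string; None/empty deltas are skipped.
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="")  # live output
            parts.append(delta)
    return "".join(parts)
```

Usage: `full_text = collect_stream(hai.chat.completions.create(..., stream=True))`.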

โš™๏ธ Advanced Configuration

Parameter Control

response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Write a story about empathy"}],
    temperature=0.7,        # Controls randomness (0-1)
    max_tokens=500,        # Maximum length of response
    top_p=0.9,            # Nucleus sampling parameter
    frequency_penalty=0.3, # Reduces repetition
    presence_penalty=0.3,  # Encourages new topics
    hide_think=True       # Filter out reasoning blocks
)
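
If several calls share the same sampling settings, one pattern is to keep them in a dict and unpack it, overriding per call. A sketch — the parameter names are the ordinary request parameters shown above, but the dict and helper are our own:

```python
# Shared defaults for repeated calls (values are examples, not SDK defaults)
BASE_PARAMS = {
    "temperature": 0.7,
    "max_tokens": 500,
    "top_p": 0.9,
}

def with_overrides(base, **overrides):
    # Merge per-call overrides over shared defaults without mutating `base`.
    return {**base, **overrides}
```

Usage: `hai.chat.completions.create(model=..., messages=..., **with_overrides(BASE_PARAMS, temperature=0.2))`.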

Client Configuration

hai = HAI(
    api_key="your-api-key",
    base_url="https://api.helpingai.co/v1",  # Custom base URL
    timeout=30.0,                            # Request timeout
    organization="your-org-id"               # Organization ID
)

๐Ÿ›ก๏ธ Error Handling

from HelpingAI import HAI, HAIError, RateLimitError, InvalidRequestError
import time

hai = HAI()

def make_completion_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return hai.chat.completions.create(
                model="Helpingai3-raw",
                messages=messages
            )
        except RateLimitError as e:
            if attempt == max_retries - 1:
                raise
            time.sleep(e.retry_after or 1)
        except InvalidRequestError as e:
            print(f"Invalid request: {str(e)}")
            raise
        except HAIError as e:
            print(f"API error: {str(e)}")
            raise
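
When the server does not include a `retry_after` hint, capped exponential backoff is a common fallback for the sleep above. A minimal sketch — the helper name and defaults are illustrative:

```python
def backoff_delay(attempt, base=1.0, cap=30.0):
    # Delay for retry `attempt` (0-based): base, 2*base, 4*base, ... up to cap.
    return min(cap, base * (2 ** attempt))
```

Usage inside the retry loop: `time.sleep(e.retry_after or backoff_delay(attempt))`.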

🤖 Available Models

Helpingai3-raw

  • Advanced Emotional Intelligence: Enhanced emotional understanding and contextual awareness
  • Training Data: 15M emotional dialogues, 3M therapeutic exchanges, 250K cultural conversations, 1M crisis response scenarios
  • Best For: AI companionship, emotional support, therapy guidance, personalized learning

Dhanishtha-2.0-preview

  • World's First Intermediate Thinking Model: Multi-phase reasoning with self-correction capabilities
  • Unique Features: <think>...</think> blocks for transparent reasoning, structured emotional reasoning (SER)
  • Best For: Complex problem-solving, analytical tasks, educational content, reasoning-heavy applications

# List all available models
models = hai.models.list()
for model in models:
    print(f"Model: {model.id} - {model.description}")

# Get specific model info
model = hai.models.retrieve("Helpingai3-raw")
print(f"Model: {model.name}")

# Use Dhanishtha-2.0 for complex reasoning
response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Solve this step by step: What's 15% of 240?"}],
    hide_think=False  # Show reasoning process
)
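
With hide_think=False the reply contains the <think>...</think> blocks described above; if you later want only the final answer, they can be stripped client-side. A sketch using a regular expression — `strip_think` is our own helper, not an SDK function:

```python
import re

# Non-greedy match so multiple blocks are removed individually; DOTALL lets
# the pattern span newlines inside a block.
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_think(text):
    # Remove <think>...</think> reasoning blocks and trim stray whitespace.
    return THINK_RE.sub("", text).strip()
```

Usage: `answer = strip_think(response.choices[0].message.content)`.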

📚 Documentation

Comprehensive documentation is available in the project's docs/ directory.

๐Ÿ—๏ธ Project Structure

HelpingAI-python/
├── HelpingAI/              # Main package
│   ├── __init__.py         # Package initialization
│   ├── client.py           # Main HAI client
│   ├── models.py           # Model management
│   ├── base_models.py      # Data models
│   ├── error.py            # Exception classes
│   └── version.py          # Version information
├── docs/                   # Documentation
├── tests/                  # Test suite
├── setup.py                # Package configuration
└── README.md               # This file

🔧 Requirements

  • Python: 3.7-3.14
  • Dependencies:
    • requests - HTTP client
    • typing_extensions - Type hints support

๐Ÿค Contributing

We welcome contributions! Please see our Contributing Guide for details.

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support & Community

🚀 What's New in v1.1.0

  • Extended Python Support: Now supports Python 3.7-3.14
  • Updated Models: Support for latest models (Helpingai3-raw, Dhanishtha-2.0-preview)
  • Dhanishtha-2.0 Integration: World's first intermediate thinking model with multi-phase reasoning
  • HelpingAI3 Support: Enhanced emotional intelligence with advanced contextual awareness
  • Improved Model Management: Better fallback handling and detailed model descriptions
  • OpenAI-Compatible Interface: Familiar API design
  • Enhanced Error Handling: Comprehensive exception types
  • Streaming Support: Real-time response streaming
  • Advanced Filtering: Hide reasoning blocks with hide_think parameter

Built with ❤️ by the HelpingAI Team

Empowering AI with Emotional Intelligence
