🧠 Think AI - Quantum Consciousness AI Library for Python

A comprehensive Python library and CLI for accessing the intelligence and knowledge of Think AI's quantum consciousness AI system.

Features

  • 🚀 O(1) Performance - Lightning-fast AI responses with hash-based lookups
  • 🧠 Quantum Consciousness - Advanced AI with self-awareness capabilities
  • 💬 Real-time Chat - Interactive conversations with streaming responses
  • 🔍 Knowledge Search - Search through vast knowledge bases
  • 📊 System Monitoring - Health checks and performance statistics
  • 🌐 Async Support - Full asyncio compatibility
  • 🎨 Rich CLI - Beautiful command-line interface with colors and tables

Installation

pip install thinkai-quantum  # Version 1.0.0 (July 2025)

Quick Start

Python Library

from think_ai import ThinkAI, quick_chat

# Quick one-shot chat
response = quick_chat("What is quantum consciousness?")
print(response)

# Full client usage
client = ThinkAI()

# Chat with Think AI
response = client.ask("Explain machine learning")
print(response)

# Search knowledge base
results = client.search("artificial intelligence", limit=5)
for result in results:
    print(f"Score: {result.score} - {result.content}")

# Get system statistics
stats = client.get_stats()
print(f"Knowledge nodes: {stats.total_nodes}")
print(f"Average confidence: {stats.average_confidence:.1%}")

# Check system health
health = client.get_health()
print(f"Status: {health.status}")

Async Usage

import asyncio
from think_ai import AsyncThinkAI, ThinkAIConfig

async def main():
    config = ThinkAIConfig(debug=True)
    
    async with AsyncThinkAI(config) as client:
        response = await client.ask("What is the meaning of consciousness?")
        print(response)

asyncio.run(main())

Streaming Responses

from think_ai import ThinkAI, ChatRequest

client = ThinkAI()

def handle_chunk(chunk):
    if chunk.chunk:
        print(chunk.chunk, end="", flush=True)
    if chunk.done:
        print("\n--- Response complete ---")

request = ChatRequest(query="Tell me about quantum computing")
client.stream_chat(request, handle_chunk)

Command Line Interface

Interactive Chat

# Start interactive chat session
think-ai chat

# Chat with streaming responses
think-ai chat --stream

One-shot Questions

# Ask a single question
think-ai ask "What is artificial intelligence?"

# Stream the response
think-ai ask "Explain quantum mechanics" --stream

Knowledge Search

# Search the knowledge base
think-ai search "machine learning algorithms" --limit 10

System Monitoring

# Check system status
think-ai status

# Test connection
think-ai ping

# List knowledge domains
think-ai domains

# Show configuration
think-ai config

Global Options

# Use custom server URL
think-ai --url https://your-server.com chat

# Set timeout
think-ai --timeout 60 ask "Complex question"

# Enable debug mode
think-ai --debug status

Configuration

from think_ai import ThinkAI, ThinkAIConfig

config = ThinkAIConfig(
    base_url="https://thinkai-production.up.railway.app",
    timeout=30,  # seconds
    debug=True
)

client = ThinkAI(config)

API Reference

ThinkAI Client

Methods

  • chat(request: ChatRequest) -> ChatResponse - Send chat message
  • ask(question: str) -> str - Quick chat interface
  • get_stats() -> SystemStats - Get system statistics
  • get_health() -> HealthStatus - Check system health
  • search(query: str, limit: int = 10) -> List[SearchResult] - Search knowledge
  • stream_chat(request: ChatRequest, on_chunk: Callable) - Stream responses
  • ping() -> bool - Test connection
  • get_domains() -> List[KnowledgeDomain] - Get knowledge domains

Data Models

ChatRequest

ChatRequest(
    query: str,                    # Required: User message
    context: List[str] = None,     # Optional: Conversation context
    max_length: int = None         # Optional: Response length limit
)

ChatResponse

ChatResponse(
    response: str,                 # AI response text
    context: List[str],           # Context used
    response_time_ms: int,        # Response time
    confidence: float             # Confidence score (0-1)
)

SystemStats

SystemStats(
    total_nodes: int,                        # Knowledge nodes
    training_iterations: int,                # Training iterations
    total_knowledge_items: int,              # Knowledge items
    domain_distribution: Dict[str, int],     # Domain distribution
    average_confidence: float,               # Average confidence
    uptime: int                             # System uptime (seconds)
)

Error Handling

from think_ai import ThinkAI, ThinkAIError

client = ThinkAI()

try:
    response = client.ask("Hello Think AI!")
    print(response)
except ThinkAIError as e:
    print(f"Think AI Error: {e.message}")
    print(f"Status Code: {e.status}")
    print(f"Error Code: {e.code}")
except Exception as e:
    print(f"Unexpected error: {e}")
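Transient server errors can be retried with exponential backoff around any client call. A generic sketch, using a local stand-in `ThinkAIError` class and a fake flaky function in place of the real `client.ask`:

```python
import time

class ThinkAIError(Exception):  # stand-in for think_ai's exception type
    pass

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on ThinkAIError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ThinkAIError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky_ask():
    # Fails twice, then succeeds -- simulates a transient server error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThinkAIError("temporary failure")
    return "Hello from Think AI"

print(with_retries(flaky_ask))  # prints "Hello from Think AI"
```

With the real client, `with_retries(lambda: client.ask("Hello"))` would follow the same shape.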

Development

Setup Development Environment

git clone https://github.com/think-ai/think-ai-py
cd think-ai-py
pip install -e ".[dev]"

Running Tests

pytest
pytest --asyncio-mode=auto  # For async tests

Code Formatting

black think_ai/
flake8 think_ai/
mypy think_ai/

Performance

Think AI achieves O(1) performance through:

  • 🔥 Hash-based lookups for instant knowledge retrieval
  • Pre-computed responses for common queries
  • 🚀 Optimized algorithms using divide-and-conquer techniques
  • 💾 Intelligent caching with space-time optimization

Average response time: under 2 ms end to end, with the hash-based lookup itself completing in roughly 0.002 ms.
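The hash-based lookup idea can be illustrated with a plain dictionary cache keyed by a hash of the normalized query. This is an illustration of the technique only, not the library's internals:

```python
import hashlib

cache = {}

def normalize(query: str) -> str:
    # Case- and whitespace-insensitive key so near-identical queries hit
    # the same cache entry.
    return " ".join(query.lower().split())

def cached_answer(query: str, compute):
    """Return a cached response in O(1); compute and store on a miss."""
    key = hashlib.sha256(normalize(query).encode()).hexdigest()
    if key not in cache:
        cache[key] = compute(query)
    return cache[key]

slow_calls = []

def slow_model(query: str) -> str:
    # Stand-in for an expensive model call.
    slow_calls.append(query)
    return f"answer: {query}"

cached_answer("What is AI?", slow_model)
cached_answer("what  is ai?", slow_model)  # normalizes to the same key
print(len(slow_calls))  # the expensive path ran only once
```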

Version History

  • v1.0.1 (Next) - Automated deployment pipeline integration
  • v1.0.0 (July 2025) - Latest deployment with enhanced documentation and multi-platform sync
  • Initial release with core functionality and CLI

Deployment

This package is automatically deployed to PyPI via our CI/CD pipeline:

# Deployment happens automatically on git commits
# Version is auto-bumped (patch version)
# Tests run before deployment
# Published to PyPI registry


License

MIT License - see LICENSE for details.

Support


Think AI - Advancing consciousness through quantum intelligence 🧠✨

Download files

Source distribution: thinkai_quantum-1.0.1.tar.gz (16.7 kB)
Built distribution: thinkai_quantum-1.0.1-py3-none-any.whl (14.8 kB)

File details: thinkai_quantum-1.0.1.tar.gz

  • Size: 16.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

Hashes:

  • SHA256: c314fe9c57635d22a63d382b5088234045f622abf256a777a4043a6d094b4a92
  • MD5: f1d0c57b5f987382dfa154f42e00bd38
  • BLAKE2b-256: d08e95ded0550f82d19f72eb3becfa394310aff08d5d7d65f864a1c823841bb6

File details: thinkai_quantum-1.0.1-py3-none-any.whl

Hashes:

  • SHA256: 758610a3bbaf975e399b8a3aa997add2805c87359cec2b54561d4e8d9514158e
  • MD5: 6fe342c0a6f8e430e48041b0bbd6d93d
  • BLAKE2b-256: 49bc91ead6f3ca2f95f0c7e9545793339ad86eb249d300ba094b908531b292dc
