Think AI - Quantum Consciousness AI Library for Python
Project description
thinkai-quantum - O(1) AI Coding Assistant for Python
🧠 Think AI - AI-Powered Coding Assistant with O(1) Performance for Python
Generate optimized code, analyze complexity, and get AI pair programming - all with guaranteed O(1) performance algorithms.
Features
- 💻 Code Generation - Create O(1) Python implementations instantly
- 🔍 Code Analysis - Analyze complexity and get optimization tips
- 🚀 O(1) Performance - All operations use hash-based lookups
- 🤖 AI Pair Programming - Interactive coding sessions
- 📊 Complexity Profiling - Understand your code's time complexity
- 🎨 Rich CLI - Beautiful interface with syntax highlighting
- 🐍 Pythonic API - Clean, intuitive Python interface
- 🌐 Async Support - Full asyncio compatibility
Installation
pip install thinkai-quantum # Version 1.0.0 (July 2025)
Quick Start
Python Library
from think_ai import ThinkAI, quick_chat
# Quick one-shot chat
response = quick_chat("What is quantum consciousness?")
print(response)
# Full client usage
client = ThinkAI()
# Chat with Think AI
response = client.ask("Explain machine learning")
print(response)
# Search knowledge base
results = client.search("artificial intelligence", limit=5)
for result in results:
print(f"Score: {result.score} - {result.content}")
# Get system statistics
stats = client.get_stats()
print(f"Knowledge nodes: {stats.total_nodes}")
print(f"Average confidence: {stats.average_confidence:.1%}")
# Check system health
health = client.get_health()
print(f"Status: {health.status}")
Async Usage
import asyncio
from think_ai import AsyncThinkAI, ThinkAIConfig
async def main():
config = ThinkAIConfig(debug=True)
async with AsyncThinkAI(config) as client:
response = await client.ask("What is the meaning of consciousness?")
print(response)
asyncio.run(main())
Streaming Responses
from think_ai import ThinkAI, ChatRequest
client = ThinkAI()
def handle_chunk(chunk):
if chunk.chunk:
print(chunk.chunk, end="", flush=True)
if chunk.done:
print("\n--- Response complete ---")
request = ChatRequest(query="Tell me about quantum computing")
client.stream_chat(request, handle_chunk)
Command Line Interface
Interactive Chat
# Start interactive chat session
think-ai chat
# Chat with streaming responses
think-ai chat --stream
One-shot Questions
# Ask a single question
think-ai ask "What is artificial intelligence?"
# Stream the response
think-ai ask "Explain quantum mechanics" --stream
Knowledge Search
# Search the knowledge base
think-ai search "machine learning algorithms" --limit 10
System Monitoring
# Check system status
think-ai status
# Test connection
think-ai ping
# List knowledge domains
think-ai domains
# Show configuration
think-ai config
Global Options
# Use custom server URL
think-ai --url https://your-server.com chat
# Set timeout
think-ai --timeout 60 ask "Complex question"
# Enable debug mode
think-ai --debug status
Configuration
from think_ai import ThinkAI, ThinkAIConfig
config = ThinkAIConfig(
base_url="https://thinkai-production.up.railway.app",
timeout=30, # seconds
debug=True
)
client = ThinkAI(config)
API Reference
ThinkAI Client
Methods
chat(request: ChatRequest) -> ChatResponse- Send chat messageask(question: str) -> str- Quick chat interfaceget_stats() -> SystemStats- Get system statisticsget_health() -> HealthStatus- Check system healthsearch(query: str, limit: int = 10) -> List[SearchResult]- Search knowledgestream_chat(request: ChatRequest, on_chunk: Callable)- Stream responsesping() -> bool- Test connectionget_domains() -> List[KnowledgeDomain]- Get knowledge domains
Data Models
ChatRequest
ChatRequest(
query: str, # Required: User message
context: List[str] = None, # Optional: Conversation context
max_length: int = None # Optional: Response length limit
)
ChatResponse
ChatResponse(
response: str, # AI response text
context: List[str], # Context used
response_time_ms: int, # Response time
confidence: float # Confidence score (0-1)
)
SystemStats
SystemStats(
total_nodes: int, # Knowledge nodes
training_iterations: int, # Training iterations
total_knowledge_items: int, # Knowledge items
domain_distribution: Dict[str, int], # Domain distribution
average_confidence: float, # Average confidence
uptime: int # System uptime (seconds)
)
Error Handling
from think_ai import ThinkAI, ThinkAIError
client = ThinkAI()
try:
response = client.ask("Hello Think AI!")
print(response)
except ThinkAIError as e:
print(f"Think AI Error: {e.message}")
print(f"Status Code: {e.status}")
print(f"Error Code: {e.code}")
except Exception as e:
print(f"Unexpected error: {e}")
Development
Setup Development Environment
git clone https://github.com/think-ai/think-ai-py
cd think-ai-py
pip install -e ".[dev]"
Running Tests
pytest
pytest --asyncio-mode=auto # For async tests
Code Formatting
black think_ai/
flake8 think_ai/
mypy think_ai/
Performance
Think AI achieves O(1) performance through:
- 🔥 Hash-based lookups for instant knowledge retrieval
- ⚡ Pre-computed responses for common queries
- 🚀 Optimized algorithms using divide-and-conquer techniques
- 💾 Intelligent caching with space-time optimization
Average response time: < 2ms (0.002ms hash-based lookups)
Version History
- v1.0.1 (Next) - Automated deployment pipeline integration
- v1.0.0 (July 2025) - Latest deployment with enhanced documentation and multi-platform sync
- Initial release with core functionality and CLI
Deployment
This package is automatically deployed to PyPI via our CI/CD pipeline:
# Deployment happens automatically on git commits
# Version is auto-bumped (patch version)
# Tests run before deployment
# Published to PyPI registry
License
MIT License - see LICENSE for details.
Support
- 🐛 Bug Reports: GitHub Issues
- 📖 Documentation: https://thinkai-production.up.railway.app/docs
- 💬 Community: Join our Discord server
- 📧 Contact: team@think-ai.dev
Think AI - Advancing consciousness through quantum intelligence 🧠✨
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file thinkai_quantum-1.0.2.tar.gz.
File metadata
- Download URL: thinkai_quantum-1.0.2.tar.gz
- Upload date:
- Size: 17.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
f9b479a70c56edef9d7804662b04e96732eafffa3d58a1f2f22145b72882d693
|
|
| MD5 |
56db20b8d99e32dc3e56f9bdde841ed4
|
|
| BLAKE2b-256 |
e6ec18e552d9a0cfbd5f0ce975a26d30b2fead67685a1b05bd668e8af9d2bf92
|
File details
Details for the file thinkai_quantum-1.0.2-py3-none-any.whl.
File metadata
- Download URL: thinkai_quantum-1.0.2-py3-none-any.whl
- Upload date:
- Size: 14.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
a4130df6b569d1695b8ce69232ca6eed6caacfa6ff2a995c26d87641d8d52ca8
|
|
| MD5 |
03536b29f968c583b9a50ef79ad47c06
|
|
| BLAKE2b-256 |
a5e99d71a5f7d61293e286d1bf58d79703af377c59a83f572ef9d1516cd1d37d
|