Quantum-First Machine Learning Framework with Async Execution and Production Storage
Project description
Q-Store: Quantum-Native Database v4.1.1
A production-ready quantum-first ML platform with async execution, comprehensive verification/profiling/visualization tools, and hardware-agnostic support for quantum computing. Leverages quantum mechanical properties (superposition, entanglement, and tunneling) for quantum-accelerated ML training with 10-20x throughput improvements.
Community
Q-STORE website Link
Example Projects
What's New in v4.1.0
Async-First Quantum Execution (10-20x Throughput)
- AsyncQuantumExecutor: Non-blocking circuit submission with parallel execution
- Zero-Blocking Storage: Async Zarr/Parquet writers with background tasks
- Result Caching: LRU cache for instant retrieval of repeated circuits
- Connection Pooling: Multi-connection backend clients for better utilization
- Background Polling: Async workers poll quantum backends without blocking training
- PyTorch Integration: Fixed QuantumLayer with proper async support
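The non-blocking submission pattern described above can be sketched with plain asyncio. `run_circuit` below is a hypothetical stand-in for a quantum backend call, not the actual AsyncQuantumExecutor API; the point is only that concurrent submission makes total wall time roughly the slowest circuit rather than the sum of all of them:

```python
import asyncio
import random

async def run_circuit(circuit_id: str) -> dict:
    # Simulate backend latency (network + queue time); a real call would
    # submit the circuit and poll for results.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return {"circuit": circuit_id, "counts": {"00": 512, "11": 512}}

async def submit_parallel(circuit_ids):
    # Submit all circuits concurrently instead of awaiting each in turn.
    # asyncio.gather preserves input order in its results.
    return await asyncio.gather(*(run_circuit(c) for c in circuit_ids))

results = asyncio.run(submit_parallel([f"qc_{i}" for i in range(8)]))
print(len(results))  # 8
```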
Verification, Profiling & Visualization (v4.0 Foundation)
- Circuit Verification: Equivalence checking, property verification, formal analysis
- Performance Profiling: Gate-level profiling, optimization benchmarks
- State Visualization: Circuit diagrams (ASCII/LaTeX), Bloch sphere, state vectors
- 144 Comprehensive Tests: Full coverage for all verification/profiling/visualization modules
v4.1.0 Performance Achievements
IMPORTANT: Improvements shown are v4.1 vs v4.0 quantum, not quantum vs classical GPU!
| Metric | v4.0 Quantum | v4.1 Quantum | Improvement |
|---|---|---|---|
| Circuit throughput | Sequential | 10-20x parallel | 10-20x faster |
| Storage operations | Blocking | Async (0ms blocking) | Non-blocking |
| Result caching | None | LRU cache | Instant repeats |
| PyTorch integration | Broken | Fixed + async | Production-ready |
| Module count | 22 | 29 | 7 new modules |
| Total Python files | 118 | 145 | 27 new files |
Reality Check: Quantum vs Classical GPU
Current NISQ quantum hardware is typically 0.7-1.2x classical GPU speed (often slower)
Why? Circuit overhead, API latency, limited parallelization, measurement shots
Quantum's Value: Better exploration of non-convex loss landscapes, not raw speed
When Quantum Helps:
- ✓ Complex optimization landscapes
- ✓ Small datasets (<10K samples)
- ✓ Problems where classical gets stuck in local minima
- ✓ Research and algorithm development
When Classical GPU Wins:
- ✓ Large datasets (>10K samples)
- ✓ Production workloads
- ✓ Cost-sensitive applications
- ✓ Most practical ML tasks today
Quantum ML Training (v3.2+, Enhanced in v4.1)
- Async Quantum Execution: Non-blocking circuit submission with 10-20x throughput
- Hardware-Agnostic Architecture: Works with Cirq, Qiskit, IonQ, and simulators
- Quantum Feature Extractor: Replace Dense layers with quantum circuits
- Quantum Neural Network Layers: Variational quantum circuits with async execution
- Quantum Gradient Computation: Parameter shift rule and SPSA estimation
- Hybrid Classical-Quantum Pipelines: Seamless PyTorch/TensorFlow integration
- Quantum Data Encoding: Amplitude and angle encoding strategies
- Production Storage: Async Zarr checkpoints and Parquet metrics
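The parameter shift rule listed above can be illustrated on a one-qubit toy model. This is a minimal sketch, assuming a circuit whose Z-expectation is f(θ) = cos(θ) (an RY rotation measured in the computational basis); `expectation` is a stand-in for a real backend call, not the library's QuantumGradientComputer:

```python
import numpy as np

def expectation(theta: float) -> float:
    # Toy expectation value for a single-qubit RY(theta) circuit
    # measured in Z: <Z> = cos(theta).
    return np.cos(theta)

def parameter_shift_grad(f, theta: float, shift: float = np.pi / 2) -> float:
    # Parameter shift rule: the exact gradient from two circuit
    # evaluations, with no finite-difference truncation error for
    # gates generated by Pauli operators.
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.3
grad = parameter_shift_grad(expectation, theta)
print(np.isclose(grad, -np.sin(theta)))  # True
```

The same two-evaluation recipe applies per parameter in a variational circuit, which is why gradient cost scales linearly with parameter count.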
Advanced ML Features
- Quantum Transfer Learning: Fine-tune pre-trained quantum models
- Quantum Data Augmentation: Superposition-based data expansion
- Quantum Regularization: Entanglement-based model optimization
- Quantum Adversarial Training: Robust model training with quantum gradients
- Hyperparameter Optimization: Quantum annealing for HPO
Training Infrastructure (v4.1 Enhanced)
- Async Execution Pipeline: Non-blocking quantum circuit execution
- Background Workers: Async polling without blocking training loop
- Result Caching: LRU cache for repeated circuit measurements
- Connection Pooling: Multi-connection quantum backend clients
- Distributed Quantum Training: Multi-backend orchestration (v4.0)
- Training Data Management: Store datasets with async writers
- Model Checkpointing: Zarr-based async checkpoint saves
- Metrics Tracking: Parquet-based async metrics logging
- Framework Integration: PyTorch, TensorFlow, and JAX support
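The LRU result cache above can be sketched in a few lines. The class and key layout below are illustrative assumptions (keying on gate sequence, parameters, and shot count), not the actual ResultCache API:

```python
from collections import OrderedDict

class CircuitResultCache:
    """Minimal LRU cache for circuit measurement results (sketch)."""

    def __init__(self, max_size: int = 1000):
        self.max_size = max_size
        self._cache: OrderedDict = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self._cache[key]
        self.misses += 1
        return None

    def put(self, key, result):
        self._cache[key] = result
        self._cache.move_to_end(key)
        if len(self._cache) > self.max_size:
            self._cache.popitem(last=False)  # evict least recently used

cache = CircuitResultCache(max_size=2)
cache.put(("ry", 0.3, 1024), {"0": 700, "1": 324})
print(cache.get(("ry", 0.3, 1024)) is not None)  # True  (hit)
print(cache.get(("ry", 0.7, 1024)))              # None  (miss)
```

Repeated circuits (common in parameter-shift gradients, where the same template is run with shifted parameters) then return instantly from memory instead of a backend round-trip.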
Overview
Q-Store provides a hardware-agnostic hybrid classical-quantum database architecture that:
- Stores data in quantum superposition for context-aware retrieval
- Uses entanglement for automatic relationship synchronization
- Applies decoherence as adaptive time-to-live (TTL)
- Leverages quantum tunneling for global pattern discovery
- Trains quantum ML models with variational quantum circuits (8-12x faster in v3.4)
- Supports multiple quantum backends (Cirq/IonQ, Qiskit/IonQ, simulators)
- Integrates with classical ML frameworks (PyTorch, TensorFlow, JAX)
- Scales with Pinecone for classical vector storage
- Optimized IonQ execution with batch API, native gates, and smart caching
Key Features
Quantum Superposition
Store vectors in superposition of multiple contexts simultaneously. Measurement collapses to the most relevant context for your query.
await db.insert(
    id='doc_1',
    vector=embedding,
    contexts=[
        ('technical_query', 0.6),
        ('general_query', 0.3),
        ('historical_query', 0.1)
    ],
    coherence_time=5000.0  # ms
)
Quantum Entanglement
Create entangled groups where updates propagate automatically via quantum correlation. No cache invalidation needed.
db.create_entangled_group(
    group_id='related_docs',
    entity_ids=['doc_1', 'doc_2', 'doc_3'],
    correlation_strength=0.85
)
Adaptive Decoherence
Physics-based relevance decay. Old data naturally fades without explicit TTL management.
await db.insert(
    id='hot_data',
    vector=embedding,
    coherence_time=1000  # ms - stays relevant
)
Quantum Tunneling
Escape local optima to find globally optimal patterns that classical methods miss.
results = await db.query(
    vector=query_embedding,
    enable_tunneling=True,  # Find distant patterns
    mode=QueryMode.EXPLORATORY,
    top_k=10
)
Quantum ML Training (v3.2+, 8x Faster in v3.4)
Train quantum neural networks with hardware-agnostic quantum circuits.
- QuantumLayer - Variational quantum circuit layer for neural networks
- QuantumTrainer - Training orchestration with quantum gradient computation
- QuantumGradientComputer - Parameter shift rule for gradient calculation
- QuantumDataEncoder - Classical-to-quantum data encoding (amplitude/angle)
- IonQBatchClient (v3.4) - Parallel circuit submission with connection pooling
- SmartCircuitCache (v3.4) - Template-based circuit caching
- IonQNativeGateCompiler (v3.4) - Native gate optimization
# Define quantum neural network layer
quantum_layer = QuantumLayer(
    n_qubits=10,
    depth=4,
    backend=backend,
    entanglement='linear'
)

# Train quantum model with v3.4 optimizations
trainer = QuantumTrainer(config, backend_manager)
await trainer.train(
    model=quantum_model,
    train_loader=data_loader,
    epochs=100  # Now 8x faster with v3.4!
)
Installation
Quick Start (5 minutes)
New users: See docs/QUICKSTART.md for a step-by-step beginner guide.
Prerequisites
- Python 3.11+
- Conda package manager (recommended) or pip
- Pinecone API key
- IonQ API key (optional for quantum hardware)
- Choose quantum SDK: Cirq or Qiskit (for hardware-agnostic support)
Setup
- Clone the repository:
git clone https://github.com/yucelz/q-store.git
cd q-store
- Create conda environment:
conda env create -f environment.yml
conda activate q-store
- Install the package in development mode:
# Install with all dependencies
pip install -e ".[dev,backends]"
# Or use the Makefile
make install-dev
- Install required libraries:
# Install the new Pinecone SDK (not pinecone-client)
pip install pinecone
# Verify installation
python -c "import pinecone; print('Pinecone installed successfully')"
- Configure your API keys in a .env file:
Create a .env file in the project root:
# Required: Pinecone for vector storage
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_ENVIRONMENT=us-east-1
# Optional: IonQ for quantum features
IONQ_API_KEY=your_ionq_api_key
# Quantum SDK selection (cirq or qiskit)
QUANTUM_SDK=cirq # or 'qiskit' for hardware-agnostic support
QUANTUM_TARGET=simulator # or 'qpu.aria', 'qpu.forte'
Get your API keys:
- Pinecone: Sign up at pinecone.io and get your API key from the dashboard
- IonQ (Optional): Get your API key from cloud.ionq.com/settings/keys
- First Test - Run the Quickstart Example:
# Verify installation
python verify_installation.py
# Run the full quickstart demo
python examples/quantum_db_quickstart.py
Expected output from verification:
============================================================
Q-Store Installation Verification
============================================================
Checking imports...
✓ NumPy
✓ SciPy
✓ Cirq
✓ Pinecone
✓ Q-Store
Checking .env file...
✓ .env file exists
✓ PINECONE_API_KEY set
✓ PINECONE_ENVIRONMENT set
Testing basic functionality...
✓ DatabaseConfig created
✓ QuantumDatabase instantiated
============================================================
✓ All checks passed!
============================================================
Expected output from quickstart:
============================================================
QUANTUM DATABASE - INTERACTIVE DEMO
============================================================
=== Quantum Database Setup ===
Configuration:
- Pinecone Index: quantum-demo
- Pinecone Environment: us-east-1
- Dimension: 768
- Quantum Enabled: True
- Superposition: True
- IonQ Target: simulator
Initializing database...
INFO:q_store.quantum_database:Pinecone initialized with environment: us-east-1
INFO:q_store.quantum_database:Creating Pinecone index: quantum-demo
INFO:q_store.quantum_database:Pinecone index 'quantum-demo' created successfully
✓ Database initialized successfully
=== Example 1: Basic Operations ===
...
Note: The first run will create Pinecone indexes (quantum-demo and production-index). Subsequent runs will use existing indexes.
Quick Start
Using .env File (Recommended)
- Create a .env file in your project root:
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_ENVIRONMENT=us-east-1
IONQ_API_KEY=your_ionq_api_key # Optional
- Run the quickstart example:
python examples/quantum_db_quickstart.py
The example automatically loads credentials from .env using python-dotenv.
Basic Usage with Async/Await
import asyncio
import os

import numpy as np
from dotenv import load_dotenv
from q_store import QuantumDatabase, DatabaseConfig, QueryMode

# Load environment variables
load_dotenv()

async def main():
    # Configure database (reads from .env automatically)
    config = DatabaseConfig(
        # Pinecone settings
        pinecone_api_key=os.getenv('PINECONE_API_KEY'),
        pinecone_environment=os.getenv('PINECONE_ENVIRONMENT', 'us-east-1'),
        pinecone_index_name='my-index',
        pinecone_dimension=768,
        # Quantum backend (hardware-agnostic)
        quantum_sdk=os.getenv('QUANTUM_SDK', 'cirq'),  # 'cirq' or 'qiskit'
        ionq_api_key=os.getenv('IONQ_API_KEY'),
        ionq_target=os.getenv('QUANTUM_TARGET', 'simulator'),
        enable_quantum=True,
        enable_superposition=True
    )

    # Initialize database with context manager
    db = QuantumDatabase(config)
    async with db.connect():
        # Insert vector with quantum superposition
        embedding = np.random.randn(768)
        await db.insert(
            id='item_1',
            vector=embedding,
            contexts=[('context_a', 0.7), ('context_b', 0.3)],
            metadata={'category': 'example'}
        )

        # Query with context-aware collapse
        results = await db.query(
            vector=embedding,
            context='context_a',
            mode=QueryMode.BALANCED,
            top_k=5
        )

        # Display results
        for result in results:
            print(f"ID: {result.id}, Score: {result.score:.4f}")
            print(f"Quantum Enhanced: {result.quantum_enhanced}")

# Run
asyncio.run(main())
Quantum ML Training
from q_store import QuantumTrainer, QuantumModel, TrainingConfig

# Configure training
training_config = TrainingConfig(
    # Database config
    **config,
    # ML training settings
    learning_rate=0.01,
    batch_size=32,
    epochs=100,
    # Quantum model architecture
    n_qubits=10,
    circuit_depth=4,
    entanglement='linear'
)

async def train_quantum_model():
    db = QuantumDatabase(training_config)
    async with db.connect():
        # Store training data in quantum database
        await db.store_training_data(
            dataset_id='mnist_train',
            data=X_train,
            labels=y_train
        )

        # Create quantum model
        model = QuantumModel(
            input_dim=784,
            n_qubits=10,
            output_dim=10,
            backend=db.backend_manager.get_backend()
        )

        # Create trainer
        trainer = QuantumTrainer(training_config, db.backend_manager)

        # Create data loader
        train_loader = db.create_ml_data_loader(
            dataset_id='mnist_train',
            batch_size=32
        )

        # Train quantum neural network
        await trainer.train(
            model=model,
            train_loader=train_loader,
            epochs=100
        )

asyncio.run(train_quantum_model())
Batch Operations
async with db.connect():
    # Prepare batch
    batch = [
        {
            'id': f'doc_{i}',
            'vector': np.random.rand(768),
            'contexts': [('general', 1.0)],
            'metadata': {'index': i}
        }
        for i in range(100)
    ]

    # Batch insert (efficient)
    await db.insert_batch(batch)
Monitoring and Metrics
# Get performance metrics
metrics = db.get_metrics()
print(f"Total Queries: {metrics.total_queries}")
print(f"Cache Hit Rate: {metrics.cache_hits / max(1, metrics.total_queries):.2%}")
print(f"Avg Latency: {metrics.avg_latency_ms:.2f}ms")
print(f"Active Quantum States: {metrics.active_quantum_states}")
# Get comprehensive stats
stats = db.get_stats()
print(stats)
Troubleshooting
Common Issues
1. ModuleNotFoundError: No module named 'q_store'
# Solution: Install the package in development mode
pip install -e .
2. ImportError: Pinecone package is required
# Solution: Install the new Pinecone SDK (not pinecone-client)
pip uninstall -y pinecone-client
pip install pinecone
3. PINECONE_API_KEY not found
# Solution: Create a .env file in the project root
cat > .env << EOF
PINECONE_API_KEY=your_actual_api_key
PINECONE_ENVIRONMENT=us-east-1
IONQ_API_KEY=your_ionq_key
EOF
4. Pinecone index creation fails
- Ensure your Pinecone account has available index quota
- Check that the environment (e.g., us-east-1) is valid
- Verify your API key has the necessary permissions
5. IonQ quantum features not working
- IonQ API key is optional - the system works without it
- Quantum features will be disabled if IONQ_API_KEY is not set
- Verify your IonQ API key at cloud.ionq.com
6. Package version conflicts
# Solution: Recreate the conda environment
conda deactivate
conda env remove -n q-store
conda env create -f environment.yml
conda activate q-store
pip install -e .
pip install pinecone
Getting Help
- Check the examples directory for working code
- Review the design document for architecture details
- Submit issues on GitHub
- Contact: yucelz@gmail.com
Common Commands
# Installation and setup
conda activate q-store # Activate environment
python verify_installation.py # Verify installation
pip install -e . # Install package in dev mode
# Running examples
python examples/quantum_db_quickstart.py # Run quickstart demo
python examples/basic_example.py # Run basic example
python examples/financial_example.py # Run financial example
python examples/ml_training_example.py # Run ML training example
python examples/tinyllama_react_training.py # Run TinyLlama fine-tuning
# Testing
pytest tests/ -v # Run all tests
pytest tests/ -v -k "test_state" # Run specific tests
# Maintenance
conda env update -f environment.yml # Update dependencies
conda deactivate # Deactivate environment
Architecture (v4.1.0)
+--------------------------------------------------+
|                Application Layer                 |
|       • PyTorch   • TensorFlow   • JAX           |
+------------------------+-------------------------+
                         |
+------------------------v-------------------------+
|         Quantum Training Engine (v4.1)           |
|  • QuantumTrainer        • QuantumLayer (Fixed)  |
|  • QuantumFeatureExtractor (Async)               |
|  • QuantumGradientComputer  • QuantumOptimizer   |
|  • QuantumDataEncoder       • Natural Gradients  |
+------------------------+-------------------------+
                         |
+------------------------v-------------------------+
|         Async Execution Layer (v4.1 NEW)         |
|  • AsyncQuantumExecutor (Non-blocking)           |
|  • ResultCache (LRU)    • BackendClient (Pool)   |
|  • Background Workers   • IonQAdapter            |
+------------------------+-------------------------+
                         |
+------------------------v-------------------------+
|          Async Storage Layer (v4.1 NEW)          |
|  • AsyncBuffer   • AsyncMetricsWriter (Parquet)  |
|  • CheckpointManager (Zarr)   • AsyncLogger      |
+------------------------+-------------------------+
                         |
              +----------+----------+
              |                     |
+-------------v---+   +-------------v-------------+
|   Classical     |   |  Quantum Backends (v4.1)  |
|   Backend       |-->|  • IonQ Hardware          |
|                 |   |  • Cirq Simulators        |
|  • Pinecone     |   |  • Qiskit Backends        |
|  • Vector DB    |   |  • Mock Backends          |
|  • Zarr/Parquet |   |  • Multi-Backend Orchestr.|
|  • Async I/O    |   |                           |
|                 |   |  Verification (v4.0):     |
|                 |   |  • Equivalence            |
|                 |   |  • Properties             |
|                 |   |                           |
|                 |   |  Profiling (v4.0):        |
|                 |   |  • CircuitProfiler        |
|                 |   |  • PerformanceAnalyzer    |
|                 |   |                           |
|                 |   |  Visualization (v4.0):    |
|                 |   |  • CircuitVisualizer      |
|                 |   |  • StateVisualizer        |
+-----------------+   +---------------------------+
Configuration
DatabaseConfig Options
from q_store import DatabaseConfig
config = DatabaseConfig(
    # Pinecone configuration
    pinecone_api_key='your_key',
    pinecone_environment='us-east-1',
    pinecone_index_name='my-index',
    pinecone_dimension=768,
    pinecone_metric='cosine',

    # Quantum backend (hardware-agnostic)
    quantum_sdk='cirq',  # or 'qiskit'
    ionq_api_key='your_ionq_key',
    ionq_target='simulator',  # or 'qpu.aria', 'qpu.forte'

    # Feature flags
    enable_quantum=True,
    enable_superposition=True,
    enable_entanglement=True,
    enable_tunneling=True,

    # Performance tuning
    max_quantum_states=1000,
    classical_candidate_pool=1000,
    result_cache_ttl=300,  # seconds

    # Connection pooling
    max_connections=50,
    connection_timeout=30,

    # Coherence settings
    default_coherence_time=1000.0,  # ms
    decoherence_check_interval=60,  # seconds

    # Monitoring
    enable_metrics=True,
    enable_tracing=True
)
TrainingConfig Options (v3.4)
from q_store import TrainingConfig
training_config = TrainingConfig(
    # Inherits all DatabaseConfig options
    **config,

    # ML Training settings
    learning_rate=0.01,
    batch_size=32,
    epochs=100,
    optimizer='adam',  # 'adam', 'sgd', 'rmsprop'

    # Quantum model architecture
    n_qubits=10,
    circuit_depth=4,
    entanglement='linear',  # 'linear', 'circular', 'full'

    # Data encoding
    encoding_method='amplitude',  # or 'angle'

    # v3.4 Performance Optimizations (NEW)
    use_batch_api=True,          # Enable IonQ batch API (8x faster)
    use_native_gates=True,       # Enable native gate compilation (30% faster)
    enable_smart_caching=True,   # Enable circuit caching (10x faster)
    connection_pool_size=5,      # HTTP connection pool size
    adaptive_batch_sizing=True,  # Automatic batch size optimization

    # Regularization
    quantum_regularization=True,
    entanglement_penalty=0.01,

    # Checkpointing
    checkpoint_interval=10,  # epochs
    save_best_only=True,

    # Advanced features
    enable_data_augmentation=True,
    enable_adversarial_training=False,
    enable_transfer_learning=False
)
API Reference v3.4
QuantumDatabase
async def initialize()
Initialize database and start background tasks.
async def close()
Close database and cleanup resources.
async def connect()
Context manager for database lifecycle.
async def insert(id, vector, contexts=None, coherence_time=None, metadata=None)
Insert vector with optional quantum superposition.
async def insert_batch(vectors: List[Dict])
Batch insert for efficiency.
async def query(vector, context=None, mode=QueryMode.BALANCED, enable_tunneling=None, top_k=10)
Query database with quantum enhancements.
async def store_training_data(dataset_id, data, labels, metadata=None)
Store training dataset in quantum database.
async def load_training_batch(dataset_id, batch_size, shuffle=True)
Load training batch from quantum database.
create_ml_data_loader(dataset_id, batch_size=32, shuffle=True)
Create async data loader for training.
get_metrics() -> Metrics
Get performance metrics.
get_stats() -> Dict
Get comprehensive database statistics.
Quantum ML Training Classes (v3.4)
QuantumLayer
- __init__(n_qubits, depth, backend, entanglement='linear')
- async forward(x: np.ndarray) -> np.ndarray - Forward pass through quantum circuit
QuantumTrainer
- __init__(config, backend_manager)
- async train_epoch(model, data_loader, epoch) - Train for one epoch (8x faster in v3.4)
- async train(model, train_loader, val_loader=None, epochs=100) - Full training loop
- async validate(model, val_loader) - Validation pass
QuantumGradientComputer
- async compute_gradients(circuit, loss_function, current_params) - Compute quantum gradients using parameter shift rule
QuantumDataEncoder
- amplitude_encode(data: np.ndarray) -> QuantumCircuit - Amplitude encoding
- angle_encode(data: np.ndarray, n_qubits: int) -> QuantumCircuit - Angle encoding
QuantumOptimizer
- __init__(learning_rate, method='adam')
- step(parameters, gradients) - Update parameters
IonQBatchClient (NEW v3.4)
- __init__(api_key, connection_pool_size=5)
- async submit_batch(circuits: List[Circuit]) - Submit circuits in parallel
- async get_results(job_ids: List[str]) - Retrieve results efficiently
SmartCircuitCache (NEW v3.4)
- __init__(max_size=1000)
- get_or_build(template_key, parameters) - Get cached or build circuit
- get_statistics() - Cache performance metrics
IonQNativeGateCompiler (NEW v3.4)
- __init__()
- compile_to_native(circuit: Circuit) - Compile to GPi, GPi2, MS gates
- estimate_fidelity(circuit: Circuit) - Estimate gate fidelity
QuantumHPOSearch
- __init__(config, search_space, backend_manager)
- async search(model_class, dataset_id, metric, n_trials, use_quantum_annealing=True) - Hyperparameter search
CheckpointManager
- __init__(config)
- async save(model, epoch, metrics) - Save model checkpoint
- async load(checkpoint_name) - Load model checkpoint
MetricsTracker
- __init__(config)
- log_metrics(epoch, metrics) - Log training metrics
- get_history() - Get training history
QueryMode Enum
- PRECISE: High precision, narrow results
- BALANCED: Balanced precision and coverage
- EXPLORATORY: Broad exploration, diverse results
StateStatus Enum
- CREATED: Newly created state
- ACTIVE: Active coherent state
- MEASURED: State has been measured
- DECOHERED: State has lost coherence
- ARCHIVED: Archived state
Quantum Backend
Q-Store integrates with multiple quantum backends for hardware-agnostic ML training.
Supported SDKs:
- cirq - Google Cirq with IonQ integration
- qiskit - IBM Qiskit with IonQ integration
- Mock simulators for development and testing
Supported Targets:
- simulator - Free simulator (unlimited use)
- qpu.aria - 25 qubits, #AQ 25 (production)
- qpu.forte - 36 qubits, #AQ 36 (advanced)
- qpu.forte.1 - 36 qubits, enterprise
IonQ Advantages:
- All-to-all qubit connectivity (no SWAP gates)
- High-fidelity native gates (>99.5% single-qubit, >97% two-qubit)
- Native gate set: RX, RY, RZ, XX (Mølmer-Sørensen)
- Optimal for variational quantum circuits in ML training
Backend Selection: The BackendManager automatically selects the best backend based on:
- Circuit requirements (qubit count, depth)
- Cost constraints
- Latency requirements
- Backend availability
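The selection criteria above can be sketched as a simple filter-and-score function. This is a hedged sketch of the idea only; the real BackendManager logic is internal to Q-Store, and the dictionary fields here are illustrative assumptions:

```python
def select_backend(backends, n_qubits, max_cost, max_latency_ms):
    # Filter to backends that satisfy the circuit's hard constraints ...
    def eligible(b):
        return (b["qubits"] >= n_qubits
                and b["cost_per_shot"] <= max_cost
                and b["latency_ms"] <= max_latency_ms
                and b["available"])

    candidates = [b for b in backends if eligible(b)]
    if not candidates:
        raise RuntimeError("no backend satisfies the constraints")
    # ... then prefer the cheapest, breaking ties on latency.
    return min(candidates, key=lambda b: (b["cost_per_shot"], b["latency_ms"]))

backends = [
    {"name": "simulator", "qubits": 29, "cost_per_shot": 0.0,
     "latency_ms": 50, "available": True},
    {"name": "qpu.aria", "qubits": 25, "cost_per_shot": 0.03,
     "latency_ms": 900, "available": True},
]
best = select_backend(backends, n_qubits=10, max_cost=0.05, max_latency_ms=1000)
print(best["name"])  # simulator
```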
Performance
| Operation | Classical | Quantum (v3.3.1) | Quantum (v3.4) | v3.4 Speedup |
|---|---|---|---|---|
| Vector Search | O(N) | O(√N) | O(√N) | Quadratic |
| Pattern Discovery | O(N·M) | O(√(N·M)) | O(√(N·M)) | Quadratic |
| Correlation Updates | O(K²) | O(1) | O(1) | K² (entanglement) |
| Storage Compression | N vectors | log₂(N) qubits | log₂(N) qubits | Exponential |
| Gradient Computation | O(N) backprop | O(N) param shift | O(N) param shift | Comparable* |
| Circuit Execution | Sequential | Sequential | Parallel Batch | 8-12x faster |
| HPO Search | O(M·N) grid | O(√M) tunneling | O(√M) tunneling | Quadratic |
*Quantum gradients enable exploration of non-convex loss landscapes
**v3.4 achieves 8-12x speedup through batch API, native gates, and smart caching
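The quadratic entries in the table reflect Grover-style amplitude amplification: finding a marked item among N candidates takes roughly (π/4)√N oracle calls instead of O(N) classical probes. Sketching the asymptotic scaling (a theoretical bound, not a measured Q-Store benchmark):

```latex
k_{\text{Grover}} \approx \frac{\pi}{4}\sqrt{N}
\qquad \text{vs.} \qquad
k_{\text{classical}} = O(N)
```

On NISQ hardware, circuit overhead and shot counts can erase this advantage for small N, consistent with the reality-check section above.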
Use Cases
Quantum ML Training (v3.2+, 8x Faster in v3.4)
- Quantum neural network training
- Hybrid classical-quantum models
- Transfer learning with quantum layers
- Hyperparameter optimization
- Adversarial training
- Few-shot learning
Financial Services
- Portfolio correlation management
- Crisis pattern detection
- Time-series prediction
- Risk analysis
ML Model Training
- Context-aware training data selection
- Hyperparameter optimization
- Multi-task learning
- Active learning
Recommendation Systems
- User preference modeling
- Item similarity
- Cold start problem
- Session-based recommendations
Scientific Computing
- Molecular similarity search
- Protein structure comparison
- Drug discovery
- Materials science
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
See LICENSE file for details.
References
Development Commands
make install-dev # Install with development dependencies
make test # Run tests
make format # Auto-format code
make lint # Run linters
make verify # Run all checks
Support
For support, submit issues in this repository or contact yucelz@gmail.com.
Citation
If you use Q-Store in your research, please cite:
@software{qstore2025,
  title={Q-Store: Quantum-Native Database Architecture v3.4},
  author={Yucel Zengin},
  year={2025},
  url={https://github.com/yucelz/q-store}
}
Changelog
v4.1.0 (2024-12-28)
- NEW: AsyncQuantumExecutor - Non-blocking circuit execution (10-20x throughput)
- NEW: Async Storage System - Zero-blocking Zarr/Parquet with background writers
- NEW: ResultCache - LRU cache for instant repeated circuit results
- NEW: Connection Pooling - Multi-connection backend clients
- NEW: IonQAdapter - Seamless IonQ hardware backend integration
- FIXED: PyTorch QuantumLayer - n_parameters attribute and async execution
- ENHANCED: QuantumFeatureExtractor - Async execution and multi-basis measurements
- FOUNDATION: Built on v4.0.0 verification/profiling/visualization (144 tests)
- ARCHITECTURE: 145 Python files across 29 specialized modules
- PERFORMANCE: 10-20x circuit throughput improvement over v4.0
- STORAGE: Zero-blocking async I/O for all storage operations
- PRODUCTION: Complete async/await API with comprehensive error handling
v4.0.0 (2024-12-19)
- NEW: Verification Module - Circuit equivalence, property verification, formal analysis
- NEW: Profiling Module - Performance profiling, optimization benchmarks
- NEW: Visualization Module - Circuit diagrams, state visualization, Bloch sphere
- NEW: 144 comprehensive tests for verification/profiling/visualization
- NEW: Integration tests for end-to-end workflows
- NEW: Benchmark suite for performance tracking
- IMPROVED: Complete examples directory with basic/advanced/QML/chemistry/error-correction
- PERFORMANCE: Benchmark baselines established for regression testing
v4.0.0 (2024-12-XX)
- NEW: Multi-backend orchestration for distributed quantum computing
- NEW: Adaptive circuit optimization with dynamic simplification
- NEW: Adaptive shot allocation for smart resource management
- NEW: Natural gradient descent for improved convergence
- PERFORMANCE: 2-3x throughput improvement via multi-backend distribution
- PERFORMANCE: 30-40% faster execution with adaptive optimization
v3.4.0 (2024-12-16)
- NEW: IonQ Batch API integration for parallel circuit submission
- NEW: Smart circuit caching with template-based caching
- NEW: IonQ native gate compilation (GPi, GPi2, MS gates)
- NEW: Connection pooling for persistent HTTP connections
- PERFORMANCE: 8-12x faster training (29 min → 3.3 min)
- PERFORMANCE: 5-8 circuits/second (up from 0.5-0.6)
v3.2.0 (2024-12-15)
- New: Hardware-agnostic quantum ML training infrastructure
- New: QuantumLayer - Variational quantum circuit layers
- New: QuantumTrainer - Training orchestration with quantum gradients
- New: QuantumGradientComputer - Parameter shift rule implementation
- New: QuantumDataEncoder - Amplitude and angle encoding
- New: QuantumOptimizer - Quantum-aware optimization algorithms
- New: QuantumHPOSearch - Quantum-enhanced hyperparameter optimization
- New: CheckpointManager - Model persistence with quantum states
- New: Support for multiple quantum SDKs (Cirq, Qiskit)
- New: Hybrid classical-quantum model support
- New: Quantum transfer learning capabilities
- New: Quantum data augmentation
- New: Quantum regularization techniques
- New: Training data management in quantum database
- New: BackendManager - Intelligent backend selection
- Improved: Database API extended for ML training workflows
- Improved: StateManager for model parameter storage
v2.0.0 (2025-12-13)
- New: Modern Python project structure with src/ layout
- New: pyproject.toml-based configuration (PEP 621)
- New: Modular package organization (core/, backends/, utils/)
- New: Development automation with Makefile
- New: Comprehensive documentation in docs/
- Breaking Changes: Full async/await API
- New: Production-ready architecture with connection pooling
- New: Pinecone integration for classical vector storage
- New: Comprehensive monitoring and metrics
- New: Enhanced configuration system (DatabaseConfig)
- New: Type-safe API with full type hints
- New: Lifecycle management with context managers
- New: Result caching for improved performance
- New: Comprehensive test suite
- Improved: State management with background decoherence loops
- Improved: Error handling and retry logic
- Improved: Documentation and examples
v1.0.0 (2025-01-08)
- Initial release
- Basic quantum database features
- IonQ integration
- Simple examples
Note: Q-Store v3.4 delivers production-ready quantum ML training with 8-12x performance improvements over v3.3.1. The system features hardware-agnostic support, seamless integration with classical ML frameworks (PyTorch, TensorFlow, JAX), and optimized IonQ execution through batch API, native gates, and smart caching. For mission-critical applications, additional validation and optimization are recommended.
Developer Guide
Setting Up Development Environment
# Clone repository
git clone https://github.com/yucelz/q-store.git
cd q-store
# Install in development mode with all dependencies
pip install -e ".[dev,backends,all]"
# Install pre-commit hooks
pip install pre-commit
pre-commit install
Code Quality Tools
Q-Store uses automated code quality tools configured in pyproject.toml and .pre-commit-config.yaml:
Formatting:
# Format code with black (line length: 100)
black src/q_store
# Sort imports with isort
isort src/q_store --profile black
Linting:
# Run ruff (fast Python linter)
ruff check src/q_store
# Run flake8
flake8 src/q_store
# Run mypy for type checking
mypy src/q_store
Pre-commit Hooks: All code quality checks run automatically on commit:
- Trailing whitespace removal
- End-of-file fixing
- YAML/JSON/TOML validation
- Black formatting
- Import sorting (isort)
- Ruff linting
- Type checking (mypy)
Run All Checks Manually:
pre-commit run --all-files
Running Tests
# Run all tests
pytest
# Run with coverage
pytest --cov=src/q_store --cov-report=html
# Run specific test file
pytest tests/test_quantum_database.py
# Run with specific markers
pytest -m "not slow"
pytest -m integration
Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature/my-feature
- Make your changes
- Run code quality tools: pre-commit run --all-files
- Run tests: pytest
- Commit changes (pre-commit hooks will run automatically)
- Push to your fork: git push origin feature/my-feature
- Create a Pull Request
Project details
Download files
Source Distributions
Built Distributions
File details
Details for the file q_store-4.1.1-cp313-cp313-win_amd64.whl.
File metadata
- Download URL: q_store-4.1.1-cp313-cp313-win_amd64.whl
- Upload date:
- Size: 22.3 MB
- Tags: CPython 3.13, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `0a82d16a59d05c161a010e1be65fe2db182ec7b64102db396e0aa7bc5e20221f` |
| MD5 | `dedd83d546642fc0f0a611cdde8c2975` |
| BLAKE2b-256 | `8cd9ccb9763e50c32ee195ea2069d0b1fbc121d4fd22941391e29dae8a226a46` |
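To verify a downloaded wheel against the digests above, you can hash it locally with the standard library. A minimal sketch (`file_digest` is a helper written for illustration; the wheel path is a placeholder for wherever your download landed):

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Return the hex digest of a file, reading it in chunks to bound memory use."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the SHA256 value published above, e.g.:
# file_digest("q_store-4.1.1-cp313-cp313-win_amd64.whl")
```

Chunked reading matters here because the manylinux wheels are over 60 MB; hashing them in one `read()` would hold the whole file in memory.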
File details
Details for the file q_store-4.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: q_store-4.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 67.2 MB
- Tags: CPython 3.13, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `10f4fb8ead051fc459662aea535cb3ae0bcac30bdd88f91f01272549e345441f` |
| MD5 | `a4b4ebf3c474480c9fc186ed4bb008ce` |
| BLAKE2b-256 | `05519cf88ae716329503b8b4cb8f7bc6e9c53eb1be4b8a27a31ae55a6fb89a01` |
File details
Details for the file q_store-4.1.1-cp313-cp313-macosx_11_0_arm64.whl.
File metadata
- Download URL: q_store-4.1.1-cp313-cp313-macosx_11_0_arm64.whl
- Upload date:
- Size: 22.5 MB
- Tags: CPython 3.13, macOS 11.0+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `d62abfda3d0bae6c44710fe33467c7ed9b845a154d5903ddae6c5d79f7febd83` |
| MD5 | `9d2f7ddb918fc584dc01794839fa1c0e` |
| BLAKE2b-256 | `50a108affb5b003d6ea73201860fe56f00a0da8786c1ef387cef68dff3cd5288` |
File details
Details for the file q_store-4.1.1-cp312-cp312-win_amd64.whl.
File metadata
- Download URL: q_store-4.1.1-cp312-cp312-win_amd64.whl
- Upload date:
- Size: 22.4 MB
- Tags: CPython 3.12, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `7e5ae3170c76f6fff5975fc4dc293f6002fa2e6ec819834d117a6d891c2faf86` |
| MD5 | `0a5cdbb777022f7b7c5626c94a50bdf6` |
| BLAKE2b-256 | `74ba064389d3d30b9e0a481da7c5a62ceea9290c6c44d71e44a153a69888c843` |
File details
Details for the file q_store-4.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: q_store-4.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 67.8 MB
- Tags: CPython 3.12, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `9adca93ea4c7a0602ca4ed4a8008991c6f2fe0910e458b937d8da5c213e3aee4` |
| MD5 | `78990ac585c3be395fcaf49427f9faae` |
| BLAKE2b-256 | `cdbb8e6cf271a33283552b47587caa4eaf9335d5360d9620eb3e5c298a5ecacb` |
File details
Details for the file q_store-4.1.1-cp312-cp312-macosx_11_0_arm64.whl.
File metadata
- Download URL: q_store-4.1.1-cp312-cp312-macosx_11_0_arm64.whl
- Upload date:
- Size: 22.6 MB
- Tags: CPython 3.12, macOS 11.0+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `6127c715084fbff9ef02815e635a1aebbcc6c96ee5fb315a75e7be85a45ff98c` |
| MD5 | `5fd3be58e0f7284af6fa8eeea86c2040` |
| BLAKE2b-256 | `6d77cb4960e754a2b2b3cbcf048328fef445e8ff9bb6f8a2737012c3cc2b2dfb` |
File details
Details for the file q_store-4.1.1-cp311-cp311-win_amd64.whl.
File metadata
- Download URL: q_store-4.1.1-cp311-cp311-win_amd64.whl
- Upload date:
- Size: 22.5 MB
- Tags: CPython 3.11, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `c1c3e7c9f8af48ed6333012f864fc313431572d42846b7a8f2ff3a512facc48d` |
| MD5 | `b7bbd76cf0b24cfd83d590df0d67a0d7` |
| BLAKE2b-256 | `b906fb33d16a9673bf03dddf1fed2d4870ec5790ae1d7e34e688fcf41c3d7e07` |
File details
Details for the file q_store-4.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: q_store-4.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 67.4 MB
- Tags: CPython 3.11, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `3e264113bc0921ac9404ca888d8940b2af3c20fa03ae4b808e150847919f11bd` |
| MD5 | `15c0eedd67df07b891e13ee535566a58` |
| BLAKE2b-256 | `fae8a3c3d2a9945829049279855c5b92d9bf0323fe629bd6f29a5ec450575aca` |
File details
Details for the file q_store-4.1.1-cp311-cp311-macosx_11_0_arm64.whl.
File metadata
- Download URL: q_store-4.1.1-cp311-cp311-macosx_11_0_arm64.whl
- Upload date:
- Size: 22.7 MB
- Tags: CPython 3.11, macOS 11.0+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `97a7f189e4d926a4a30ee42d7ba814f905d26b7fb806a18be22e1d867111e574` |
| MD5 | `9cb636345bc83b8e0f19d01a1179c816` |
| BLAKE2b-256 | `762cdbb7d2fd1ec7f07b756a222ef1546c385f5edc7b8846a5da25a469df0579` |