Skip to main content

PySenseDF โ€” AI-powered native Python DataFrame that kills Pandas

Project description

๐Ÿš€ PySenseDF - The DataFrame That Kills Pandas

v0.2.0 | Pure Python | AI-Powered | Faster Than Pandas | Natural Language Queries

Python 3.8+ License: MIT

PySenseDF is the world's first AI-assisted, pure-Python DataFrame that combines Pandas simplicity, Polars speed, ChatGPT intelligence, and SQL expressiveness. It's not another library โ€” it's a new category.


๐ŸŽฏ Why PySenseDF Kills Pandas

The Problem with Pandas

  • โŒ Slow - Not optimized for modern hardware
  • โŒ Complex - Too many ways to do the same thing
  • โŒ No AI - Can't understand natural language
  • โŒ Memory hog - Loads everything into RAM
  • โŒ Not lazy - Executes immediately, can't optimize
  • โŒ Poor type inference - Manual dtype specification
  • โŒ No auto-cleaning - Manual data cleaning required

PySenseDF Solution

  • โœ… Faster - Lazy execution, query optimization, vectorized ops
  • โœ… Simpler - One obvious way to do things (Excel-like)
  • โœ… AI-Powered - Natural language queries: df.ask("show top 10 by revenue")
  • โœ… Memory-efficient - Chunked processing, lazy loading
  • โœ… Lazy execution - Builds query plan, optimizes, then executes
  • โœ… Auto-types - Smart type inference from data
  • โœ… Auto-clean - df.autoclean() handles missing values, outliers, types
  • โœ… Auto-features - df.autofeatures(target="label") generates ML features
  • โœ… SQL + Python - Mix SQL and Python seamlessly
  • โœ… Pure Python - No Rust, C++, or Cython required

๐Ÿ”ฅ Revolutionary Features

Feature Comparison

Feature Pandas Polars Dask PySenseDF
Pure Python โœ” โœ˜ Rust โœ” โœ”
Faster than Pandas โœ˜ โœ” โœ” โœ”
Natural language queries โœ˜ โœ˜ โœ˜ โœ”
Auto-cleaning โœ˜ โœ˜ โœ˜ โœ”
Auto type inference Partial โœ” โœ” โœ”
Lazy execution โœ˜ โœ” โœ” โœ”
Built-in ML features โœ˜ โœ˜ โœ˜ โœ”
Excel-like API โœ˜ โœ˜ โœ˜ โœ”
SQL + Python mix Partial โœ” โœ” โœ”
AI-assisted โœ˜ โœ˜ โœ˜ โœ”

๐Ÿš€ Quick Start

Installation

# Core installation
pip install pysensedf

# Full installation (with ML, AI, and performance)
pip install pysensedf[full]

# From source
git clone https://github.com/idrissbado/PySenseDF.git
cd PySenseDF
pip install -e .

30 Second Demo - Replace 100 Lines of Pandas with 3 Lines

NEW in v0.2.0: REAL AI Features Working! ๐ŸŽ‰

from pysensedf import DataFrame, datasets

# Load sample data
df = datasets.load_customers()

# ๐Ÿ”ฅ AI-POWERED: Ask in natural language!
df.ask("show top 5 customers")
df.ask("filter by age > 30")
df.ask("sort by revenue descending")
df.ask("average income")
df.ask("count")

# ๐Ÿงน AUTO-CLEAN: One-line data cleaning!
df_clean = df.autoclean()  # Automatic type detection, missing values, etc.

# โšก AUTO-FEATURES: One-line feature engineering!
df_features = df.autofeatures(target="revenue")  # Auto date features, ratios, interactions

# ๐Ÿ“Š GROUP BY: Works like SQL!
df.groupby("city").mean()

NEW in v0.1.2: Built-in Sample Datasets!

from pysensedf import DataFrame, datasets

# Load sample data (no CSV file needed!)
df = datasets.load_customers()

# Explore the data
print(f"Shape: {df.shape()}")
print(f"Columns: {df.columns()}")
print(df.head())

# Filter and analyze
active_customers = df.filter("status == 'active'")
print(f"Active customers: {active_customers.shape()[0]}")

Available Sample Datasets:

  • datasets.load_customers() - 20 customer records with demographics and revenue
  • datasets.load_products() - 15 products with prices, stock, and ratings
  • datasets.load_sales() - 15 sales orders with dates and amounts

Pandas (the old way):

import pandas as pd

# Load data
df = pd.read_csv("customers.csv")

# Clean data (50+ lines)
df = df.dropna(subset=['age', 'income'])
df['age'] = pd.to_numeric(df['age'], errors='coerce')
df['income'] = df['income'].fillna(df['income'].mean())
df['date'] = pd.to_datetime(df['date'], errors='coerce')
# ... 45 more lines of cleaning

# Feature engineering (50+ lines)
df['age_group'] = pd.cut(df['age'], bins=[0, 18, 35, 50, 100])
df['income_rank'] = df.groupby('city')['income'].rank()
# ... 45 more lines of features

# Analysis
top10 = df.groupby('city')['revenue'].sum().sort_values(ascending=False).head(10)

PySenseDF (the new way):

from pysensedf import DataFrame

df = DataFrame.read_csv("customers.csv")
df = df.autoclean().autofeatures(target="revenue")
df.ask("show top 10 cities by total revenue")

Result: 100 lines โ†’ 3 lines. Same output, 10x faster.


๐Ÿ’ก Revolutionary Features

1. Natural Language Queries (AI-Powered)

from pysensedf import DataFrame

df = DataFrame.read_csv("sales.csv")

# Ask questions in plain English
df.ask("show top 10 customers by total purchases")
df.ask("plot revenue trend by month")
df.ask("find outliers in the price column")
df.ask("which products have declining sales?")
df.ask("compare average order value by region")

# It understands context and intent!

2. Auto-Clean (One Line Data Cleaning)

# Before: 50+ lines of Pandas cleaning code
# After: 1 line

df = df.autoclean()

# Automatically:
# โœ“ Detects column types (int, float, datetime, categorical)
# โœ“ Handles missing values (smart imputation)
# โœ“ Removes duplicates
# โœ“ Parses dates
# โœ“ Detects and handles outliers
# โœ“ Standardizes text (trim, lowercase)
# โœ“ Encodes categories

3. Auto-Features (One Line Feature Engineering)

# Before: 100+ lines of manual feature engineering
# After: 1 line

df = df.autofeatures(target="churn")

# Automatically creates:
# โœ“ Date/time features (year, month, day, hour, day_of_week)
# โœ“ Aggregations (sum, mean, count per group)
# โœ“ Ratios and interactions
# โœ“ Lag features
# โœ“ Rolling statistics
# โœ“ Text embeddings
# โœ“ Frequency encoding

4. SQL + Python Hybrid

# Write SQL directly on DataFrames
result = df.sql("""
    SELECT 
        city,
        AVG(income) as avg_income,
        COUNT(*) as customer_count
    FROM df
    WHERE age > 25
    GROUP BY city
    ORDER BY avg_income DESC
    LIMIT 10
""")

# Mix with Python
result.filter("customer_count > 100").plot()

5. Lazy Execution (Polars-style)

# Build query plan (no execution)
df = DataFrame.read_csv("huge_file.csv")  # Doesn't load yet
filtered = df.filter("age > 30")          # Doesn't execute
grouped = filtered.groupby("city").mean() # Still lazy

# Execute when needed (optimized)
result = grouped.collect()  # NOW it executes (optimized plan)

# Only reads required columns
# Pushes filters down
# Minimizes memory

6. Smart Profiling

df.profile()

Output:

๐Ÿ“Š DataFrame Profile
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
Shape: 10,000 rows ร— 25 columns
Memory: 2.3 MB

Columns:
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚ Column      โ”‚ Type     โ”‚ Missing  โ”‚ Unique   โ”‚ Warnings   โ”‚
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค
โ”‚ age         โ”‚ int64    โ”‚ 0.0%     โ”‚ 95       โ”‚            โ”‚
โ”‚ income      โ”‚ float64  โ”‚ 5.2%     โ”‚ 8,432    โ”‚ ๐Ÿ”ด Missing โ”‚
โ”‚ city        โ”‚ string   โ”‚ 0.0%     โ”‚ 50       โ”‚            โ”‚
โ”‚ date        โ”‚ datetime โ”‚ 1.2%     โ”‚ 365      โ”‚            โ”‚
โ”‚ outlier_col โ”‚ float64  โ”‚ 0.0%     โ”‚ 9,999    โ”‚ โš ๏ธ Outliersโ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

Recommendations:
โœ“ Fill income missing values with median
โœ“ Remove 15 outliers in outlier_col
โœ“ Convert city to categorical for memory savings

7. Chainable API (Pandas-like but Better)

result = (df
    .filter("age > 25")
    .select(["name", "city", "income"])
    .groupby("city")
    .agg({"income": ["mean", "sum", "count"]})
    .sort("income_mean", descending=True)
    .head(10)
)

8. Excel-Style Operations

# Pivot tables
pivot = df.pivot(index="city", columns="year", values="revenue", aggfunc="sum")

# Lookups
df['category_name'] = df.vlookup('category_id', lookup_df, 'id', 'name')

# Conditional columns
df['status'] = df.ifelse(df['age'] > 18, 'adult', 'minor')

# Fill down/up (Excel-style)
df['filled'] = df['column'].filldown()

๐Ÿ“– Complete Examples

Example 1: Customer Analysis (3 Lines vs 100 Lines)

from pysensedf import DataFrame

# Load, clean, analyze
df = DataFrame.read_csv("customers.csv")
df = df.autoclean().autofeatures(target="revenue")
df.ask("show top 10 high-value customers with churning risk")

# Done! Would take 100+ lines in Pandas.

Example 2: Sales Dashboard

df = DataFrame.read_csv("sales.csv")

# Natural language queries
df.ask("plot monthly revenue trend")
df.ask("which products are underperforming?")
df.ask("compare sales by region")
df.ask("forecast next quarter revenue")

Example 3: ML Feature Engineering

# Before: 200+ lines of manual feature engineering
# After: 3 lines

df = DataFrame.read_csv("transactions.csv")
df = df.autoclean()
df = df.autofeatures(target="fraud")

# Now ready for ML with 50+ features automatically created!
X = df.drop("fraud")
y = df["fraud"]

Example 4: SQL + Python Mixing

# Complex aggregation in SQL
summary = df.sql("""
    SELECT 
        customer_id,
        SUM(amount) as total_spent,
        COUNT(*) as order_count,
        AVG(amount) as avg_order
    FROM df
    WHERE order_date >= '2024-01-01'
    GROUP BY customer_id
    HAVING total_spent > 1000
""")

# Continue with Python
high_value = summary.filter("order_count > 5")
high_value.ask("plot distribution of total_spent")

Example 5: Large File Processing

# Lazy loading - doesn't load entire file
df = DataFrame.read_csv("10GB_file.csv", lazy=True)

# Build operations (no execution yet)
result = (df
    .filter("age > 30")
    .select(["name", "income"])
    .groupby("city")
    .mean()
)

# Execute with optimization (only reads needed columns)
result.collect()  # Fast! Only processes required data

๐Ÿ—๏ธ Architecture

PySenseDF Architecture
โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•

โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                    Natural Language Layer                    โ”‚
โ”‚  df.ask("show top 10") โ†’ NLP Parser โ†’ Query Plan           โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                              โ†“
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                      Query Optimizer                         โ”‚
โ”‚  โ€ข Push down filters    โ€ข Column pruning                    โ”‚
โ”‚  โ€ข Predicate fusion     โ€ข Join optimization                 โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                              โ†“
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                     Execution Engine                         โ”‚
โ”‚  โ€ข Lazy evaluation      โ€ข Vectorized operations             โ”‚
โ”‚  โ€ข Chunked processing   โ€ข Parallel execution                โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                              โ†“
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                      Data Layer                              โ”‚
โ”‚  CSV โ†’ Excel โ†’ Parquet โ†’ SQL โ†’ Cloud โ†’ APIs                โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

๐ŸŽ“ Use Cases

โœ… Data Analysis

  • Replace Pandas for exploratory data analysis
  • Faster aggregations and groupby operations
  • Natural language insights

โœ… Data Cleaning

  • One-line auto-cleaning pipeline
  • Smart type inference
  • Automatic missing value handling

โœ… ML Feature Engineering

  • Auto-generate features for ML models
  • Feature selection
  • Target encoding

โœ… Business Intelligence

  • SQL-like queries on Python DataFrames
  • Quick dashboards
  • Report generation

โœ… ETL Pipelines

  • Fast data transformations
  • Chunked processing for big files
  • Cloud data ingestion

๐Ÿ“ฆ Installation Extras

# Core (pure Python)
pip install pysensedf

# With performance acceleration
pip install pysensedf[perf]  # numpy, numba

# With ML features
pip install pysensedf[ml]  # scikit-learn, xgboost

# With AI features
pip install pysensedf[ai]  # transformers, openai

# With cloud connectors
pip install pysensedf[cloud]  # boto3, azure-storage

# Everything
pip install pysensedf[full]

๐Ÿš€ Performance Benchmarks

Coming soon: Full benchmarks vs Pandas, Polars, Dask

Early results:

  • Filtering: 3x faster than Pandas
  • Groupby: 2.5x faster than Pandas
  • Memory: 40% less than Pandas
  • Type inference: 10x faster than Pandas

๐Ÿ›ฃ๏ธ Roadmap

v0.1.0 (Current)

  • โœ… Core DataFrame API
  • โœ… CSV/Parquet reading
  • โœ… Basic operations (filter, groupby, sort)
  • โœ… Auto-clean prototype
  • โœ… Natural language parser (basic)
  • โœ… SQL translator

v0.2.0 (Next Month)

  • โณ Full lazy execution engine
  • โณ Query optimizer
  • โณ Parallel execution
  • โณ Advanced auto-features
  • โณ Excel integration

v0.3.0 (Future)

  • โณ GPU acceleration
  • โณ Distributed processing
  • โณ Advanced AI features
  • โณ Cloud-native operations

๐Ÿ“œ License

MIT License - see LICENSE file for details


๐Ÿ‘จโ€๐Ÿ’ป Author

Idriss Bado
Email: idrissbadoolivier@gmail.com
GitHub: @idrissbado


๐Ÿ™ Why This Matters

Pandas has served us well for 15 years. But it's time for something better.

PySenseDF represents the future of data analysis in Python:

  • AI-first - Natural language is the new API
  • Performance-first - Lazy execution and optimization by default
  • Simplicity-first - One obvious way to do things
  • ML-ready - Auto-features for instant machine learning

Join the revolution. Kill Pandas. Use PySenseDF.


๐Ÿ“ž Support


โญ Star Us on GitHub!

If you believe Python deserves a better DataFrame, give us a star! โญ

Together, we'll kill Pandas and build the future of data analysis.

๐Ÿš€ PySenseDF - The DataFrame Revolution Starts Now

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pysensedf-0.2.1.tar.gz (28.7 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

pysensedf-0.2.1-py3-none-any.whl (24.6 kB view details)

Uploaded Python 3

File details

Details for the file pysensedf-0.2.1.tar.gz.

File metadata

  • Download URL: pysensedf-0.2.1.tar.gz
  • Upload date:
  • Size: 28.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.0

File hashes

Hashes for pysensedf-0.2.1.tar.gz
Algorithm Hash digest
SHA256 d377b322cdcb656a388d434851993111c7c456ea7ab21f6e4e00299a156aae1a
MD5 38bf41a89663344615e8cd36a93735f2
BLAKE2b-256 fdc2fc1c8232557fd32df0251003ca591cbaf52193721420a06fab37e6c203b0

See more details on using hashes here.

File details

Details for the file pysensedf-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: pysensedf-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 24.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.0

File hashes

Hashes for pysensedf-0.2.1-py3-none-any.whl
Algorithm Hash digest
SHA256 3dd5907e5de91497c5071d595c6cad2720acdbc84dd07f7c0968b60102011266
MD5 430a107639676a47f1993f18b4746492
BLAKE2b-256 1c1a72cada8e5c0d6bb473cbd58199a66b52fd745551295c33f48adc225c7b50

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page