A pandas-compatible API layer built on top of Polars for high-performance data manipulation
PolarPandas
The fastest pandas-compatible API you'll ever use
PolarPandas is a blazing-fast, pandas-compatible API built on top of Polars. Write pandas code, get Polars performance. It's that simple.
Why PolarPandas?
| Feature | pandas | PolarPandas | Speedup |
|---|---|---|---|
| DataFrame Creation | 224.89 ms | 15.95 ms | 14.1x faster |
| Read CSV | 8.00 ms | 0.88 ms | 9.1x faster |
| Sorting | 28.05 ms | 3.97 ms | 7.1x faster |
| GroupBy | 7.95 ms | 2.44 ms | 3.3x faster |
| Filtering | 1.26 ms | 0.42 ms | 3.0x faster |
Overall Performance: 5.2x faster than pandas
Quick Start
import polarpandas as ppd
import polars as pl
# Create a DataFrame (pandas syntax, Polars performance)
df = ppd.DataFrame({
"name": ["Alice", "Bob", "Charlie"],
"age": [25, 30, 35],
"city": ["NYC", "LA", "Chicago"]
})
# All your favorite pandas operations work!
df["age_plus_10"] = df["age"] + 10
df.sort_values("age", inplace=True)
result = df.groupby("city").agg(pl.col("age").mean())
# String operations with .str accessor
df["name_upper"] = df["name"].str.upper()
# Datetime operations with .dt accessor
df["birth_year"] = 2024 - df["age"]
print(df.head())
Output:
shape: (3, 6)
┌─────────┬─────┬─────────┬─────────────┬────────────┬────────────┐
│ name    ┆ age ┆ city    ┆ age_plus_10 ┆ name_upper ┆ birth_year │
│ ---     ┆ --- ┆ ---     ┆ ---         ┆ ---        ┆ ---        │
│ str     ┆ i64 ┆ str     ┆ i64         ┆ str        ┆ i64        │
╞═════════╪═════╪═════════╪═════════════╪════════════╪════════════╡
│ Alice   ┆ 25  ┆ NYC     ┆ 35          ┆ ALICE      ┆ 1999       │
│ Bob     ┆ 30  ┆ LA      ┆ 40          ┆ BOB        ┆ 1994       │
│ Charlie ┆ 35  ┆ Chicago ┆ 45          ┆ CHARLIE    ┆ 1989       │
└─────────┴─────┴─────────┴─────────────┴────────────┴────────────┘
What's New in v0.9.0
Rolling Apply Compatibility
- `DataFrame.rolling().apply` now leverages Polars' native `rolling_map`, so pandas-style custom functions Just Work™
- Full support for `raw=True`/`False`, positional `args`, keyword `kwargs`, `weights`, centered windows, and `min_periods`
- More predictable results when mixing numeric and object windows, thanks to consistent Series wrapping
GroupBy Reliability
- Grouping by missing columns now mirrors pandas: validation happens at aggregation time and raises a clear `KeyError`
- Safer attribute access on `_GroupBy` objects, preventing silent failures in chained operations
Quality & Tooling
- 1,014 tests passing across the suite, including comprehensive rolling-window scenarios
- `mypy` passes cleanly for `src/polarpandas`, keeping the public API fully typed
- `ruff check` / `ruff format` run squeaky clean on the updated codebase
What's New in v0.8.0
Enhanced SQL Support
- Primary key support - Create SQL tables with single or composite primary keys
- Auto-increment columns - Automatic ID generation for primary keys
- Advanced `to_sql()` method - Enhanced `DataFrame.to_sql()` and `Series.to_sql()` with:
  - Primary key specification (`primary_key` parameter)
  - Auto-increment support (`auto_increment` parameter)
  - Full `if_exists` options ('fail', 'replace', 'append')
  - Connection string and SQLAlchemy engine support
- Type mapping - Automatic Polars-to-SQL type conversion
- Comprehensive SQL utilities - New `_sql_utils.py` module with SQLAlchemy integration
Expanded Test Coverage
- 1,026 tests passing - Added 33 comprehensive SQL tests
- 88% coverage for SQL utilities - Extensive testing of SQL functionality
- Edge case testing - Empty DataFrames, nulls, Unicode, large datasets (10K+ rows)
- Data type testing - Integer, float, boolean, date, datetime, and string types
- Batch operations - Multiple table operations and transaction testing
New Features
- Optional SQLAlchemy dependency - Install with `pip install polarpandas[sqlalchemy]`
- Graceful fallback - Informative error messages when SQLAlchemy is not installed
- Connection flexibility - Support for connection strings, engines, and connection objects
What's New in v0.7.0
Improved Test Suite
- 993 tests passing - Doubled from 498 tests, comprehensive coverage
- 48% code coverage - Significant improvement in test coverage
- 13 previously skipped tests now passing - Fixed bugs and implemented missing features
- No segfaults - Resolved numpy/pandas compatibility issues with Python 3.9+
- 72 documented skipped tests - Clear reasons for unimplemented features
New Features & Bug Fixes
- Implemented `cut()` function - Proper data binning with custom-labels support
- Fixed `Series.sort_index()` - Resolved constructor issue
- Fixed `Series.repeat()` - Now works correctly with the Polars backend
- Fixed `Series.where()` - Expression evaluation bug resolved
- Fixed `Series.mask()` - Expression evaluation bug resolved
Pandas Removal Infrastructure
- Test helpers created - Custom assertion utilities replace pandas testing functions
- Expected values generator - Generate test expectations without a runtime pandas dependency
- First file converted - `test_dataframe_statistical.py` now runs without pandas (79 pandas calls eliminated)
- Clear conversion path - Complete documentation and tooling for removing pandas from all tests
Code Quality
- All ruff checks passing - Zero linting errors in `src/` and `tests/`
- All pyright checks passing - Zero type errors in new code
- Python 3.9+ support - Better compatibility, no segfaults
- Comprehensive documentation - Test improvement reports and conversion guides
What's New in v0.6.0
Massive API Expansion
- 619 pandas-compatible features - Comprehensive pandas API coverage
- 69 module-level functions - All major pandas functions implemented
- 206 DataFrame methods - Complete DataFrame API support
- 186 Series methods - Full Series functionality
- 73 Index methods - Complete Index operations
- 57 String accessor methods - Full `.str` accessor support
- 28 Datetime accessor methods - Comprehensive `.dt` accessor support
- 91 LazyFrame methods - Complete LazyFrame API (262 total methods tracked, including pandas DataFrame comparison)
Enhanced I/O Support
- Comprehensive file format support - CSV, JSON, Parquet, Excel, HDF5, HTML, XML, Stata, SPSS, SAS, and more
- Enhanced SQL support - Full pandas-compatible `to_sql()` with primary key and auto-increment support
- Optional dependencies - Organized into feature groups (excel, hdf5, html, spss, sas, xarray, clipboard, formatting, sqlalchemy)
- Flexible installation - Install only what you need: `pip install polarpandas[excel]` or `pip install polarpandas[all]`
Features (from v0.2.0)
- LazyFrame Class - Optional lazy execution for maximum performance
- Lazy I/O operations - `scan_csv()`, `scan_parquet()`, `scan_json()` for lazy loading
- Complete I/O operations - Full CSV/JSON read/write support
- Advanced statistical methods - `nlargest()`, `nsmallest()`, `rank()`, `diff()`, `pct_change()`
- String & datetime accessors - Full `.str` and `.dt` accessor support
- Module-level functions - `read_csv()`, `concat()`, `merge()`, `get_dummies()`
- Comprehensive edge cases - Empty DataFrames, null values, mixed types
- Full type annotations - Complete `ty` type-checking support
- Comprehensive test coverage - Tests for all core functionality and edge cases
Installation
# Install from source (development)
git clone https://github.com/eddiethedean/polarpandas.git
cd polarpandas
pip install -e .
# Or install from PyPI
pip install polarpandas
# Install with optional features
pip install polarpandas[sqlalchemy] # For enhanced SQL features (primary keys, auto-increment)
pip install polarpandas[excel] # For Excel file support
pip install polarpandas[all] # Install all optional dependencies
Requirements: Python 3.8+ and Polars
Optional Dependencies:
- `numpy` - For passing NumPy dtype objects like `np.int64` in schemas
- `sqlalchemy` - For enhanced SQL features (primary keys, auto-increment in `to_sql()`)
- `pandas` - For certain conversion features and compatibility
- `openpyxl`, `xlsxwriter` - For Excel file I/O
- `lxml`, `html5lib` - For HTML/XML parsing
- `pyreadstat`, `sas7bdat` - For SPSS/SAS file support
- `types-tabulate` - Lightweight type stubs to keep `tabulate`-powered helpers mypy-clean
- And more... see `pyproject.toml` for the complete list
Core Features
Eager vs Lazy Execution
PolarPandas gives you the best of both worlds:
import polarpandas as ppd
import polars as pl
# EAGER EXECUTION (default - like pandas)
df = ppd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
result = df.filter(df["a"] > 1) # Executes immediately
print(result)
# Shows results right away:
# shape: (2, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 2   ┆ 5   │
# │ 3   ┆ 6   │
# └─────┴─────┘
# LAZY EXECUTION (optional - for maximum performance)
lf = df.lazy() # Convert to LazyFrame
lf_filtered = lf.filter(pl.col("a") > 1) # Stays lazy
df_result = lf_filtered.collect() # Materialize when ready
# LAZY I/O (for large files)
lf = ppd.scan_csv("huge_file.csv") # Lazy loading
lf_processed = lf.filter(pl.col("value") > 100).select("name", "value")
df_final = lf_processed.collect() # Execute optimized plan
When to use LazyFrame:
- Large datasets (>1M rows)
- Complex operations (multiple filters, joins, aggregations)
- Memory constraints (lazy evaluation uses less memory)
- Performance-critical applications
DataFrame Operations
# Initialization
df = ppd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
# Eager I/O (immediate loading)
df = ppd.read_csv("data.csv")
df = ppd.read_json("data.json")
df = ppd.read_parquet("data.parquet")
# Lazy I/O (for large files)
lf = ppd.scan_csv("large_file.csv")
lf = ppd.scan_parquet("huge_file.parquet")
lf = ppd.scan_json("big_file.json")
# Mutable operations (pandas-style)
df["new_col"] = df["A"] * 2
df.drop("old_col", axis=1, inplace=True)
df.rename(columns={"A": "alpha"}, inplace=True)
df.sort_values("B", inplace=True)
# Advanced operations
import polars as pl
df.groupby("category").agg(pl.col("value").mean()) # Use Polars expressions
df.pivot_table(values="sales", index="region", columns="month")
df.rolling(window=3).mean()
Enhanced SQL Operations
PolarPandas now supports full pandas-compatible SQL operations with advanced features:
from sqlalchemy import create_engine
# Create database connection
engine = create_engine('sqlite:///mydb.db')
# Basic write (uses Polars' fast write_database)
df = ppd.DataFrame({'id': [1, 2, 3], 'name': ['Alice', 'Bob', 'Charlie']})
df.to_sql('users', engine, if_exists='replace')
# Create table with primary key (requires SQLAlchemy)
df.to_sql('users', engine, if_exists='replace', primary_key='id')
# Create table with auto-incrementing primary key
df.to_sql('users', engine, if_exists='replace',
primary_key='id', auto_increment=True)
# Composite primary key
df.to_sql('users', engine, if_exists='replace',
primary_key=['id', 'email'])
# Read back from SQL
result = ppd.read_sql("SELECT * FROM users WHERE id > 1", engine)
Key Features:
- Fast by default - Uses Polars' native `write_database()` when no special features are needed
- Primary key support - Set single or composite primary keys (requires SQLAlchemy)
- Auto-increment - Enable auto-incrementing IDs (requires SQLAlchemy)
- Smart fallback - Automatically uses Polars for performance, SQLAlchemy for features
- Pandas-compatible - Complete pandas `to_sql()` signature support
Schema Conversion (pandas-style to Polars)
PolarPandas accepts schemas in multiple forms and converts them to Polars types automatically:
- String dtype names: "int64", "float64", "object", "bool", "datetime", "category"
- NumPy dtypes: `np.int64`, `np.float32`, `np.uint8`, ...
- pandas dtypes: `pd.Int64Dtype()`, `pd.Float32Dtype()`, `pd.StringDtype()`, ...
- Polars schema dict or `pl.Schema`
Constructor usage:
import numpy as np
import polars as pl
import polarpandas as ppd
data = {"a": [1, 2, 3], "b": ["x", "y", "z"]}
# Strings
df = ppd.DataFrame(data, dtype={"a": "int64", "b": "string"})
# NumPy dtypes (requires optional numpy install)
df = ppd.DataFrame(data, dtype={"a": np.int64, "b": np.float64})
# pandas dtypes
# df = ppd.DataFrame(data, dtype={"a": pd.Int64Dtype(), "b": pd.StringDtype()})
# Polars schema dict
df = ppd.DataFrame(data, dtype={"a": pl.Int64, "b": pl.Utf8})
I/O functions:
# Eager
df = ppd.read_csv("data.csv", dtype={"id": "int64", "name": "string"})
df = ppd.read_json("data.json", schema={"value": "float64"})
df = ppd.read_parquet("data.parquet", dtype={"id": "uint32"}) # casts after read
df = ppd.read_feather("data.feather", schema={"flag": "bool"}) # casts after read
# Lazy (scan)
lf = ppd.scan_csv("data.csv", schema={"id": "int64"})
lf = ppd.scan_parquet("data.parquet", dtype={"score": "float32"}) # lazy cast
lf = ppd.scan_json("data.json", dtype={"name": "string"})
Notes:
- When both `dtype` and `schema` are provided, `schema` takes precedence.
- Parquet/Feather do not accept a schema parameter at read time in Polars; types are cast after reading (or lazily for scans).
Series Operations
# String operations
df["name"].str.upper()
df["email"].str.contains("@")
df["text"].str.split(" ")
# Datetime operations
df["date"].dt.year
df["timestamp"].dt.floor("D")
df["datetime"].dt.strftime("%Y-%m-%d")
# Statistical methods
df["values"].rank()
df["scores"].nlargest(5)
df["prices"].clip(lower=0, upper=100)
Advanced Indexing
All indexing operations now use native Polars implementations for maximum performance - no pandas conversion overhead!
# Label-based indexing (with index set)
df = ppd.DataFrame({
"name": ["Alice", "Bob", "Charlie"],
"age": [25, 30, 35],
"city": ["NYC", "LA", "Chicago"]
}, index=["a", "b", "c"])
# Select rows by label
df.loc["a"] # Single row (returns Series)
df.loc[["a", "b"], ["name", "age"]] # Multiple rows and columns
# Output:
# shape: (2, 2)
# ┌───────┬─────┐
# │ name  ┆ age │
# │ ---   ┆ --- │
# │ str   ┆ i64 │
# ╞═══════╪═════╡
# │ Alice ┆ 25  │
# │ Bob   ┆ 30  │
# └───────┴─────┘
# Position-based indexing
df.iloc[0:2, 1:3] # Slice rows and columns
# Output:
# shape: (2, 2)
# ┌─────┬──────┐
# │ age ┆ city │
# │ --- ┆ ---  │
# │ i64 ┆ str  │
# ╞═════╪══════╡
# │ 25  ┆ NYC  │
# │ 30  ┆ LA   │
# └─────┴──────┘
df.iloc[[0, 2], :] # Select specific rows, all columns
# Output:
# shape: (2, 3)
# ┌─────────┬─────┬─────────┐
# │ name    ┆ age ┆ city    │
# │ ---     ┆ --- ┆ ---     │
# │ str     ┆ i64 ┆ str     │
# ╞═════════╪═════╪═════════╡
# │ Alice   ┆ 25  ┆ NYC     │
# │ Charlie ┆ 35  ┆ Chicago │
# └─────────┴─────┴─────────┘
# Assignment (now using native Polars - 270x faster for boolean masks!)
df.loc["a", "age"] = 26
df.iloc[0, 0] = "Alice Updated"
df.loc[df["age"] > 25, "age"] = 30 # Boolean mask assignment - optimized!
Architecture
PolarPandas uses a wrapper pattern that provides:
- Mutable operations with `inplace` parameter
- Index preservation across operations
- Pandas-compatible API with Polars performance
- Type safety with comprehensive type hints
- Error handling that matches pandas behavior
# Internal structure
class DataFrame:
def __init__(self, data):
self._df = pl.DataFrame(data) # Polars backend
self._index = None # Pandas-style index
self._index_name = None # Index metadata
Performance Benchmarks
Run benchmarks yourself:
python benchmark_large.py
Large Dataset Performance (1M rows)
| Operation | pandas | PolarPandas | Speedup |
|---|---|---|---|
| DataFrame Creation | 224.89 ms | 15.95 ms | 14.1x |
| Read CSV | 8.00 ms | 0.88 ms | 9.1x |
| Sorting | 28.05 ms | 3.97 ms | 7.1x |
| GroupBy | 7.95 ms | 2.44 ms | 3.3x |
| Filtering | 1.26 ms | 0.42 ms | 3.0x |
Memory Efficiency
- 50% less memory usage than pandas
- Lazy evaluation for complex operations (LazyFrame)
- Optimized data types with Polars backend
- Query optimization with lazy execution plans
Testing & Quality
Comprehensive Testing
- 498 tests passing (100% success rate)
- 54 tests properly skipped (documented limitations)
- 72% code coverage across all functionality
- Edge case handling for empty DataFrames, null values, mixed types
- Comprehensive error handling with proper exception conversion
- Parallel test execution - Fast test runs with pytest-xdist
Code Quality
- Zero linting errors with ruff compliance
- 100% type safety - all ty type errors resolved
- Fully formatted code with ruff formatter
- Clean code standards throughout
- Production-ready code quality
Type Safety
# Full type hints support
def process_data(df: ppd.DataFrame) -> ppd.DataFrame:
return df.groupby("category").agg({"value": "mean"})
# IDE support with autocompletion
df.loc[df["age"] > 25, "name"] # Type-safe operations
Development
Running Tests
# All tests
pytest tests/ -v
# With coverage
pytest tests/ --cov=src/polarpandas --cov-report=html
# Specific test file
pytest tests/test_dataframe_core.py -v
# SQL enhanced suite (requires SQLAlchemy extra)
pip install -e '.[test,sqlalchemy]'
pytest -m requires_sqlalchemy tests/test_sql_enhanced.py -v
Code Quality
# Format code
ruff format .
# Check linting
ruff check .
# Type checking
ty check src/polarpandas/
Current Status:
- All tests passing (498 passed, 54 skipped)
- Zero linting errors (ruff check)
- Code fully formatted (ruff format)
- Type checked (ty compliance)
- Parallel test execution supported
Benchmarks
# Basic benchmarks
python benchmark.py
# Large dataset benchmarks
python benchmark_large.py
# Detailed analysis
python benchmark_detailed.py
Known Limitations
PolarPandas achieves 100% compatibility for implemented features. Remaining limitations are due to fundamental Polars architecture differences:
Permanent Limitations
- Correlation/Covariance: Polars doesn't have built-in `corr()`/`cov()` methods
- Transpose with mixed types: Polars handles mixed types differently than pandas
- MultiIndex support: Polars doesn't have native MultiIndex support
- JSON orient formats: Some pandas JSON orient formats not supported by Polars
Temporary Limitations
- Advanced indexing: Some complex pandas indexing patterns not yet implemented
- Complex statistical methods: Some advanced statistical operations need implementation
Total: 54 tests properly skipped with clear documentation
Contributing
We welcome contributions! Here's how to get started:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and add tests
- Run the test suite: `pytest tests/ -v`
- Check code quality: `ruff check src/polarpandas/`
- Submit a pull request
Development Setup
git clone https://github.com/eddiethedean/polarpandas.git
cd polarpandas
pip install -e ".[dev,test]"
Running optional SQL tests? Install the SQLAlchemy extra (`pip install -e ".[sqlalchemy]"`, or rely on the dev/test extras above) and execute `pytest -m requires_sqlalchemy` to include the SQL enhanced suite. Without the extra, those tests are automatically skipped.
Documentation
- API Compatibility Matrix - Complete pandas API compatibility matrix showing which functions and methods are implemented
- API Reference - Complete API documentation
- Performance Guide - Optimization tips
- Migration Guide - From pandas to PolarPandas
- Examples - Real-world usage examples
Why Choose PolarPandas?
| Feature | pandas | Polars | PolarPandas |
|---|---|---|---|
| Performance | ★★ | ★★★★★ | ★★★★★ |
| Memory Usage | ★★ | ★★★★★ | ★★★★★ |
| API Familiarity | ★★★★★ | ★★ | ★★★★★ |
| Ecosystem | ★★★★★ | ★★★ | ★★★★ |
| Type Safety | ★★ | ★★★★ | ★★★★ |
Best of both worlds: pandas API + Polars performance
Roadmap
v0.6.0 (Current)
- 619 pandas-compatible features - Comprehensive API coverage
- Complete Index methods - All 73 Index methods implemented
- Full String accessor - All 57 `.str` methods implemented
- Complete Datetime accessor - All 28 `.dt` methods implemented
- 91 LazyFrame methods - Complete LazyFrame API with pandas DataFrame comparison (262 total methods tracked)
- Enhanced I/O support - Multiple file formats with optional dependencies
- Type checking with `ty` - Modern, fast type-checker integration
- API compatibility matrix - Comprehensive tracking of pandas compatibility
v0.4.0
- Native Polars Indexing - Replaced all pandas fallbacks with native Polars implementations
- Boolean Mask Optimization - 270x performance improvement for boolean mask assignment
- Optional Pandas - pandas is now truly optional, only required for specific conversion features
- Enhanced Error Handling - Typo suggestions in error messages
- Code Refactoring - Centralized index management and exception utilities
- Type Safety - Improved type checking and resolved critical type issues
v0.3.1
- Fixed GitHub Actions workflow dependencies (pytest, pandas, numpy, pyarrow)
- Fixed Windows file handling issues in I/O tests (28 tests now passing)
- All platforms (Ubuntu, macOS, Windows) now passing all 457 tests
v0.3.0
- Comprehensive Documentation - Professional docstrings for all public APIs
- LazyFrame Class - Optional lazy execution for maximum performance
- Lazy I/O Operations - `scan_csv()`, `scan_parquet()`, `scan_json()`
- Eager DataFrame - Default pandas-like behavior
- Seamless Conversion - `df.lazy()` and `lf.collect()` methods
- 100% Type Safety - All `ty` errors resolved
- Comprehensive Testing - 457 tests covering all functionality
- Code Quality - Zero linting errors, fully formatted code
v0.7.0 (Planned)
- Advanced MultiIndex support
- More statistical methods
- Enhanced I/O formats (additional formats)
- Further performance optimizations
- Additional LazyFrame method implementations
Future
- Machine learning integration
- Advanced visualization support
- Distributed computing support
- GPU acceleration
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Polars - The blazing-fast DataFrame library
- pandas - The inspiration and API reference
- Contributors - Everyone who helps make PolarPandas better
Made with ❤️ for the data science community
Star us on GitHub • Report Issues • Discussions