
Polars for Data Science

Documentation | User Guide | Want to Contribute?
pip install polars-ds

The Project

The goal of the project is to reduce dependencies, improve code organization, simplify data pipelines, and overall facilitate the analysis of the various kinds of tabular data that a data scientist may encounter. It is a package built around your favorite Polars dataframe. Here are some of the main areas of data science covered by the package:

  1. Well-known numerical transforms/quantities, e.g. FFT, conditional entropy, singular values, basic linear-regression-related quantities, population stability index, weight of evidence, column-wise/row-wise Jaccard similarity, etc.

  2. Statistics. Basic tests such as the t-test, F-test, and KS statistic. Miscellaneous functions like weighted correlation and Xi-correlation. In-dataframe random column generation, etc.

  3. Metrics. ML metrics for common model performance reporting, e.g. ROC AUC for binary/multiclass classification, log loss, R2, MAPE, etc.

  4. KNN-related queries, e.g. filter to the k nearest neighbors of a point, find the indices of all neighbors within a certain distance, etc.

  5. String metrics such as Levenshtein distance, Damerau-Levenshtein distance, other string distances, Snowball stemming (English only), string Jaccard similarity, etc.

  6. Diagnosis. This module contains the DIA (Data Inspection Assistant) class, which can help you profile your data, visualize data in lower dimensions, detect functional dependencies, and detect other common data quality issues such as high null rates or high correlation. (Requires plotly, great_tables, and graphviz as optional dependencies.)

  7. Sample. Traditional dataset sampling (no time series sampling yet). This module provides functionality such as stratified downsampling, volume-neutral random sampling, etc.

  8. Polars Native ML Pipeline. See examples here. The goal is a Polars-native pipeline that can replace Scikit-learn's pipeline while providing all the benefits of Polars. All the basic transforms in Scikit-learn and categorical-encoders are planned. This can be super powerful together with Polars expressions: once you have expressions, you don't need to write custom transforms like col(A)/col(B), log transforms, sqrt transforms, linear/polynomial transforms, etc. (see the sketch after this list). Polars expressions also offer JSON serialization in recent versions, which makes them desirable for use in the cloud. (This part is under active development.)
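To make the point about expressions concrete, here is a minimal sketch in plain Polars (no polars_ds functions involved) showing expressions standing in for what would otherwise be hand-written pipeline transforms; the column names are made up for illustration:

import polars as pl

df = pl.DataFrame({"a": [1.0, 2.0, 3.0], "b": [4.0, 5.0, 6.0]})

# Each of these would be a custom transformer in a Scikit-learn pipeline;
# in Polars they are plain expressions: composable, lazy, and serializable.
transformed = df.with_columns(
    (pl.col("a") / pl.col("b")).alias("a_over_b"),  # ratio of two columns
    pl.col("a").log1p().alias("a_log1p"),           # log transform
    pl.col("b").sqrt().alias("b_sqrt"),             # square-root transform
    (pl.col("a") ** 2).alias("a_squared"),          # polynomial feature
)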

Some other areas currently exist but are de-prioritized:

  1. Complex-number-related queries.

  2. Graph-related queries. (The various representations of "graphs" in a tabular dataframe make it hard to have consistent backend handling of such data.)

But why? Why not use Sklearn? SciPy? NumPy?

The goal of the package is to facilitate data processing and analysis that go beyond standard SQL queries, and to reduce the number of dependencies in your project. It incorporates parts of SciPy, NumPy, Scikit-learn, and NLP packages (NLTK), and treats them as Polars queries so that they can be run in parallel and in group_by contexts, all for almost no extra engineering effort.

Let's see an example. Say we want to generate a model performance report. Our data contains segments, and we are interested not only in our model's ROC AUC on the entire dataset, but also in its performance on each segment.

import numpy as np
import polars as pl
import polars_ds as pds

size = 100_000
df = pl.DataFrame({
    "a": np.random.random(size = size)
    , "b": np.random.random(size = size)
    , "x1" : range(size)
    , "x2" : range(size, size + size)
    , "y": range(-size, 0)
    , "actual": np.round(np.random.random(size=size)).astype(np.int32)
    , "predicted": np.random.random(size=size)
    , "segments":["a"] * (size//2 + 100) + ["b"] * (size//2 - 100)
})
print(df.head())

shape: (5, 8)
┌──────────┬──────────┬─────┬────────┬─────────┬────────┬───────────┬──────────┐
│ a        ┆ b        ┆ x1  ┆ x2     ┆ y       ┆ actual ┆ predicted ┆ segments │
│ ---      ┆ ---      ┆ --- ┆ ---    ┆ ---     ┆ ---    ┆ ---       ┆ ---      │
│ f64      ┆ f64      ┆ i64 ┆ i64    ┆ i64     ┆ i32    ┆ f64       ┆ str      │
╞══════════╪══════════╪═════╪════════╪═════════╪════════╪═══════════╪══════════╡
│ 0.19483  ┆ 0.457516 ┆ 0   ┆ 100000 ┆ -100000 ┆ 0      ┆ 0.929007  ┆ a        │
│ 0.396265 ┆ 0.833535 ┆ 1   ┆ 100001 ┆ -99999  ┆ 1      ┆ 0.103915  ┆ a        │
│ 0.800558 ┆ 0.030437 ┆ 2   ┆ 100002 ┆ -99998  ┆ 1      ┆ 0.558918  ┆ a        │
│ 0.608023 ┆ 0.411389 ┆ 3   ┆ 100003 ┆ -99997  ┆ 1      ┆ 0.883684  ┆ a        │
│ 0.847527 ┆ 0.506504 ┆ 4   ┆ 100004 ┆ -99996  ┆ 1      ┆ 0.070269  ┆ a        │
└──────────┴──────────┴─────┴────────┴─────────┴────────┴───────────┴──────────┘

Traditionally, using the Pandas + Sklearn stack, we would do:

import pandas as pd
from sklearn.metrics import roc_auc_score

df_pd = df.to_pandas()

segments = []
rocaucs = []

for (segment, subdf) in df_pd.groupby("segments"):
    segments.append(segment)
    rocaucs.append(
        roc_auc_score(subdf["actual"], subdf["predicted"])
    )

report = pd.DataFrame({
    "segments": segments,
    "roc_auc": rocaucs
})
print(report)

  segments   roc_auc
0        a  0.497745
1        b  0.498801

This is OK, but not great, because (1) we are running for-loops in Python, which tends to be slow; (2) we are writing more Python code, which leaves more room for errors in bigger projects; and (3) the code is not very intuitive for beginners. Using Polars + polars_ds, one can do the following:

df.lazy().group_by("segments").agg(
    pds.query_roc_auc("actual", "predicted").alias("roc_auc"),
    pds.query_log_loss("actual", "predicted").alias("log_loss"),
).collect()

shape: (2, 3)
┌──────────┬──────────┬──────────┐
│ segments ┆ roc_auc  ┆ log_loss │
│ ---      ┆ ---      ┆ ---      │
│ str      ┆ f64      ┆ f64      │
╞══════════╪══════════╪══════════╡
│ a        ┆ 0.497745 ┆ 1.006438 │
│ b        ┆ 0.498801 ┆ 0.997226 │
└──────────┴──────────┴──────────┘

Notice a few things: (1) computing ROC AUC on different segments is equivalent to an aggregation over segments, a concept familiar to everyone who knows SQL (i.e. everybody who works with data); (2) there is no Python code involved: the extension is written in pure Rust and all the complexity is hidden away from the end user; (3) because Polars provides parallel execution for free, we can compute ROC AUC and log loss simultaneously on each segment. (One can do something similar with Pandas aggregations, but it is much harder to write and far more confusing to reason about.)
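Also note that the metric expressions are not tied to group_by: the whole-dataset numbers come from the same two functions in a plain select. A minimal sketch, reusing the df built above:

df.select(
    pds.query_roc_auc("actual", "predicted").alias("roc_auc"),
    pds.query_log_loss("actual", "predicted").alias("log_loss"),
)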

The end result is simpler, more intuitive code that is easier to reason about, with faster execution. Because of Polars's extension (plugin) system, we are now blessed with both:

Performance and elegance, something that is quite rare in the Python world.

Getting Started

import polars_ds as pds

To make full use of the Diagnosis module, do

pip install "polars_ds[plot]"

Examples

See this for Polars Extensions: notebook

See this for Native Polars DataFrame Explorative tools: notebook

Disclaimer

Currently in beta. Feel free to submit feature requests in the issues section of the repo. This library will only depend on Python Polars and will try to be as stable as possible for polars>=0.20.6. Exceptions will be made when a Polars update forces changes in the plugins.

This package is not tested with Polars streaming mode and is not designed to work with data so big that it has to be streamed.

The recommended usage is for datasets of 1k to 2-3 million rows, though actual performance will vary depending on the dataset and hardware. Performance will only be a priority for datasets that fit in memory. KNN performance is known to suffer greatly with a large k. String KNN and graph queries are only suitable for smaller data, around 1-5k rows on common machines.

Credits

  1. The Rust Snowball stemmer is taken from Tsoding's Seroost project (MIT). See here
  2. Some statistics functions are taken from Statrs (MIT) and internalized. See here
  3. Graph functionality is powered by the petgraph crate. See here
  4. Linear algebra routines are powered partly by faer

Other Related Projects

  1. Take a look at our friendly neighbor functime
  2. String similarity metrics are so fast and easy to use thanks to RapidFuzz

