Skeptic
Early concept stage: feedback is appreciated, but please note that this is not meant to be "usable" or "correct" yet, not even in an early-alpha sort of way.
A library for easy usage of predictive-power based statistics.
Why use predictive power statistics?
Predictive-power-based statistics are arguably better for modeling reality, as well as more intuitive for certain problems.
When faced with high-dimensional, non-linear problems (e.g. correlating a set of fMRIs with the presence and severity of brain cancer), "standard" statistical tests can't be used. However, this doesn't mean we can't say anything statistically significant about such data; it just means we need more advanced methods, currently grouped under the banner of machine learning, to find the correlation and assign a p-value to the finding being non-random.
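To make the idea concrete, here is a minimal sketch of that approach using scikit-learn and synthetic data (not skeptic itself, whose API is still undocumented): fit a model, measure its cross-validated score, then compare that score against scores obtained after shuffling the labels. The fraction of shuffled runs that match or beat the real score is an empirical p-value for the predictive power being non-random.

```python
# Permutation test on predictive power: a stand-in sketch, not skeptic's API.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import permutation_test_score

# Synthetic stand-in for high-dimensional data (e.g. flattened fMRI features).
X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)

# Cross-validated score on the real labels vs. scores on permuted labels;
# the returned p-value estimates how often chance alone does as well.
score, perm_scores, p_value = permutation_test_score(
    RandomForestClassifier(random_state=0), X, y,
    cv=5, n_permutations=200, scoring="accuracy", random_state=0)

print(f"cross-validated accuracy: {score:.3f}")
print(f"empirical p-value of beating chance: {p_value:.4f}")
```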
When the validity of scientific claims must be explained to a mathematically lay audience, tests like the t-test aren't necessarily intuitive to grasp, and they make some heavy-handed assumptions about the data. Arguably, at least for some types of problems, conclusions based on predictive power are much more intuitive.
Functionality (why you should use this library)
This library attempts to provide such predictive-power-based statistics and, in the process, tries to abstract away a few things (see the sketch after this list):
- The process of finding the "best possible model" for a problem, given the amount of compute the researcher has on hand.
- The process of efficient k-fold cross validation using said predictive model.
- The process of finding and calculating meaningful error/accuracy functions from which to derive a predictive-power correlation (partially abstracted away).
- The process of cleaning data (e.g. going from a csv file to a pandas dataframe with the correct types assigned to each column), in part.
- Computing a "p value" analog based on the data and (optionally) input from the researcher about the statistical significance test they would normally use with the dataset.
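Since skeptic's own API is not yet documented, here is a minimal sketch of the middle steps above using pandas and scikit-learn. The file name scans.csv and the target column has_cancer are hypothetical placeholders, and a fixed model stands in for the automated model search.

```python
# Hypothetical pipeline: CSV -> typed dataframe -> k-fold predictive power.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Data cleaning: pandas infers a dtype for each column of the CSV.
df = pd.read_csv("scans.csv")                     # hypothetical input file
y = df["has_cancer"]                              # hypothetical target column
# Keep numeric feature columns only, for simplicity of this sketch.
X = df.drop(columns=["has_cancer"]).select_dtypes(include="number")

# k-fold cross validation with an accuracy-based scoring function.
scores = cross_val_score(GradientBoostingClassifier(), X, y,
                         cv=10, scoring="accuracy")
print(f"mean 10-fold accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```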
Various other things that I might add if I find the time and there's some interest in the project:
- Embedding based techniques for deconfounding
- Operating under assumptions about the global distribution
- Operating under assumptions about the "expected" shape of the sample distribution
- ???
Roadmap
Roadmap for the features listed under the "Functionality" heading above:
- Finding the "best possible model": WIP, prototype done
- Efficient k-fold cross validation: WIP, prototype done
- Meaningful error/accuracy functions: WIP, prototype done
- Data cleaning: WIP, prototype done
- "p value" analog: Not started yet
Why make this library
Because every alternative for this on the market seems to be:
- Mixing in too many classical statistics assumptions, thus making the tool wider-reaching but diluting its usefulness for the cases where predictive power is potentially superior.
- A complex and convoluted mess.
- Closed source and sometimes paid-for.
- Too conservative in the choice of machine learning models being used, yielding less-than-ideal results.
How to use this library
The documentation is not ready yet, but please see the integration tests for some examples of usage.