
Hypotest

A Python library for deterministic hypothesis testing with automatic assumption checking and optional LLM-based interpretation.

Hypotest provides a clean statistical engine designed for data scientists, researchers, and engineers who need reliable and reproducible statistical testing workflows.

Overview

Hypotest simplifies hypothesis testing by providing:

A deterministic statistical engine

Automatic assumption validation (normality, variance homogeneity)

Structured result objects with statistical metadata

Optional LLM-based interpretation layer

A safe Dataset abstraction for robust data handling

All statistical computations are deterministic and independent of LLM usage.

Installation

From PyPI:

```
pip install hypotest
```

Development install:

```
git clone https://github.com/chikku1234568/Unified-EDA-HypoTest-LM-Library
cd Unified-EDA-HypoTest-LM-Library
pip install -e .
```

Optional LLM support:

```
pip install hypotest[llm]
```

Quick Start

Example: Independent t-test

```python
import pandas as pd
import numpy as np

import hypotest
from hypotest.core.dataset import Dataset
from hypotest.tests.parametric.ttest import TTest

# Create example dataset
df = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "value": np.concatenate([
        np.random.normal(0, 1, 100),
        np.random.normal(1, 1, 100),
    ]),
})

# Wrap DataFrame in Dataset abstraction
dataset = Dataset(df)

# Run t-test
test = TTest()
result = test.execute(
    dataset=dataset,
    target="value",
    features=["group"],
)

print(result)
```

Output:

```
TestResult(test='Independent t-test', feature='group', statistic=4.231, p=0.00003, significant)
```
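For reference, the statistic above can be reproduced directly with scipy, which Hypotest already depends on. This is not Hypotest's internal code — just the standard computation an independent t-test performs, shown on similar synthetic data (the seed and variable names are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(0, 1, 100)  # mean 0
group_b = rng.normal(1, 1, 100)  # mean 1

# Independent two-sample t-test on the raw samples
statistic, p_value = stats.ttest_ind(group_a, group_b)

print(f"t = {statistic:.3f}, p = {p_value:.5f}")
```

With a true mean difference of 1 and 100 samples per group, the test is essentially guaranteed to report a significant difference.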

Automatic Assumption Checking

Hypotest automatically checks statistical assumptions before or during test execution.

```python
for assumption in result.assumptions:
    print(assumption.assumption_name, assumption.passed)
```

Example output:

```
normality True
homoscedasticity False
```

Each assumption provides:

statistical result

interpretation

recommendation
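Both checks correspond to standard procedures: normality is commonly assessed with a Shapiro-Wilk test, and variance homogeneity with Levene's test. A minimal sketch using scipy (illustrative only — Hypotest's own implementation and thresholds may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0, 1.0, 100)
group_b = rng.normal(1, 3.0, 100)  # deliberately larger variance

# Normality: Shapiro-Wilk test per group (null hypothesis: data are normal)
normality_passed = all(
    stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b)
)

# Homoscedasticity: Levene's test (null hypothesis: equal variances)
homoscedasticity_passed = stats.levene(group_a, group_b).pvalue > 0.05

print("normality", normality_passed)
print("homoscedasticity", homoscedasticity_passed)
```

Because the two groups here have very different variances, the homoscedasticity check fails, which is exactly the situation where a variant such as Welch's t-test would be recommended.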

Optional LLM Interpretation

Hypotest can generate natural-language explanations using any OpenAI-compatible provider.

Example using DeepSeek:

```python
hypotest.configure(
    llm_api_key="your-api-key",
    llm_base_url="https://api.deepseek.com/v1",
    llm_model="deepseek-chat",
    enable_llm_interpretation=True,
)

print(result.explain())
```

Example output:

The independent t-test indicates a statistically significant difference between the two groups...

LLM interpretation is optional and does not affect statistical computation.

Configuration

Configure hypotest globally:

```python
hypotest.configure(
    llm_api_key="your-key",
    llm_base_url="https://api.deepseek.com/v1",
    llm_model="deepseek-chat",
    enable_llm_interpretation=True,
)
```

View configuration:

```python
print(hypotest.info())
```

Dataset Abstraction

Hypotest uses a Dataset wrapper to provide safe data handling:

```python
from hypotest.core.dataset import Dataset

dataset = Dataset(df)
```

This enables:

safe missing value handling

validation before test execution

future extensibility
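As a rough illustration of what such a wrapper can provide, here is a hypothetical minimal stand-in. The class and method names below are invented for this sketch and are not Hypotest's actual API:

```python
import pandas as pd

class MiniDataset:
    """Illustrative stand-in for a Dataset wrapper (not Hypotest's actual class)."""

    def __init__(self, df: pd.DataFrame):
        self._df = df

    def column(self, name: str) -> pd.Series:
        # Validation before test execution: fail early with a clear message
        if name not in self._df.columns:
            raise KeyError(f"column {name!r} not found in dataset")
        # Safe missing-value handling: drop NaNs rather than let them
        # propagate into a statistical computation
        return self._df[name].dropna()

df = pd.DataFrame({"value": [1.0, None, 3.0]})
ds = MiniDataset(df)
print(len(ds.column("value")))  # the NaN row is excluded
```

Centralizing this logic in one wrapper means each statistical test can assume clean input instead of re-implementing its own checks.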

Supported Tests (Current MVP)

Independent t-test

Planned:

Welch's t-test

Mann-Whitney U test

ANOVA

Chi-square test

Correlation tests

Features

Core features implemented:

Deterministic statistical engine

Automatic assumption checking

Structured TestResult objects

Dataset abstraction layer

Plug-in test registry system

Optional LLM interpretation
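For readers unfamiliar with the registry pattern mentioned above, a common way to implement it is a decorator that maps test names to test classes. The names below are hypothetical and do not reflect Hypotest's actual internals:

```python
# Illustrative plug-in test registry (hypothetical, not Hypotest's API)
TEST_REGISTRY = {}

def register_test(name):
    """Decorator that adds a test class to the registry under a given name."""
    def decorator(cls):
        TEST_REGISTRY[name] = cls
        return cls
    return decorator

@register_test("ttest_ind")
class IndependentTTest:
    def execute(self, *args, **kwargs):
        ...

# Tests can then be discovered and instantiated by name
test = TEST_REGISTRY["ttest_ind"]()
print(sorted(TEST_REGISTRY))
```

A registry like this lets new tests plug in without modifying the engine: registering a class is all that is needed to make it discoverable.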

Planned features:

Automatic test recommendation

Effect size library

Automated reporting

Additional statistical tests

Example: Full Workflow

```python
import pandas as pd
import numpy as np
import hypotest

from hypotest.core.dataset import Dataset
from hypotest.tests.parametric.ttest import TTest

hypotest.configure(enable_llm_interpretation=False)

df = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "value": np.random.randn(100),
})

dataset = Dataset(df)

test = TTest()
result = test.execute(dataset, "value", ["group"])

print(result)

for a in result.assumptions:
    print(a.assumption_name, a.passed)

print(result.explain())  # None if LLM disabled
```

Project Structure

```
hypotest/
├── core/
│   ├── dataset.py
│   └── result.py
├── tests/
│   └── parametric/
│       └── ttest.py
├── assumptions/
│   ├── normality.py
│   └── variance.py
├── llm/
│   ├── client.py
│   └── interpreter.py
└── config/
    ├── manager.py
    └── info.py
```

Requirements

Python ≥ 3.10

pandas ≥ 1.5

numpy ≥ 1.21

scipy ≥ 1.9

Optional:

openai-compatible client (for LLM interpretation)

Philosophy

Hypotest separates:

Deterministic statistical computation

Probabilistic natural-language interpretation

This ensures statistical correctness while enabling explainability.

License

MIT License
