Advanced feature engineering, analysis, modeling and optimization for data science

Project description

Freamon: Feature-Rich EDA, Analytics, and Modeling Toolkit


Freamon is a comprehensive Python toolkit for exploratory data analysis, feature engineering, and model development with a focus on practical data science workflows.

Features

  • Exploratory Data Analysis: Automatic EDA with comprehensive reporting in HTML, Markdown, and Jupyter notebooks
  • Feature Engineering: Advanced feature engineering for numeric, categorical, and text data
  • Deduplication: Multiple deduplication methods with index tracking to map results back to original data
  • Topic Modeling: Optimized text analysis with NMF and LDA, supporting large datasets of up to 100K documents (see the NMF sketch after this list)
  • Automated Modeling: Intelligent end-to-end modeling workflow for text, tabular, and time series data
  • Modeling: Custom model implementations with feature importance and model interpretation
  • Pipeline: Scikit-learn compatible pipeline with additional features
  • Drift Analysis: Tools for detecting and analyzing data drift
  • Word Embeddings: Integration with various word embedding techniques
  • Visualization: Publication-quality visualizations with proper handling of all special characters
  • Performance Optimization: Multiprocessing support and intelligent sampling for large dataset analysis
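
Freamon's topic modeling builds on NMF and LDA. For background on the underlying technique, here is a minimal NMF topic-modeling sketch written directly against scikit-learn; it illustrates the general approach, not Freamon's own API.

from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; in practice this would be a text column from your dataframe.
docs = [
    "the cat sat on the mat",
    "dogs and cats make good pets",
    "stock markets fell sharply today",
    "investors sold shares after the report",
]

# TF-IDF features, then a 2-topic NMF decomposition.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0)
doc_topics = nmf.fit_transform(X)      # document-topic weights
terms = vectorizer.get_feature_names_out()

# Show the top terms per topic.
for topic_idx, topic in enumerate(nmf.components_):
    top_terms = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")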

Installation

pip install freamon
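
To confirm the installation, a quick import check is enough (the __version__ attribute is an assumption; the getattr fallback covers the case where it is not exposed):

# Sanity check after installation; __version__ is an assumed attribute.
import freamon
print(getattr(freamon, "__version__", "freamon imported successfully"))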

Quick Start

from freamon.eda import EDAAnalyzer

# Create an analyzer instance
analyzer = EDAAnalyzer(df, target_column='target')

# Run the analysis
analyzer.run_full_analysis()

# Generate a report
analyzer.generate_report('eda_report.html')

# Or a markdown report for version control
analyzer.generate_report('eda_report.md', format='markdown')

Key Components

Automated Modeling Flow

Perform end-to-end modeling with automatic handling of text and time series features:

from freamon import auto_model

# Simple interface - just provide a dataframe, target, and optional date column
results = auto_model(
    df=train_df,
    target_column='target',
    date_column='date',  # Optional for time series
    model_type='lightgbm',
    problem_type='classification',
    text_columns=['text_column'],  # Will be auto-detected if not provided
    categorical_columns=['category_column']  # Will be auto-detected if not provided
)

# Access the trained model and results
model = results['model']
feature_importance = results['feature_importance']
text_topics = results['text_topics']
cv_metrics = results['metrics']

# Make predictions on new data
predictions = results['autoflow'].predict(test_df)
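
As a follow-up, the hold-out predictions can be scored with standard scikit-learn metrics. This is a minimal sketch that assumes test_df contains the same 'target' column used for training and that predict returns class labels for this classification problem:

from sklearn.metrics import accuracy_score, classification_report

# Compare predictions against the held-out labels (column name is a placeholder).
y_true = test_df['target']
print(f"Hold-out accuracy: {accuracy_score(y_true, predictions):.3f}")
print(classification_report(y_true, predictions))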

EDA Module

The EDA module provides comprehensive data analysis:

from freamon.eda import EDAAnalyzer

analyzer = EDAAnalyzer(df, target_column='target')
analyzer.run_full_analysis()

# Generate different types of reports
analyzer.generate_report('report.html')  # HTML report
analyzer.generate_report('report.md', format='markdown')  # Markdown report
analyzer.generate_report('report.md', format='markdown', convert_to_html=True)  # Both formats
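
For a self-contained run, any pandas DataFrame works; here is a sketch using a scikit-learn sample dataset (the dataset choice is arbitrary):

from sklearn.datasets import load_breast_cancer
from freamon.eda import EDAAnalyzer

# Load a sample dataset as a DataFrame; the 'target' column holds the labels.
data = load_breast_cancer(as_frame=True)
df = data.frame

analyzer = EDAAnalyzer(df, target_column='target')
analyzer.run_full_analysis()
analyzer.generate_report('breast_cancer_eda.md', format='markdown')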

Deduplication with Tracking

Perform deduplication while maintaining the ability to map results back to the original dataset:

from freamon.deduplication.exact_deduplication import hash_deduplication
from examples.deduplication_tracking_example import IndexTracker

# Initialize tracker with original dataframe
tracker = IndexTracker().initialize_from_df(df)

# Perform deduplication
deduped_df = hash_deduplication(df['text_column'])

# Update tracking
kept_indices = deduped_df.index.tolist()
tracker.update_from_kept_indices(kept_indices)

# Map results back to original dataset
full_results = tracker.create_full_result_df(
    results_df, original_df, fill_value={'predicted': None}
)
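
For intuition, the index-tracking idea can be sketched with plain pandas. This is only an illustration of the concept (keep a mapping from deduplicated row positions back to the original index so results can be re-expanded to the original shape), not Freamon's implementation:

import pandas as pd

original = pd.DataFrame({'text': ['a', 'b', 'a', 'c']})
deduped = original.drop_duplicates(subset='text')        # keeps original rows 0, 1, 3
position_to_index = dict(enumerate(deduped.index))       # new position -> original index

# One downstream result per deduplicated row (values are placeholders).
results = ['pred_a', 'pred_b', 'pred_c']

# Re-expand to the original shape; rows dropped as duplicates stay None.
full = pd.Series(index=original.index, dtype=object)
for pos, orig_idx in position_to_index.items():
    full.loc[orig_idx] = results[pos]
print(full)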

Pipeline with Deduplication

Create ML pipelines that include deduplication steps:

from freamon.pipeline.pipeline import Pipeline
from examples.pipeline_with_deduplication_tracking import (
    IndexTrackingPipeline, HashDeduplicationStep
)
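# Note: TextPreprocessingStep and ModelTrainingStep are assumed to be custom
# pipeline steps defined elsewhere (for example in your own code or the
# examples directory); only IndexTrackingPipeline and HashDeduplicationStep
# are imported above.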

# Create pipeline with deduplication
pipeline = IndexTrackingPipeline(steps=[
    TextPreprocessingStep(text_column='text'),
    HashDeduplicationStep(text_column='processed_text'),
    ModelTrainingStep()
])

# Run pipeline and track indices
processed_data = pipeline.fit_transform(df)

# Map results back to original indices
mapped_results = pipeline.create_full_result_df(
    'model_training', results_df, fill_value={'predicted': 'unknown'}
)

Documentation

For more detailed information, refer to the examples directory in the project repository.

License

MIT License


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

freamon-0.3.31.tar.gz (352.8 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

freamon-0.3.31-py3-none-any.whl (409.7 kB)

Uploaded Python 3

File details

Details for the file freamon-0.3.31.tar.gz.

File metadata

  • Download URL: freamon-0.3.31.tar.gz
  • Upload date:
  • Size: 352.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for freamon-0.3.31.tar.gz

  • SHA256: df06cbf9e758f7835bd75b542976da168260a93ed668d43d430ea3b65ed9ce14
  • MD5: d39c3f02a6b5fabfd475ac1e14f44f12
  • BLAKE2b-256: 0f61612b174fae9449fc8d5c16e034beaf1c7304dca844cbecedf2a15f42dfe1

See more details on using hashes here.

File details

Details for the file freamon-0.3.31-py3-none-any.whl.

File metadata

  • Download URL: freamon-0.3.31-py3-none-any.whl
  • Upload date:
  • Size: 409.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for freamon-0.3.31-py3-none-any.whl

  • SHA256: 44e677dbd64676e0404ab5ae37f75f899eaef54efb0ffcc889433c51f857db82
  • MD5: df703c03f0c81a8f3b911f403c498eab
  • BLAKE2b-256: ed5338ed5097090914174d14d5be072d4b74f107f3cd8f76b7eaf8cf96831d21

See more details on using hashes here.
