
Advanced feature engineering, analysis, modeling, and optimization for data science

Project description

Freamon: Feature-Rich EDA, Analytics, and Modeling Toolkit


Freamon is a comprehensive Python toolkit for exploratory data analysis, feature engineering, and model development with a focus on practical data science workflows.

Features

  • Exploratory Data Analysis: Automatic EDA with comprehensive reporting in HTML, Markdown, and Jupyter notebooks
  • Feature Engineering: Advanced feature engineering for numeric, categorical, and text data
  • Deduplication: Multiple deduplication methods with index tracking to map results back to original data
  • Topic Modeling: Optimized text analysis with NMF and LDA, supporting datasets of up to 100K documents (see the sketch after this list)
  • Automated Modeling: Intelligent end-to-end modeling workflow for text, tabular, and time series data
  • Modeling: Custom model implementations with feature importance and model interpretation
  • Pipeline: Scikit-learn compatible pipeline with additional features
  • Drift Analysis: Tools for detecting and analyzing data drift
  • Word Embeddings: Integration with various word embedding techniques
  • Visualization: Publication-quality visualizations with proper handling of all special characters
  • Performance Optimization: Multiprocessing support and intelligent sampling for large dataset analysis
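
As a concrete illustration of the topic-modeling feature above: NMF-based topic extraction factorizes a TF-IDF document-term matrix into document-topic and topic-term weights. The sketch below shows that underlying technique with scikit-learn; it is not Freamon's own API, and the example documents and topic count are placeholders:

from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder documents; in practice this would be a text column
documents = [
    "the cat sat on the mat",
    "dogs and cats play in the yard",
    "stocks rose on strong earnings news",
    "markets fell as earnings disappointed",
]

# TF-IDF features, then NMF factorization into topic weights
vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(documents)
nmf = NMF(n_components=2, random_state=0)
doc_topics = nmf.fit_transform(X)  # shape: (n_documents, n_topics)

# Show the top terms for each topic
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(nmf.components_):
    top = weights.argsort()[::-1][:3]
    print(f"Topic {i}:", [terms[j] for j in top])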

Installation

pip install freamon

Quick Start

import pandas as pd

from freamon.eda import EDAAnalyzer

# Load your data into a pandas DataFrame (the path below is a placeholder)
df = pd.read_csv('your_data.csv')

# Create an analyzer instance
analyzer = EDAAnalyzer(df, target_column='target')

# Run the analysis
analyzer.run_full_analysis()

# Generate a report
analyzer.generate_report('eda_report.html')

# Or a markdown report for version control
analyzer.generate_report('eda_report.md', format='markdown')

Key Components

Automated Modeling Flow

Perform end-to-end modeling with automatic handling of text and time series features:

from freamon import auto_model

# Simple interface - just provide a dataframe, target, and optional date column
results = auto_model(
    df=train_df,
    target_column='target',
    date_column='date',  # Optional for time series
    model_type='lightgbm',
    problem_type='classification',
    text_columns=['text_column'],  # Will be auto-detected if not provided
    categorical_columns=['category_column']  # Will be auto-detected if not provided
)

# Access the trained model and results
model = results['model']
feature_importance = results['feature_importance']
text_topics = results['text_topics']
cv_metrics = results['metrics']

# Make predictions on new data
predictions = results['autoflow'].predict(test_df)

EDA Module

The EDA module provides comprehensive data analysis:

from freamon.eda import EDAAnalyzer

analyzer = EDAAnalyzer(df, target_column='target')
analyzer.run_full_analysis()

# Generate different types of reports
analyzer.generate_report('report.html')  # HTML report
analyzer.generate_report('report.md', format='markdown')  # Markdown report
analyzer.generate_report('report.md', format='markdown', convert_to_html=True)  # Both formats

Deduplication with Tracking

Perform deduplication while maintaining the ability to map results back to the original dataset:

from freamon.deduplication.exact_deduplication import hash_deduplication
# IndexTracker is defined in the repository's examples directory
from examples.deduplication_tracking_example import IndexTracker

# Initialize the tracker with the original dataframe
tracker = IndexTracker().initialize_from_df(df)

# Perform deduplication
deduped_df = hash_deduplication(df['text_column'])

# Record which indices survived deduplication
kept_indices = deduped_df.index.tolist()
tracker.update_from_kept_indices(kept_indices)

# Map results computed on the deduplicated data (e.g. predictions
# in results_df) back to the original dataset
full_results = tracker.create_full_result_df(
    results_df, df, fill_value={'predicted': None}
)
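
Conceptually, hash-based exact deduplication reduces to hashing a normalized form of each text and keeping the first row per hash; the kept indices are what the tracker records. A minimal pandas/hashlib sketch of that idea, independent of Freamon's implementation:

import hashlib
import pandas as pd

# Placeholder data containing exact duplicates after normalization
df = pd.DataFrame({'text_column': ['Hello world', 'hello world ', 'goodbye', 'Hello world']})

def text_hash(text: str) -> str:
    # Hash a normalized form so trivially different strings collide
    return hashlib.sha1(text.strip().lower().encode('utf-8')).hexdigest()

hashes = df['text_column'].map(text_hash)
deduped_df = df[~hashes.duplicated()]     # keep the first row per hash
kept_indices = deduped_df.index.tolist()  # [0, 2]: maps back to original rows
print(deduped_df)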

Pipeline with Deduplication

Create ML pipelines that include deduplication steps:

from freamon.pipeline.pipeline import Pipeline
from examples.pipeline_with_deduplication_tracking import (
    IndexTrackingPipeline, HashDeduplicationStep,
    TextPreprocessingStep, ModelTrainingStep  # assumed to live in the same example module
)

# Create pipeline with deduplication
pipeline = IndexTrackingPipeline(steps=[
    TextPreprocessingStep(text_column='text'),
    HashDeduplicationStep(text_column='processed_text'),
    ModelTrainingStep()
])

# Run pipeline and track indices
processed_data = pipeline.fit_transform(df)

# Map results back to original indices
mapped_results = pipeline.create_full_result_df(
    'model_training', results_df, fill_value={'predicted': 'unknown'}
)
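
The final mapping step is essentially a reindex of the deduplicated results against the original index, with a fill value for rows that were dropped as duplicates. A small pandas sketch of that operation, using placeholder data:

import pandas as pd

# Suppose the original data had 5 rows and rows 0, 2, 4 survived deduplication;
# results_df holds predictions made on the deduplicated rows only
results_df = pd.DataFrame({'predicted': ['spam', 'ham', 'spam']}, index=[0, 2, 4])

# Reindex against the full original index; dropped duplicates get the fill value
original_index = pd.RangeIndex(5)
full_results = results_df.reindex(original_index).fillna({'predicted': 'unknown'})
print(full_results)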

Documentation

For more detailed information, refer to the examples directory in the project repository.

License

MIT License

Download files

  • Source distribution: freamon-0.3.32.tar.gz (354.6 kB)
  • Built distribution: freamon-0.3.32-py3-none-any.whl (411.4 kB, Python 3)

File details: freamon-0.3.32.tar.gz

  • Size: 354.6 kB
  • Tags: Source
  • Uploaded via: twine/6.1.0 on CPython/3.12.7 (Trusted Publishing: no)

Hashes:

  • SHA256: b9b2a3e1cfc994c96eb20b6c30199ee99d9d8e6210c3bfa83347272134bb9b46
  • MD5: d58f7911f9528990e5af4e9b55099498
  • BLAKE2b-256: 7d702c61d7c80c012e2d3fba820bb6b1d0e14d517eb0b7188867d65f45434ebd

File details: freamon-0.3.32-py3-none-any.whl

  • Size: 411.4 kB
  • Tags: Python 3
  • Uploaded via: twine/6.1.0 on CPython/3.12.7 (Trusted Publishing: no)

Hashes:

  • SHA256: b104ec7623bb7698a57e78e4aa638dd25dbf1a89ad0e4a2b1c95dd32b90646b1
  • MD5: 5e5c7c7c72ce662e3a88d6933cc6e4e7
  • BLAKE2b-256: 441269213b5602ae5900972182ff5e7dee9f1d8afe32306b75396f42b0d3c386
