# Freamon: Feature-Rich EDA, Analytics, and Modeling Toolkit

Advanced feature engineering, analysis, modeling, and optimization for data science.

Freamon is a comprehensive Python toolkit for exploratory data analysis, feature engineering, and model development, with a focus on practical data science workflows.
## Features
- **Exploratory Data Analysis**: Automatic EDA with comprehensive reporting in HTML, Markdown, and Jupyter notebooks
- **Feature Engineering**: Advanced feature engineering for numeric, categorical, and text data
- **Deduplication**: Multiple deduplication methods with index tracking to map results back to the original data
- **Topic Modeling**: Optimized text analysis with NMF and LDA, supporting large datasets of up to 100K documents
- **Automated Modeling**: Intelligent end-to-end modeling workflow for text, tabular, and time series data
- **Modeling**: Custom model implementations with feature importance and model interpretation
- **Pipeline**: Scikit-learn compatible pipeline with additional features
- **Drift Analysis**: Tools for detecting and analyzing data drift
- **Word Embeddings**: Integration with various word embedding techniques
- **Visualization**: Publication-quality visualizations with proper handling of special characters
- **Performance Optimization**: Multiprocessing support and intelligent sampling for large-dataset analysis
## Installation

```bash
pip install freamon
```
## Quick Start

```python
from freamon.eda import EDAAnalyzer

# Create an analyzer instance
analyzer = EDAAnalyzer(df, target_column='target')

# Run the analysis
analyzer.run_full_analysis()

# Generate an HTML report
analyzer.generate_report('eda_report.html')

# Or a Markdown report for version control
analyzer.generate_report('eda_report.md', format='markdown')
```
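The snippet above assumes `df` is a pandas DataFrame that contains a `target` column. A minimal synthetic frame for experimenting (the column names here are illustrative, not required by Freamon):

```python
import numpy as np
import pandas as pd

# Build a small synthetic dataset: two numeric features,
# one categorical feature, and a binary target column.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "feature_a": rng.normal(size=100),
    "feature_b": rng.uniform(0, 10, size=100),
    "category": rng.choice(["red", "green", "blue"], size=100),
    "target": rng.integers(0, 2, size=100),  # values in {0, 1}
})

print(df.shape)  # (100, 4)
```

Any dataframe with a well-defined target column works the same way.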
## Key Components

### Automated Modeling Flow

Perform end-to-end modeling with automatic handling of text and time series features:

```python
from freamon import auto_model

# Simple interface - just provide a dataframe, a target, and an optional date column
results = auto_model(
    df=train_df,
    target_column='target',
    date_column='date',                       # Optional, for time series
    model_type='lightgbm',
    problem_type='classification',
    text_columns=['text_column'],             # Auto-detected if not provided
    categorical_columns=['category_column'],  # Auto-detected if not provided
)

# Access the trained model and results
model = results['model']
feature_importance = results['feature_importance']
text_topics = results['text_topics']
cv_metrics = results['metrics']

# Make predictions on new data
predictions = results['autoflow'].predict(test_df)
```
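Assuming `results['feature_importance']` is a mapping of feature name to importance score (the keys shown above come from the README; the numbers below are made up for illustration), ranking the top features is a one-liner:

```python
# Hypothetical feature-importance mapping, shaped like the
# results['feature_importance'] entry described above.
feature_importance = {"feature_a": 0.42, "text_topic_3": 0.31, "category": 0.27}

# Rank features from most to least important.
top_features = sorted(feature_importance, key=feature_importance.get, reverse=True)
print(top_features)  # ['feature_a', 'text_topic_3', 'category']
```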
### EDA Module

The EDA module provides comprehensive data analysis:

```python
from freamon.eda import EDAAnalyzer

analyzer = EDAAnalyzer(df, target_column='target')
analyzer.run_full_analysis()

# Generate different types of reports
analyzer.generate_report('report.html')                                          # HTML report
analyzer.generate_report('report.md', format='markdown')                         # Markdown report
analyzer.generate_report('report.md', format='markdown', convert_to_html=True)   # Both formats
```
### Deduplication with Tracking

Perform deduplication while keeping the ability to map results back to the original dataset:

```python
from freamon.deduplication.exact_deduplication import hash_deduplication
from examples.deduplication_tracking_example import IndexTracker

# Initialize the tracker with the original dataframe
tracker = IndexTracker().initialize_from_df(df)

# Perform deduplication
deduped_df = hash_deduplication(df['text_column'])

# Update tracking with the indices that survived deduplication
kept_indices = deduped_df.index.tolist()
tracker.update_from_kept_indices(kept_indices)

# Map results back to the original dataset
full_results = tracker.create_full_result_df(
    results_df, original_df, fill_value={'predicted': None}
)
```
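The tracking idea itself is independent of Freamon: keep the original index, record which rows survive deduplication, then reindex results over the full index with a fill value. A minimal pandas sketch of that idea (the names here are mine, not Freamon's API):

```python
import pandas as pd

original = pd.DataFrame({"text": ["a", "b", "a", "c"]})

# Deduplicate by value, keeping the first occurrence; surviving
# rows retain their original index labels.
deduped = original.drop_duplicates(subset="text", keep="first")
kept_indices = deduped.index.tolist()
print(kept_indices)  # [0, 1, 3]

# Pretend a model produced one prediction per surviving row...
results = pd.DataFrame({"predicted": ["yes", "no", "yes"]}, index=kept_indices)

# ...then map back to the full original index, filling dropped rows.
full = results.reindex(original.index, fill_value="unknown")
print(full["predicted"].tolist())  # ['yes', 'no', 'unknown', 'yes']
```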
### Pipeline with Deduplication

Create ML pipelines that include deduplication steps:

```python
from examples.pipeline_with_deduplication_tracking import (
    IndexTrackingPipeline,
    TextPreprocessingStep,
    HashDeduplicationStep,
    ModelTrainingStep,
)

# Create a pipeline with a deduplication step
pipeline = IndexTrackingPipeline(steps=[
    TextPreprocessingStep(text_column='text'),
    HashDeduplicationStep(text_column='processed_text'),
    ModelTrainingStep(),
])

# Run the pipeline and track indices
processed_data = pipeline.fit_transform(df)

# Map results back to the original indices
mapped_results = pipeline.create_full_result_df(
    'model_training', results_df, fill_value={'predicted': 'unknown'}
)
```
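Conceptually, a hash-based deduplication step boils down to hashing each text and keeping the first row per hash. A standalone sketch of that idea (not Freamon's actual implementation):

```python
import hashlib

import pandas as pd

def hash_dedupe(texts: pd.Series) -> pd.Series:
    """Keep the first occurrence of each distinct text, matched by SHA-256."""
    hashes = texts.map(lambda t: hashlib.sha256(t.encode("utf-8")).hexdigest())
    return texts[~hashes.duplicated(keep="first")]

texts = pd.Series(["hello", "world", "hello", "again"])
deduped = hash_dedupe(texts)
print(deduped.index.tolist())  # [0, 1, 3]
```

Because the surviving rows keep their original index labels, the same reindex-and-fill trick shown in the deduplication section maps downstream results back onto the full dataset.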
## Documentation

For more detailed information, refer to the examples directory.

## License
## Project details
### File details: freamon-0.3.29.tar.gz (source distribution)

- Download URL: freamon-0.3.29.tar.gz
- Upload date:
- Size: 345.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | 832f1ac0ec47ccd2d3aace083fe8ff5a79584c49a6f248f9621319a87735beba |
| MD5 | d5642aa5394d25a87efee73104d23da2 |
| BLAKE2b-256 | 86afc3d6d76155111169f7404187cd8bb49a84f4be2155d3903e8b6b22cf1a95 |
### File details: freamon-0.3.29-py3-none-any.whl (built distribution)

- Download URL: freamon-0.3.29-py3-none-any.whl
- Upload date:
- Size: 401.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | 89502ccc87672d2e92d644d67f1f854563d06fbf29f8a5f78badafea52f97744 |
| MD5 | 8a071946479aabffa581e525cb656336 |
| BLAKE2b-256 | decba3688812fdc5a2f63bcf5e5847ce873986311f34680b8e0e2351df689591 |