# Forecasting Package 📈
An enterprise-grade time series forecasting package with MLOps capabilities, supporting multiple forecasting models, automated hyperparameter tuning, and production-ready deployment.
## ✨ Features

### 🎯 Core Capabilities

- **Multiple Forecasting Models**: ARIMA, Prophet, LSTM, and ensemble methods
- **Automated Model Selection**: AutoML with hyperparameter optimization
- **Data Validation**: Comprehensive data quality checks and preprocessing
- **Feature Engineering**: Automated temporal and Fourier feature generation
- **Evaluation Framework**: Backtesting, cross-validation, and comprehensive metrics
### 🚀 MLOps Integration

- **Experiment Tracking**: MLflow integration for experiment management
- **Model Registry**: Version control and model lifecycle management
- **Performance Monitoring**: Real-time model performance and drift detection
- **A/B Testing**: Compare multiple models in production
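The package's own monitoring API isn't shown on this page, but to illustrate what drift detection measures, here is a minimal sketch of the Population Stability Index (one common drift statistic) in plain NumPy, independent of the package:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between a reference window and a recent
    window of a feature or prediction distribution. PSI above ~0.2 is a
    common rule-of-thumb threshold for meaningful drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Proportion of each sample falling into each reference bin
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero in empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 1000)   # training-time distribution
drifted = rng.normal(0.8, 1.0, 1000)     # shifted production distribution
print(population_stability_index(reference, reference))  # near zero
print(population_stability_index(reference, drifted))    # clearly elevated
```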
### 🌐 Production Ready

- **REST API**: FastAPI-based prediction and training endpoints
- **Docker Support**: Containerized deployment with docker-compose
- **CI/CD Pipeline**: Automated testing, building, and deployment
- **Monitoring**: Prometheus and Grafana integration
- **Scalability**: Horizontal scaling support
## 📦 Installation

### Using pip

```bash
pip install forecasting-package
```

### From source

```bash
git clone https://github.com/username/forecasting-package.git
cd forecasting-package
pip install -e .
```

### Using Docker

```bash
docker pull ghcr.io/username/forecasting-package:latest
docker run -p 8000:8000 ghcr.io/username/forecasting-package:latest
```
## 🚀 Quick Start

### Basic Usage

```python
import pandas as pd

from forecasting.models.arima import ARIMAForecaster
from forecasting.data.validators import TimeSeriesValidator
from forecasting.evaluation.metrics import ForecastMetrics

# Load your time series data
df = pd.read_csv('data.csv', parse_dates=['date'])

# Validate data
validator = TimeSeriesValidator()
is_valid, issues = validator.validate(df['value'])

# Hold out the last 30 observations so the forecast can be scored out of sample
train, test = df['value'].values[:-30], df['value'].values[-30:]

# Create and train model
model = ARIMAForecaster(order=(1, 1, 1))
model.fit(train)

# Make predictions
forecast = model.predict(steps=30)

# Evaluate against the held-out observations
metrics = ForecastMetrics()
mae = metrics.mae(test, forecast)
print(f"MAE: {mae}")
```
### Using Prophet

```python
from forecasting.models.prophet import ProphetForecaster

# Create Prophet model with custom seasonality
model = ProphetForecaster(
    seasonality_mode='multiplicative',
    yearly_seasonality=True,
    weekly_seasonality=True
)

# Fit with exogenous variables
model.fit(
    y=df['value'].values,
    dates=df['date'].values,
    exog=df[['temperature', 'holiday']].values
)

# Forecast with confidence intervals
forecast, lower, upper = model.predict(steps=30, return_conf_int=True)
```
### Using LSTM

```python
from forecasting.models.lstm import LSTMForecaster

# Create LSTM model
model = LSTMForecaster(
    hidden_size=64,
    num_layers=2,
    dropout=0.2,
    learning_rate=0.001
)

# Train model
model.fit(
    y=df['value'].values,
    epochs=100,
    batch_size=32,
    validation_split=0.2
)

# Predict
forecast = model.predict(steps=30)
```
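An LSTM consumes fixed-length windows of the series rather than the raw 1-D array. As an illustration of how such windows are typically built (not necessarily this package's internal preprocessing), a pure-NumPy sketch:

```python
import numpy as np

def make_windows(y, lookback):
    """Slice a 1-D series into (lookback window -> next value) supervised
    pairs, the shape recurrent forecasters train on."""
    X = np.stack([y[i:i + lookback] for i in range(len(y) - lookback)])
    targets = y[lookback:]
    return X, targets

y = np.arange(10, dtype=float)
X, t = make_windows(y, lookback=3)
print(X.shape, t.shape)  # (7, 3) (7,)
print(X[0], t[0])        # [0. 1. 2.] 3.0
```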
### Ensemble Methods

```python
from forecasting.models.ensemble import EnsembleForecaster
from forecasting.models.arima import ARIMAForecaster
from forecasting.models.prophet import ProphetForecaster

# Create ensemble
ensemble = EnsembleForecaster(
    models=[
        ARIMAForecaster(order=(1, 1, 1)),
        ProphetForecaster()
    ],
    weights=[0.6, 0.4]  # Optional custom weights
)

# Train and predict
ensemble.fit(df['value'].values)
forecast = ensemble.predict(steps=30)
```
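Assuming the weights are applied as a convex combination of the member forecasts (the usual convention, though this page doesn't spell it out), the combination step amounts to:

```python
import numpy as np

# Hypothetical per-model forecasts for the same 3 future steps
arima_forecast = np.array([10.0, 12.0, 14.0])
prophet_forecast = np.array([11.0, 11.0, 15.0])

# Weighted average with the weights=[0.6, 0.4] from the example above
combined = 0.6 * arima_forecast + 0.4 * prophet_forecast
print(combined)  # [10.4 11.6 14.4]
```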
### AutoML

```python
from forecasting.models.tuning import AutoML

# Automated model selection and tuning
automl = AutoML(
    models=['arima', 'prophet', 'lstm'],
    metric='rmse',
    cv_folds=5
)

# Find best model
best_model = automl.fit(df['value'].values)

# Use best model
forecast = best_model.predict(steps=30)
```
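Conceptually, AutoML-style selection fits each candidate, scores it on held-out data, and keeps the winner. A toy sketch of that loop (the `(fit, predict)` callable pairs are a hypothetical stand-in, not this package's API):

```python
import numpy as np

def select_best_model(candidates, series, horizon=10):
    """Fit each candidate on a training split, score its forecast on the
    held-out tail with RMSE, and return the best-scoring name."""
    train, test = series[:-horizon], series[-horizon:]
    scores = {}
    for name, (fit, predict) in candidates.items():
        state = fit(train)
        forecast = predict(state, horizon)
        scores[name] = float(np.sqrt(np.mean((test - forecast) ** 2)))
    best = min(scores, key=scores.get)
    return best, scores

# Two toy "models": repeat the last value vs. repeat the training mean
naive_last = (lambda y: y[-1], lambda s, h: np.full(h, s))
mean_model = (lambda y: y.mean(), lambda s, h: np.full(h, s))

series = np.arange(100, dtype=float)  # strong upward trend
best, scores = select_best_model({'naive_last': naive_last, 'mean': mean_model}, series)
print(best)  # the last-value model tracks the trend far better here
```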
## 🔧 Advanced Usage

### Data Preprocessing

```python
from forecasting.data.preprocessors import TimeSeriesPreprocessor
from forecasting.data.feature_engineering import FeatureEngineer

# Preprocess data
preprocessor = TimeSeriesPreprocessor()
df_clean = preprocessor.handle_missing_values(df)
df_scaled, scaler = preprocessor.scale_data(df_clean)

# Engineer features
engineer = FeatureEngineer()
df_features = engineer.create_temporal_features(df_scaled)
df_features = engineer.create_fourier_features(df_features, n_terms=3)
```
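For intuition, Fourier features are sin/cos pairs of increasing frequency over a seasonal period; they give a smooth, low-dimensional encoding of seasonality. A plain pandas/NumPy sketch (column names are illustrative, not necessarily what `create_fourier_features` emits):

```python
import numpy as np
import pandas as pd

def fourier_terms(index, period, n_terms):
    """Build n_terms sin/cos pairs over a seasonal period of the given
    length, indexed like the input series."""
    t = np.arange(len(index))
    cols = {}
    for k in range(1, n_terms + 1):
        cols[f'sin_{period}_{k}'] = np.sin(2 * np.pi * k * t / period)
        cols[f'cos_{period}_{k}'] = np.cos(2 * np.pi * k * t / period)
    return pd.DataFrame(cols, index=index)

idx = pd.date_range('2024-01-01', periods=60, freq='D')
weekly = fourier_terms(idx, period=7, n_terms=3)
print(weekly.shape)  # (60, 6): three sin/cos pairs
```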
### Backtesting

```python
from forecasting.evaluation.backtesting import RollingOriginBacktester

# Set up backtester
backtester = RollingOriginBacktester(
    initial_window=100,
    horizon=10,
    step=1
)

# Run backtest
results = backtester.run(model, df['value'].values)

# Analyze results
print(f"Average RMSE: {results['avg_rmse']}")
print(f"Average MAE: {results['avg_mae']}")
```
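Rolling-origin evaluation grows the training window forward in time and scores each forecast on the next `horizon` points. A self-contained sketch of that splitting logic, using a naive last-value forecaster in place of a fitted model:

```python
import numpy as np

def rolling_origin_splits(n, initial_window, horizon, step=1):
    """Yield (train_end, test_range) pairs: the training window grows
    from initial_window, and each split is scored on the next horizon
    points."""
    end = initial_window
    while end + horizon <= n:
        yield end, range(end, end + horizon)
        end += step

series = np.sin(np.linspace(0, 20, 130))
errors = []
for train_end, test_idx in rolling_origin_splits(len(series), 100, 10):
    train = series[:train_end]
    forecast = np.full(10, train[-1])       # naive last-value forecast
    actual = series[list(test_idx)]
    errors.append(np.sqrt(np.mean((actual - forecast) ** 2)))
print(len(errors), float(np.mean(errors)))  # 21 splits, average RMSE
```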
### MLflow Integration

```python
from forecasting.mlops.tracking import ExperimentTracker
from forecasting.mlops.registry import ModelRegistry

# Track experiments
tracker = ExperimentTracker(
    experiment_name='sales_forecasting',
    tracking_uri='http://localhost:5000'
)

with tracker.start_run():
    # Train model
    model.fit(df['value'].values)

    # Log parameters
    tracker.log_params({'order': (1, 1, 1)})

    # Log metrics
    tracker.log_metrics({'rmse': 10.5, 'mae': 8.2})

    # Log model
    tracker.log_model(model, 'arima_model')

# Register model
registry = ModelRegistry(tracking_uri='http://localhost:5000')
version = registry.register_model(
    model_name='sales_forecaster',
    model=model,
    metadata={'dataset': 'sales_2024'}
)
```
## 🌐 API Usage

### Start the API Server

```bash
# Using uvicorn
uvicorn forecasting.api.main:app --host 0.0.0.0 --port 8000

# Using Docker
docker-compose up -d
```
### Make Predictions

```python
import requests

# Predict endpoint
response = requests.post(
    'http://localhost:8000/api/v1/predict',
    json={
        'model_name': 'sales_forecaster',
        'steps': 30,
        'confidence_level': 0.95
    }
)

forecast = response.json()
print(forecast['predictions'])
```
### Train a Model via API

```python
# Training endpoint (the series is sent as a plain list of floats)
data_list = df['value'].tolist()

response = requests.post(
    'http://localhost:8000/api/v1/train',
    json={
        'model_type': 'arima',
        'data': data_list,
        'config': {
            'order': [1, 1, 1],
            'seasonal_order': [0, 0, 0, 0]
        }
    }
)

result = response.json()
print(f"Model trained: {result['model_id']}")
```
## 📊 Monitoring

### Prometheus Metrics

The API exposes metrics at `/metrics`:

- `forecast_requests_total`: Total prediction requests
- `forecast_request_duration_seconds`: Request duration
- `model_prediction_errors_total`: Prediction errors
- `active_models`: Number of active models
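These metrics use the standard Prometheus text exposition format, so they are easy to inspect without a full Prometheus server. A minimal parser sketch (the sample payload is hypothetical; for real scraping, fetch `http://localhost:8000/metrics`):

```python
def parse_prometheus_text(text):
    """Minimal parser for the Prometheus text exposition format: returns
    {metric_name: value}, skipping HELP/TYPE comment lines and dropping
    any {label="..."} selectors from metric names."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        name, _, value = line.rpartition(' ')
        metrics[name.split('{')[0]] = float(value)
    return metrics

sample = """\
# HELP forecast_requests_total Total prediction requests
# TYPE forecast_requests_total counter
forecast_requests_total{model="sales_forecaster"} 1042
active_models 3
"""
parsed = parse_prometheus_text(sample)
print(parsed)  # {'forecast_requests_total': 1042.0, 'active_models': 3.0}
```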
### Grafana Dashboards

Access Grafana at http://localhost:3000 (default credentials: `admin`/`admin`).

Pre-configured dashboards:

- Model Performance Overview
- API Request Metrics
- System Resource Usage
- Model Drift Detection
## 🧪 Testing

```bash
# Run all tests
make test

# Run with coverage
make test-coverage

# Run a specific test suite
pytest tests/unit/test_models.py -v

# Run integration tests
pytest tests/integration/ -v
```
## 📚 Documentation
Full documentation is available at https://docs.forecasting-package.example.com
## 🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

```bash
# Set up the development environment
make install-dev

# Run pre-commit checks
make pre-commit

# Submit a pull request
git checkout -b feature/your-feature
git commit -m "Add your feature"
git push origin feature/your-feature
```
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 🙏 Acknowledgments
- Prophet by Facebook
- statsmodels for ARIMA implementation
- PyTorch for LSTM models
- MLflow for experiment tracking
- FastAPI for API framework
## 📞 Support
- 📧 Email: support@forecasting-package.example.com
- 💬 Slack: Join our community
- 🐛 Issues: GitHub Issues
- 📖 Docs: Documentation
## 🗺️ Roadmap
- Support for additional models (XGBoost, LightGBM)
- Real-time streaming predictions
- Automated anomaly detection
- Multi-variate forecasting
- Cloud deployment templates (AWS, GCP, Azure)
- Web UI for model management
Made with ❤️ by the Forecasting Package Team
## File details

Details for the file `forecaster_ai-0.1.0.tar.gz`.

### File metadata

- Download URL: forecaster_ai-0.1.0.tar.gz
- Upload date:
- Size: 8.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `0fb8a94d53107fce428523cbfefa292d91bdd90011e566ae3c848e0b1d14c21a` |
| MD5 | `bacbac5e728202f16041c54a91ce7880` |
| BLAKE2b-256 | `2ba50cf7dc8f1711e3d7e4b010ed4f802cf04418c4b7d62d95909c03fd0665cc` |
## File details

Details for the file `forecaster_ai-0.1.0-py3-none-any.whl`.

### File metadata

- Download URL: forecaster_ai-0.1.0-py3-none-any.whl
- Upload date:
- Size: 7.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `c6af9322f28d7395f0690e66ddda7acb231a67bce48f8954f541be51bddf3a51` |
| MD5 | `66c5e91de933e3a7808b33f392b0d5ef` |
| BLAKE2b-256 | `8e692c638dbd3563efdefb8f27b8df7366449c25ee8f4e0b43f10f53b5da4726` |