An implementation of randomization-based ML algorithms.
🧠 Random-ML: A Randomized Machine Learning Framework
Random-ML is a high-performance machine learning library that implements randomized neural networks and functional link architectures.
It provides fast, accurate, and scalable models like Random Vector Functional Link (RVFL), Extreme Learning Machines (ELM), and Stochastic Configuration Networks (SCN).
🚀 Key Features
- Randomized Neural Networks → Avoids backpropagation, leading to faster training.
- Supports Ensemble Learning → Includes Boosting & Bagging.
- Multiple Activation Functions → relu, sigmoid, tanh, leaky_relu, sin.
- Works with Scikit-Learn → Seamlessly integrates into the existing ML ecosystem.
- Customizable Models → Allows tuning hidden units, weight initialization, and regularization.
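For reference, the five activations listed above can be written as plain NumPy functions. This is only a sketch: the leaky-ReLU slope of 0.01 is an assumed default, not necessarily what the library uses.

```python
import numpy as np

# The five activations named above, as plain NumPy functions
activations = {
    "relu":       lambda x: np.maximum(x, 0.0),
    "sigmoid":    lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh":       np.tanh,
    "leaky_relu": lambda x: np.where(x > 0, x, 0.01 * x),  # assumed slope 0.01
    "sin":        np.sin,
}

x = np.linspace(-2.0, 2.0, 5)
for name, f in activations.items():
    print(name, np.round(f(x), 3))
```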
⚡ Installation
You can install Random-ML via pip:
pip install random-ml
or install from source:
git clone https://github.com/yourusername/random-ml.git
cd random-ml
pip install -e .
🛠 Usage Examples
🚀 1. Regression with RVFL
from random_ml.regressor import RVFLRegressor
import numpy as np
# Generate sample data
X = np.random.rand(100, 10)
y = np.sin(X[:, 0]) # Regression target
# Train RVFL model
rvfl_reg = RVFLRegressor(in_dim=10, n_hidden_units=50, activation="relu")
rvfl_reg.fit(X, y)
y_pred = rvfl_reg.predict(X)
print("Predictions:", y_pred[:5])
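What makes training fast is that RVFL never backpropagates: the hidden weights stay at their random initial values, and only the output weights are solved in closed form over the hidden features plus a direct link to the inputs. A minimal NumPy sketch of that idea (an illustration of the general technique, not the library's internals; the ridge term `lam` is an assumed regularizer):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = np.sin(X[:, 0])

n_hidden, lam = 50, 1e-3
W = rng.standard_normal((10, n_hidden))   # random hidden weights, never trained
b = rng.standard_normal(n_hidden)
H = np.maximum(X @ W + b, 0.0)            # ReLU hidden features
D = np.hstack([H, X])                     # direct link: hidden + raw inputs
# Output weights via ridge-regularized least squares (closed form)
beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
mse = np.mean((D @ beta - y) ** 2)
print(f"train MSE: {mse:.4f}")
```

The only "training" is one linear solve, which is why these models fit in a fraction of the time gradient descent would take.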
🔥 2. Classification with RVFL
from random_ml.classifier import RVFLClassifier
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
# Generate classification data
X, y = make_moons(n_samples=500, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train RVFL classifier
rvfl_cls = RVFLClassifier(in_dim=2, n_hidden_units=50, activation="relu")
rvfl_cls.fit(X_train, y_train)
y_pred = rvfl_cls.predict(X_test)
# Accuracy
from sklearn.metrics import accuracy_score
print(f"RVFL Accuracy: {accuracy_score(y_test, y_pred) * 100:.2f}%")
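Under the hood, RVFL-style classifiers typically reduce classification to regression: labels are one-hot encoded, output weights are fit by least squares, and the predicted class is the argmax over the outputs. A sketch of that reduction in plain NumPy (an illustration of the general technique, not the library's code):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=500, noise=0.2, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 50))
b = rng.standard_normal(50)
phi = lambda Z: np.hstack([np.maximum(Z @ W + b, 0.0), Z])  # hidden + direct link

T = np.eye(2)[y_tr]                          # one-hot targets
beta = np.linalg.lstsq(phi(X_tr), T, rcond=None)[0]
scores = phi(X_te) @ beta                    # one score column per class
acc = (scores.argmax(axis=1) == y_te).mean() # argmax picks the class
print(f"accuracy: {acc:.2f}")
```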
🤖 3. Using Boosting & Bagging for Classification
from random_ml.ensemble import RVFLBoostingClassifier, RVFLBaggingClassifier
rvfl_kwargs = {"n_hidden_units": 50, "activation": "relu"}
# AdaBoost with RVFL
boosting_cls = RVFLBoostingClassifier(in_dim=2, rvfl_kwargs=rvfl_kwargs, n_estimators=20, random_state=42)
boosting_cls.fit(X_train, y_train)
y_pred_boosting = boosting_cls.predict(X_test)
# Bagging with RVFL
bagging_cls = RVFLBaggingClassifier(in_dim=2, rvfl_kwargs=rvfl_kwargs, n_estimators=20, random_state=42)
bagging_cls.fit(X_train, y_train)
y_pred_bagging = bagging_cls.predict(X_test)
print(f"Boosting Accuracy: {accuracy_score(y_test, y_pred_boosting) * 100:.2f}%")
print(f"Bagging Accuracy: {accuracy_score(y_test, y_pred_bagging) * 100:.2f}%")
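Bagging itself is model-agnostic: fit each estimator on a bootstrap resample of the training set, then majority-vote their predictions. A sketch of that loop with a stand-in scikit-learn base learner (logistic regression here, purely for illustration):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=500, noise=0.2, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

rng = np.random.default_rng(42)
models = []
for _ in range(20):
    idx = rng.integers(0, len(X_tr), len(X_tr))      # bootstrap resample
    models.append(LogisticRegression().fit(X_tr[idx], y_tr[idx]))

votes = np.stack([m.predict(X_te) for m in models])  # (n_estimators, n_test)
y_vote = (votes.mean(axis=0) >= 0.5).astype(int)     # majority vote
acc = accuracy_score(y_te, y_vote)
print(f"bagged accuracy: {acc:.2f}")
```

Replacing the base learner with an RVFL model gives the bagging variant above; boosting differs only in that it reweights samples between rounds instead of resampling uniformly.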
📖 Documentation & API Reference
Full documentation is available at ReadTheDocs.
To build the documentation locally:
cd docs
make html
Then open docs/build/html/index.html in your browser.
📄 Available Modules
| Module | Description |
|---|---|
| random_ml.rvfl | Implements RVFL-based models |
| random_ml.ensemble | Includes Boosting & Bagging implementations |
| random_ml.classifier | Provides RVFL-based classifiers |
| random_ml.regressor | Contains RVFL-based regressors |
| random_ml.mlpedrvfl | Implements the MLPedRVFL model |
🤝 Contributing
We welcome contributions to improve Random-ML! To contribute:
- Fork the repository.
- Clone your fork:
git clone https://github.com/yourusername/random-ml.git
cd random-ml
- Install dependencies:
pip install -r requirements.txt
- Create a new branch:
git checkout -b feature_branch
- Make changes & commit:
git add .
git commit -m "Describe your changes"
- Push to GitHub & submit a pull request.
For details, see Contributing Guide.
🏆 Citing random-ml
A preprint has been submitted to Pattern Recognition Letters; please cite it:
Vinay Kumar Giri, Rahul Goswami, Vimlesh Kumar, Synergistic Regression through MLP and edRVFL Fusion: The MLPedRVFL Model for Enhanced Performance and Efficiency, preprint submitted to Pattern Recognition Letters.
BibTeX Citation
@misc{randomml2024,
A placeholder for bibTeX citation
}
Coming soon: the paper is under review.
📝 License
Random-ML is released under the MIT License. See LICENSE for details.
🎯 What's Next?
- ✅ Implement more activation functions
- ✅ Improve training speed
- ✅ Support GPU acceleration
- ✅ Add new ensemble methods
- ✅ Open to community contributions!
🔥 Now you're ready to use Random-ML for fast and scalable machine learning! 🚀
Let us know if you need help or have feature requests! 😊✨