Inductive-bias Learning
IBLM: Inductive-bias Learning Models
What is IBL?
IBL (Inductive-bias Learning) is a new machine learning modeling method that uses an LLM to infer the structure of a model from a dataset and output it as Python code. The learned model (a "code model") can then be used like an ordinary machine learning model to make predictions on new data. This repository lets you try several IBL learning methods.
- Currently, only binary classification (with simple methods) is supported.
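A code model produced by IBL is ordinary Python source. The sketch below shows what such generated code might look like; the rule and weights are invented for illustration and are not actual IBL output:

```python
import math

# Hypothetical example of a generated "code model": a plain Python
# function mapping rows of features to class-1 probabilities.
def predict(x):
    probabilities = []
    for row in x:
        # Invented "learned" rule: logistic function of a weighted feature sum.
        score = 0.8 * row[0] - 0.5 * row[1] + 0.1
        probabilities.append(1.0 / (1.0 + math.exp(-score)))
    return probabilities

print(predict([[1.0, 0.0], [0.0, 2.0]]))
```

Because the model is source code, it can be inspected, edited, and run anywhere Python runs, with no LLM call at prediction time.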
Examples
Use the link below to try it out immediately on Google Colab.
How to Use
- Installation and Import

  ```shell
  pip install iblm
  ```

  ```python
  import iblm
  ```
- Setting

  - OpenAI

    ```python
    os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

    ibl = iblm.IBLModel(api_type="openai", model_name="gpt-4-0125-preview", objective="binary")
    ```

  - Azure OpenAI

    ```python
    os.environ["AZURE_OPENAI_KEY"] = "YOUR_API_KEY"
    os.environ["AZURE_OPENAI_ENDPOINT"] = "xxx"
    os.environ["OPENAI_API_VERSION"] = "xxx"

    ibl = iblm.IBLModel(api_type="azure", model_name="gpt-4-0125-preview", objective="binary")
    ```

  - Google API

    ```python
    os.environ["GOOGLE_API_KEY"] = "YOUR_API_KEY"

    ibl = iblm.IBLModel(api_type="gemini", model_name="gemini-pro", objective="binary")
    ```

  - Anthropic API

    ```python
    os.environ["ANTHROPIC_API_KEY"] = "YOUR_API_KEY"

    ibl = iblm.IBLModel(api_type="", model_name="", objective="binary")
    ```
- Model Learning

  Currently, only small datasets can be handled.

  ```python
  code_model = ibl.fit(x_train, y_train)

  print(code_model)
  ```
- Model Predictions

  ```python
  y_proba = ibl.predict(x_test)
  ```
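Since `predict` returns class-1 probabilities, standard binary-classification evaluation applies. A minimal sketch with stand-in numbers (not actual model output), thresholding at 0.5 and computing accuracy:

```python
# Stand-in probabilities, in place of y_proba = ibl.predict(x_test).
y_proba = [0.91, 0.12, 0.67, 0.33]
y_test = [1, 0, 1, 1]

# Threshold at 0.5 to obtain hard labels, then compute accuracy.
y_pred = [1 if p >= 0.5 else 0 for p in y_proba]
accuracy = sum(int(p == t) for p, t in zip(y_pred, y_test)) / len(y_test)
print(accuracy)  # 3 of 4 predictions match, i.e. 0.75
```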
Inductive-bias Learning Models
- Inductive-bias Learning

  Normal Inductive-bias Learning.

  ```python
  from iblm import IBLModel

  iblm = IBLModel(
      api_type="openai",
      model_name="gpt-4-0125-preview",
      objective="binary",
  )
  ```
- Inductive-bias Learning bagging

  Multiple models are created by sampling data from the given dataset, and the average of their outputs is used as the predicted value.

  ```python
  from iblm import IBLBaggingModel

  iblbagging = IBLBaggingModel(
      api_type="openai",
      model_name="gpt-4-0125-preview",
      objective="binary",
      num_model=20,     # Number of models to create
      max_sample=2000,  # Maximum number of samples from the dataset
      min_sample=300,   # Minimum number of samples from the dataset
  )
  ```
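The averaging step behind bagging can be sketched in plain Python; the `bagging_predict` helper and the stand-in models below are illustrative, not the library's API:

```python
# Hypothetical sketch of the bagging idea: each "code model" is a function
# returning one class-1 probability per row; predictions are averaged.
def bagging_predict(code_models, x):
    per_model = [m(x) for m in code_models]
    # Average the probabilities row-wise across all models.
    return [sum(ps) / len(ps) for ps in zip(*per_model)]

# Two stand-in code models with fixed outputs for demonstration.
m1 = lambda x: [0.9 for _ in x]
m2 = lambda x: [0.5 for _ in x]
print(bagging_predict([m1, m2], [[0], [1]]))  # averages to about 0.7 per row
```

Averaging over models built on different samples reduces the variance of any single generated code model.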
Supported LLMs
- OpenAI
  - gpt-4-0125-preview
  - gpt-3.5-turbo-0125
- Azure OpenAI
  - gpt-4-0125-preview
  - gpt-3.5-turbo-0125
- Google
  - gemini-pro
- Anthropic
  - claude-3-opus-20240229
  - claude-3-sonnet-20240229
Contributor
Cite
If you find this repo helpful, please cite the following paper:

```bibtex
@article{tanaka2023inductive,
  title={Inductive-bias Learning: Generating Code Models with Large Language Model},
  author={Tanaka, Toma and Emoto, Naofumi and Yumibayashi, Tsukasa},
  journal={arXiv preprint arXiv:2308.09890},
  year={2023}
}
```