Inductive-bias Learning
IBLM: Inductive-bias Learning Models
What is IBL?
IBL (Inductive-bias Learning) is a new machine learning modeling method that uses an LLM to infer the structure of the model itself from a dataset and output it as Python code. The learned model (a "code model") can then be used like an ordinary machine learning model to make predictions on new data. In this repository, you can try different IBL learning methods.

- Currently, only binary classification with simple methods is supported. A sketch of what a generated code model might look like is shown below.
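For illustration only, a generated code model might resemble the following. This is a hand-written sketch, not actual library output; the structure, coefficients, and feature names are produced by the LLM and will vary from run to run:

```python
import numpy as np
import pandas as pd

def predict(x: pd.DataFrame) -> np.ndarray:
    # Hypothetical rule standing in for LLM-generated code: a linear
    # score over two assumed feature columns, squashed to a probability
    # with a sigmoid. Column names here are illustrative assumptions.
    score = 0.8 * x["feature_1"] - 0.5 * x["feature_2"] + 0.1
    return 1.0 / (1.0 + np.exp(-score.to_numpy()))
```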
Examples
Use the link below to try it out immediately on Google Colab.
How to Use
- Installation and Import

  ```bash
  pip install iblm
  ```

  ```python
  import iblm
  ```
- Setting

  - OpenAI

    ```python
    import os

    os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

    ibl = iblm.IBLModel(api_type="openai", model_name="gpt-4-0125-preview", objective="binary")
    ```
  - Azure OpenAI

    ```python
    os.environ["AZURE_OPENAI_KEY"] = "YOUR_API_KEY"
    os.environ["AZURE_OPENAI_ENDPOINT"] = "xxx"
    os.environ["OPENAI_API_VERSION"] = "xxx"

    ibl = iblm.IBLModel(api_type="azure", model_name="gpt-4-0125-preview", objective="binary")
    ```
  - Google API

    ```python
    os.environ["GOOGLE_API_KEY"] = "YOUR_API_KEY"

    ibl = iblm.IBLModel(api_type="gemini", model_name="gemini-pro", objective="binary")
    ```
  - Anthropic API

    ```python
    os.environ["ANTHROPIC_API_KEY"] = "YOUR_API_KEY"

    ibl = iblm.IBLModel(api_type="", model_name="", objective="binary")
    ```
- Model Learning

  Currently, only small datasets can be used.

  ```python
  code_model = ibl.fit(x_train, y_train)

  print(code_model)
  ```
- Model Predictions

  ```python
  y_proba = ibl.predict(x_test)
  ```
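Putting the steps together, a minimal end-to-end run might look like the following sketch. The CSV file names and the "target" column are assumptions for illustration; any pandas DataFrame of features with a binary target series should work:

```python
import os

import pandas as pd
import iblm

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

# Hypothetical dataset: feature columns plus a binary "target" column.
df = pd.read_csv("train.csv")
x_train, y_train = df.drop(columns=["target"]), df["target"]

ibl = iblm.IBLModel(api_type="openai", model_name="gpt-4-0125-preview", objective="binary")

code_model = ibl.fit(x_train, y_train)  # the LLM writes the model as Python code
print(code_model)                       # inspect the generated code model

x_test = pd.read_csv("test.csv")
y_proba = ibl.predict(x_test)           # predicted probabilities from the code model
```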
Inductive-bias Learning Models
- Inductive-bias Learning

  Normal Inductive-bias Learning.

  ```python
  from iblm import IBLModel

  iblm = IBLModel(
      api_type="openai",
      model_name="gpt-4-0125-preview",
      objective="binary"
  )
  ```
- Inductive-bias Learning bagging

  Samples data from a given dataset, creates multiple models, and uses the average of those models' outputs as the predicted value. A usage sketch follows the code below.

  ```python
  from iblm import IBLBaggingModel

  iblbagging = IBLBaggingModel(
      api_type="openai",
      model_name="gpt-4-0125-preview",
      objective="binary",
      num_model=20,     # Number of models to create
      max_sample=2000,  # Maximum number of samples from the dataset
      min_sample=300    # Minimum number of samples from the dataset
  )
  ```
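Assuming IBLBaggingModel exposes the same fit/predict interface as IBLModel (an assumption based on the examples above, not confirmed by this page), usage would look like:

```python
# Fit multiple code models on random subsamples of the training data,
# then average their predicted probabilities (assumed interface).
iblbagging.fit(x_train, y_train)
y_proba = iblbagging.predict(x_test)
```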
Supported LLMs
- OpenAI
  - gpt-4-0125-preview
  - gpt-3.5-turbo-0125
- Azure OpenAI
  - gpt-4-0125-preview
  - gpt-3.5-turbo-0125
- Google
  - gemini-pro
- Anthropic
  - claude-3-opus-20240229
  - claude-3-sonnet-20240229
Contributor
Cite
If you find this repo helpful, please cite the following paper:
```bibtex
@article{tanaka2023inductive,
  title={Inductive-bias Learning: Generating Code Models with Large Language Model},
  author={Tanaka, Toma and Emoto, Naofumi and Yumibayashi, Tsukasa},
  journal={arXiv preprint arXiv:2308.09890},
  year={2023}
}
```