Agentic AI system for generating Bayesian optimization code from natural language using LangGraph and OpenAI GPT models
Honegumi RAG Assistant
An agentic workflow for Bayesian optimization code generation, built with LangGraph and powered by OpenAI models. Honegumi RAG Assistant codifies the end-to-end pipeline—parameter extraction, skeleton code generation, documentation retrieval, and code synthesis—into reusable nodes orchestrated as a LangGraph. LangSmith integration tracks and visualizes your graph executions. The result? Complete, ready-to-run Bayesian optimization code generated in seconds from natural language descriptions.
🚀 Why Honegumi RAG Assistant?
- Agentic LangGraph design lets you describe problems in natural language and get production-ready code
- Fast iteration: eliminate manual coding and focus on science
- Built on Honegumi: Leverages deterministic skeleton generation from Honegumi
- Intelligent RAG: Retrieves relevant Ax Platform documentation to enhance code generation
- LangSmith-backed for graph tracking, versioning, and observability
- Production-ready: versionable, testable, pip-installable
🛠️ Prerequisites
- Conda (Miniconda or Anaconda)
- Python 3.11+
- OpenAI API key
- LangSmith API key (optional)
📘 Google Colab Tutorial
To help you get started quickly, we've prepared an interactive Google Colab tutorial:
Google Colab Tutorial: Getting Started with Honegumi RAG Assistant
In this tutorial, you'll learn how to:
- Install Honegumi RAG Assistant and all necessary dependencies on Colab
- Set up API keys using Colab Secrets
- Build a vector store from Ax Platform documentation
- Describe your optimization problem and generate code
- View the generated code in your Google Drive
The tutorial runs entirely in Colab—no local setup required. All you need is access to your Google Drive and valid OpenAI/LangSmith API keys.
Installation via pip
1. Create and activate a conda environment

   conda create -n honegumi_rag python=3.11 -y
   conda activate honegumi_rag
2. Install via pip

   pip install honegumi-rag-assistant
3. Configure your API keys

   Honegumi RAG Assistant automatically looks for a file named .env in your current working directory (or any parent directory) and loads any keys it finds. In the folder where you'll run the CLI (or in any ancestor), create a file called .env containing:

   OPENAI_API_KEY=sk-...
   LANGCHAIN_API_KEY=lsv2_...
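For intuition, this is roughly what .env loading does (a minimal stdlib sketch; the assistant's actual loading logic may differ, e.g. it may use python-dotenv):

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: parse KEY=VALUE lines into os.environ.
    Illustrative only; not the assistant's actual implementation."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite keys already set in the environment
            os.environ.setdefault(key.strip(), value.strip())
```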
4. Build the vector store (one-time setup)

   For best results with documentation retrieval, build the vector store:

   # Download the build script
   wget https://raw.githubusercontent.com/hasan-sayeed/honegumi_rag_assistant/main/scripts/build_vector_store.py

   # Run it
   python build_vector_store.py --output ./ax_docs_vectorstore

   # Set the path in your .env
   echo "AX_DOCS_VECTORSTORE_PATH=./ax_docs_vectorstore" >> .env
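Conceptually, building a vector store starts by splitting the documentation into overlapping text chunks before embedding them. A minimal chunking sketch (the chunk size, overlap, and character-based splitting are assumptions, not the build script's actual parameters):

```python
def chunk_text(text, size=500, overlap=50):
    """Split text into overlapping character chunks (illustrative
    parameters; real documentation chunkers often split on tokens
    or section boundaries instead)."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks
```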
5. Run the assistant

   Start the honegumi-rag pipeline:

   honegumi-rag
Honegumi RAG Assistant will:
- Prompt you to describe your Bayesian optimization problem
- Extract parameters (objectives, constraints, search space, etc.)
- Generate a deterministic code skeleton using Honegumi
- Retrieve relevant Ax Platform documentation
- Generate complete, runnable Python code
- Stream the code generation in real-time
- Optionally save the generated code to a file
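The stages above can be sketched as a simple sequential pipeline. Every function below is a hypothetical stand-in that only illustrates the data flow; the real implementation orchestrates these stages as LangGraph nodes, with the retrievers running concurrently:

```python
# Hypothetical stand-ins for the assistant's agents (illustration only).
def extract_parameters(text):
    # Parameter Selector: parse objectives, variables, etc. from text
    return {"objective": "maximize", "variables": ["temperature"]}

def generate_skeleton(params):
    # Skeleton Generator: deterministic template from Honegumi
    return f"# skeleton optimizing {params['variables']}"

def plan_retrieval(params):
    # Retrieval Planner: one documentation query per variable
    return [f"ax docs: {v}" for v in params["variables"]]

def retrieve(query):
    # Retriever: fetch relevant documentation passages
    return f"docs for {query}"

def write_code(skeleton, docs):
    # Code Writer: combine skeleton and retrieved docs into final code
    return skeleton + "\n# informed by " + "; ".join(docs)

def run_pipeline(problem_description):
    params = extract_parameters(problem_description)
    skeleton = generate_skeleton(params)
    queries = plan_retrieval(params)
    docs = [retrieve(q) for q in queries]  # run in parallel in the real graph
    return write_code(skeleton, docs)
```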
Usage Examples
Interactive Mode (Default)
honegumi-rag
You'll be prompted:
Please describe your Bayesian optimization problem.
(Press Enter when finished)
Your problem:
The assistant generates complete code and displays it in real-time.
Save Generated Code
honegumi-rag --output-dir ./my_experiments
Use Different Models
Customize which GPT models are used for each agent:
honegumi-rag \
--code-writer-model gpt-5 \
--param-selector-model gpt-4o \
--retrieval-planner-model gpt-4o
Key Features
Multi-Agent Architecture
- Parameter Selector: Extracts optimization parameters from natural language
- Skeleton Generator: Uses Honegumi for deterministic code templates
- Retrieval Planner: Generates intelligent documentation queries
- Parallel Retrievers: Concurrent documentation retrieval for speed
- Code Writer: GPT-5 powered code generation with streaming
- Reviewer (optional): Quality assessment and refinement
Advanced Features
- LangSmith Integration: Full tracing and debugging support
- Streaming Output: See code generation in real-time
- Flexible Models: Mix GPT-5 and GPT-4o for cost-performance optimization
- Optional Save: Print code or save to file—your choice
Command Line Arguments
| Argument | Description | Default |
|---|---|---|
| --output-dir | Save the generated script to the specified directory (if omitted, code is only printed, not saved) | None (no save) |
| --debug | Enable debug mode with detailed logging | False |
| --review | Enable the Reviewer agent (slower, more accurate) | False |
| --param-selector-model | Model for the Parameter Selector agent | gpt-5 |
| --retrieval-planner-model | Model for the Retrieval Planner agent | gpt-5 |
| --code-writer-model | Model for the Code Writer agent | gpt-5 |
| --reviewer-model | Model for the Reviewer agent | gpt-4o |
Example Problems
Chemical Process Optimization
Optimize temperature (100-300°C), pressure (1-5 bar), and catalyst concentration (0.1-1.0 M)
to maximize conversion rate in a catalytic reaction.
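To make the chemical-process example concrete, here is how its search space could be encoded, with a random-search baseline standing in for the Bayesian loop. The objective function here is a made-up placeholder, and the generated code would use Ax's Bayesian optimization rather than random sampling:

```python
import random

# Search space from the example problem (units in comments)
bounds = {
    "temperature": (100.0, 300.0),   # degrees C
    "pressure": (1.0, 5.0),          # bar
    "catalyst_conc": (0.1, 1.0),     # M
}

def sample(rng):
    """Draw one candidate uniformly from the box-constrained space."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}

def conversion_rate(x):
    """Hypothetical objective: a smooth placeholder, not a real
    reaction model."""
    return -(x["temperature"] - 220.0) ** 2 / 1e4 + x["catalyst_conc"]

rng = random.Random(0)
best = max((sample(rng) for _ in range(100)), key=conversion_rate)
```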
Materials Design
Optimize composition of a polymer blend: Component A (0-100%), Component B (0-100%),
and curing temperature (80-150°C) to maximize tensile strength while minimizing cost.
Machine Learning Hyperparameters
Optimize neural network hyperparameters: learning rate (1e-5 to 1e-1),
batch size (16 to 256), and dropout rate (0.1 to 0.5) to maximize validation accuracy.
Documentation & Support
- Full Documentation: GitHub Repository
- Google Colab Tutorial: Interactive Tutorial
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: hasan.sayeed@utah.edu
Feedback & Feature Requests
This project demonstrates a proof of concept of what's possible with agentic systems for Bayesian optimization. While Honegumi RAG Assistant works out-of-the-box for many scenarios, your use case may involve more complex pipelines, custom constraints, or specific modeling needs.
Have something bigger in mind? Want Honegumi RAG Assistant to handle advanced features or integrate with your workflow?
We'd love to hear from you!
- Open a GitHub issue
- Start a discussion
- Or reach out directly at hasan.sayeed.71.93@gmail.com
License
MIT License - see LICENSE.txt for details.
Acknowledgments
- Built with PyScaffold
- Powered by LangGraph and LangChain
- Skeleton generation by Honegumi
- Uses Meta's Ax Platform for Bayesian optimization
Download files
Source Distribution
Built Distribution
File details
Details for the file honegumi_rag_assistant-0.1.7.tar.gz.
File metadata
- Download URL: honegumi_rag_assistant-0.1.7.tar.gz
- Upload date:
- Size: 685.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d47b306a11ea6a2d5b479b61fb95dc7930b662f0cd9a3204c4f030e12d81a944 |
| MD5 | ef2c52bec24f899081880b16ac638fbf |
| BLAKE2b-256 | 28a142c0d9414b5e62932613156d31420f312e8e62ac12b486692a57c518055a |
File details
Details for the file honegumi_rag_assistant-0.1.7-py3-none-any.whl.
File metadata
- Download URL: honegumi_rag_assistant-0.1.7-py3-none-any.whl
- Upload date:
- Size: 46.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7697c27499c2056694c7f8fd7b2dd447cbe81a05960b97d7fa04bbc9346d722e |
| MD5 | 8c27ef75b0d11c5879f34d8d0d8bf4b2 |
| BLAKE2b-256 | fc4feb3338876936c52b31be51078732e849be1d9ba2ebf61dee3d2256e88865 |