Enable AI
Natural language interface for REST APIs with MCP server support
Transform natural language queries into API calls using a LangGraph-powered workflow.
🎯 Overview
enable-ai is a Python library that understands natural language and automatically:
- Matches queries to APIs - "list all users" → GET /users/
- Authenticates automatically - Handles JWT, OAuth, API keys
- Extracts parameters - "get user 5" → GET /users/5/
- Returns structured data - Clean JSON responses
- Manages state with LangGraph - Enables step re-runs and multi-turn refinement
- Exposes MCP server - Integrate with AI assistants like Claude Desktop
Use Cases:
- Build natural language interfaces for your APIs
- Create AI-powered chatbots for customer support
- Integrate with SaaS platforms for AI-driven workflows
Current scope: API-only. Database and document features are planned and documented as future extensions.
🚀 Installation & Setup
Step 1: Install the Package
pip install enable-ai
Or for development:
git clone https://github.com/EnableEngineering/enable_ai.git
cd enable_ai
pip install -e .
Step 2: Create Configuration Files
The module automatically detects config.json and .env from your working directory.
config.json - Define your data sources
{
"data_sources": {
"api": {
"type": "api",
"enabled": true,
"base_url": "http://localhost:8002/api",
"schema_path": "schemas/api_schema.json"
}
},
"security_credentials": {
"api": {
"jwt": {
"enabled": true,
"token_endpoint": "/token/",
"username_field": "email",
"password_field": "password",
"env": {
"username": "API_EMAIL",
"password": "API_PASSWORD"
}
}
}
}
}
.env - Store credentials securely
OPENAI_API_KEY=sk-proj-your-key-here
API_EMAIL=admin@example.com
API_PASSWORD=your_password
Important: Add .env to your .gitignore!
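As a quick sanity check before running the processor, the config.json structure shown above can be validated with a few lines of plain Python. This is an illustrative sketch only, not the library's actual validator (config_loader.py performs its own validation, and the exact rules may differ):

```python
import json

# Minimal structural check for the config.json layout shown above.
# Illustrative sketch; enable-ai's real validation may be stricter.
def validate_config(config: dict) -> list:
    errors = []
    sources = config.get("data_sources")
    if not isinstance(sources, dict) or not sources:
        errors.append("data_sources must be a non-empty object")
        return errors
    for name, src in sources.items():
        if "type" not in src:
            errors.append(f"data_sources.{name} is missing 'type'")
        if src.get("type") == "api" and "base_url" not in src:
            errors.append(f"data_sources.{name} needs a 'base_url'")
    return errors

config = json.loads("""
{
  "data_sources": {
    "api": {"type": "api", "enabled": true, "base_url": "http://localhost:8002/api"}
  }
}
""")
print(validate_config(config))  # [] means the structure looks sane
```

An empty list means the required keys are present; anything else pinpoints the missing field before the library ever makes a request.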
📖 Usage Guide
1. Python Library Usage
from enable_ai import NLPProcessor
# Initialize (auto-detects config.json and .env from current directory)
processor = NLPProcessor()
# Process natural language queries
result = processor.process("list all users")
print(result['summary']) # Natural language summary
print(result['data']) # Structured data from API
Advanced Usage - Custom Config
# Use specific config path
processor = NLPProcessor(config_path="/path/to/config.json")
# Pass custom config dictionary
config = {
"data_sources": {
"api": {
"type": "api",
"enabled": True,
"base_url": "http://api.example.com"
}
}
}
processor = NLPProcessor(config=config)
# Override authentication token
result = processor.process(
"list all users",
access_token="your_jwt_token_here"
)
2. MCP Server Usage
Run as a Model Context Protocol (MCP) server for AI assistants:
# Start MCP server (auto-detects config from current directory)
python3 -m enable_ai.mcp_server
Test with MCP Inspector
# Install MCP inspector
npm install -g @modelcontextprotocol/inspector
# Launch inspector
cd /path/to/your-backend
npx @modelcontextprotocol/inspector python3 -m enable_ai.mcp_server
Integrate with Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"enable_ai": {
"command": "python3",
"args": ["-m", "enable_ai.mcp_server"],
"cwd": "/path/to/your-backend"
}
}
}
Now Claude can process natural language queries against your APIs!
3. Command Line Usage
# Quick test from command line
cd /path/to/your-backend
python3 -c "
from enable_ai import NLPProcessor
proc = NLPProcessor()
result = proc.process('list all users')
print(result['summary'])
"
🏗️ Architecture
User Query: "list all users"
↓
LangGraph Workflow
↓
Parser (LLM-powered)
↓
Intent + Parameters
↓
Matcher (API only)
↓
Execution Plan
↓
Authentication (JWT/OAuth/API Key)
↓
Execute Query
↓
Results + Summary
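The stages above can be pictured as plain functions chained in order. The sketch below is an illustrative outline only: the real pipeline is a stateful LangGraph workflow with LLM-powered parsing, and these function names are hypothetical. The toy parser handles just one query pattern to show how data flows between stages:

```python
import re

# Toy walk-through of the pipeline stages for a single query.
# Real parsing is LLM-based; this regex version handles one pattern.
def parse(query: str) -> dict:
    # "get user 5" -> intent with action, resource, and an optional id
    m = re.match(r"(list|get) (?:all )?(\w+?)s?(?: (\d+))?$", query)
    action, resource, ident = m.groups()
    return {"action": action, "resource": resource + "s", "id": ident}

def match(intent: dict) -> dict:
    # Build an execution plan: HTTP method plus concrete path
    path = f"/{intent['resource']}/"
    if intent["id"]:
        path += f"{intent['id']}/"
    return {"method": "GET", "path": path}

def execute(plan: dict) -> dict:
    # Stand-in for the authenticated HTTP call
    return {"summary": f"{plan['method']} {plan['path']}", "data": []}

result = execute(match(parse("get user 5")))
print(result["summary"])  # GET /users/5/
```

Each stage consumes the previous stage's output, which is what makes the LangGraph re-run behaviour possible: any stage can be repeated with corrected state without restarting the whole chain.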
📦 Module Components
Core Modules
orchestrator.py - Main orchestrator
The central processing engine that coordinates all operations via a LangGraph workflow. Handles query parsing, authentication, execution planning, and response summarization. This is your main entry point via NLPProcessor class.
workflow.py - LangGraph pipeline
Defines the stateful workflow for parsing, planning, executing, and summarizing API calls. Enables step re-runs and multi-turn refinement when required.
query_parser.py - Natural language understanding
Converts user queries into structured intents using OpenAI GPT-4. Extracts entities (IDs, names, dates), determines actions (list, get, create, update, delete), and identifies target resources.
types.py - Type definitions and data structures
Defines type-safe classes for requests, responses, and errors. Includes APIRequest, APIResponse, APIError, and authentication credential structures.
Data Source Matchers
api_matcher.py - REST API matching
Matches parsed queries to REST API endpoints from OpenAPI/custom schemas. Handles path parameters, query strings, request bodies, and HTTP methods (GET, POST, PUT, DELETE, PATCH).
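Conceptually, matching pairs a parsed intent with an endpoint template from the schema and substitutes path parameters. The simplified sketch below illustrates the idea under that assumption; the real matcher also scores endpoint descriptions and handles query strings and request bodies:

```python
# Simplified schema-driven matching: choose an endpoint template and
# substitute path parameters from the parsed intent. Illustrative only.
endpoints = [
    {"path": "/users/", "method": "GET", "description": "List all users"},
    {"path": "/users/{id}/", "method": "GET", "description": "Get user by ID"},
]

def match_endpoint(intent: dict) -> str:
    wants_id = "id" in intent["params"]
    for ep in endpoints:
        has_placeholder = "{id}" in ep["path"]
        if ep["method"] == intent["method"] and has_placeholder == wants_id:
            return ep["path"].format(**intent["params"])
    raise LookupError("no matching endpoint")

print(match_endpoint({"method": "GET", "params": {"id": 5}}))  # /users/5/
print(match_endpoint({"method": "GET", "params": {}}))         # /users/
```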
database_matcher.py - Database query generation (planned)
Database support is planned for a future release; the current pipeline focuses on APIs only.
knowledge_graph_matcher.py - Document/RAG search (planned)
Knowledge graph support is planned for a future release; the current pipeline focuses on APIs only.
Utilities
api_client.py - HTTP request handler
Executes REST API calls with automatic retry logic, timeout handling, and error management. Supports all HTTP methods and authentication schemes.
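The retry behaviour described here can be pictured as a bounded loop with exponential backoff. This is an assumption about the implementation for illustration; the actual retry counts, delays, and retried error types are internal to the library:

```python
import time

# Bounded retry with exponential backoff. Illustrative sketch of the
# behaviour described above, not the library's actual implementation.
def call_with_retry(request_fn, attempts: int = 3, base_delay: float = 0.01):
    for attempt in range(attempts):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulate a flaky endpoint that succeeds on the third try
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return {"status": 200}

result = call_with_retry(flaky)
print(result)  # {'status': 200}
```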
config_loader.py - Configuration management
Loads and validates configuration from JSON files or dictionaries. Handles environment variable substitution and schema path resolution.
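The environment variable substitution mentioned here can be sketched against the "env" block shown in the config.json example, where each credential field names the environment variable that supplies its value. The resolver below is a hedged illustration of that assumed behaviour, not the library's code:

```python
import os

# Resolve credential fields that name environment variables, as in the
# "env" block of config.json. Illustrative sketch only.
def resolve_env_fields(env_map: dict) -> dict:
    resolved = {}
    for field, var_name in env_map.items():
        value = os.environ.get(var_name)
        if value is None:
            raise KeyError(f"environment variable {var_name} is not set")
        resolved[field] = value
    return resolved

os.environ["API_EMAIL"] = "admin@example.com"
os.environ["API_PASSWORD"] = "secret"
creds = resolve_env_fields({"username": "API_EMAIL", "password": "API_PASSWORD"})
print(creds["username"])  # admin@example.com
```

Failing loudly on a missing variable is the safer design: a silently empty credential would otherwise surface later as a confusing authentication error.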
mcp_server.py - MCP protocol server
Exposes the NLP processor through Model Context Protocol for integration with AI assistants like Claude Desktop. Provides 4 tools: process_query, get_schema_resources, authenticate, get_config_info.
Schema Generation
schema_generator/ - Automatic schema creation
Tools to automatically generate schemas from various sources:
- schema_converter.py - Convert OpenAPI specs to internal format (supported)
- database_inspector.py - Introspect database schemas (planned)
- pdf_analyzer.py - Extract structure from PDF documents (planned)
- json_analyzer.py - Analyze JSON APIs automatically (planned)
- cli.py - Command-line interface for schema generation
🔍 Auto-Detection
The module automatically finds configuration files from your working directory:
Priority Order
1. Current working directory - ./config.json, ./.env (highest priority)
2. Environment variable - $NLP_CONFIG_PATH
3. User home directory - ~/.enable_ai/config.json
4. Package defaults - bundled examples
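The lookup order above amounts to checking a short list of candidate paths and returning the first hit. The sketch below illustrates that documented order; it is not the library's actual resolution code, and the fallback-to-defaults step is only represented by returning None:

```python
import os
from pathlib import Path
from typing import Optional

# Sketch of the documented lookup order for config.json.
# Illustrative only; the library's exact logic may differ.
def find_config() -> Optional[Path]:
    candidates = [
        Path.cwd() / "config.json",                   # 1. working directory
        Path(os.environ.get("NLP_CONFIG_PATH", "")),  # 2. env var override
        Path.home() / ".enable_ai" / "config.json",   # 3. user home
    ]
    for path in candidates:
        if path.name and path.is_file():
            return path
    return None  # 4. fall back to package defaults

print(find_config())
```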
Verification
# Test auto-detection
cd /path/to/your-backend
python3 << 'EOF'
import sys; sys.stderr = sys.stdout
from enable_ai.mcp_server import DEFAULT_CONFIG_PATH, DEFAULT_ENV_PATH
print(f"Config: {DEFAULT_CONFIG_PATH}")
print(f"Env: {DEFAULT_ENV_PATH}")
EOF
Expected output:
✓ Loaded .env from: /path/to/your-backend/.env
✓ Found config.json at: /path/to/your-backend/config.json
🔐 Authentication Support
JWT (JSON Web Tokens)
Automatically obtains and refreshes JWT tokens using credentials from .env.
OAuth 2.0
Supports client credentials and authorization code flows.
API Keys
Loads API keys from environment variables and includes them in request headers.
Manual Tokens
Pass tokens explicitly: processor.process("query", access_token="token")
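Whichever scheme is configured, the end result is an authentication header on each outgoing request. A hedged sketch of that header construction follows; the exact API-key header name is an assumption (providers vary), and the real library builds these internally:

```python
# Build request headers for the supported auth schemes.
# Illustrative sketch; the API-key header name is an assumption.
def auth_headers(scheme: str, secret: str) -> dict:
    if scheme in ("jwt", "oauth2"):
        return {"Authorization": f"Bearer {secret}"}
    if scheme == "api_key":
        return {"X-API-Key": secret}  # assumed header name
    raise ValueError(f"unknown scheme: {scheme}")

print(auth_headers("jwt", "eyJhbGciOi..."))
```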
🗃️ Schema Examples
API Schema (OpenAPI format)
{
"type": "api",
"resources": {
"users": {
"description": "User management endpoints",
"endpoints": [
{
"path": "/users/",
"method": "GET",
"description": "List all users"
},
{
"path": "/users/{id}/",
"method": "GET",
"description": "Get user by ID"
}
]
}
}
}
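Given a schema in this shape, enumerating the available endpoints is a short traversal. The snippet below loads the example schema above and flattens it into human-readable lines (a minimal sketch; the library exposes schema information through its own interfaces):

```python
import json

# The API schema example above, loaded and flattened into readable lines.
schema = json.loads("""
{
  "type": "api",
  "resources": {
    "users": {
      "description": "User management endpoints",
      "endpoints": [
        {"path": "/users/", "method": "GET", "description": "List all users"},
        {"path": "/users/{id}/", "method": "GET", "description": "Get user by ID"}
      ]
    }
  }
}
""")

def list_endpoints(schema: dict):
    for info in schema["resources"].values():
        for ep in info["endpoints"]:
            yield f"{ep['method']} {ep['path']} - {ep['description']}"

for line in list_endpoints(schema):
    print(line)
```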
Database Schema (planned)
{
"type": "database",
"tables": {
"users": {
"description": "User accounts table",
"columns": {
"id": {"type": "INTEGER", "primary_key": true},
"email": {"type": "VARCHAR"},
"name": {"type": "VARCHAR"}
}
}
}
}
🧪 Testing
# Run tests
pytest tests/
# Run specific test
python tests/test_processor_query.py
# Test with real API
python tests/test_api_endpoint.py
📊 Example Queries
| Natural Language | Result |
|---|---|
| "list all users" | GET /users/ → Returns user list |
| "get user 5" | GET /users/5/ → Returns user details |
| "show me service orders with high priority" | Filters service orders by priority |
| "create a new user with email test@example.com" | POST /users/ → Creates user |
| "find documents about machine learning" | Semantic search in knowledge base (planned) |
🛠️ Development
Generate Schemas Automatically
# From OpenAPI spec
python -m enable_ai.schema_generator.cli \
--source openapi \
--input swagger.json \
--output schemas/api_schema.json
# From database (planned)
python -m enable_ai.schema_generator.cli \
--source database \
--connection-string "postgresql://localhost/db" \
--output schemas/db_schema.json
# From PDFs (planned)
python -m enable_ai.schema_generator.cli \
--source pdf \
--input documents/ \
--output schemas/knowledge_graph.json
🌐 Use Cases
1. Customer Support Chatbot
processor = NLPProcessor()
user_query = "Show me my recent orders"
result = processor.process(user_query, access_token=user_token)
# Returns order history automatically
2. Internal Tools (planned)
# Let employees query databases naturally (planned)
result = processor.process("How many users signed up this month?")
3. API Documentation Assistant
# Help developers discover APIs
result = processor.process("What user endpoints are available?")
4. SaaS Integration
# Deploy as MCP server for AI assistant integration
# Claude Desktop, custom agents, etc.
🤝 Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
📄 License
MIT License - See LICENSE file for details
🔗 Resources
- Repository: https://github.com/EnableEngineering/enable_ai
- Issues: https://github.com/EnableEngineering/enable_ai/issues
- PyPI: https://pypi.org/project/enable-ai/
💡 Quick Start Summary
# 1. Install
pip install enable-ai
# 2. Create config.json and .env in your project
# 3. Use it
python3 -c "
from enable_ai import NLPProcessor
proc = NLPProcessor()
print(proc.process('list all users')['summary'])
"
That's it! The module handles authentication, API matching, and execution automatically. 🚀