# LIMMA

**Language Interface Model for Machine Automation** – control ESP devices with natural language.

LIMMA is a Python SDK that lets you control ESP8266/ESP32 devices using natural language commands. It connects to the LIMMA API to translate user instructions into device function calls, then executes them over your local network.

Author: Yash Kumar Firoziya
## Features

### Core Features

- **ESP Device Management** – connect, reset, WiFi setup, status check
- **Natural-Language Device Control** – powered by the LIMMA server
- **Context Manager** – remembers previous commands for context-aware parsing
- **Network Utilities** – auto-discover ESP devices on your LAN
- **Flexible Execution** – supports `wait()`, device mapping, and `reply` messages
### New in v0.2.0+ – Multi-Provider LLM Integration

- **Unified LLM Interface** – a single API for multiple LLM providers via `limma.llm`
- **Provider Agnostic** – switch between OpenAI, Gemini, Mistral, and Groq with one line
- **Consistent Message Format** – the same request style across all providers
- **Flexible Configuration** – use env vars, inline setup, or config files
- **No Vendor Lock-In** – swap providers without rewriting logic
- **Role-Based Conversations** – system, user, and assistant message support
- **Automatic Token Handling** – prevents context overflows
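LIMMA's token handling is internal to the SDK, but the general technique is easy to picture. The sketch below is a hypothetical illustration only: the `estimate_tokens` heuristic (roughly 4 characters per token) and the `truncate_history` helper are assumptions, not LIMMA's actual implementation.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token (an assumption,
    # not LIMMA's internal estimator).
    return max(1, len(text) // 4)


def truncate_history(messages, max_tokens=4096):
    """Keep the most recent messages that fit within the token budget.

    `messages` is a list of {"role": ..., "content": ...} dicts, oldest first.
    """
    kept, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order


history = [
    {"role": "system", "content": "You are a home automation assistant."},
    {"role": "user", "content": "Turn on the fan."},
    {"role": "assistant", "content": "Done."},
]
print(len(truncate_history(history, max_tokens=8)))  # 2 (oldest message dropped)
```

Dropping the oldest messages first keeps the latest turns intact, which is usually what context-aware parsing needs.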
### New in v0.2.0+ – Voice Capabilities

- **Cross-Platform Voice** – speech recognition and text-to-speech via `limma.voice`
- **Voice Customization** – adjust rate, volume, and gender (male/female/neutral)
- **Simple API** – both standalone functions and a `VoiceAssistant` class
- **Beginner-Friendly** – add voice control to your projects in minutes
- **Voice Settings** – list available voices, change gender on the fly
## Installation

```shell
pip install limma
```

### Install with Optional Dependencies

```shell
# For LLM features
pip install "limma[llm]"

# For voice features
pip install "limma[voice]"

# For all features
pip install "limma[all]"
```

(The quotes around the extras keep the square brackets from being interpreted by shells such as zsh.)
## Quick Start

### Basic ESP Control

```python
from limma import Limma, LimmaConfig

config = LimmaConfig(
    esp_ip="192.168.1.100",
    application_type="home",
    device_map={"fan": "ch02", "light": "ch01"},
    api_key="your-api-key",
)

limma = Limma(config)
limma.execute_command("turn on the fan")
```
## New in v0.2.0: LLM Integration

### A Unified Interface for Multiple LLM Providers

```python
from limma.llm import config, generate, chat

# Configure once, use anywhere
config(
    provider="openai",  # or "gemini", "mistral", "groq"
    api_key="your-api-key",
    model="gpt-4",
)

# Generate text
response = generate("Explain IoT in simple terms")
print(response)

# Interactive chat
while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    print(f"AI: {chat(user_input)}")
```
### Switch Providers Instantly

```python
from limma.llm import config, generate

# Start with OpenAI
config(provider="openai", api_key="sk-...", model="gpt-4")
print(generate("Hello!"))

# Switch to Gemini (free tier available)
config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")
print(generate("Hello again!"))

# Switch to Groq for ultra-fast inference
config(provider="groq", api_key="gsk_...", model="mixtral-8x7b-32768")
print(generate("Fast response!"))
```
### Using Environment Variables

```shell
# .env file
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-key
LLM_MODEL=gpt-4
```

```python
from limma.llm import config, generate

config()  # Auto-loads from the environment
print(generate("What's new in Python 3.12?"))
```
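Auto-loading from the environment is handled by `config()` itself; if you want the same lookup logic outside LIMMA (for tests, say), a standard-library version might look like this. The `load_llm_settings` helper and its defaults are assumptions for illustration, not part of the SDK:

```python
import os


def load_llm_settings(defaults=None):
    """Read LLM settings from the environment, falling back to defaults.

    Uses the same variable names as LIMMA's .env example:
    LLM_PROVIDER, LLM_API_KEY, LLM_MODEL.
    """
    defaults = defaults or {"provider": "openai", "model": "gpt-4"}
    return {
        "provider": os.environ.get("LLM_PROVIDER", defaults.get("provider")),
        "api_key": os.environ.get("LLM_API_KEY"),
        "model": os.environ.get("LLM_MODEL", defaults.get("model")),
    }


settings = load_llm_settings()
print(settings["provider"])
```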
### Supported LLM Providers
| Provider | Models | Use Case |
|---|---|---|
| OpenAI | GPT-4, GPT-3.5-turbo | General purpose, best quality |
| Google Gemini | Gemini 2.5/1.5 Flash | Free tier available, fast |
| Mistral | Mistral Large/Small | Open source, efficient |
| Groq | Mixtral, Llama 2 | Ultra-fast inference |
## New in v0.2.0: Voice Integration

### Simple Voice Control

```python
from limma.voice import speak, listen

# Text-to-speech
speak("Hello! I'm your voice-enabled LIMMA assistant")

# Speech recognition
try:
    command = listen()
    print(f"You said: {command}")
    speak(f"Executing: {command}")
except Exception:
    speak("Sorry, I didn't catch that")
```
### The VoiceAssistant Class

```python
from limma.voice import VoiceAssistant

# Create a customized voice assistant
assistant = VoiceAssistant(
    voice_rate=160,         # Words per minute
    voice_volume=0.8,       # Volume (0.0 - 1.0)
    voice_gender="female",  # "male", "female", or "neutral"
)

# Speak with the configured voice
assistant.speak("How can I help you with your ESP devices?")

# Listen for commands
command = assistant.listen()
if command:
    print(f"Command received: {command}")

# Change voice settings on the fly
assistant.set_voice_gender("male")
assistant.set_voice_rate(180)
assistant.speak("Voice settings updated")
```
### Voice + ESP Control Combined

```python
from limma import Limma, LimmaConfig
from limma.voice import VoiceAssistant

# Initialize voice and ESP control
voice = VoiceAssistant(voice_gender="female")
limma = Limma(LimmaConfig(esp_ip="auto", api_key="your-key"))

# Voice-controlled home automation
voice.speak("Voice control ready. Say a command.")
while True:
    command = voice.listen(timeout=5)
    if command:
        if "exit" in command.lower():
            voice.speak("Goodbye!")
            break
        success = limma.execute_command(command)
        if success:
            voice.speak("Command executed successfully")
        else:
            voice.speak("Failed to execute command")
```
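Speech recognizers often return near-misses ("pan" for "fan"). Before handing a transcript to `execute_command`, you could check whether any known device was actually mentioned. The `match_device` helper below is a hypothetical pre-filter built on the standard library, not part of LIMMA's own parsing:

```python
import difflib


def match_device(command: str, device_map: dict, cutoff: float = 0.6):
    """Return the device key that best matches a word in the spoken command.

    Compares each word against each key in `device_map` and returns the
    closest match above `cutoff`, or None if nothing is close enough.
    """
    words = command.lower().split()
    best, best_score = None, cutoff
    for device in device_map:
        for word in words:
            score = difflib.SequenceMatcher(None, device, word).ratio()
            if score > best_score:
                best, best_score = device, score
    return best


devices = {"fan": "ch02", "light": "ch01"}
print(match_device("turn on the pan", devices))  # fan
```

If `match_device` returns `None`, the loop above could ask the user to repeat the command instead of sending an unrecognized phrase to the ESP.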
### Voice Customization Examples

```python
from limma.voice import VoiceAssistant

va = VoiceAssistant()

# List available system voices
available_voices = va.get_available_voices()
print(f"Available voices: {available_voices}")

# Test different voice genders
va.set_voice_gender("female")
va.speak("This is the female voice")
va.set_voice_gender("male")
va.speak("This is the male voice")

# Adjust speech rate and volume
va.set_voice_rate(200)    # Faster speech
va.set_voice_volume(0.5)  # Quieter
va.speak("This is fast and quiet")

va.set_voice_rate(120)    # Slower speech
va.set_voice_volume(1.0)  # Louder
va.speak("This is slow and loud")
```
## Complete Example: Voice + LLM + ESP Control

```python
from limma import Limma, LimmaConfig
from limma.voice import VoiceAssistant
from limma.llm import config as llm_config, generate

# Configure the LLM for command understanding
llm_config(
    provider="gemini",  # Free tier
    api_key="your-gemini-key",
    model="gemini-2.5-flash",
)

# Initialize the voice assistant
voice = VoiceAssistant(voice_gender="female")

# Set up ESP control
limma = Limma(LimmaConfig(
    esp_ip="auto",
    application_type="home",
    device_map={"fan": "ch02", "light": "ch01", "ac": "ch03"},
    api_key="limma-api-key",
))

# Intelligent voice-controlled automation
voice.speak("Smart home system activated")
while True:
    command = voice.listen()
    if not command:
        continue
    if "exit" in command.lower():
        voice.speak("Shutting down")
        break

    # Use the LLM to interpret complex commands
    enhanced_command = generate(
        f"Convert this home automation request into a simple command: '{command}'. "
        f"Available devices: fan, light, ac. Response should be brief."
    )
    print(f"Original: {command}")
    print(f"Interpreted: {enhanced_command}")

    # Execute on the ESP
    if limma.execute_command(enhanced_command):
        voice.speak("Done")
    else:
        voice.speak("I couldn't do that")
```
## API Reference

### Core Modules

- `Limma` – main SDK controller
- `LimmaConfig` – configuration container
- `ESPManager` – ESP device operations
- `ContextManager` – command history and context
- `NetworkUtils` – network discovery utilities

### LLM Module (`limma.llm`)

- `config(**kwargs)` – set provider, API key, and model
- `generate(prompt, **kwargs)` – single text generation
- `generate_stream(prompt, **kwargs)` – streaming response
- `chat(message)` – conversational interface
- `reset_chat()` – clear conversation history

### Voice Module (`limma.voice`)

- `speak(text, **kwargs)` – text-to-speech conversion
- `listen(timeout)` – speech recognition
- `VoiceAssistant` – class-based voice interface
- `set_voice_rate(rate)` – adjust speech speed
- `set_voice_volume(volume)` – adjust volume
- `set_voice_gender(gender)` – change voice gender
- `get_available_voices()` – list system voices
- `simple_conversation(prompt)` – quick Q&A
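Because every provider sits behind the same `config()`/`generate()` pair, a fallback chain is easy to layer on top. The sketch below takes plain callables so it stays self-contained and runnable; in practice each entry would be a small wrapper that calls `config(provider=...)` followed by `generate(prompt)`. The helper itself is illustrative, not part of the SDK:

```python
def generate_with_fallback(prompt, providers):
    """Try each (name, generate_fn) pair in order; return the first success."""
    errors = {}
    for name, generate_fn in providers:
        try:
            return name, generate_fn(prompt)
        except Exception as exc:  # in real code, catch provider-specific errors
            errors[name] = exc
    raise RuntimeError(f"All providers failed: {errors}")


# Stand-in callables for demonstration; replace with real limma.llm wrappers.
def flaky(prompt):
    raise TimeoutError("rate limited")


def stable(prompt):
    return f"echo: {prompt}"


provider_name, reply = generate_with_fallback(
    "Hello!", [("openai", flaky), ("gemini", stable)]
)
print(provider_name, reply)  # gemini echo: Hello!
```

Ordering the list by preference (quality first, speed or free tier as backup) gives graceful degradation when a provider is down or rate limited.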
## Error Handling

```python
from limma.llm import generate
from limma.voice.exceptions import ListenTimeoutError, AudioCaptureError
from limma.llm.exceptions import AuthenticationError, ModelNotFoundError

try:
    response = generate("Hello")
except AuthenticationError:
    print("Check your API key")
except ModelNotFoundError:
    print("Invalid model name")
```
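`ListenTimeoutError` pairs naturally with a bounded retry loop around `listen()`. The helper below is a generic sketch, not a LIMMA API: it takes the listen function and the exception type as parameters so the demo can run with stand-ins, but in real code you would pass `limma.voice.listen` and `ListenTimeoutError`:

```python
def listen_with_retries(listen_fn, timeout_error, retries=3):
    """Call listen_fn up to `retries` times, swallowing timeout errors.

    Returns the first recognized phrase, or None if every attempt timed out.
    """
    for _ in range(retries):
        try:
            return listen_fn()
        except timeout_error:
            continue
    return None


# Demonstration with stand-ins for the real voice API.
class FakeTimeout(Exception):
    pass


attempts = []


def fake_listen():
    attempts.append(1)
    if len(attempts) < 3:
        raise FakeTimeout()  # simulate two timeouts before success
    return "turn on the light"


print(listen_with_retries(fake_listen, FakeTimeout))  # turn on the light
```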
## Dependencies

- **Core:** `requests`
- **LLM Module:** `requests` (no additional dependencies)
- **Voice Module:** `SpeechRecognition`, `pyttsx3`, `pyaudio`
## License

Licensed under the Apache License 2.0. See LICENSE for details.

## Contributing

Contributions are welcome! Please feel free to submit a pull request.

## Support

If you find LIMMA useful, please give it a star on GitHub!

LIMMA is no longer just an ESP control SDK – it's a complete toolkit for building intelligent, voice-controlled, multi-provider AI automation systems.
## Download Files
### Source Distribution: limma-0.2.0.tar.gz

- Size: 19.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0, CPython/3.13.5

| Algorithm | Hash digest |
|---|---|
| SHA256 | `df92f04f0fcbc1eff5f268bbc0a37058d5f2622ceb12d514bee48c09beff5d34` |
| MD5 | `c11cd143415753cccac1dccc43cba92b` |
| BLAKE2b-256 | `245a0a226c3a5b0f60ef99d3148402f13de45266555d84237de53bbf9963911a` |
### Built Distribution: limma-0.2.0-py3-none-any.whl

- Size: 20.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0, CPython/3.13.5

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e14fa33f178a81ca418ef96f67b8bc8b6a1c1773b936ca2fd055cc800b0c9722` |
| MD5 | `ec9f4b38d2a56d9a1deead3744b439cd` |
| BLAKE2b-256 | `89325e691ac9ce84ef75378a5fa47781b1046088c362cd021df096169d336088` |