
LIMMA

Language Interface Model for Machine Automation


LIMMA is a Python SDK that lets you control ESP8266/ESP32 devices using natural language commands.
It connects to the LIMMA API to translate user instructions into device function calls, then executes them over your local network.


Author: Yash Kumar Firoziya


✨ Features

Core Features

  • 🔌 ESP Device Management – connect, reset, WiFi setup, status check
  • 🌐 Natural Language → Device Control – powered by the LIMMA server
  • 🧠 Context Manager – remembers previous commands for context-aware parsing
  • 📡 Network Utilities – auto-discover ESP devices on your LAN
  • ⚙️ Flexible Execution – supports wait(), device mapping, and reply messages
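The auto-discovery bullet above can be sketched in plain Python. This page doesn't document the actual NetworkUtils API, so the function names below are illustrative assumptions, not limma's implementation; the underlying idea is simply to probe every host on the local /24 for an open ESP web-server port:

```python
import ipaddress
import socket
from concurrent.futures import ThreadPoolExecutor

def candidate_hosts(cidr: str) -> list[str]:
    """Enumerate the usable host addresses in a subnet (e.g. a /24)."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

def probe(ip: str, port: int = 80, timeout: float = 0.3) -> bool:
    """Return True if something is listening on the given TCP port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(cidr: str = "192.168.1.0/24", port: int = 80) -> list[str]:
    """Probe every host in the subnet in parallel; ESP web servers
    typically answer on port 80."""
    hosts = candidate_hosts(cidr)
    with ThreadPoolExecutor(max_workers=64) as pool:
        alive = pool.map(lambda ip: probe(ip, port), hosts)
    return [ip for ip, ok in zip(hosts, alive) if ok]
```

A real scanner would also confirm the responder is actually an ESP device (for example by fetching a known status endpoint) rather than any web server.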

🆕 New in v0.2.0+ – Multi-Provider LLM Integration

  • 🤖 Unified LLM Interface – Single API for multiple LLM providers via limma.llm
  • 🔄 Provider Agnostic – Switch between OpenAI, Gemini, Mistral, Groq with one line
  • 🧠 Consistent Message Format – Same request style across all providers
  • 🔐 Flexible Configuration – Use env vars, inline setup, or config files
  • 🚀 No Vendor Lock-In – Swap providers without rewriting logic
  • 📜 Role-Based Conversations – System, user, assistant message support
  • 🛡️ Automatic Token Handling – Prevents context overflows
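The token-handling bullet refers to a standard technique: drop the oldest non-system messages until the conversation fits a token budget. How limma implements it isn't documented here; a minimal sketch, assuming a rough 4-characters-per-token heuristic:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest non-system messages until the estimated token
    count of the conversation fits within `budget`."""
    trimmed = list(messages)

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    while total(trimmed) > budget:
        for i, m in enumerate(trimmed):
            if m["role"] != "system":
                del trimmed[i]  # oldest non-system message goes first
                break
        else:
            break  # only system messages left; nothing more to drop
    return trimmed
```

Keeping the system message pinned while trimming from the front preserves the assistant's instructions across long conversations.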

🆕 New in v0.2.0+ – Voice Capabilities

  • 🎙️ Cross-Platform Voice – Speech recognition & text-to-speech via limma.voice
  • 🔊 Voice Customization – Adjust rate, volume, and gender (male/female/neutral)
  • 🗣️ Simple API – Both standalone functions and a VoiceAssistant class
  • 🎯 Beginner-Friendly – Add voice control to your projects in minutes
  • 🎚️ Voice Settings – List available voices, change gender on the fly

📦 Installation

pip install limma

Install with Optional Dependencies

# For LLM features
pip install limma[llm]

# For voice features
pip install limma[voice]

# For all features
pip install limma[all]

🚀 Quick Start

Basic ESP Control

from limma import Limma, LimmaConfig

config = LimmaConfig(
    esp_ip="192.168.1.100",
    application_type="home",
    device_map={"fan": "ch02", "light": "ch01"},
    api_key="your-api-key"
)

limma = Limma(config)
limma.execute_command("turn on the fan")

🆕 New in v0.2.0: LLM Integration

Unified Interface for Multiple LLM Providers

from limma.llm import config, generate, chat

# Configure once, use anywhere
config(
    provider="openai",  # or "gemini", "mistral", "groq"
    api_key="your-api-key",
    model="gpt-4"
)

# Generate text
response = generate("Explain IoT in simple terms")
print(response)

# Interactive chat
while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    print(f"AI: {chat(user_input)}")

Switch Providers Instantly

from limma.llm import config, generate

# Start with OpenAI
config(provider="openai", api_key="sk-...", model="gpt-4")
print(generate("Hello!"))

# Switch to Gemini (free tier available)
config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")
print(generate("Hello again!"))

# Switch to Groq for ultra-fast inference
config(provider="groq", api_key="gsk_...", model="mixtral-8x7b-32768")
print(generate("Fast response!"))

Using Environment Variables

# .env file
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-key
LLM_MODEL=gpt-4

from limma.llm import config, generate

config()  # Auto-loads from environment
print(generate("What's new in Python 3.12?"))

Supported LLM Providers

Provider        Models                  Use Case
OpenAI          GPT-4, GPT-3.5-turbo    General purpose, best quality
Google Gemini   Gemini 2.5/1.5 Flash    Free tier available, fast
Mistral         Mistral Large/Small     Open source, efficient
Groq            Mixtral, Llama 2        Ultra-fast inference

🆕 New in v0.2.0: Voice Integration

Simple Voice Control

from limma.voice import speak, listen

# Text-to-speech
speak("Hello! I'm your voice-enabled LIMMA assistant")

# Speech recognition
try:
    command = listen()
    print(f"You said: {command}")
    speak(f"Executing: {command}")
except Exception:
    speak("Sorry, I didn't catch that")

VoiceAssistant Class

from limma.voice import VoiceAssistant

# Create a customized voice assistant
assistant = VoiceAssistant(
    voice_rate=160,      # Words per minute
    voice_volume=0.8,    # Volume (0.0 - 1.0)
    voice_gender="female"  # male, female, or neutral
)

# Speak with the configured voice
assistant.speak("How can I help you with your ESP devices?")

# Listen for commands
command = assistant.listen()
if command:
    print(f"Command received: {command}")
    
# Change voice settings on the fly
assistant.set_voice_gender("male")
assistant.set_voice_rate(180)
assistant.speak("Voice settings updated")

Voice + ESP Control Combined

from limma import Limma, LimmaConfig
from limma.voice import VoiceAssistant

# Initialize voice and ESP control
voice = VoiceAssistant(voice_gender="female")
limma = Limma(LimmaConfig(esp_ip="auto", api_key="your-key"))

# Voice-controlled home automation
voice.speak("Voice control ready. Say a command.")

while True:
    command = voice.listen(timeout=5)
    if command:
        if "exit" in command.lower():
            voice.speak("Goodbye!")
            break
        success = limma.execute_command(command)
        if success:
            voice.speak("Command executed successfully")
        else:
            voice.speak("Failed to execute command")

Voice Customization Examples

from limma.voice import VoiceAssistant

va = VoiceAssistant()

# List available system voices
available_voices = va.get_available_voices()
print(f"Available voices: {available_voices}")

# Test different voice genders
va.set_voice_gender("female")
va.speak("This is the female voice")

va.set_voice_gender("male") 
va.speak("This is the male voice")

# Adjust speech rate and volume
va.set_voice_rate(200)  # Faster speech
va.set_voice_volume(0.5)  # Quieter
va.speak("This is fast and quiet")

va.set_voice_rate(120)  # Slower speech
va.set_voice_volume(1.0)  # Louder
va.speak("This is slow and loud")

🎯 Complete Example: Voice + LLM + ESP Control

from limma import Limma, LimmaConfig
from limma.voice import VoiceAssistant
from limma.llm import config as llm_config, generate

# Configure LLM for command understanding
llm_config(
    provider="gemini",  # Free tier
    api_key="your-gemini-key",
    model="gemini-2.5-flash"
)

# Initialize voice assistant
voice = VoiceAssistant(voice_gender="female")

# Setup ESP control
limma = Limma(LimmaConfig(
    esp_ip="auto",
    application_type="home",
    device_map={"fan": "ch02", "light": "ch01", "ac": "ch03"},
    api_key="limma-api-key"
))

# Intelligent voice-controlled automation
voice.speak("Smart home system activated")

while True:
    command = voice.listen()
    
    if command:
        if "exit" in command.lower():
            voice.speak("Shutting down")
            break
            
        # Use LLM to understand complex commands
        enhanced_command = generate(
            f"Convert this home automation request into a simple command: '{command}'. "
            f"Available devices: fan, light, ac. Response should be brief."
        )
        
        print(f"Original: {command}")
        print(f"Interpreted: {enhanced_command}")
        
        # Execute on ESP
        if limma.execute_command(enhanced_command):
            voice.speak("Done")
        else:
            voice.speak("I couldn't do that")

📚 API Reference

Core Modules

  • Limma – Main SDK controller
  • LimmaConfig – Configuration container
  • ESPManager – ESP device operations
  • ContextManager – Command history and context
  • NetworkUtils – Network discovery utilities

🆕 LLM Module (limma.llm)

  • config(**kwargs) – Set provider, API key, model
  • generate(prompt, **kwargs) – Single text generation
  • generate_stream(prompt, **kwargs) – Streaming response
  • chat(message) – Conversational interface
  • reset_chat() – Clear conversation history
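generate_stream is listed here but not demonstrated above. Assuming it yields text chunks as they arrive (the iterator shape is an assumption, and fake_stream below is a stand-in for a live API call), consuming it looks like:

```python
def fake_stream(prompt: str):
    """Stand-in for limma.llm.generate_stream: yields text chunks."""
    for chunk in ["Stream", "ing ", "works"]:
        yield chunk

def consume_stream(stream) -> str:
    """Print chunks as they arrive and return the assembled response."""
    parts = []
    for chunk in stream:
        print(chunk, end="", flush=True)  # incremental display
        parts.append(chunk)
    print()
    return "".join(parts)

full = consume_stream(fake_stream("Explain IoT"))
```

Swapping fake_stream for generate_stream("Explain IoT") would give the same consumption pattern against a live provider.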

🆕 Voice Module (limma.voice)

  • speak(text, **kwargs) – Text-to-speech conversion
  • listen(timeout) – Speech recognition
  • VoiceAssistant – Class-based voice interface
    • set_voice_rate(rate) – Adjust speech speed
    • set_voice_volume(volume) – Adjust volume
    • set_voice_gender(gender) – Change voice gender
    • get_available_voices() – List system voices
    • simple_conversation(prompt) – Quick Q&A

🛡️ Error Handling

from limma.llm import generate
from limma.llm.exceptions import AuthenticationError, ModelNotFoundError
from limma.voice import listen
from limma.voice.exceptions import AudioCaptureError, ListenTimeoutError

try:
    response = generate("Hello")
except AuthenticationError:
    print("Check your API key")
except ModelNotFoundError:
    print("Invalid model name")

try:
    command = listen(timeout=5)
except ListenTimeoutError:
    print("No speech detected before the timeout")
except AudioCaptureError:
    print("Could not access the microphone")

📦 Dependencies

  • Core: requests
  • LLM Module: requests (no additional deps)
  • Voice Module: SpeechRecognition, pyttsx3, pyaudio

📄 License

Licensed under the Apache License 2.0. See LICENSE for details.


🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.


⭐ Support

If you find LIMMA useful, please give it a star on GitHub!


LIMMA is no longer just an ESP control SDK – it's a complete toolkit for building intelligent, voice-controlled, multi-provider AI automation systems. 🚀

