Lunar Ring auxiliary tools and modules for programming interactive exhibitions
Project description
Introduction
Welcome to Lunar Tools, a comprehensive toolkit designed to facilitate the programming of interactive exhibitions. Our suite of simple, modular tools is crafted to offer a seamless and hopefully bug-free experience for both exhibitors and visitors.
Installation
Make sure you have Python >= 3.10.
From PyPI (recommended)
pip install lunar-tools
For PyTorch-accelerated features (GPU rendering, torch tensor support):
pip install lunar-tools[torch]
From GitHub (latest development version)
pip install git+https://github.com/lunarring/lunar_tools
Ubuntu
On Ubuntu, you may have to install additional dependencies for sound playback/recording.
sudo apt-get install libasound2-dev libportaudio2
To use MIDI controllers, you may need to create a symlink:
cd /usr/lib/x86_64-linux-gnu/
sudo ln -s alsa-lib/libasound_module_conf_pulse.so libasound_module_conf_pulse.so
API Keys
Many features in lunar-tools require API keys from various services. Set these as environment variables in your shell configuration file (e.g. ~/.bash_profile or ~/.zshrc or ~/.bashrc):
# AI/LLM Services
export OPENAI_API_KEY="XXX" # For GPT models, DALL-E, TTS, and STT
export GEMINI_API_KEY="XXX" # For Google Gemini models
export DEEPSEEK_API_KEY="XXX" # For DeepSeek models
# Image Generation
export REPLICATE_API_TOKEN="XXX" # For Replicate (SDXL models)
export FAL_KEY="XXX" # For fal.ai (Flux models)
export GLIF_API_KEY="XXX" # For Glif API
# Audio/Voice Services
export ELEVEN_API_KEY="XXX" # For ElevenLabs TTS
export DEEPGRAM_API_KEY="XXX" # For Deepgram speech-to-text
# Health Monitoring
export TELEGRAM_BOT_TOKEN="XXX" # For Telegram health reporting
export TELEGRAM_CHAT_ID="XXX" # Your Telegram chat ID
Only add the keys for services you plan to use. Each module will let you know if a required API key is missing.
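A quick stdlib check can tell you which keys are still missing before launch. This sketch is not part of lunar-tools; the feature-to-key mapping below simply mirrors the list above, trimmed to a few entries.

```python
import os

# Map a few lunar-tools feature areas to the env vars they expect
# (names taken from the list above; extend as needed).
REQUIRED_KEYS = {
    "OpenAI (GPT, DALL-E, TTS/STT)": ["OPENAI_API_KEY"],
    "ElevenLabs TTS": ["ELEVEN_API_KEY"],
    "Telegram health reporting": ["TELEGRAM_BOT_TOKEN", "TELEGRAM_CHAT_ID"],
}

def missing_keys(required=REQUIRED_KEYS):
    """Return {feature: [missing env vars]} for everything not yet set."""
    missing = {}
    for feature, keys in required.items():
        absent = [k for k in keys if not os.environ.get(k)]
        if absent:
            missing[feature] = absent
    return missing

if __name__ == "__main__":
    for feature, keys in missing_keys().items():
        print(f"Missing for {feature}: {', '.join(keys)}")
```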
Inputs
Runnable input snippets live in examples/inputs. Launch them from the repo root to validate your hardware and copy/paste the relevant code into your own project.
🎙️ Audio Recorder
examples/inputs/audio_recorder_example.py exposes lt.AudioRecorder through two
CLI flags so you can verify your microphone pipeline without touching code.
python examples/inputs/audio_recorder_example.py --seconds 5 --output myvoice.mp3
📸 Webcam + Renderer
examples/inputs/webcam_live_renderer.py pairs lt.WebCam with lt.Renderer
and displays a live preview window for whichever camera ID (or auto-probed
device) you pass in.
python examples/inputs/webcam_live_renderer.py --cam-id auto
🎚️ Meta Inputs
examples/inputs/meta_input_inspector.py uses lt.MetaInput to detect a MIDI
controller (or keyboard fallback) and continuously prints one slider + one
button so you can confirm your mappings on the spot.
python examples/inputs/meta_input_inspector.py
🎞️ Movie Reader
examples/inputs/movie_reader_example.py wraps lt.MovieReader
with a CLI so you can inspect frame shapes, counts, and FPS before embedding
any mp4 into your pipeline.
python examples/inputs/movie_reader_example.py my_movie.mp4 --max-frames 10
Outputs
Runnable output demos live in examples/outputs. Each script is a ready-to-run showcase that you can copy into your own pipeline or execute as-is.
🔊 Play Sounds
examples/outputs/sound_playback_generated_sine.py demonstrates lt.SoundPlayer
by first writing a generated 440 Hz sine to disk, then streaming a 660 Hz tone
directly from memory via play_audiosegment.
python examples/outputs/sound_playback_generated_sine.py
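The demo's first step, writing a generated sine tone to disk, can be reproduced with the standard library alone. This is a self-contained sketch of that idea, not the demo's actual code (it uses a WAV file for simplicity, whereas lt.SoundPlayer also handles other formats):

```python
import math
import struct
import wave

def write_sine_wav(path, freq_hz=440.0, seconds=1.0, rate=44100, amplitude=0.5):
    """Write a mono 16-bit PCM sine tone, similar to what the demo generates."""
    n_samples = int(rate * seconds)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)          # 16-bit samples
        wf.setframerate(rate)
        frames = bytearray()
        for i in range(n_samples):
            sample = amplitude * math.sin(2 * math.pi * freq_hz * i / rate)
            frames += struct.pack("<h", int(sample * 32767))
        wf.writeframes(bytes(frames))

write_sine_wav("sine_440.wav", freq_hz=440.0, seconds=0.5)
```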
🖥️ Real-time Display
examples/outputs/display_multi_backend_example.py spins up lt.Renderer and
cycles through NumPy, Pillow, and Torch backends (whichever are installed)
to render random RGBA frames in one looping window.
python examples/outputs/display_multi_backend_example.py
Note: you can speed up OpenGL render calls by up to a factor of 3 by disabling VSYNC on your system. On Ubuntu with NVIDIA drivers: 1. Run nvidia-settings. 2. Set Screen 0 > OpenGL Settings > Sync to VBlank to off.
⏱️ Realtime Console Updates
examples/outputs/realtime_console_updates_example.py combines
lt.FPSTracker, lt.LogPrint, and dynamic_print to stream live progress
messages while measuring per-segment timings.
python examples/outputs/realtime_console_updates_example.py
🧾 Log Printer
examples/outputs/logprint_example.py showcases lt.LogPrint formatting,
highlighting how to stream colored, timestamped console output.
python examples/outputs/logprint_example.py
🎬 Movie Saver
examples/outputs/movie_saver_example.py creates a short mp4 using random RGB frames so you can validate codec support and file permissions.
python examples/outputs/movie_saver_example.py --output my_movie.mp4 --frames 10 --fps 24
📡 Communication
📍 Local IP Detection
lunar_tools.comms.get_local_ip inspects network interfaces to determine the best IP to share with peers. Run the example below to print the detected address or see a friendly warning if one cannot be determined (for example, on air-gapped machines).
python examples/comms/get_local_ip_example.py
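The source of lunar_tools.comms.get_local_ip is not shown here, but a common stdlib approach to the same problem is the connected-UDP-socket trick, sketched below. The probe address 8.8.8.8 is an arbitrary public IP; no packet is actually sent.

```python
import socket

def get_local_ip(probe_addr=("8.8.8.8", 80)):
    """Best-effort local IP: connect() on a UDP socket sends no traffic,
    but the OS picks the outbound interface, whose address we read back."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(probe_addr)       # routing decision only, no packet
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"          # e.g. air-gapped machine: no route
    finally:
        s.close()

print(get_local_ip())
```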
🌐 WebRTC Data Channels
Low-latency data channel built on WebRTC for streaming numpy arrays, JSON blobs, PNG previews, and log text.
Sender (hosts an embedded signaling server and streams mixed payloads):
python examples/comms/webrtc_data_sender.py --session demo
Receiver (auto-discovers the sender session via the cached signaling endpoint):
python examples/comms/webrtc_data_receiver.py --session demo
- --sender-ip defaults to the detected local address (via lunar_tools.comms.utils.get_local_ip).
- When the sender hosts the embedded signaling server, it stores the endpoint details per session in ~/.lunar_tools/webrtc_sessions.json. Receivers can omit --sender-ip to reuse the most recent entry for the requested session, which keeps the bootstrap process simple.
- If you prefer to use your own signaling server, start it separately (or pass --no-server in the sender example) and point both peers to the same http://<sender-ip>:<port> URL.
🎛️ OSC
High-level OSC helper built on python-osc. The receiver example spawns the live grid visualizer, and the sender emits demo sine/triangle waves.
Receiver:
python examples/comms/osc_receiver.py --ip 0.0.0.0 --port 8003
Sender:
python examples/comms/osc_sender.py --ip 127.0.0.1 --port 8003 --channels /env1 /env2 /env3
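lunar-tools delegates OSC to python-osc, so you never build packets by hand. Purely to illustrate what travels over the wire on a channel like /env1, here is a minimal stdlib encoder for an OSC message with float32 arguments (illustration only, not the library's code path):

```python
import struct

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def encode_osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments."""
    msg = _pad(address.encode())                       # address pattern
    msg += _pad(("," + "f" * len(floats)).encode())    # type tag string
    for value in floats:
        msg += struct.pack(">f", value)                # OSC is big-endian
    return msg

packet = encode_osc_message("/env1", 0.5)
```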
🔁 ZMQ Pair Endpoint
One-to-one ZeroMQ stream that carries JSON blobs, compressed images, and raw PCM audio. Start the receiver first on the same machine (or pass --ip 0.0.0.0 if you want to accept remote peers), then launch the sender.
Receiver (binds locally):
python examples/comms/zmq_receiver.py --port 5556
Sender (connects to the receiver):
python examples/comms/zmq_sender.py --ip 127.0.0.1 --port 5556
ZMQPairEndpoint uses ZeroMQ's PAIR pattern, which is strictly one-to-one: exactly one sender and one receiver must be connected, and neither side can reconnect while the other is running. If you need fan-out/fan-in or resilient reconnection, prefer REQ/REP, PUB/SUB, or ROUTER/DEALER and stitch together the behavior you need on top of the raw zmq library.
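ZMQPairEndpoint's own API isn't reproduced here; to make the one-to-one constraint concrete, the sketch below uses the raw pyzmq PAIR pattern directly (inproc transport for brevity; between machines you would bind tcp://*:5556 instead):

```python
import zmq

ctx = zmq.Context.instance()

# PAIR is strictly one-to-one: exactly one bind and one connect.
receiver = ctx.socket(zmq.PAIR)
receiver.bind("inproc://demo")       # bind first for inproc transports
sender = ctx.socket(zmq.PAIR)
sender.connect("inproc://demo")

sender.send_json({"kind": "status", "fps": 30})   # JSON blob
sender.send(b"\x00\x01\x02")                      # raw bytes (e.g. PCM audio)

print(receiver.recv_json())
print(receiver.recv())
```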
🔊 Voice & Speech
Voice-focused demos live in examples/voice. Each script below can be run directly from the repo root and pairs with the API snippets that follow.
🗣️ RealTimeVoice (OpenAI)
examples/voice/realtime_voice_example.py is an interactive CLI that lets you start/pause/mute a RealTimeVoice session, inject messages, and update instructions on the fly.
python examples/voice/realtime_voice_example.py
🎤 Deepgram Realtime Transcribe
examples/voice/deepgram_realtime_transcribe_example.py uses
lt.RealTimeTranscribe (Deepgram SDK) to stream microphone audio and print live transcripts.
Set DEEPGRAM_API_KEY before running.
python examples/voice/deepgram_realtime_transcribe_example.py
📝 Speech-to-Text (OpenAI)
examples/voice/openai_speech_to_text_example.py records a short microphone clip and prints the transcript, with an optional flag to save the text to disk.
python examples/voice/openai_speech_to_text_example.py --seconds 5 --output transcript.txt
🔈 Text-to-Speech (OpenAI)
examples/voice/openai_text_to_speech_example.py
converts text to speech, saves it to an mp3, and can optionally stream the audio
immediately with --play-inline.
python examples/voice/openai_text_to_speech_example.py --text "Testing 1 2 3" --voice nova --play-inline
🎶 Text-to-Speech (ElevenLabs)
examples/voice/elevenlabs_text_to_speech_example.py targets the ElevenLabs API with inline playback plus flags for stability, similarity, style, and speaker boost.
python examples/voice/elevenlabs_text_to_speech_example.py --text "Hi from ElevenLabs" --voice-id EXAVITQu4vr4xnSDxMaL --play-inline
🎨 Image Generation
🖼️ DALL·E 3
examples/ai/dalle3_generate_example.py calls
lt.Dalle3ImageGenerator, saves the resulting PNG, and prints the revised prompt.
python examples/ai/dalle3_generate_example.py --prompt "A red house with snow and a chimney"
🚀 SDXL Turbo
examples/ai/sdxl_turbo_example.py uses the Replicate-powered
lt.SDXL_TURBO helper and stores the PNG plus source URL for reference.
python examples/ai/sdxl_turbo_example.py --prompt "An astronaut riding a rainbow unicorn" --width 768 --height 512
🍌 Nano Banana Edit (Gradio)
examples/ai/nano_banana_edit_gradio.py launches a Gradio UI for interactive Flux/Nano Banana edits—drop in prompts, tweak sliders, and preview changes.
python examples/ai/nano_banana_edit_gradio.py
Health status reporting via telegram
Obtain a bot here: https://docs.tracardi.com/qa/how_can_i_get_telegram_bot/. Then add the Telegram bot environment variables to your ~/.bashrc or ~/.bash_profile:
export TELEGRAM_BOT_TOKEN='XXX'
export TELEGRAM_CHAT_ID='XXX'
See examples/health/telegram_health_reporter_example.py for a runnable heartbeat + alert demo (requires the env vars above):
python examples/health/telegram_health_reporter_example.py --name "My Exhibit" --interval 2 --count 5
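The reporter handles the Telegram calls for you. Purely to show what happens underneath, here is a stdlib sketch of the Bot API's standard sendMessage call; build_telegram_request and send_alert are hypothetical helper names, not lunar-tools functions:

```python
import json
import os
import urllib.parse
import urllib.request

def build_telegram_request(text: str) -> urllib.request.Request:
    """Build (but do not send) a Bot API sendMessage request from the
    TELEGRAM_* env vars described above."""
    token = os.environ["TELEGRAM_BOT_TOKEN"]
    chat_id = os.environ["TELEGRAM_CHAT_ID"]
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(url, data=data)

def send_alert(text: str) -> dict:
    """Fire the request; Telegram answers with a JSON envelope."""
    with urllib.request.urlopen(build_telegram_request(text), timeout=10) as resp:
        return json.load(resp)

# send_alert("My Exhibit: heartbeat OK")  # uncomment once the env vars are set
```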
Devinfos
Testing
pip install pytest
Make sure you are in the base folder, then run:
python -m pytest lunar_tools/tests/
Get requirements
pipreqs . --force
File details
Details for the file lunar_tools-0.3.tar.gz.
File metadata
- Download URL: lunar_tools-0.3.tar.gz
- Upload date:
- Size: 81.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 1f2077c38275b878016c4c7c87d13b8c0db057cbe4510a715374f703b03ff1fe |
| MD5 | 7a247cd369e3317e68060fbf804c63d2 |
| BLAKE2b-256 | f6d6e6ac4e82ff0b2e2ae989a7edddbb0911db48d7a754ecf1ca186f99528011 |
File details
Details for the file lunar_tools-0.3-py3-none-any.whl.
File metadata
- Download URL: lunar_tools-0.3-py3-none-any.whl
- Upload date:
- Size: 70.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 7c5845cbed92877d1847728d6e6d2b3ab9804a65e2078daa76a4e92a59f6cb99 |
| MD5 | 6d8f4128d5e2feea893885ab4f3f323d |
| BLAKE2b-256 | 5416fe97dca013615a9e077c83a1fb43c73b85d26fddbe2621f0d32ee7feb095 |