
Local inference and model management.


Moondream Station Logo

Moondream Station: Local Visual Reasoning

Home Page Discord Follow @moondreamai License


Video showing Moondream Station running in a terminal

How It Works

🚀 Launches Local Server
All inference runs on your device

🔧 Control via CLI
Caption images, answer questions, and manage settings

🌐 Access via HTTP
Connect to http://localhost:2020/v1 through REST or our Python, Node, or OpenAI client
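If you prefer raw HTTP, a caption request might look like the sketch below. This assumes the local server mirrors the Moondream Cloud REST routes (a /v1/caption endpoint taking a base64 "image_url" data URL), which is an assumption, not something this page states:

```shell
# Encode an image and POST it to the assumed /v1/caption route.
# (Route and JSON fields are assumptions based on the Moondream Cloud API.)
IMG=$(base64 < image.jpg | tr -d '\n')
curl http://localhost:2020/v1/caption \
  -H "Content-Type: application/json" \
  -d "{\"image_url\": \"data:image/jpeg;base64,${IMG}\", \"length\": \"normal\"}"
```

The supported clients shown later on this page wrap calls like this for you, so the curl form is mainly useful for quick smoke tests.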

Installation

Install from PyPI:

$ pip install moondream-station

Install from source:

$ git clone https://github.com/m87-labs/moondream-station.git
$ cd moondream-station
$ pip install -e .

That's it! Moondream Station will automatically set itself up.

Usage

Launch Moondream Station

To fire up Moondream Station, run this command in your terminal:

$ moondream-station

Model Management

By default, Moondream Station uses the latest model your machine supports. If you want to view or activate other Moondream models, use the following commands:

  • models - List available models
  • models switch <model> - Switch to a model
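A typical model-management session inside the Moondream Station CLI might look like this (the model name shown is a placeholder, not a real identifier):

```shell
# Inside the Moondream Station CLI
models                           # list the models available to this machine
models switch moondream-latest   # "moondream-latest" is a hypothetical name;
                                 # use one listed by the `models` command
```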

Service Control

We like to think Moondream has 20/20 vision; that’s why, by default, we launch Moondream Station on port 2020. If that port is taken, Moondream Station will try a nearby free port. Additionally, you can control the port and status of the inference service with the following commands:

  • start [port] - Start REST server (default: port 2020)
  • stop - Stop server
  • restart - Restart server
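For instance, to move the service off the default port, using the commands listed above:

```shell
# Inside the Moondream Station CLI
stop          # stop the inference service
start 8080    # start it again on port 8080 instead of the default 2020
restart       # or simply restart it on the current port
```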

Inference

Access via HTTP: Point any of our inference clients at your Moondream Station. For example, with our Python client you can do:

import moondream as md
from PIL import Image

# connect to Moondream Station
model = md.vl(endpoint="http://localhost:2020/v1")

# Load an image
image = Image.open("path/to/image.jpg")

# Ask a question
answer = model.query(image, "What's in this image?")["answer"]
print("Answer:", answer)
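The client exposes the other skills too. Here is a minimal captioning sketch; it assumes the `caption` method of the moondream Python client returns a dict with a "caption" key, mirroring the query example above (an assumption, since this page only shows query):

```python
import moondream as md
from PIL import Image

# Connect to a running Moondream Station
model = md.vl(endpoint="http://localhost:2020/v1")

image = Image.open("path/to/image.jpg")

# Generate a caption; the "caption" key is assumed from the cloud client's API
caption = model.caption(image)["caption"]
print("Caption:", caption)
```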

For more information on our clients, visit: Python, Node, Quick Start.

Connect via CLI: Use all the capabilities of Moondream directly through your terminal. No need to touch any code!

  • infer <function> [args] - Run single inference
  • inference - Enter interactive inference mode
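For example, following the infer <function> [args] pattern above (the exact argument syntax is an assumption):

```shell
# Inside the Moondream Station CLI
infer caption path/to/image.jpg   # one-shot caption of a local image
inference                         # or drop into interactive inference mode
```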

Settings

Control the number of workers, queue size, privacy settings, and more through Settings:

  • settings - Show configuration
  • settings set <key> <value> - Set setting value
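A short settings session might look like this; the workers key is assumed from the description above (workers, queue size, privacy), while the logging key comes from the opt-out instructions on this page:

```shell
# Inside the Moondream Station CLI
settings                      # show the current configuration
settings set workers 2        # "workers" is an assumed key name
settings set logging false    # opt out of anonymous usage metrics
```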

Moondream Station collects anonymous usage metrics to help us improve the app. The following data is collected:

  • Event data: when you use features like caption, query, detect, or point.
  • Version information: active bootstrap, hypervisor, inference client, and model version.
  • System information: OS version, IP address, and Python version/runtime.

No personal information, images, or prompts/responses are ever collected. To opt out of logging, run: settings set logging false.

Utility

The utility functions provide insight into what Moondream Station is currently doing. To view statistics for your current session, use the session command. To view a log of requests processed by Moondream Station, use the history command.

  • session - Show session stats
  • help - Show available commands
  • history - Show command history
  • reset - Reset app data & settings
  • clear - Clear screen
  • exit - Quit application

