neuroskill
neuroskill is a command-line interface for the NeuroSkill real-time EEG analysis API. It communicates with a locally running Skill server over WebSocket or HTTP, giving you instant terminal access to EEG brain-state scores, sleep staging, session history, annotations, similarity search, and more.
This is the Python port of the TypeScript neuroskill CLI. It is a faithful port — same commands, same flags, same output format, same transport negotiation.
⚠️ Research Use Only. All metrics are experimental outputs derived from consumer-grade EXG hardware. They are not validated clinical measurements, not FDA/CE-cleared, and must not be used for diagnosis, treatment decisions, or any medical purpose.
Table of Contents
- Features
- Requirements
- Installation
- Quick Start
- Transport
- Commands
- Output Modes
- Global Options
- Examples
- How to Cite
- License
Features
- Real-time EEG scores — focus, relaxation, engagement, meditation, cognitive load, drowsiness
- Consciousness metrics — Lempel-Ziv Complexity proxy, wakefulness, information integration
- PPG / HRV — heart rate, RMSSD, SDNN, pNN50, LF/HF, SpO₂, Baevsky stress index
- Sleep staging — automatic per-epoch classification and session-level summary
- Session history — list, compare, and UMAP-project all past recording sessions
- Annotations — create timestamped labels and search them by free text or EEG similarity
- Interactive graph search — cross-modal 4-layer graph (labels → EEG → labels)
- Dual transport — WebSocket (full-duplex, live events) and HTTP REST (curl-friendly)
- Pipe-friendly — `--json` flag emits clean JSON to stdout; informational lines go to stderr
- Cross-platform — Python ≥ 3.9, Windows / macOS / Linux
Requirements
| Dependency | Version |
|---|---|
| Python | ≥ 3.9 |
| Skill server | running locally (auto-discovered via mDNS or lsof) |
Installation
Via pip / uv (recommended)
pip install neuroskill
or with uv:
uv tool install neuroskill
After installation the neuroskill binary is available globally:
neuroskill status
From source
git clone https://github.com/NeuroSkill-com/neuroskill-py
cd neuroskill-py
uv sync
uv run neuroskill status
Quick Start
# Full device / session / scores snapshot
neuroskill status
# Pipe raw JSON to jq (or python -m json.tool)
neuroskill status --json | python3 -m json.tool
neuroskill status --json | jq '.scores'
# Stream broadcast events for 10 seconds
neuroskill listen --seconds 10
# Print full help with examples
neuroskill --help
Transport
neuroskill auto-discovers the Skill server port via:
1. `--port <n>` flag (skips all discovery)
2. mDNS (`_skill._tcp` service advertisement, 5 s timeout)
3. `lsof`/`pgrep` fallback (probes each TCP LISTEN port)
WebSocket (default)
Full-duplex, low-latency. Supports live event streaming. Used automatically when the server is reachable.
neuroskill status --ws # force WebSocket
HTTP REST
Request/response only. Compatible with curl, Python requests, or any HTTP client.
neuroskill status --http # force HTTP
# Equivalent curl call:
curl -s -X POST http://127.0.0.1:8375/ \
-H "Content-Type: application/json" \
-d '{"command":"status"}'
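For scripting without the CLI, the same request can be issued from Python's standard library. This is a sketch under the assumptions visible in the curl call above (host 127.0.0.1, example port 8375, a top-level `command` field); the real port is whatever discovery finds on your machine.

```python
import json
import urllib.request

def skill_request(command: str, port: int = 8375, **params) -> dict:
    """POST a JSON command to a locally running Skill server and return
    the decoded JSON response. Port 8375 is only the example port from
    the curl call above -- pass the port your server actually uses."""
    payload = json.dumps({"command": command, **params}).encode("utf-8")
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

# skill_request("status")  # requires a running Skill server
```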
Auto (neither flag)
The CLI probes WebSocket first and silently falls back to HTTP. Informational messages go to stderr so JSON piping is never polluted.
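The probe-then-fall-back behaviour can be sketched with a plain TCP check. This is an illustration only, not the CLI's actual implementation (the real probe performs a WebSocket handshake rather than a bare TCP connect), and `probe_tcp` / `pick_transport` are hypothetical names:

```python
import socket

def probe_tcp(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_transport(port: int) -> str:
    """Prefer WebSocket when the server answers; otherwise fall back to HTTP."""
    return "ws" if probe_tcp("127.0.0.1", port) else "http"
```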
Commands
| Command | Description |
|---|---|
| `status` | Full device / session / embeddings / scores snapshot |
| `session [index]` | All metrics + trends for one session (0 = latest, 1 = previous, …) |
| `sessions` | List all recording sessions across all days |
| `label "text"` | Create a timestamped annotation on the current moment |
| `search-labels "query"` | Search labels by free text (text / context / both modes) |
| `interactive "keyword"` | Cross-modal 4-layer graph search (labels → EEG → found labels) |
| `search` | ANN EEG-similarity search (auto: last session, k = 5) |
| `compare` | Side-by-side A/B metrics (auto: last 2 sessions) |
| `sleep [index]` | Sleep staging — index selects session (0 = latest) |
| `calibrations [list\|get <id>]` | List calibration profiles or inspect one by ID |
| `calibrate` | Open calibration window and start immediately |
| `timer` | Open focus-timer window and start work phase immediately |
| `notify "title" ["body"]` | Show a native OS notification |
| `say "text"` | Speak text aloud via on-device TTS |
| `umap` | 3-D UMAP projection with live progress bar |
| `listen` | Stream broadcast events for N seconds |
| `raw '{"command":"..."}'` | Send arbitrary JSON and print full response |
Output Modes
| Flag | Behaviour |
|---|---|
| (none) | Human-readable colored summary to stdout |
| `--json` | Raw JSON only — pipe-safe, no colors |
| `--full` | Human-readable summary and colorized JSON |
Global Options
--port <n> Connect to explicit port (skips mDNS discovery)
--ws Force WebSocket transport
--http Force HTTP REST transport
--json Output raw JSON (pipeable to jq / python -m json.tool)
--full Print JSON in addition to human-readable summary
--poll <n> (status) Re-poll every N seconds
--mode <m> Search mode for search-labels: text|context|both (default: text)
--k <n> Number of nearest neighbors for search / search-labels
--ef <n> HNSW ef parameter for search-labels (default: max(k×4, 64))
--k-text <n> (interactive) k for text-label search (default: 5)
--k-eeg <n> (interactive) k for EEG-similarity search (default: 5)
--k-labels <n> (interactive) k for label-proximity search (default: 3)
--reach <n> (interactive) temporal reach in minutes around EEG points (default: 10)
--dot (interactive) Output Graphviz DOT format
--context "..." (label) Long-form annotation body stored with the label
--at <utc> (label) Backdate to a specific unix second (default: now)
--voice <name> (say) Voice name (e.g. Jasper); uses server default when omitted
--profile <p> (calibrate) Profile name or UUID to run (default: active profile)
--seconds <n> (listen) Duration in seconds (default: 5)
--trends (sessions) Show per-session metric trends
--no-color Disable ANSI colors (also honours NO_COLOR env var)
--version Print CLI version and exit
--help Show full help with examples
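The `--ef` default quoted above, `max(k×4, 64)`, works out like this (a one-line restatement of the rule from the option list, not the CLI's source):

```python
def default_ef(k: int) -> int:
    """HNSW ef default for search-labels, per the option list: max(k*4, 64)."""
    return max(k * 4, 64)

print(default_ef(5), default_ef(32))  # 64 128
```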
Examples
# Device snapshot
neuroskill status
# Pipe scores to jq or python
neuroskill status --json | jq '.scores.focus'
neuroskill status --json | python3 -c "import sys,json; d=json.load(sys.stdin); print(d['scores']['focus'])"
# Poll status every 5 seconds
neuroskill status --poll 5
# Latest session metrics + trends
neuroskill session 0
# List sessions, show per-session metric trends
neuroskill sessions --trends
# Label the current moment
neuroskill label "started meditation"
neuroskill label "breathwork" --context "box breathing 4-4-4-4, 10 min"
neuroskill label "retrospective note" --at 1740412800
# Search past labels
neuroskill search-labels "meditation" --mode both --k 10
# 4-layer interactive graph search
neuroskill interactive "focus" --k-eeg 10 --reach 15
neuroskill interactive "anxiety" --dot | dot -Tsvg > graph.svg
# Sleep staging for latest session
neuroskill sleep 0
# Compare last two sessions
neuroskill compare
# ANN EEG similarity search
neuroskill search
# UMAP projection
neuroskill umap
# Calibration management
neuroskill calibrations
neuroskill calibrations get 3
neuroskill calibrate --profile "Eyes Open/Closed"
# Focus timer
neuroskill timer
# TTS and notification
neuroskill say "Calibration complete."
neuroskill say "Break time." --voice Jasper
neuroskill notify "Session done" "Great work!"
# Stream events for 30 seconds
neuroskill listen --seconds 30
# Send arbitrary JSON command
neuroskill raw '{"command":"status"}'
neuroskill raw '{"command":"search","start_utc":1740412800,"end_utc":1740415500,"k":3}'
# Force HTTP + specific port
neuroskill status --http --port 8375
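The unix-second timestamps in the `raw` search example above can be derived from UTC wall-clock times; the two values shown span a 45-minute window. A small helper (hypothetical, stdlib only):

```python
from datetime import datetime, timezone

def utc_seconds(y: int, mo: int, d: int, h: int = 0, mi: int = 0) -> int:
    """Unix seconds for a UTC wall-clock time, usable as start_utc/end_utc."""
    return int(datetime(y, mo, d, h, mi, tzinfo=timezone.utc).timestamp())

print(utc_seconds(2025, 2, 24, 16, 0))   # 1740412800
print(utc_seconds(2025, 2, 24, 16, 45))  # 1740415500
```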
How to Cite
If you use neuroskill or the Skill EEG platform in academic work, please cite it as:
BibTeX
@software{neuroskill2026,
  title   = {neuroskill: A Command-Line Interface for the Skill Real-Time EEG Analysis API},
  author  = {Nataliya Kosmyna and Eugene Hauptmann},
  year    = {2026},
  version = {0.0.1},
  url     = {https://github.com/NeuroSkill-com/neuroskill},
  note    = {Research use only. Not a validated clinical tool.}
}
If you are citing the underlying Skill EEG analysis platform specifically:
@software{skill2026,
  title  = {NeuroSkill: Real-Time EEG Analysis Platform},
  author = {Nataliya Kosmyna and Eugene Hauptmann},
  year   = {2026},
  url    = {https://neuroskill.com},
  note   = {Consumer-grade EEG processing pipeline with WebSocket and HTTP APIs.
            Research use only. Not FDA/CE-cleared.}
}
License
GPLv3 — see LICENSE.