CLI for speeding up long-form talks by removing silence
Project description
Talks Reducer 
Talks Reducer shortens long-form presentations by removing silent gaps and optionally re-encoding them to smaller files. The project was renamed from jumpcutter to emphasize its focus on conference talks and screencasts.
Example
- 1h 37m, 571 MB — Original OBS video recording
- 1h 19m, 751 MB — Talks Reducer
- 1h 19m, 171 MB — Talks Reducer --small
Changelog
See CHANGELOG.md.
Install GUI (Windows, macOS)
Go to the releases page and download the appropriate artifact:
- Windows — talks-reducer-windows-0.4.0.zip
- macOS — talks-reducer.app.zip

Troubleshooting: If launching the bundle (or running python -m talks_reducer.gui) prints "macOS 26 (2600) or later required, have instead 16 (1600)!", make sure you're using a Python build that ships a modern Tk. The stock python.org 3.13.5 installer includes Tk 8.6 and has been verified to work.
When extracted on Windows the bundled talks-reducer.exe behaves like running
python -m talks_reducer.gui: double-clicking it launches the GUI
and passing a video file path (for example via Open with… or drag-and-drop
onto the executable) automatically queues that recording for processing.
Install CLI (Linux, Windows, macOS)
pip install talks-reducer
Note: FFmpeg is now bundled automatically with the package, so you don't need to install it separately.
The --small preset applies a 720p video scale and 128 kbps audio bitrate, making it useful for sharing talks over constrained
connections. Without --small, the script aims to preserve original quality while removing silence.
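For intuition, the --small preset corresponds roughly to an FFmpeg scale filter plus an audio bitrate cap. The sketch below is not the tool's actual pipeline; the filter and flag values are assumptions derived from the "720p video scale and 128 kbps audio bitrate" description above:

```python
def small_preset_cmd(src: str, dst: str) -> list[str]:
    """Approximate the --small preset: 720p video, 128 kbps audio."""
    return [
        "ffmpeg", "-i", src,
        "-vf", "scale=-2:720",   # scale height to 720, keep aspect ratio even
        "-b:a", "128k",          # cap audio bitrate at 128 kbps
        dst,
    ]

print(small_preset_cmd("input.mp4", "small.mp4"))
```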
Example CLI usage:
talks-reducer --small input.mp4
Need to offload work to a remote Talks Reducer server? Pass --url with the
server address and the CLI will upload the input, wait for processing to finish,
and download the rendered video. You can also provide --host to expand to the
default Talks Reducer port (http://<host>:9005):
talks-reducer --url http://localhost:9005 demo.mp4
talks-reducer --host 192.168.1.42 demo.mp4
Remote jobs respect the same timing controls as the local CLI. Provide
--silent-threshold, --sounded-speed, or --silent-speed to tweak how the
server trims and accelerates segments without falling back to local mode.
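As a back-of-the-envelope model of how these speed controls shape the output, the processed length is roughly the sounded time divided by --sounded-speed plus the silent time divided by --silent-speed. This is a simplification (it ignores crossfade margins and rounding at segment boundaries), and the default silent speed of 5x here is an assumption for illustration:

```python
def estimated_duration(sounded_s: float, silent_s: float,
                       sounded_speed: float = 1.0,
                       silent_speed: float = 5.0) -> float:
    """Rough output length after speeding up each segment type."""
    return sounded_s / sounded_speed + silent_s / silent_speed

# e.g. a 97-minute talk with 20 minutes of silence, silence played at 5x:
print(estimated_duration(77 * 60, 20 * 60))  # 4860.0 seconds, i.e. 81 minutes
```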
Want to see progress as the remote server works? Add --server-stream so the
CLI prints live progress bars and log lines while you wait for the download.
Speech detection
Talks Reducer now relies on its built-in volume thresholding to detect speech. Adjust --silent-threshold if you need to fine-tune when segments count as silence. Dropping the optional Silero VAD integration keeps the install lightweight and avoids pulling in PyTorch.
When CUDA-capable hardware is available the pipeline leans on GPU encoders to keep export times low, but it still runs great on CPUs.
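A minimal sketch of volume thresholding (assumed logic, not the project's exact implementation): normalize each chunk's peak against the loudest sample in the file, then mark chunks whose relative volume falls below the threshold as silent:

```python
def classify_chunks(samples: list[float], chunk_size: int,
                    silent_threshold: float = 0.03) -> list[bool]:
    """Return True for chunks considered sounded, False for silent ones."""
    peak = max(abs(s) for s in samples) or 1.0
    flags = []
    for start in range(0, len(samples), chunk_size):
        chunk = samples[start:start + chunk_size]
        loudness = max(abs(s) for s in chunk) / peak  # 0..1 relative volume
        flags.append(loudness >= silent_threshold)
    return flags

# A quiet chunk followed by a loud one:
print(classify_chunks([0.001] * 4 + [0.8] * 4, chunk_size=4))  # [False, True]
```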
Simple web server
Prefer a lightweight browser interface? Launch the Gradio-powered simple mode with:
talks-reducer server
The browser UI mirrors the CLI timing controls with sliders for the silent threshold and playback speeds, so you can tune exports without leaving the remote workflow.
Want the server to live in your system tray instead of a terminal window? Use:
talks-reducer server-tray
Bundled Windows builds include the same behaviour: run
talks-reducer.exe --server to launch the tray-managed server directly from the
desktop shortcut without opening the GUI first.
Pass --debug to print verbose logs about the tray icon lifecycle, and
--tray-mode pystray-detached to try pystray's alternate detached runner. If
the icon backend refuses to appear, fall back to --tray-mode headless to keep
the web server running without a tray process. The tray menu highlights the
running Talks Reducer version and includes an Open GUI
item (also triggered by double-clicking the icon) that launches the desktop
Talks Reducer interface alongside an Open WebUI entry that opens the Gradio
page in your browser. Close the GUI window to return to the tray without
stopping the server. Launch the tray explicitly whenever you need it—either run
talks-reducer server-tray directly or start the GUI with
python -m talks_reducer.gui --server to boot the tray-managed server instead
of the desktop window. The GUI now runs standalone and no longer spawns the tray
automatically; the deprecated --no-tray flag is ignored for compatibility.
The tray command itself never launches the GUI automatically, so use the menu
item (or relaunch the GUI separately) whenever you want to reopen it. The tray
no longer opens a browser automatically—pass --open-browser if you prefer the
web page to launch as soon as the server is ready.
This opens a local web page featuring a drag-and-drop upload zone, a Small video checkbox that mirrors the CLI preset, a live progress indicator, and automatic previews of the processed output. The page header and browser tab title include the current Talks Reducer version so you can confirm which build the server is running. Once the job completes you can inspect the resulting compression ratio and download the rendered video directly from the page.
The desktop GUI mirrors this behaviour. Open Advanced settings to provide a
server URL and click Discover to scan your local network for Talks Reducer
instances listening on port 9005. The button now updates with the discovery
progress, showing a scanned / total host count. A new
Processing mode toggle lets you decide whether work stays local or uploads
to the configured server—the Remote option becomes available as soon as a
URL is supplied. Leave the toggle on Local to keep rendering on this
machine even if a server is saved; switch to Remote to hand jobs off while
the GUI downloads the finished files automatically.
Uploading and retrieving a processed video
- Open the printed http://localhost:<port> address (the default port is 9005).
- Drag a video onto the Video file drop zone or click to browse and select one from disk.
- Small video starts enabled to apply the 720p/128 kbps preset. Clear the box before the upload finishes if you want to keep the original resolution and bitrate.
- Wait for the progress bar and log to report completion—the interface queues work automatically after the file arrives.
- Watch the processed preview in the Processed video player and click Download processed file to save the result locally.
Need to change where the server listens? Run talks-reducer server --host 0.0.0.0 --port 7860 (or any other port) to bind to a
different address.
Automating uploads from the command line
Prefer to script uploads instead of using the browser UI? Start the server and use the bundled helper to submit a job and save the processed video locally:
python -m talks_reducer.service_client --server http://127.0.0.1:9005/ --input demo.mp4 --output output/demo_processed.mp4
The helper wraps the Gradio API exposed by server.py, waits for processing to complete, then copies the rendered file to the
path you provide. Pass --small to mirror the Small video checkbox or --print-log to stream the server log after the
download finishes.
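If you are scripting many uploads, you can assemble the helper invocation programmatically. This sketch only builds the documented command line (the flag names come from the examples above); actually running it still requires the server to be up:

```python
import subprocess
import sys

def service_client_cmd(server: str, input_path: str, output_path: str,
                       small: bool = False, print_log: bool = False) -> list[str]:
    """Build the argv for python -m talks_reducer.service_client."""
    cmd = [sys.executable, "-m", "talks_reducer.service_client",
           "--server", server, "--input", input_path, "--output", output_path]
    if small:
        cmd.append("--small")      # mirror the Small video checkbox
    if print_log:
        cmd.append("--print-log")  # stream the server log after the download
    return cmd

argv = service_client_cmd("http://127.0.0.1:9005/", "demo.mp4",
                          "output/demo_processed.mp4", small=True)
print(argv)
# subprocess.run(argv, check=True)  # uncomment when a server is running
```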
Windows installer packaging
The repository ships an Inno Setup script that wraps the PyInstaller GUI bundle
into a per-user installer named talks-reducer-<version>-setup.exe.
- Build the PyInstaller distribution so that dist/talks-reducer contains talks-reducer.exe and its support files (for example by running scripts\build-gui.sh).
- Install Inno Setup on a Windows machine.
- Compile the installer with:
iscc /DAPP_VERSION=$(python -c "import talks_reducer.__about__ as a; print(a.__version__)") /DSOURCE_DIR=..\dist\talks-reducer /DAPP_ICON=..\talks_reducer\resources\icons\app.ico scripts\talks-reducer-installer.iss
or use the convenience wrapper on Windows runners: bash scripts/build-installer.sh. Override /DAPP_ICON=... or /DAPP_PUBLISHER=... (or set APP_ICON/APP_PUBLISHER when calling the wrapper) if you need custom branding.
The installer defaults to C:\Users\%USERNAME%\AppData\Local\Programs\talks-reducer,
creates Start Menu and desktop shortcuts, and registers an Open with Talks
Reducer shell entry for files and folders so that you can launch the GUI with a
dropped path. Use the Additional Tasks page at install time to skip the optional
shortcuts or shell integration.
Faster PyInstaller builds
PyInstaller spends most of its time walking imports. To keep GUI builds snappy:
- Create a dedicated virtual environment for packaging the GUI and install only the runtime dependencies you need (for example pip install -r requirements.txt -r scripts/requirements-pyinstaller.txt). Avoid installing heavy ML stacks such as Torch or TensorFlow in that environment so PyInstaller never attempts to analyze them.
- Use the committed talks-reducer.spec file via ./scripts/build-gui.sh. The spec excludes Torch, TensorFlow, TensorBoard, torchvision/torchaudio, Pandas, Qt bindings, setuptools' vendored helpers, and other bulky modules that previously slowed the analysis stage. Set PYINSTALLER_EXTRA_EXCLUDES=module1,module2 if you need to drop additional imports for an experimental build.
- Keep optional imports in the codebase lazy (wrapped in try/except or moved inside functions) so the analyzer only sees the dependencies required for the shipping GUI.
The script keeps incremental build artifacts in build/ between runs. Pass
--clean to scripts/build-gui.sh when you want a full rebuild.
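A spec file would typically consume PYINSTALLER_EXTRA_EXCLUDES by splitting it on commas and appending to the baseline exclude list. The parsing details here are an assumption, not the committed spec's exact code:

```python
import os

def extra_excludes(base: list[str]) -> list[str]:
    """Merge baseline excludes with PYINSTALLER_EXTRA_EXCLUDES (comma-separated)."""
    extra = os.environ.get("PYINSTALLER_EXTRA_EXCLUDES", "")
    return base + [name.strip() for name in extra.split(",") if name.strip()]

os.environ["PYINSTALLER_EXTRA_EXCLUDES"] = "matplotlib,scipy"
print(extra_excludes(["torch", "tensorflow"]))
# ['torch', 'tensorflow', 'matplotlib', 'scipy']
```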
Contributing
See CONTRIBUTION.md for development setup details and guidance on sharing improvements.
License
Talks Reducer is released under the MIT License. See LICENSE for the full text.
Project details
File details
Details for the file talks_reducer-0.8.3.tar.gz.
File metadata
- Download URL: talks_reducer-0.8.3.tar.gz
- Upload date:
- Size: 810.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | bb5d8d623427a90d6f1f2e733f39d9e26e05b9a54b30abada1882350a6058b6a |
| MD5 | 8fff62717f24cbac27af64d990c4aaee |
| BLAKE2b-256 | 5e86f46bc39540a037581db83f4d2983efba476a51be62ebf3579b5dfba50a88 |
File details
Details for the file talks_reducer-0.8.3-py3-none-any.whl.
File metadata
- Download URL: talks_reducer-0.8.3-py3-none-any.whl
- Upload date:
- Size: 779.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 410e4a044fd8ec5cac8bfefd85dfb4d5c4060b9871756482ebe019ab4fe07e1f |
| MD5 | 297e85f9954dc5f474b07349d8d63b5f |
| BLAKE2b-256 | c7978a995251d1f85deb6ab9587728580291562756c6b548d7bd18aaf49f057f |