
Speechless repo for sales call analysis

Project description

speechless

UV Installation Instructions

To install dependencies and manage the project, we use uv, a fast Python package manager and resolver. Follow the steps below to set up your environment.

Step 1: Install uv

You can install uv via pip:

pip install uv

Or with pipx:

pipx install uv

Verify the installation:

uv --version

Step 2: Create (and activate) the virtual environment

Use uv to manage your environment:

uv venv
source .venv/bin/activate

(If you prefer another tool like venv/virtualenv, activate it before continuing.)


Step 3: Install dependencies

Install all runtime, development, and extra dependencies in one step:

uv sync --all-extras --dev

This also creates or updates uv.lock automatically.
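The dependencies that `uv sync --all-extras --dev` resolves come from pyproject.toml. A minimal illustrative fragment is sketched below; the package names, extras, and groups are examples, not the actual speechless configuration.

```toml
# Illustrative pyproject.toml fragment -- dependency names and the "onnx"
# extra are hypothetical examples, not this project's real configuration.
[project]
name = "o2-speechless"
version = "0.0.9"
dependencies = ["pydantic"]

[project.optional-dependencies]
onnx = ["onnx", "onnxruntime"]   # pulled in by --all-extras

[dependency-groups]
dev = ["pytest", "pre-commit"]   # pulled in by --dev
```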


Step 4: Run the project or test suite

If not already activated:

source .venv/bin/activate

Then run the tests:

uv run pytest
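A test file picked up by `uv run pytest` looks like any standard pytest module. The sketch below is illustrative only; `normalize_text` is a hypothetical helper, not an actual speechless API.

```python
# test_transcript.py -- illustrative pytest example; normalize_text is a
# hypothetical helper, not part of the speechless package.

def normalize_text(text: str) -> str:
    """Lowercase and collapse whitespace, a typical transcript cleanup step."""
    return " ".join(text.lower().split())

def test_normalize_text_collapses_whitespace():
    assert normalize_text("  Hello   WORLD ") == "hello world"
```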


Step 5: Run pre-commit hooks

Ensure your code stays clean:

uv run pre-commit run --all-files
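The hooks that run are defined in .pre-commit-config.yaml at the repo root. The fragment below is a generic sketch using common hooks, not this project's actual configuration.

```yaml
# Illustrative .pre-commit-config.yaml -- hook selection is an example,
# not the actual speechless configuration.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.8.0
    hooks:
      - id: ruff
```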


Quick commands you might love:

Task                                         Command
Update dependencies to newest allowed refs   uv sync --upgrade
Re-create a fresh lockfile                   rm uv.lock && uv sync
Add a new development dependency             uv add --dev <package>

Step 6: Convert the model to ONNX format

To convert the model to ONNX format, run:

python export_to_onnx.py --checkpoint /path/to/checkpoint --onnx_model /path/to/onnx_model

Step 7: Add OPENAI_API_KEY and/or Set Up WHISPER_CPP_MODEL

The whisper-1 model requires paid access to the OpenAI API (set OPENAI_API_KEY). As an alternative, you can run transcription locally with whisper.cpp.

To download a supported model:

# Linux
docker run -it --rm -v ./data/models:/models ghcr.io/ggerganov/whisper.cpp:main "./models/download-ggml-model.sh small /models"

# Windows (Git Bash)
docker run -it --rm -v "$(pwd -W)/data/models":/models ghcr.io/ggerganov/whisper.cpp:main "./models/download-ggml-model.sh small /models"

Once WHISPER_CPP_MODEL is set, inference is handled locally. First convert the audio to the 16 kHz mono 16-bit PCM WAV format that whisper.cpp expects:

ffmpeg -i data/temp_results/uploaded_audio.mp3 -ar 16000 -ac 1 -c:a pcm_s16le data/audio/output.wav
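If you drive this conversion from Python, building the argument list explicitly keeps the flags auditable. This is a minimal sketch; `ffmpeg_to_wav16k` is a hypothetical helper, and ffmpeg must be on PATH for the actual conversion.

```python
# Sketch of driving the ffmpeg conversion from Python via subprocess.
# ffmpeg_to_wav16k is a hypothetical helper, not a speechless API.
import subprocess

def ffmpeg_to_wav16k(src: str, dst: str) -> list[str]:
    # 16 kHz mono signed 16-bit PCM -- the input format whisper.cpp expects.
    return [
        "ffmpeg", "-i", src,
        "-ar", "16000",        # resample to 16 kHz
        "-ac", "1",            # downmix to mono
        "-c:a", "pcm_s16le",   # signed 16-bit little-endian PCM
        dst,
    ]

cmd = ffmpeg_to_wav16k("data/temp_results/uploaded_audio.mp3",
                       "data/audio/output.wav")
# subprocess.run(cmd, check=True)  # uncomment to actually convert
print(" ".join(cmd))
```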

Run whisper.cpp:

# Linux
docker run -it --rm -v ./data/models:/models -v ./data/audio:/audios ghcr.io/ggerganov/whisper.cpp:main "./build/bin/whisper-cli -m /models/ggml-small.bin -f /audios/output.wav -ml 16 -oj -l en"

# Windows (Git Bash)
docker run -it --rm -v "$(pwd -W)/data/models":/models -v "$(pwd -W)/data/audio":/audios ghcr.io/ggerganov/whisper.cpp:main "./build/bin/whisper-cli -m /models/ggml-small.bin -f /audios/output.wav -ml 16 -oj -l en"
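The -oj flag makes whisper-cli write a JSON transcript next to the audio file. A minimal sketch of consuming it follows; the segment layout shown is an assumption based on whisper.cpp's JSON output, so verify it against your version.

```python
# Sketch of reading whisper.cpp's -oj JSON output. The "transcription"
# segment layout below is an assumed example of that format.
import json

sample = """
{
  "transcription": [
    {"offsets": {"from": 0, "to": 1200}, "text": " Hello"},
    {"offsets": {"from": 1200, "to": 2400}, "text": " world."}
  ]
}
"""

def join_segments(raw: str) -> str:
    # Concatenate per-segment text into one transcript string.
    data = json.loads(raw)
    return "".join(seg["text"] for seg in data["transcription"]).strip()

print(join_segments(sample))  # -> Hello world.
```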



Download files

Download the file for your platform.

Source Distribution

o2_speechless-0.0.9.tar.gz (24.9 kB)

Uploaded Source

Built Distribution


o2_speechless-0.0.9-py3-none-any.whl (29.4 kB)

Uploaded Python 3

File details

Details for the file o2_speechless-0.0.9.tar.gz.

File metadata

  • Download URL: o2_speechless-0.0.9.tar.gz
  • Upload date:
  • Size: 24.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.7.11

File hashes

Hashes for o2_speechless-0.0.9.tar.gz
Algorithm     Hash digest
SHA256        097b0c8a0f77753b93ca80425bf82ea4ece72352abbe65d1609abe98dbe83c9f
MD5           24f04b4b8a01ff6441a7b33482e6293d
BLAKE2b-256   83f01dec0a77a75e7ebe7f048eabe4489b837df2a0d2968d6ee31a61f1fa0832


File details

Details for the file o2_speechless-0.0.9-py3-none-any.whl.

File metadata

File hashes

Hashes for o2_speechless-0.0.9-py3-none-any.whl
Algorithm     Hash digest
SHA256        230a7be9d95acc814d7c91b125080279d81d58f43117de3a87634b3383cfb6e4
MD5           915d2523e3852c40b2154cc2009c86d2
BLAKE2b-256   b3fc3545cc5978c2c801b029d49d8ba8556b151da10b69d76962544cd04a7b26

