
speechless

Speechless: a repo for sales call analysis.

uv Installation Instructions

To install dependencies and manage the project, we use uv, a fast Python package manager and resolver. Follow the steps below to set up your environment.

Step 1: Install uv

You can install uv via pip:

pip install uv

Or with pipx:

pipx install uv

Verify the installation:

uv --version

Step 2: Create a Virtual Environment (Optional but Recommended)

You can let uv manage the environment for you:

uv venv
source .venv/bin/activate

If you're using your own virtual environment tool (like venv or virtualenv), just activate it before proceeding.

Step 3: Install Dependencies

uv installs packages directly from pyproject.toml. To pin the project's dependencies to a lock file and install them:

uv pip compile pyproject.toml --output-file uv.lock
uv pip install -r uv.lock
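The dependencies that `uv pip compile` resolves live in the repo's pyproject.toml. As an illustration only (the package names and versions below are hypothetical, not the project's real dependency list), a pyproject.toml in the layout uv expects looks like:

```toml
[project]
name = "o2-speechless"
version = "0.0.2"
requires-python = ">=3.10"
dependencies = [
    "pydantic>=2.0",   # hypothetical runtime dependency
]

[project.optional-dependencies]
dev = [
    "pytest",          # test runner used in Step 4
    "pre-commit",      # hook runner used in Step 5
]
```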

Step 4: Run the Project or Tests

To activate the environment:

source .venv/bin/activate

To run the tests:

pytest
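pytest auto-discovers files named `test_*.py` and functions named `test_*`. A minimal, hypothetical test file (not taken from the repo) looks like:

```python
# test_example.py -- hypothetical test, not part of the speechless repo
def normalize_text(text: str) -> str:
    """Lowercase and collapse whitespace, a typical transcript-cleanup step."""
    return " ".join(text.lower().split())

def test_normalize_text():
    assert normalize_text("  Hello   WORLD ") == "hello world"
```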

Step 5: Run pre-commit

To run pre-commit hooks, use:

uv run pre-commit run --all-files
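pre-commit reads its hook list from a .pre-commit-config.yaml at the repo root. The repo ships its own configuration; a typical (hypothetical) example of the format:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
      - id: ruff
```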

Step 6: Convert the model to ONNX format

To convert the model to ONNX format, run:

python export_to_onnx.py --checkpoint /path/to/checkpoint --onnx_model /path/to/onnx_model

Step 7: Add OPENAI_API_KEY and/or Set Up WHISPER_CPP_MODEL

The whisper_1 model calls OpenAI's hosted Whisper API and therefore requires a paid OpenAI API key (set OPENAI_API_KEY). As a free, local alternative, you can use whisper.cpp.

To download a supported model:

# Linux
docker run -it --rm -v ./data/models:/models ghcr.io/ggerganov/whisper.cpp:main "./models/download-ggml-model.sh small /models"

# Windows (PowerShell)
docker run -it --rm -v "$(pwd -W)/models":/models ghcr.io/ggerganov/whisper.cpp:main "./models/download-ggml-model.sh small /models"

Once WHISPER_CPP_MODEL is set, inference runs locally. whisper.cpp expects 16 kHz mono 16-bit PCM WAV input, so convert the audio first:

ffmpeg -i data/temp_results/uploaded_audio.mp3 -ar 16000 -ac 1 -c:a pcm_s16le data/audio/output.wav
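The same conversion can be scripted from Python. A sketch that builds the ffmpeg argv above (the `subprocess.run` call is left commented out so the snippet is safe to import without ffmpeg installed):

```python
import subprocess

def ffmpeg_to_wav_args(src: str, dst: str, rate: int = 16000) -> list[str]:
    """Build the ffmpeg argv that converts audio to 16 kHz mono 16-bit PCM WAV."""
    return [
        "ffmpeg", "-i", src,
        "-ar", str(rate),      # sample rate expected by whisper.cpp
        "-ac", "1",            # mono
        "-c:a", "pcm_s16le",   # 16-bit signed little-endian PCM
        dst,
    ]

args = ffmpeg_to_wav_args("data/temp_results/uploaded_audio.mp3",
                          "data/audio/output.wav")
# subprocess.run(args, check=True)  # uncomment when ffmpeg is on PATH
```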

Run whisper.cpp:

# Linux
docker run -it --rm -v ./data/models:/models -v ./data/audio:/audios ghcr.io/ggerganov/whisper.cpp:main "./build/bin/whisper-cli -m /models/ggml-small.bin -f /audios/output.wav -ml 16 -oj -l en"

# Windows
docker run -it --rm -v "$(pwd -W)/data/models":/models -v "$(pwd -W)/data":/audios ghcr.io/ggerganov/whisper.cpp:main "./build/bin/whisper-cli -m /models/ggml-small.bin -f /audios/output.wav -ml 16 -oj -l en"
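The -oj flag writes a JSON file next to the input (output.wav.json here). Assuming whisper.cpp's usual layout, where segments sit under a top-level "transcription" key, the full text can be stitched back together like this (a sketch; the sample document is hand-written, not real whisper.cpp output):

```python
import json

def join_transcription(raw: str) -> str:
    """Concatenate segment texts from a whisper.cpp -oj JSON document."""
    doc = json.loads(raw)
    return "".join(seg["text"] for seg in doc.get("transcription", [])).strip()

# Minimal hand-written document in whisper.cpp's assumed shape:
sample = json.dumps({
    "transcription": [
        {"offsets": {"from": 0, "to": 1200}, "text": " Hello"},
        {"offsets": {"from": 1200, "to": 2400}, "text": " world."},
    ]
})
```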

Download files

Download the file for your platform.

Source Distribution

o2_speechless-0.0.2.tar.gz (3.3 kB)

Built Distribution

o2_speechless-0.0.2-py3-none-any.whl (3.2 kB)

File details

Details for the file o2_speechless-0.0.2.tar.gz.

File metadata

  • Size: 3.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.17

File hashes

  • SHA256: 1815b06511542a446eca8a673ba536f96936e739d28bd9dd864c7204621278ec
  • MD5: 037a30200a44f44bc05bbd618b90181c
  • BLAKE2b-256: feb4922422111a90f3efe137c727c2b6f7e458ee507b8addc9c43e00dfa5ce75

File details

Details for the file o2_speechless-0.0.2-py3-none-any.whl.

File hashes

  • SHA256: 2a70a19b7ad9504b51179ae9f33f6c152545a42f56eee59623e0185c11c6a6b6
  • MD5: 80f20a38ebf179e5f4a9658626c18d73
  • BLAKE2b-256: 0bc12dbbe1379c91013643099d11fab13a128cfca83fa700ecf176a7174afab7
