
Speechless repo for sales call analysis

Project description

miya-speechless

UV Installation Instructions

This project uses uv to install dependencies and manage the environment. Follow the steps below to set up your environment with uv.

Step 1: Install uv

You can install uv by running the official installation script:

curl -LsSf https://astral.sh/uv/install.sh | sh

Alternatively, you can install it via Homebrew:

brew install astral-sh/uv/uv

Step 2: Verify uv Installation

After installing, verify that uv is available by running:

uv --version

Step 3: Install Dependencies

Once uv is installed, you can install the project dependencies by running the following command in the project root:

uv venv
source .venv/bin/activate
uv sync --dev

This will create a virtual environment and install all dependencies specified in the pyproject.toml file.

Step 4: Run the Project or Tests

You can now run the project or the tests using the uv environment:

To activate the environment:

source .venv/bin/activate

To run the tests:

uv run pytest

Step 5: Convert the model to ONNX format

To convert the model to ONNX format, run the following command:

uv run python export_to_onnx.py --checkpoint /path/to/checkpoint --onnx_model /path/to/onnx_model
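The flags above suggest a small command-line interface. As an illustration only (this is a hypothetical sketch, not the repository's actual export_to_onnx.py), an argparse interface matching those two flags might look like:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI mirroring the flags shown in the command above;
    # the real script's options and defaults may differ.
    parser = argparse.ArgumentParser(description="Export a checkpoint to ONNX.")
    parser.add_argument("--checkpoint", required=True,
                        help="Path to the trained model checkpoint.")
    parser.add_argument("--onnx_model", required=True,
                        help="Destination path for the exported ONNX file.")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"Exporting {args.checkpoint} -> {args.onnx_model}")
```

The actual export step would then load the checkpoint and call the framework's ONNX exporter (for example torch.onnx.export in a PyTorch project) before writing to the --onnx_model path.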

Step 6: Run the Streamlit app

Place the ONNX model in the models directory.

To start the Streamlit app, run the following command:

uv run streamlit run app/app.py

To start debug mode, run:

uv run python -m debugpy --listen 5678 -m streamlit run app/app.py

In the application, you can set the following parameters:

  • overlap: Set the overlap between the transcript and the diarization (default: 0.1)
  • onset_threshold: Set the onset threshold for detecting when a speaker starts (default: 0.1)
  • offset_threshold: Set the offset threshold for detecting when a speaker stops (default: 0.1)
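As a rough illustration of what the overlap parameter controls (a hypothetical sketch, not the app's actual implementation), each transcript segment can be assigned to the diarization turn that covers at least the given fraction of it:

```python
def overlap_ratio(seg, turn):
    """Fraction of the transcript segment covered by the speaker turn.

    seg and turn are (start, end) tuples in seconds.
    """
    inter = min(seg[1], turn[1]) - max(seg[0], turn[0])
    return max(inter, 0.0) / (seg[1] - seg[0])


def assign_speakers(segments, turns, overlap=0.1):
    # Assign each transcript segment to the speaker whose turn overlaps it
    # the most, provided the overlap ratio meets the threshold; otherwise
    # label the segment UNKNOWN. Turns are (start, end, speaker) tuples.
    labeled = []
    for seg in segments:
        best = max(turns, key=lambda t: overlap_ratio(seg, t[:2]), default=None)
        if best is not None and overlap_ratio(seg, best[:2]) >= overlap:
            labeled.append((seg, best[2]))
        else:
            labeled.append((seg, "UNKNOWN"))
    return labeled
```

The onset and offset thresholds, by contrast, are applied upstream in the diarization model itself, gating how confident it must be before marking speech as starting or stopping.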

Step 7: Add OPENAI_API_KEY and/or set up WHISPER_CPP_MODEL

whisper_1 requires a paid OpenAI subscription; whisper.cpp is a free, local alternative.

Download at least one supported model:

# Linux
docker run -it --rm -v ./data/models:/models ghcr.io/ggerganov/whisper.cpp:main "./models/download-ggml-model.sh small /models"
# Windows
docker run -it --rm -v "$(pwd -W)/models":/models ghcr.io/ggerganov/whisper.cpp:main "./models/download-ggml-model.sh small /models"

When WHISPER_CPP_MODEL is set, the following commands run instead of a call to OpenAI:

ffmpeg -i data/temp_results/uploaded_audio.mp3 -ar 16000 -ac 1 -c:a pcm_s16le data/audio/output.wav
# Linux
docker run -it --rm -v ./data/models:/models -v ./data/audio:/audios ghcr.io/ggerganov/whisper.cpp:main "./build/bin/whisper-cli -m /models/ggml-small.bin -f /audios/output.wav -ml 16 -oj -l en"
# Windows
docker run -it --rm -v "$(pwd -W)/data/models":/models -v "$(pwd -W)/data":/audios ghcr.io/ggerganov/whisper.cpp:main "./build/bin/whisper-cli -m /models/ggml-small.bin -f /audios/output.wav -ml 16 -oj -l en"
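That fallback can be sketched in Python as two subprocess command lines, built only when the environment variable is set. This is a hedged illustration: the paths, the helper name, and the model default here are assumptions, and the app's real wiring may differ.

```python
import os


def build_transcription_cmds(audio_in="data/temp_results/uploaded_audio.mp3",
                             wav_out="data/audio/output.wav"):
    """Hypothetical sketch of the whisper.cpp fallback: returns the ffmpeg
    resample command and the dockerized whisper-cli command as argument
    lists suitable for subprocess.run."""
    model = os.environ.get("WHISPER_CPP_MODEL", "ggml-small.bin")
    # Convert the upload to 16 kHz mono PCM WAV, as whisper.cpp expects.
    ffmpeg_cmd = ["ffmpeg", "-i", audio_in, "-ar", "16000", "-ac", "1",
                  "-c:a", "pcm_s16le", wav_out]
    # Run whisper-cli inside the container, mounting models and audio.
    whisper_cmd = ["docker", "run", "-it", "--rm",
                   "-v", "./data/models:/models",
                   "-v", "./data/audio:/audios",
                   "ghcr.io/ggerganov/whisper.cpp:main",
                   f"./build/bin/whisper-cli -m /models/{model} "
                   f"-f /audios/output.wav -ml 16 -oj -l en"]
    return ffmpeg_cmd, whisper_cmd
```

The -oj flag asks whisper-cli for JSON output, which the app would then parse in place of the OpenAI API response.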


Download files

Download the file for your platform.

Source Distribution

miya_speechless-0.0.26.tar.gz (131.2 kB)

Uploaded Source

Built Distribution


miya_speechless-0.0.26-py3-none-any.whl (24.2 kB)

Uploaded Python 3

File details

Details for the file miya_speechless-0.0.26.tar.gz.

File metadata

  • Download URL: miya_speechless-0.0.26.tar.gz
  • Upload date:
  • Size: 131.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.7.20

File hashes

Hashes for miya_speechless-0.0.26.tar.gz
Algorithm Hash digest
SHA256 dc03fb5fde70eaa1e0ec3933c7d38f9d3e15f6560dc6d36acfb415fe560bbed0
MD5 82f2be56c2a2c42c9139b4864be30bf1
BLAKE2b-256 90d1b112f87b27b70877eac38f34d6c31eec868ca36e9f10d17b3206236e86f9


File details

Details for the file miya_speechless-0.0.26-py3-none-any.whl.

File metadata

File hashes

Hashes for miya_speechless-0.0.26-py3-none-any.whl
Algorithm Hash digest
SHA256 dd8462230c64655bcbeb96568a25f03f329252c4ef2946a53ed1b0123a1ecc51
MD5 1b5ad22842929fbfa8d69ddc09451831
BLAKE2b-256 7d6b8fdc2f79062e06243f645ca6b2552ece28a533fe564519105e3d7bbfa1a7

