Gradio custom component to visualize pyannote's pipelines outputs

Project description

pyannote_viewer

Installation

pip install pyannote_viewer

Usage

import gradio as gr
from pyannote_viewer import PyannoteViewer
from pyannote.audio import Pipeline
import os


def apply_pipeline(audio: str) -> tuple:
    # Requires a Hugging Face access token (HF_TOKEN) with access to the pipeline
    pipeline = Pipeline.from_pretrained(
        "pyannote/speech-separation-ami-1.0", use_auth_token=os.environ["HF_TOKEN"]
    )
    return pipeline(audio)


with gr.Blocks() as demo:
    audio = gr.Audio(type="filepath")
    btn = gr.Button("Apply separation pipeline")
    pyannote_viewer = PyannoteViewer(interactive=False)

    btn.click(fn=apply_pipeline, inputs=[audio], outputs=[pyannote_viewer])


if __name__ == "__main__":
    demo.launch()

PyannoteViewer

Initialization

name type default description
value
str | pathlib.Path | tuple[int, numpy.ndarray] | Callable | None
None A path, URL, or (sample rate, numpy array) tuple (sample rate in Hz, audio data as a float or int numpy array) for the default value that the PyannoteViewer component is going to take. If callable, the function will be called whenever the app loads to set the initial value of the component.
sources
list["upload" | "microphone"] | None
None A list of sources permitted for audio. "upload" creates a box where user can drop an audio file, "microphone" creates a microphone input. The first element in the list will be used as the default source. If None, defaults to ["upload", "microphone"], or ["microphone"] if `streaming` is True.
type
"numpy" | "filepath"
"numpy" The format the audio file is converted to before being passed into the prediction function. "numpy" converts the audio to a tuple consisting of: (int sample rate, numpy.array for the data), "filepath" passes a str path to a temporary file containing the audio.
label
str | None
None The label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.
every
float | None
None If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
show_label
bool | None
None If True, will display the label.
container
bool
True If True, will place the component in a container - providing some extra padding around the border.
scale
int | None
None Relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
min_width
int
160 Minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
interactive
bool | None
None If True, will allow users to upload and edit an audio file. If False, can only be used to play audio. If not provided, this is inferred based on whether the component is used as an input or output.
visible
bool
True If False, component will be hidden.
streaming
bool
False If set to True when used in a `live` interface as an input, will automatically stream the microphone input. When used as an output, takes audio chunks yielded from the backend and combines them into one streaming audio output.
elem_id
str | None
None An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
elem_classes
list[str] | str | None
None An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
render
bool
True If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
format
"wav" | "mp3"
"wav" The file format to save audio files in. Either 'wav' or 'mp3'. wav files are lossless but tend to be larger; mp3 files tend to be smaller. Applies both when this component is used as an input (when `type` is "filepath") and when it is used as an output.
autoplay
bool
False Whether to automatically play the audio when the component is used as an output. Note: browsers will not autoplay audio files if the user has not interacted with the page yet.
show_download_button
bool | None
None If True, will show a download button in the corner of the component for saving audio. If False, icon does not appear. By default, it will be True for output components and False for input components.
show_share_button
bool | None
None If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.
editable
bool
True If True, allows users to manipulate the audio file if the component is interactive. Defaults to True.
min_length
int | None
None The minimum length of audio (in seconds) that the user can pass into the prediction function. If None, there is no minimum length.
max_length
int | None
None The maximum length of audio (in seconds) that the user can pass into the prediction function. If None, there is no maximum length.
waveform_options
WaveformOptions | dict | None
None A dictionary of options for the waveform display. Options include: waveform_color (str), waveform_progress_color (str), show_controls (bool), skip_length (int), trim_region_color (str). Default is None, which uses the default values for these options.
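The "numpy" value format described in the table above is just a (sample rate, array) tuple. A minimal stand-alone sketch building one (the tone parameters are arbitrary; no pyannote_viewer import is needed to construct the tuple itself):

```python
import numpy as np

sr = 44100  # sample rate in Hz
t = np.linspace(0, 1, sr, endpoint=False)
mono = np.sin(2 * np.pi * 440 * t).astype(np.float32)  # one second of A440, shape (samples,)
stereo = np.stack([mono, 0.5 * mono], axis=1)          # multi-channel, shape (samples, channels)

mono_value = (sr, mono)      # a valid default `value` in the numpy format
stereo_value = (sr, stereo)
```

The same tuple shape is what the prediction function receives when `type="numpy"`.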

Events

name description
stream This listener is triggered when the user streams the PyannoteViewer.
change Triggered when the value of the PyannoteViewer changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See .input() for a listener that is only triggered by user input.
clear This listener is triggered when the user clears the PyannoteViewer using the X button for the component.
play This listener is triggered when the user plays the media in the PyannoteViewer.
pause This listener is triggered when the media in the PyannoteViewer stops for any reason.
stop This listener is triggered when the user reaches the end of the media playing in the PyannoteViewer.
start_recording This listener is triggered when the user starts recording with the PyannoteViewer.
pause_recording This listener is triggered when the user pauses recording with the PyannoteViewer.
stop_recording This listener is triggered when the user stops recording with the PyannoteViewer.
upload This listener is triggered when the user uploads a file into the PyannoteViewer.

User function

The impact on the user's predict function varies depending on whether the component is used as an input or output for an event (or both).

  • When used as an Input, the component only impacts the input signature of the user function.
  • When used as an output, the component only impacts the return signature of the user function.

The code snippet below is accurate in cases where the component is used as both an input and an output.

  • As input: Is passed, passes audio as one of these formats (depending on `type`): a str filepath, or a tuple of (sample rate in Hz, audio data as numpy array). If the latter, the audio data is a 16-bit int array whose values range from -32768 to 32767, and the shape of the array is (samples,) for mono audio or (samples, channels) for multi-channel audio.
  • As output: Should return, expects audio data in any of these formats: a str or pathlib.Path filepath or URL to an audio file, a bytes object (recommended for streaming), or a tuple of (sample rate in Hz, audio data as numpy array). Note: if audio is supplied as a numpy array, the audio will be normalized by its peak value to avoid distortion or clipping in the resulting audio.
def predict(
    value: str | tuple[int, numpy.ndarray] | None
) -> (
    tuple[
        pyannote.core.annotation.Annotation,
        numpy.ndarray | pathlib.Path | str,
    ]
    | None
):
    return value
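The peak normalization and 16-bit integer range mentioned in the notes above can be sketched in plain numpy (the helper name is illustrative, not part of the component's API):

```python
import numpy as np

def float_to_int16(audio: np.ndarray) -> np.ndarray:
    """Peak-normalize float audio and convert it to 16-bit ints,
    mirroring the conversion described for numpy-array audio."""
    peak = np.abs(audio).max()
    if peak > 0:
        audio = audio / peak  # scale so the loudest sample hits full scale
    return (audio * 32767).astype(np.int16)

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.1 * np.sin(2 * np.pi * 440 * t)  # quiet 440 Hz tone, mono: shape (samples,)
pcm = float_to_int16(tone)
```

Without the normalization step, the quiet tone would only use a tenth of the int16 range.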

WaveformOptions

@dataclasses.dataclass
class WaveformOptions:
    waveform_color: str | None = None
    waveform_progress_color: str | None = None
    trim_region_color: str | None = None
    show_recording_waveform: bool = True
    show_controls: bool = False
    skip_length: int | float = 5
    sample_rate: int = 44100

Download files

Download the file for your platform.

Source Distribution

pyannote_viewer-1.0.2.tar.gz (223.4 kB)

Built Distribution

pyannote_viewer-1.0.2-py3-none-any.whl (174.7 kB)

File details

Details for the file pyannote_viewer-1.0.2.tar.gz.

File metadata

  • Download URL: pyannote_viewer-1.0.2.tar.gz
  • Upload date:
  • Size: 223.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.14

File hashes

Hashes for pyannote_viewer-1.0.2.tar.gz
Algorithm Hash digest
SHA256 ee7f7a7d7d0c8773c2c2e3f55e8c283417bf78831c5385396e395c26a192d8bc
MD5 e1b5008e3a2cc4f0b81020f2d43f132b
BLAKE2b-256 6122f8c37912289c0f046086bed2d109fc40cc4761ba7aded0587bd7d03fdac6

File details

Details for the file pyannote_viewer-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: pyannote_viewer-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 174.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.14

File hashes

Hashes for pyannote_viewer-1.0.2-py3-none-any.whl
Algorithm Hash digest
SHA256 3f8917efb0a09f783584daebec8c82d559e74cec6c82b3907361c24e7b15c6b1
MD5 3277384ddd2c8d40ab84548e8d609da4
BLAKE2b-256 d5a102aa63071ed73a6cd13d2346516f502622619d2b79a2353f989e066d95b2
