A library to perform automatic speech recognition with Hugging Face Transformers.

Elpis Core Library

The Core Elpis Library, providing a quick API into :hugs: Transformers for automatic speech recognition.

You can use the library to:

  • Perform standalone inference using a pretrained HuggingFace Transformers (HFT) model.
  • Fine-tune a pretrained ASR model on your own dataset.
  • Generate text and Elan files from inference results for further analysis.

Documentation

Documentation for the library can be found here.

Dependencies

While we try to be as machine-independent as possible, there are some dependencies you should be aware of when using this library:

  • Processing datasets (elpis.datasets.processing) requires librosa, which depends on having libsndfile installed on your machine. If you're using elpis within a Docker container, you may have to install libsndfile manually.
  • Transcription (elpis.transcription.transcribe) requires ffmpeg if the audio you're attempting to transcribe needs to be resampled before it can be used. We assume a default sample rate of 16 kHz; see the sketch after this list for one way to resample ahead of time.
  • The preprocessing flow (elpis.datasets.preprocessing) is free of external dependencies.
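
If your recordings are at another sample rate, you can resample them up front. Below is a minimal sketch using librosa (which the dataset-processing modules already require) together with soundfile, one of librosa's own dependencies; the file names and the 16 kHz target are illustrative placeholders matching the default above. On slim Docker images, installing the system package (e.g. libsndfile1 on Debian-based distributions) is usually enough to satisfy the libsndfile requirement.

import librosa
import soundfile

# librosa.load decodes and resamples in one step: passing sr resamples
# the audio to the requested rate while loading it.
audio, sample_rate = librosa.load("input.wav", sr=16_000)

# Write the 16 kHz version out for elpis to consume.
soundfile.write("resampled.wav", audio, sample_rate)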

Installation

You can install the elpis library with:

pip3 install elpis

Usage

Below are some typical usage examples.

Standalone Inference

from pathlib import Path

from elpis.transcriber.results import build_text
from elpis.transcriber.transcribe import build_pipeline, transcribe

# Perform inference
asr = build_pipeline(pretrained_location="facebook/wav2vec2-base-960h")
audio = Path("<to_some_audio_file.wav>")
annotations = transcribe(audio, asr) # Timed, per-word annotation data

result = build_text(annotations) # Combine annotations to extract all text
print(result)

# Build output files
output_dir = Path("output")  # Any directory for the generated files
output_dir.mkdir(exist_ok=True, parents=True)
text_file = output_dir / "test.txt"
with open(text_file, "w") as output_file:
    output_file.write(result)
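
The same annotations can also be exported as an Elan (.eaf) file. A short sketch, reusing the build_elan helper and the to_file call shown in the fine-tuning example below:

from elpis.transcriber.results import build_elan

elan_file = output_dir / "test.eaf"
eaf = build_elan(annotations)  # Convert annotations to an Elan document
eaf.to_file(str(elan_file))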

Fine-tuning a Pretrained Model on a Local Dataset

from pathlib import Path
from typing import List

from elpis.datasets import Dataset
from elpis.datasets.dataset import CleaningOptions
from elpis.datasets.preprocessing import process_batch
from elpis.models import ElanOptions, ElanTierSelector
from elpis.trainer.job import TrainingJob, TrainingOptions
from elpis.trainer.trainer import train
from elpis.transcriber.results import build_elan, build_text
from elpis.transcriber.transcribe import build_pipeline, transcribe

files: List[Path] = [...] # A list of paths to the files to include.

dataset = Dataset(
    name="dataset",
    files=files,
    cleaning_options=CleaningOptions(), # Default cleaning options
    # Elan data extraction info; required if the dataset includes .eaf files.
    elan_options=ElanOptions(
        selection_mechanism=ElanTierSelector.NAME, selection_value="Phrase"
    ),
)

# Setup
tmp_path = Path('...')  # Replace with a real working directory

dataset_dir = tmp_path / "dataset"
model_dir = tmp_path / "model"
output_dir = tmp_path / "output"

# Make all directories
for directory in dataset_dir, model_dir, output_dir:
    directory.mkdir(exist_ok=True, parents=True)

# Preprocessing
batches = dataset.to_batches()
for batch in batches:
    process_batch(batch, dataset_dir)

# Train the model
job = TrainingJob(
    model_name="some_model",
    dataset_name="some_dataset",
    options=TrainingOptions(epochs=2, learning_rate=0.001),
    base_model="facebook/wav2vec2-base-960h"
)
train(
    job=job,
    output_dir=model_dir,
    dataset_dir=dataset_dir,
)

# Perform inference with pipeline
asr = build_pipeline(
    pretrained_location=str(model_dir.absolute()),
)
audio = Path("<to_some_audio_file.wav>")
annotations = transcribe(audio, asr)

# Build output files
text_file = output_dir / "test.txt"
with open(text_file, "w") as output_file:
    output_file.write(build_text(annotations))

elan_file = output_dir / "test.eaf"
eaf = build_elan(annotations)
eaf.to_file(str(elan_file))

print('voila ;)')
