
Create natural-sounding audio from text, clone voices and use them, and convert voice to voice, powered by the Bark model.

Project description

SpeechCraft

Text2Speech, Voice-Cloning and Voice2Voice conversion

Ever wanted to create natural-sounding speech from text, clone a voice, or sound like someone else? SpeechCraft is ideal for creating voiceovers, audiobooks, or just having fun.

Features:

  • Text2speech synthesis with the 🐶 Bark model by Suno.ai
    • Generate speech in different languages
    • Supports emotions & singing.
  • Speaker generation / embedding generation aka voice cloning
  • Voice2voice synthesis: given an audio file, generate a new audio file with the voice of a different speaker.
  • Convenient, deployment-ready web API with FastTaskAPI
  • Automatic download of models

Quick Links

Also check out other socaity projects for generative AI:

Example generations and cloned voices

https://github.com/SocAIty/SpeechCraft/assets/7961324/dbf905ea-df37-4e52-9e93-a9833352459d

The Hermine voice was generated from the voice_clone_test_voice_1.wav file, which contains around 11 seconds of clear speech.

https://github.com/SocAIty/SpeechCraft/assets/7961324/71a039c7-e665-4576-91c7-729052e05b03

Installation

The code runs fine on Windows and Linux.

With PIP

# from PyPI (without web API)
pip install speechcraft
# with web API (quote the extra so it also works in zsh)
pip install "speechcraft[full]"
# or from GitHub for the newest version
pip install git+https://github.com/SocAIty/speechcraft

To use a GPU, don't forget to install the PyTorch GPU build matching your CUDA version. For example:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Requirements:

  • Python >= 3.5 and <= 3.10

Or clone and work with the repository.

  1. Clone the repository.
  2. (Optional) Create a virtual environment with python -m venv venv and activate it (venv/Scripts/activate on Windows, source venv/bin/activate on Linux/macOS).
  3. Install the package with pip install .
  4. Don't forget to install fairseq and the PyTorch GPU build.

Usage

We provide three ways to use the text-to-speech functionality.

  1. Direct module import and inference
  2. By deploying and calling the web service
  3. As part of the socaity SDK. # coming soon

Inference from script

from speechcraft import text2voice, voice2embedding, voice2voice

# simple text2speech synthesis
text = "I love society [laughs]! [happy] What a day to make voice overs with artificial intelligence."
audio_numpy, sample_rate = text2voice(text, speaker_name="en_speaker_3")

# speaker embedding generation
embedding = voice2embedding(audio_file="voice_sample_15s.wav", voice_name="hermine").save_to_speaker_lib()

# text2speech synthesis with cloned voice or embedding
audio_with_cloned_voice, sample_rate = text2voice(text, voice=embedding)  # also works with voice="hermine"

# voice2voice synthesis
cloned_audio = voice2voice(audio_file="my_audio_file.wav", voice_name_or_embedding_path="hermine")

Use the following code to convert and save the audio file with the media-toolkit module.

from media_toolkit import AudioFile
audio = AudioFile().from_np_array(audio_numpy, sr=sample_rate)
audio.save("my_new_audio.wav")
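If media_toolkit is not available, the raw samples can also be written with only the standard library. The sketch below is an illustrative alternative (not part of speechcraft itself), assuming the output is mono float samples in [-1.0, 1.0]; Bark's output sample rate is 24 kHz.

```python
import math
import struct
import wave

def save_wav(path, samples, sample_rate):
    """Write mono float samples in [-1.0, 1.0] as a 16-bit PCM WAV file."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)           # mono
        wf.setsampwidth(2)           # 16-bit samples
        wf.setframerate(sample_rate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        wf.writeframes(frames)

# Example: one second of a 440 Hz sine tone at 24 kHz
sr = 24000
tone = [0.5 * math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
save_wav("tone.wav", tone, sr)
```

The clamp before packing avoids integer overflow when a sample slightly exceeds the nominal range.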

Note: The first time you use speechcraft it will download the models. These files are quite large, so the download can take a while.

Web Service

(Image: OpenAPI docs page of the running web service)

The usage of the webservice is documented in WebService.md.
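As a rough illustration of calling the service over HTTP, the sketch below assembles a request for a hypothetical /text2voice route. The host, port, route, and field names here are assumptions for illustration only; check WebService.md for the actual endpoints and parameters.

```python
import json
import urllib.request

def build_tts_request(base_url, text, voice="en_speaker_3"):
    """Assemble a POST request for a hypothetical text2voice route.
    Route and field names are placeholders; see WebService.md."""
    payload = json.dumps({"text": text, "voice": voice}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/text2voice",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_tts_request("http://localhost:8009", "Hello world!")
# response = urllib.request.urlopen(req)  # only works while the service is running
```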

Details and guidelines

🛠️ Hardware and Inference Speed

Bark has been tested and works on both CPU and GPU (pytorch 2.0+, CUDA 11.7 and CUDA 12.0).

On enterprise GPUs and PyTorch nightly, Bark can generate audio in roughly real time. On older GPUs, default Colab, or CPU, inference can be significantly slower; in that case consider using the smaller models. Details can be found in our tutorial sections.

The full version of Bark requires around 12GB of VRAM to hold everything on GPU at the same time. To use a smaller version of the models, which should fit into 8GB VRAM, set the environment flag SUNO_USE_SMALL_MODELS=True.
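The flag has to be set before the models are loaded. One way is to set it in Python before importing the package:

```python
import os

# Select the smaller Bark checkpoints (fit into ~8 GB VRAM)
# before any model-loading import runs.
os.environ["SUNO_USE_SMALL_MODELS"] = "True"

# Imports that trigger model loading must come after the flag is set, e.g.:
# from speechcraft import text2voice
```

Alternatively, export the variable in your shell before starting Python.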

If you don't have hardware available or if you want to play with bigger versions of our models, you can also sign up for early access to our model playground here.

⚙️ Emotion and language with the Model

Bark is a fully generative text-to-audio model developed for research and demo purposes. It follows a GPT-style architecture similar to AudioLM and VALL-E and uses a quantized audio representation from EnCodec. It is not a conventional TTS model but a fully generative text-to-audio model capable of deviating in unexpected ways from any given script. Unlike previous approaches, the input text prompt is converted directly to audio without the intermediate use of phonemes. The model can therefore generalize to arbitrary instructions beyond speech, such as music lyrics, sound effects, or other non-speech sounds.

Below is a list of some known non-speech sounds, but we are finding more every day. Please let us know if you find patterns that work particularly well on Discord!

  • [laughter]
  • [laughs]
  • [sighs]
  • [music]
  • [gasps]
  • [clears throat]
  • — or ... for hesitations
  • ♪ for song lyrics
  • CAPITALIZATION for emphasis of a word
  • [MAN] and [WOMAN] to bias Bark toward male and female speakers, respectively
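The tokens above are mixed directly into an ordinary prompt string. A small hypothetical example (pass the result to text2voice the same way as plain text):

```python
# Compose a prompt that mixes plain text with Bark's non-speech tokens.
prompt = (
    "Well... [clears throat] I was NOT expecting that! "
    "[laughs] What a day."
)

# audio, sr = text2voice(prompt, speaker_name="en_speaker_3")  # requires downloaded models
```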

Supported Languages

  • English (en)
  • German (de)
  • Spanish (es)
  • French (fr)
  • Hindi (hi)
  • Italian (it)
  • Japanese (ja)
  • Korean (ko)
  • Polish (pl)
  • Portuguese (pt)
  • Russian (ru)
  • Turkish (tr)
  • Chinese, simplified (zh)

To use a different language, pass the corresponding voice parameter, e.g. "de_speaker_1" for German. You can find the preset voices and languages in the assets folder.
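The preset names appear to follow a "<lang>_speaker_<n>" pattern. A tiny helper sketch, assuming that naming (verify against the assets folder, which is the authoritative list):

```python
# Language codes from the supported-languages list above.
SUPPORTED_LANGS = {"en", "de", "es", "fr", "hi", "it", "ja",
                   "ko", "pl", "pt", "ru", "tr", "zh"}

def preset_voice(lang, index):
    """Build a preset voice name like 'de_speaker_1'.
    Assumes the '<lang>_speaker_<n>' naming seen in the assets folder."""
    if lang not in SUPPORTED_LANGS:
        raise ValueError(f"unsupported language code: {lang}")
    return f"{lang}_speaker_{index}"
```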

© License

SpeechCraft and Bark are licensed under the MIT License.

Voice Cloning guide

Make sure these things are NOT in your voice input: (in no particular order)

  • Noise (You can use a noise remover before)
  • Music (There are also music remover tools) (Unless you want music in the background)
  • A cut-off at the end (this will cause the model to try to continue the generation)

What makes for good prompt audio? (in no particular order)

  • Around 7 to 15 seconds of voice data
  • Clearly spoken
  • No weird background noises
  • Only one speaker
  • Audio which ends after a sentence ends
  • Regular/common voice (They usually have more success, it's still capable of cloning complex voices, but not as good at it)
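The duration guideline above can be checked programmatically before cloning. A minimal sketch using only the stdlib wave module, assuming the sample is a PCM WAV file:

```python
import wave

def check_clone_sample(path, min_s=7.0, max_s=15.0):
    """Return (duration_seconds, ok) for a PCM WAV voice sample.
    The 7-15 s window follows the guideline above; adjust as needed."""
    with wave.open(path, "rb") as wf:
        duration = wf.getnframes() / wf.getframerate()
    return duration, min_s <= duration <= max_s
```

Note this only checks length; clarity, background noise, and single-speaker content still have to be judged by ear.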

Disclaimer

This repository is a merge of the original bark repository and bark-voice-cloning-HuBERT-quantizer by gitmylo. The credit goes to the original authors. Like the original authors, I am not responsible for any misuse of this repository. Use at your own risk, and please act responsibly. Don't copy and publish the voice of a person without their consent.

Contribute

Any help with maintaining and extending the package is welcome. Feel free to open an issue or a pull request.

PLEASE LEAVE A :star: TO SUPPORT THIS WORK
