
A simple FastAPI server to host XTTSv2

Project description


There's a Google Colab version you can use if your computer is weak. You can check out the guide

This project is inspired by silero-api-server and utilizes XTTSv2.

I created a Pull Request that has been merged into the dev branch of SillyTavern: here.

The TTS module or server can be used in any way you prefer.

Installation

To begin, install the xtts-api-server package using pip:

pip install xtts-api-server

I strongly recommend installing PyTorch with CUDA support to leverage the processing power of your video card, which will enhance the speed of the entire process:

pip install torch==2.1.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
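
If you want to verify that the CUDA build of PyTorch was actually installed before starting the server, a small check like the one below can help (this is only a verification snippet, not part of the package):

```python
# Quick sanity check that PyTorch can see the GPU; XTTSv2 runs much faster on CUDA.
import torch

print(torch.__version__)            # should report a +cu118 build if the CUDA wheel was installed
print(torch.cuda.is_available())    # True means the model can be run on the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```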

Starting Server

python -m xtts_api_server will run on the default IP and port (localhost:8020)

usage: xtts_api_server [-h] [-hs HOST] [-p PORT] [-sf SPEAKER_FOLDER] [-o OUTPUT] [-t TUNNEL_URL] [-ms MODEL_SOURCE] [--lowvram]

Run XTTSv2 within a FastAPI application

options:
  -h, --help show this help message and exit
  -hs HOST, --host HOST
  -p PORT, --port PORT
  -sf SPEAKER_FOLDER, --speaker_folder The folder containing the voice samples used for TTS
  -o OUTPUT, --output Output folder
  -t TUNNEL_URL, --tunnel URL of the tunnel used (e.g. ngrok, localtunnel)
  -ms MODEL_SOURCE, --model-source ["api","apiManual","local"]
  --lowvram Low-VRAM mode: the model is kept in RAM and moved to VRAM only for processing; the speed difference is small

If you want the server to listen on all interfaces (so other machines can reach it), use -hs 0.0.0.0

The -t or --tunnel flag is needed so that, when you fetch the list of speakers via a GET request, you get the correct link to hear the preview. More info here

Model-source defines in which format you want to use XTTS:

  1. local - loads a version 2.0.2 model into the models folder and uses XttsConfig and inference.
  2. apiManual - loads a version 2.0.2 model into the models folder and uses the tts_to_file function from the TTS API.
  3. api - loads the latest version of the model.
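
For example, a typical start command that combines the flags above (the folder names below are just placeholders for your own paths) might look like:

python -m xtts_api_server -hs 0.0.0.0 -p 8020 -sf speakers/ -o output/ -ms local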

The first time you run the server or generate audio, you may need to confirm that you agree to the terms of use for XTTS.

API Docs

API Docs can be accessed from http://localhost:8020/docs

Voice Samples

You can find sample voices in this repository. By default, output will be saved to /output/output.wav, but you can change this; more details are in the API documentation.

Selecting Folder

You can change the folders for speakers and the folder for output via the API.
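
As a rough sketch of what that can look like with the requests library, see below; the endpoint paths and JSON field names here are assumptions, so check http://localhost:8020/docs for the exact routes and request schemas:

```python
# Sketch: switch the speaker and output folders through the HTTP API.
# NOTE: the endpoint paths and field names below are assumptions; verify them in /docs.
import requests

BASE = "http://localhost:8020"

requests.post(f"{BASE}/set_speaker_folder", json={"speaker_folder": "my_speakers"})
requests.post(f"{BASE}/set_output", json={"output_folder": "my_output"})
```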

Get Speakers

Once you have at least one file in your speakers folder, you can get its name via the API; after that, you only need to specify that file name.
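
A minimal sketch of that flow with the requests library is shown below; the /speakers and /tts_to_audio routes and their fields are assumptions, so confirm the real names and schemas in the /docs page:

```python
# Sketch: list the available speakers, then reference one by file name in a TTS request.
# NOTE: routes and JSON fields are assumptions; check /docs for the actual schema.
import requests

BASE = "http://localhost:8020"

speakers = requests.get(f"{BASE}/speakers").json()
print(speakers)  # inspect what the server reports for your speaker folder

resp = requests.post(
    f"{BASE}/tts_to_audio",
    json={"text": "Hello there!", "speaker_wav": "example", "language": "en"},
)
with open("hello.wav", "wb") as f:
    f.write(resp.content)
```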

Note on creating samples for quality voice cloning

The following is a quote from Reddit user Material1276:

Some suggestions on making good samples

Keep them about 7-9 seconds long. Longer isn't necessarily better.

Make sure the audio is downsampled to a mono, 22050 Hz, 16-bit WAV file. Otherwise you will slow down processing by a large percentage, and it seems to cause poor-quality results (based on a few tests). 24000 Hz is the quality it outputs at anyway!

Using the latest version of Audacity, select your clip and use Tracks > Resample to 22050 Hz, then Tracks > Mix > Stereo to Mono, and then File > Export Audio, saving it as a 22050 Hz WAV.

If you need to do any audio cleaning, do it before you compress it down to the above settings (Mono, 22050Hz, 16 Bit).

Ensure the clip you use doesn't have background noise or music in it, e.g. lots of movies have quiet music when many of the actors are talking. Bad-quality audio will have hiss that needs cleaning up. The AI will pick this up, even if we don't, and use it to some degree in the simulated voice, so clean audio is key!

Try to make your clip one of nice flowing speech, like the included example files: no big pauses, gaps, or other sounds. Preferably one in which the person you are trying to copy shows a little vocal range. Example files are in here

Make sure the clip doesn't start or end with breathy sounds (breathing in/out etc).

Using AI-generated audio clips may introduce unwanted sounds, as they are already a copy/simulation of a voice, though this would need testing.
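
If you prefer to prepare samples with a script instead of Audacity, a small sketch like the one below does the same downmix, resample, and 16-bit export; it uses the librosa and soundfile libraries, which are not dependencies of this project:

```python
# Sketch: convert a clip to the recommended format (mono, 22050 Hz, 16-bit WAV).
# librosa and soundfile are third-party libraries, not part of xtts-api-server.
import librosa
import soundfile as sf

# Load the clip, downmixing to mono and resampling to 22050 Hz in one step.
audio, sr = librosa.load("raw_clip.wav", sr=22050, mono=True)

# Write a 16-bit PCM WAV that can be placed in the speakers folder.
sf.write("my_voice.wav", audio, sr, subtype="PCM_16")
```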


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

xtts_api_server-0.4.3.tar.gz (1.8 MB)

Uploaded Source

Built Distribution

xtts_api_server-0.4.3-py3-none-any.whl (11.6 kB)

Uploaded Python 3

File details

Details for the file xtts_api_server-0.4.3.tar.gz.

File metadata

  • Download URL: xtts_api_server-0.4.3.tar.gz
  • Upload date:
  • Size: 1.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: python-httpx/0.23.0

File hashes

Hashes for xtts_api_server-0.4.3.tar.gz:

  • SHA256: 1a3b7a04f840fcc4a6da04f25f58deb7a91aaa557554ff4330e320fb564fc4ce
  • MD5: 7bd1a014fb1c4da6755217b6b1239c04
  • BLAKE2b-256: f6a88e53d4bc018aef9ace8d4d8b37902214b874e351787ebe4f57660c9c6845

See more details on using hashes here.

File details

Details for the file xtts_api_server-0.4.3-py3-none-any.whl.

File metadata

File hashes

Hashes for xtts_api_server-0.4.3-py3-none-any.whl:

  • SHA256: edda5ff8cdac2ce2b82171a63b6ab1fcf05d1df1291dab5a9e42cb36ff4f34f9
  • MD5: df21c3261352393ffe2d63a0ceb57bde
  • BLAKE2b-256: 5f83956afeaa23ab87863009a1435e5c374517039e36aa3780079e274e94c6b7

See more details on using hashes here.
