Generate audiobooks (mp3 files per chapter) from ebooks (currently only supports the epub format).

Project description

audiobook-generator - Generate audiobooks (one mp3 per chapter) from ebooks (epub)

Flow

graph TD;
    A[Input file] --> B["Convert to text and chapterize (optionally extracting the cover image)"];
    B --> C[Transform to audio files];
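The first stage (epub to text chapters) can be sketched with only the standard library, since an epub is a zip archive of (X)HTML documents. Everything below (the function names and the tag-stripping approach) is an illustrative assumption, not the project's actual code; a real implementation would follow the OPF spine order rather than archive order:

```python
import zipfile
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects the text content of an (X)HTML document, ignoring tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        return "".join(self.parts).strip()

def extract_chapters(epub_path):
    """Hypothetical chapterizer: one (name, text) pair per (X)HTML entry.

    Entries are taken in archive order for illustration only; the real
    chapter order comes from the epub's OPF spine.
    """
    chapters = []
    with zipfile.ZipFile(epub_path) as zf:
        for name in zf.namelist():
            if name.endswith((".xhtml", ".html", ".htm")):
                parser = _TextExtractor()
                parser.feed(zf.read(name).decode("utf-8", errors="replace"))
                chapters.append((name, parser.text()))
    return chapters
```

Each chapter's text would then be fed to the TTS model in the second stage.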

Usage

Using Google Colab (the most convenient way to run it, with no need to own a GPU)

  • Click this button Open In Colab to open the Colab notebook.
  • Upload your epub file to the root directory of the Colab runtime.
  • Run the code cells in sequence. After you click the run button of the last cell, you can leave the browser tab and let it do all the hard work; once all the audio files are generated, the notebook will zip them up and upload the archive to your Dropbox.
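The final zip step can be approximated with the standard library. This is a hedged sketch, not the notebook's actual code; the upload itself would go through the official Dropbox SDK (the files_upload call mentioned in the docstring):

```python
import shutil

def zip_audiobook(audio_dir, archive_base):
    """Zip a directory of generated mp3s into <archive_base>.zip and
    return the archive path.

    Uploading could then use the official Dropbox SDK, roughly:
        dropbox.Dropbox(token).files_upload(data, "/book.zip")
    """
    return shutil.make_archive(archive_base, "zip", root_dir=audio_dir)
```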

Running locally

Prerequisites

  • Python 3.10+ (This program was tested on 3.12)
  • (Optional) Install espeak-ng (On Debian/Ubuntu, run apt install -y espeak-ng)
  • (Development only) uv

For End Users

  • You don't need to clone this repository; install it in either of these ways:
    • Using pip: python -m pip install audiobook-generator (virtual environment highly recommended)
    • Using pipx: pipx install audiobook-generator
    • NOTE For Windows users, one extra step is needed to make sure the CUDA (Nvidia) GPU is used when available:
      • If using pip and a virtual environment, run this after the pip install command above (with the virtual environment activated):
        • pip install torch --index-url https://download.pytorch.org/whl/cu124 --force
      • If using pipx, run this command instead:
        • pipx runpip audiobook-generator install torch --index-url https://download.pytorch.org/whl/cu124 --force
      • The technical details on why this is needed are described in the "Why you need that extra pip install step for Windows?" section.
  • Convert your epub file to audiobooks via the command
    • abg <epub path> <audio output directory>
  • If you want to see all the command line switches, just run abg -h
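If you want to drive abg from another Python script, a thin subprocess wrapper is enough. The helpers below are a hypothetical convenience, assuming only the abg <epub path> <audio output directory> interface shown above:

```python
import subprocess

def build_abg_command(epub_path, out_dir, extra_args=()):
    """Assemble the abg command line: abg <epub path> <output dir> [switches]."""
    return ["abg", str(epub_path), str(out_dir), *extra_args]

def run_abg(epub_path, out_dir, extra_args=()):
    """Run abg and raise CalledProcessError if the conversion fails."""
    subprocess.run(build_abg_command(epub_path, out_dir, extra_args), check=True)
```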

For Development

Running in development
  • This program uses uv for dependency management and execution in development; install it first if you haven't done so.
  • To run the program from its source:
    • Clone this repository and cd inside.
    • (Needed only once, the first time) Run uv sync to create the virtual environment in the .venv directory and download all the dependencies.
    • Then run the following command
      • uv run -m audiobook_generator.main ...

CI/CD Pipeline

Automatically publishing a new version via GitHub Actions

Prerequisites

  1. Create 2 environments testpypi and pypi at https://github.com/houtianze/audiobook-generator/settings/environments
  2. Configure the Publisher settings at testpypi and pypi accordingly:
    • testpypi:
     Repository: houtianze/audiobook-generator
     Workflow: python-publish-testpypi.yml
     Environment name: testpypi
    
    • pypi:
     Repository: houtianze/audiobook-generator
     Workflow: python-publish.yml
     Environment name: pypi
    

Publishing

  1. Tag a new version: git tag v1.x.y
  2. Push the tag to GitHub: git push --tags
  3. Create a release on GitHub from the tag, either on the website or by running gh release create v1.x.y --generate-notes (you need to install the GitHub CLI from https://cli.github.com/ and authenticate yourself first)
  4. Relax and let GitHub Actions do the rest.
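For reference, a trusted-publishing workflow of this shape needs little more than the id-token permission and the official PyPA publish action. This is a minimal sketch under those assumptions, not the repository's actual python-publish.yml:

```yaml
name: Publish to PyPI
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi           # must match the environment configured on PyPI
    permissions:
      id-token: write           # required for PyPI trusted publishing (OIDC)
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build     # build sdist and wheel into dist/
      - uses: pypa/gh-action-pypi-publish@release/v1
```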

CPU or GPU?

The selection to run the model on CPU or GPU is automatic, meaning:

  • On Windows/WSL/Linux, if you have an Nvidia graphics card with the driver properly installed, the model is loaded onto the GPU (cuda) and executed there; otherwise, the CPU is used (which is slower).
  • On Mac, you need to set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 for it to run on the GPU (at the time of writing, the MPS support in PyTorch is incomplete and it won't work without the CPU fallback); otherwise it runs on the CPU.
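The selection described above boils down to a small decision function. torch.cuda.is_available() and torch.backends.mps.is_available() are real PyTorch calls; the pick_device helper itself is an illustrative assumption, written over plain booleans so it can be shown without importing torch:

```python
def pick_device(cuda_available, mps_available):
    """Choose the torch device string from availability flags.

    In practice the flags would come from:
        cuda_available = torch.cuda.is_available()
        mps_available  = torch.backends.mps.is_available()
    and on macOS you would also export PYTORCH_ENABLE_MPS_FALLBACK=1
    so that unsupported MPS ops fall back to the CPU.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```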

Why you need that extra pip install step for Windows?

(Thanks to @notimp for spotting this issue.)

If you go to the PyTorch website, you will see that to install a CUDA-enabled PyTorch on Windows (and only on Windows), you need to specify the --index-url parameter (e.g. pip3 install torch --index-url https://download.pytorch.org/whl/cu124). When using uv for development, this is handled by this section of the pyproject.toml file:

[tool.uv.sources]
torch = [
    { index = "pytorch-cu124", marker = "sys_platform == 'win32'" },
]

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

So when running under uv in development, the torch dependency is installed correctly. However, once the project is packaged and published to PyPI, this special torch index specification is not respected by pip: running pip (or pipx) install simply executes pip install torch on Windows without the --index-url parameter, which installs a version that doesn't support CUDA/GPU. Currently I don't see how to resolve this in packaging, as I suspect the Python packaging specification doesn't support different dependency installation parameters per platform, or maybe I haven't dug deep enough. So for now, this extra step is required to install the correct version of torch on Windows.
