audiobook-generator - Generate audiobooks (one mp3 per chapter) from ebooks (currently only the epub format is supported)

Flow

graph TD;
    A[Input file] --> B["Convert to text and chapterize (Optionally extract out the cover image)"];
    B --> C[Transform to audio files];
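
The flow above can be sketched in Python. All function names here are illustrative stubs, not the project's actual API:

```python
from pathlib import Path

def chapterize(epub_path: str) -> list[tuple[str, str]]:
    # Stub: a real implementation would parse the epub spine
    # (e.g. with ebooklib) and return (chapter title, chapter text) pairs.
    return [("Chapter 1", "Once upon a time..."),
            ("Chapter 2", "The end.")]

def text_to_audio(title: str, text: str, out_dir: str) -> Path:
    # Stub: a real implementation would run a TTS model and write
    # an mp3; here we only compute the per-chapter output path.
    return Path(out_dir) / f"{title}.mp3"

def convert(epub_path: str, out_dir: str) -> list[Path]:
    # One mp3 path per chapter, mirroring the "chapterize then
    # transform to audio" flow in the diagram above.
    return [text_to_audio(title, text, out_dir)
            for title, text in chapterize(epub_path)]
```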

Usage

Running locally

Prerequisites

  • Python 3.10+ (This program was tested on 3.12)
  • (Optional) Install espeak-ng (On Debian/Ubuntu, run apt install -y espeak-ng)
  • (Development only) uv

For End Users

  • You don't need to clone this repository; you can install in either of these ways:
    • Using pip: python -m pip install audiobook-generator (a virtual environment is highly recommended)
    • Using pipx: pipx install audiobook-generator
    • NOTE For Windows users, one extra step is needed to make sure the CUDA (Nvidia) GPU is used when available:
      • If using pip and virtual environment, run this after the above pip install command (with the virtual environment activated first)
        • pip install torch --index-url https://download.pytorch.org/whl/cu124 --force
      • If using pipx, run this command instead:
        • pipx runpip audiobook-generator install torch --index-url https://download.pytorch.org/whl/cu124 --force
      • Technical details on why this is needed are described in the "Why you need that extra pip install step for Windows?" section.
  • Convert your epub file to audiobooks via the command
    • abg <epub path> <audio output directory>
  • If you want to see all the command line switches, just run abg -h

For Development

  • This program uses uv for dependency management and execution in development; install it first if you haven't done so.
  • To run the program from its source:
    • Clone this repository and cd inside.
    • (Only needed the first time) Run uv sync to create the virtual environment in the .venv directory and download all the dependencies.
    • Then run the following command
      • uv run -m audiobook_generator.main ...

Using Google Colab (if your epub is short and can be converted in under 30 minutes)

  • Click the Open In Colab badge to open the Colab notebook.
  • Upload your epub file to the root directory of the Colab runtime.
  • Run the code cells in sequence. After you click the run button of the last cell, you will be prompted to grant the notebook access to your Google Drive, to which the fully converted audiobook will be uploaded.

CI/CD Pipeline

Automatically publishing a new version via GitHub Actions

Prerequisites

  1. Create 2 environments testpypi and pypi at https://github.com/houtianze/audiobook-generator/settings/environments
  2. Configure the Publisher settings at testpypi and pypi accordingly:
    • testpypi:
      • Repository: houtianze/audiobook-generator
      • Workflow: python-publish-testpypi.yml
      • Environment name: testpypi
    • pypi:
      • Repository: houtianze/audiobook-generator
      • Workflow: python-publish.yml
      • Environment name: pypi

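A minimal Trusted Publishing workflow wired to the pypi environment might look like the sketch below. This is an illustrative example only, not the repository's actual python-publish.yml:

```yaml
# Illustrative sketch of a PyPI Trusted Publishing workflow
name: Publish to PyPI

on:
  release:
    types: [published]

jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi          # must match the environment name configured above
    permissions:
      id-token: write          # required for PyPI Trusted Publishing (OIDC)
    steps:
      - uses: actions/checkout@v4
      - name: Build the distribution
        run: python -m pip install build && python -m build
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```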
Publishing

  1. Tag a new version: git tag v1.x.y
  2. Push the tag to GitHub: git push --tags
  3. Create a release on GitHub from the tag, either via the website or by running gh release create v1.x.y --generate-notes (you need to install the GitHub CLI from https://cli.github.com/ and authenticate first)
  4. Relax and let GitHub Actions publish the new version for you.

CPU or GPU?

The choice of running the model on CPU or GPU is automatic, meaning:

  • On Windows/WSL/Linux, if you have an Nvidia graphics card with its driver properly installed, the model is loaded onto the GPU (cuda) and executed there; otherwise the CPU is used (which is slower).
  • On Mac, you need to set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 for it to run on the GPU (at the time of writing, the MPS support in PyTorch is incomplete and it won't work without the CPU fallback); otherwise it runs on the CPU.
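
The selection logic described above amounts to something like the following simplified sketch; pick_device is a hypothetical helper, not part of the program:

```python
import os

def pick_device(cuda_available: bool, mps_available: bool, platform: str) -> str:
    # Mirrors the CPU/GPU selection behavior described above.
    if platform in ("win32", "linux") and cuda_available:
        return "cuda"   # Nvidia GPU with a working driver
    if (platform == "darwin" and mps_available
            and os.environ.get("PYTORCH_ENABLE_MPS_FALLBACK") == "1"):
        return "mps"    # Apple GPU, with CPU fallback for unsupported ops
    return "cpu"        # slower, but always works

# In the real program the flags would come from
# torch.cuda.is_available() and torch.backends.mps.is_available().
```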

Why you need that extra pip install step for Windows?

(Thanks to @notimp for spotting this issue.)

If you go to the PyTorch website, you will see that to install pytorch on Windows (and only on Windows), you need to specify the --index-url parameter (e.g. pip3 install torch --index-url https://download.pytorch.org/whl/cu124). When using uv for development, this is handled by this section of the pyproject.toml file:

[tool.uv.sources]
torch = [
    { index = "pytorch-cu124", marker = "sys_platform == 'win32'" },
]

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

So when running under uv in development, the torch dependency is installed correctly. However, once the package is built and published to PyPI, this special torch index specification is not respected by pip: running pip (or pipx) install effectively runs pip install torch on Windows without the --index-url parameter, which installs a version that doesn't support cuda/GPU. Currently I don't see how to resolve this in packaging, as I guess the Python package specification may not support different dependency installation parameters on different platforms, or maybe I haven't dug deep enough. So for now, this extra step is required to install the correct version of torch on Windows.
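
One way to verify which build you ended up with: CUDA wheels from the cu124 index carry a local version tag (e.g. 2.4.1+cu124), while the default CPU-only wheel from PyPI does not. A tiny check along those lines (is_cuda_wheel is a hypothetical helper, not part of this project):

```python
def is_cuda_wheel(torch_version: str) -> bool:
    # CUDA builds installed from https://download.pytorch.org/whl/cu124
    # report a "+cuXXX" local version tag; the plain PyPI wheel on
    # Windows is CPU-only and has no such suffix.
    return "+cu" in torch_version

# Usage (assuming torch is installed):
#   import torch
#   print(is_cuda_wheel(torch.__version__))
```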
