
A utility to extract vocabulary lists from manga.


Japanese Vocabulary Extractor

This script allows you to automatically scan through various types of Japanese media and generate a CSV file of all contained words for studying. Currently supported formats are:

  • Manga (as images)
  • Subtitles (ASS/SRT files) from anime, shows or movies
  • PDF and EPUB files
  • Text (txt) files

It can also automatically add the English definition of each word to the CSV, as well as furigana if desired.

The resulting CSV can be imported into Anki (if you add the English definitions) or Bunpro.

Installation

You need to have Python installed on your computer. I recommend using Python 3.12.

To install the Japanese Vocabulary Extractor, follow these steps:

  1. Open a terminal or command prompt on your computer.
  2. Type the following command and press Enter:
    pip install japanese-vocabulary-extractor
    

This will download and install the necessary files for the tool to work.

Usage

To use the Japanese Vocabulary Extractor, follow these steps:

  1. Open a terminal or command prompt on your computer.
  2. Type the following command and press Enter:
    jpvocab-extractor --type TYPE input_path
    

Replace TYPE with the type of media you are scanning: 'manga', 'subtitle', 'pdf', 'epub', 'txt' or 'generic'.

Replace input_path as follows:

  • For manga, provide a folder containing the images.
  • For other types, provide the file or a folder with multiple files. Use quotation marks if the path has spaces.

This will create a vocab.csv file with all the words found.
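Because the output is a plain CSV, it is easy to post-process with a few lines of Python. Here is a minimal sketch that loads the words from the file, assuming one word per row in the first column (the exact layout of your vocab.csv may differ, so verify it first):

```python
import csv

def load_vocab(path):
    """Read a vocab CSV and return the words from the first column."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row[0] for row in csv.reader(f) if row]

# Build a tiny example file to demonstrate the round trip.
with open("vocab.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows([["言葉"], ["勉強"]])

print(load_vocab("vocab.csv"))  # ['言葉', '勉強']
```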

Options

You can add options to the command to change its behavior. For example, to add English definitions to the CSV, include the --add-english option:

jpvocab-extractor --add-english --type TYPE input_path

Here is a list of all options:

  • --add-english: Looks up and adds the English translation of each word to the CSV file.
  • --furigana: Adds furigana to all words in the CSV file. Note that this is quite primitive: it just adds the reading of the whole word in hiragana in brackets.
  • --id: Replaces each word with its JMDict ID in the CSV file. Incompatible with the --furigana flag.
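If you need the word and its reading as separate fields (e.g. for a custom Anki note type), the bracketed furigana output can be split with a small helper. This is a sketch that assumes the format is word[reading] with ASCII square brackets; check your own output, since the actual bracket style may differ:

```python
import re

# Assumed format: whole-word hiragana reading in square brackets, e.g. "言葉[ことば]".
FURIGANA_RE = re.compile(r"^(?P<word>[^\[]+)\[(?P<reading>[^\]]+)\]$")

def split_furigana(entry):
    """Split 'word[reading]' into (word, reading); return (entry, None) if no brackets."""
    m = FURIGANA_RE.match(entry)
    if not m:
        return entry, None
    return m.group("word"), m.group("reading")

print(split_furigana("言葉[ことば]"))  # ('言葉', 'ことば')
print(split_furigana("ねこ"))          # ('ねこ', None)
```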

Here are some manga-specific options for handling multiple volumes:

  • --parent: Only relevant when processing manga: indicates that the provided folder contains multiple volumes. Each folder inside it will be treated as its own volume.
  • --separate-vol: Only relevant if using --parent: each volume will be saved to a separate CSV file.
  • --combine-vol: Only relevant if using the --separate-vol flag: all volumes will be combined into a single CSV file, with their respective chapter name inserted as "#chapter1" above each section. This also removes duplicates that appeared in earlier volumes.
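The combined file described above can be split back into per-volume word lists by scanning for the section markers. This sketch assumes each marker is a line starting with "#chapter" (as in the "#chapter1" example above) and that each following line holds one word; adjust it to match your actual file layout:

```python
def split_by_chapter(lines):
    """Group words under the most recent '#chapter…' marker line."""
    sections, current = {}, None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("#chapter"):
            current = line
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return sections

sample = ["#chapter1", "言葉", "勉強", "#chapter2", "漫画"]
print(split_by_chapter(sample))
# {'#chapter1': ['言葉', '勉強'], '#chapter2': ['漫画']}
```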

Here are all the available options shown together:

jpvocab-extractor [-h] [--parent] [--separate-vol] [--combine-vol] [--id] [--add-english] [--furigana] --type TYPE input_path

Bunpro

The ideal setup for Bunpro isn't known yet, but here is the command expected to work best for manga:

jpvocab-extractor --parent --separate-vol --combine-vol --id --type manga input_path

This combines all volumes into one CSV file, each under its own section, with JMDict IDs for each word.

For general creation of decks for media other than manga, you would only add the --id flag:

jpvocab-extractor --id --type TYPE input_path

Mokuro files

Bonus: Using this script with manga will also generate .mokuro and .html files for each volume, allowing you to read the manga with selectable text in your browser. For more details, visit the mokuro GitHub page linked at the bottom.

Notices

If you run into errors, check the mokuro repository linked at the bottom; there may be some issues with Python version compatibility.

Also important: this script is not perfect. The text recognition can make mistakes, and some of the extracted vocabulary can be wrong. If this proves to be a big issue, I will look for a different method to parse vocabulary from the text. Do not be alarmed by the warning about words with no definition; these are likely names, hallucinations/mistakes by the OCR algorithm, or Chinese characters (sometimes found in subtitles).
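If those no-definition entries get in your way, you can filter them out of the CSV before importing it into Anki. This sketch assumes the word is in the first column and the English definition (when --add-english was used) in the second; the column index is an assumption, so adjust it to match your file:

```python
import csv

def drop_undefined(in_path, out_path, definition_col=1):
    """Copy only rows whose definition column is non-empty to a new CSV."""
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            if len(row) > definition_col and row[definition_col].strip():
                writer.writerow(row)

# Example: the OCR artifact row with no definition gets dropped.
with open("vocab.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows([["言葉", "word; language"], ["ゴゴゴ", ""]])
drop_undefined("vocab.csv", "vocab_clean.csv")
print(open("vocab_clean.csv", encoding="utf-8").read().strip())  # 言葉,word; language
```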

TODO

  • Separate files/CSV sections for each file for formats other than manga
  • Better furigana after each kanji instead of just the whole word
  • More advanced dictionary lookup functionality
  • Support more input formats (Games, VNs, Audio files?) Please suggest any you might want, even the ones listed already!
  • Support other output formats
  • Improve dictionary result accuracy to include one-character kana words when translating to English (currently filtered out because the results are mostly useless)

Acknowledgements

This is hardly my own work; I just strung together some amazing libraries:

