

✨ YouTube Transcript API ✨


This is a Python API which allows you to retrieve the transcript/subtitles for a given YouTube video. It also works for automatically generated subtitles, supports translating subtitles, and does not require a headless browser, like other Selenium-based solutions do!

Maintenance of this project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below, click here. 💖

SearchAPI   

Install

It is recommended to install this module by using pip:

pip install youtube-transcript-api

You can either integrate this module into an existing application or just use it via a CLI.

API

The easiest way to get a transcript for a given video is to execute:

from youtube_transcript_api import YouTubeTranscriptApi

YouTubeTranscriptApi.get_transcript(video_id)

Note: By default, this will try to access the English transcript of the video. If your video has a different language, or you are interested in fetching a different language's transcript, please read the section below.

This will return a list of dictionaries looking somewhat like this:

[
    {
        'text': 'Hey there',
        'start': 7.58,
        'duration': 6.13
    },
    {
        'text': 'how are you',
        'start': 14.08,
        'duration': 7.58
    },
    # ...
]
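Since this is a plain list of dictionaries, post-processing is straightforward. As a small sketch (joining the snippets with a space is just one possible choice), you could turn it into a single plain-text string like this:

from youtube_transcript_api import YouTubeTranscriptApi

transcript = YouTubeTranscriptApi.get_transcript(video_id)

# join all text snippets into one plain-text string
full_text = " ".join(entry['text'] for entry in transcript)
print(full_text)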

Retrieve different languages

You can add the languages param if you want to make sure the transcripts are retrieved in your desired language (it defaults to English).

YouTubeTranscriptApi.get_transcript(video_id, languages=['de', 'en'])

It's a list of language codes in descending priority. In this example it will first try to fetch the German transcript ('de') and then fall back to the English transcript ('en') if that fails. If you want to find out which languages are available first, have a look at list_transcripts().

If you only want one language, you still need to format the languages argument as a list:

YouTubeTranscriptApi.get_transcript(video_id, languages=['de'])

Batch fetching of transcripts

To get transcripts for a list of video ids you can call:

YouTubeTranscriptApi.get_transcripts(["video_id1", "video_id2"], languages=['de', 'en'])

The languages argument is optional here as well.
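As a minimal sketch of consuming the result (assuming get_transcripts() returns a pair of a dict mapping each video id to its transcript and a list of video ids that could not be retrieved):

from youtube_transcript_api import YouTubeTranscriptApi

# assumption: get_transcripts() returns (dict of video_id -> transcript, list of failed video ids)
transcripts, failed_video_ids = YouTubeTranscriptApi.get_transcripts(
    ["video_id1", "video_id2"], languages=['de', 'en']
)

for video_id, transcript in transcripts.items():
    print(video_id, len(transcript), 'snippets')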

Preserve formatting

You can also add preserve_formatting=True if you'd like to keep HTML formatting elements such as <i> (italics) and <b> (bold).

YouTubeTranscriptApi.get_transcripts(video_ids, languages=['de', 'en'], preserve_formatting=True)

List available transcripts

If you want to list all transcripts which are available for a given video you can call:

transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)

This will return a TranscriptList object which is iterable and provides methods to filter the list of transcripts for specific languages and types, like:

transcript = transcript_list.find_transcript(['de', 'en'])

By default this module always picks manually created transcripts over automatically generated ones if a transcript in the requested language is available as both a manually created and a generated version. The TranscriptList allows you to bypass this default behaviour by searching for specific transcript types:

# filter for manually created transcripts
transcript = transcript_list.find_manually_created_transcript(['de', 'en'])

# or automatically generated ones
transcript = transcript_list.find_generated_transcript(['de', 'en'])

The methods find_generated_transcript, find_manually_created_transcript and find_transcript return Transcript objects, which contain metadata regarding the transcript:

print(
    transcript.video_id,
    transcript.language,
    transcript.language_code,
    # whether it has been manually created or generated by YouTube
    transcript.is_generated,
    # whether this transcript can be translated or not
    transcript.is_translatable,
    # a list of languages the transcript can be translated to
    transcript.translation_languages,
)

and provide the fetch() method, which allows you to retrieve the actual transcript data:

transcript.fetch()
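The data returned by fetch() has the same list-of-dictionaries shape as shown for get_transcript() above; for example:

transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
transcript = transcript_list.find_transcript(['de', 'en'])

# fetch() returns the same list of dictionaries as get_transcript()
for entry in transcript.fetch():
    print(entry['start'], entry['text'])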

Translate transcript

YouTube has a feature which allows you to automatically translate subtitles. This module also makes it possible to access this feature. To do so Transcript objects provide a translate() method, which returns a new translated Transcript object:

transcript = transcript_list.find_transcript(['en'])
translated_transcript = transcript.translate('de')
print(translated_transcript.fetch())

By example

from youtube_transcript_api import YouTubeTranscriptApi

# retrieve the available transcripts
transcript_list = YouTubeTranscriptApi.list_transcripts('video_id')

# iterate over all available transcripts
for transcript in transcript_list:

    # the Transcript object provides metadata properties
    print(
        transcript.video_id,
        transcript.language,
        transcript.language_code,
        # whether it has been manually created or generated by YouTube
        transcript.is_generated,
        # whether this transcript can be translated or not
        transcript.is_translatable,
        # a list of languages the transcript can be translated to
        transcript.translation_languages,
    )

    # fetch the actual transcript data
    print(transcript.fetch())

    # translating the transcript will return another transcript object
    print(transcript.translate('en').fetch())

# you can also directly filter for the language you are looking for, using the transcript list
transcript = transcript_list.find_transcript(['de', 'en'])  

# or just filter for manually created transcripts  
transcript = transcript_list.find_manually_created_transcript(['de', 'en'])  

# or automatically generated ones  
transcript = transcript_list.find_generated_transcript(['de', 'en'])

Using Formatters

Formatters are meant to be an additional layer of processing for the transcript you pass them. The goal is to convert a transcript from its Python data type into a consistent string of a given "format", such as basic text (.txt) or formats with a defined specification such as JSON (.json), WebVTT (.vtt), SRT (.srt), comma-separated values (.csv), etc.

The formatters submodule provides a few basic formatters to wrap around your transcript data for cases where you want to output a specific format and, for example, write that format to a file, perhaps to back it up or to run another script against it at a later time.

We provide a few formatter subclasses to use:

  • JSONFormatter
  • PrettyPrintFormatter
  • TextFormatter
  • WebVTTFormatter
  • SRTFormatter

Here is how to import from the formatters module:

# the base class to inherit from when creating your own formatter.
from youtube_transcript_api.formatters import Formatter

# some provided subclasses, each outputs a different string format.
from youtube_transcript_api.formatters import JSONFormatter
from youtube_transcript_api.formatters import TextFormatter
from youtube_transcript_api.formatters import WebVTTFormatter
from youtube_transcript_api.formatters import SRTFormatter

Provided Formatter Example

Let's say we wanted to retrieve a transcript and write it to a JSON file in the same format the API returned it in. That would look something like this:

# your_custom_script.py

from youtube_transcript_api import YouTubeTranscriptApi
from youtube_transcript_api.formatters import JSONFormatter

# Must be a single transcript.
transcript = YouTubeTranscriptApi.get_transcript(video_id)

formatter = JSONFormatter()

# .format_transcript(transcript) turns the transcript into a JSON string.
json_formatted = formatter.format_transcript(transcript)


# Now we can write it out to a file.
with open('your_filename.json', 'w', encoding='utf-8') as json_file:
    json_file.write(json_formatted)

# You should now have a new JSON file that you can easily read back into Python.
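Each provided formatter also implements format_transcripts(), which formats a list of transcripts into a single string (it is the same method your own formatter has to implement, see the custom formatter example below). A minimal sketch reusing the imports from above:

# format several transcripts into a single JSON string
transcript_de = YouTubeTranscriptApi.get_transcript(video_id, languages=['de'])
transcript_en = YouTubeTranscriptApi.get_transcript(video_id, languages=['en'])

json_formatted = JSONFormatter().format_transcripts([transcript_de, transcript_en])

with open('your_filename.json', 'w', encoding='utf-8') as json_file:
    json_file.write(json_formatted)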

Passing extra keyword arguments

Since JSONFormatter leverages json.dumps(), you can also forward keyword arguments to .format_transcript(transcript), such as making your file output prettier by forwarding the indent=2 keyword argument.

json_formatted = JSONFormatter().format_transcript(transcript, indent=2)
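Because those keyword arguments are handed straight on to json.dumps(), any of its other arguments should work the same way, for example ensure_ascii=False to keep non-ASCII characters unescaped in the output:

json_formatted = JSONFormatter().format_transcript(transcript, indent=2, ensure_ascii=False)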

Custom Formatter Example

You can implement your own formatter class. Just inherit from the Formatter base class and ensure you implement the format_transcript(self, transcript, **kwargs) and format_transcripts(self, transcripts, **kwargs) methods, which should ultimately return a string when called on your formatter instance.

class MyCustomFormatter(Formatter):
    def format_transcript(self, transcript, **kwargs):
        # Do your custom work in here, but return a string.
        return 'your processed output data as a string.'

    def format_transcripts(self, transcripts, **kwargs):
        # Do your custom work in here to format a list of transcripts, but return a string.
        return 'your processed output data as a string.'
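A custom formatter is then used just like the provided ones; a quick sketch using the MyCustomFormatter class defined above:

formatter = MyCustomFormatter()

# format a single transcript
print(formatter.format_transcript(YouTubeTranscriptApi.get_transcript(video_id)))

# or a list of transcripts
print(formatter.format_transcripts([YouTubeTranscriptApi.get_transcript(video_id)]))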

CLI

Execute the CLI script using the video ids as parameters and the results will be printed out to the command line:

youtube_transcript_api <first_video_id> <second_video_id> ...  

The CLI also gives you the option to provide a list of preferred languages:

youtube_transcript_api <first_video_id> <second_video_id> ... --languages de en  

You can also specify whether you want to exclude automatically generated or manually created subtitles:

youtube_transcript_api <first_video_id> <second_video_id> ... --languages de en --exclude-generated
youtube_transcript_api <first_video_id> <second_video_id> ... --languages de en --exclude-manually-created

If you would prefer to write the results to a file or pipe them into another application, you can also output them as JSON using the following line:

youtube_transcript_api <first_video_id> <second_video_id> ... --languages de en --format json > transcripts.json

Translating transcripts using the CLI is also possible:

youtube_transcript_api <first_video_id> <second_video_id> ... --languages en --translate de

If you are not sure which languages are available for a given video, you can list all available transcripts by calling:

youtube_transcript_api --list-transcripts <first_video_id>

If a video's ID starts with a hyphen you'll have to mask the hyphen using \ to prevent the CLI from mistaking it for an argument name. For example, to get the transcript for the video with the ID -abc123 run:

youtube_transcript_api "\-abc123"

Proxy

You can specify an HTTPS proxy, which will be used during the requests to YouTube:

from youtube_transcript_api import YouTubeTranscriptApi  

YouTubeTranscriptApi.get_transcript(video_id, proxies={"https": "https://user:pass@domain:port"})

As the proxies dict is passed on to the requests.get(...) call, it follows the format used by the requests library.

Using the CLI:

youtube_transcript_api <first_video_id> <second_video_id> --https-proxy https://user:pass@domain:port

Cookies

Some videos are age restricted, so this module won't be able to access them without some sort of authentication. To get around this, you will need to have access to the desired video in a browser. Then, you will need to download that page's cookies into a text file. You can use the Chrome extension Cookie-Editor and select "Netscape" during export, or the Firefox extension cookies.txt.

Once you have that, you can use it with the module to access age-restricted videos' captions like so:

from youtube_transcript_api import YouTubeTranscriptApi  

YouTubeTranscriptApi.get_transcript(video_id, cookies='/path/to/your/cookies.txt')

YouTubeTranscriptApi.get_transcripts([video_id], cookies='/path/to/your/cookies.txt')

Using the CLI:

youtube_transcript_api <first_video_id> <second_video_id> --cookies /path/to/your/cookies.txt

Warning

This code uses an undocumented part of the YouTube API, which is called by the YouTube web client. So there is no guarantee that it won't stop working tomorrow if YouTube changes how things work. I will, however, do my best to get things working again as soon as possible if that happens. So if it stops working, let me know!

Contributing

To set up the project locally, run (requires poetry to be installed):

poetry install --with test,dev

There are poe tasks to run the tests, coverage, the linter and the formatter (all of them have to pass for the build to succeed):

poe test
poe coverage
poe format
poe lint

If you just want to make sure that your code passes all the necessary checks to get a green build, you can simply run:

poe precommit

Donations

If this project makes you happy by reducing your development time, you can make me happy by treating me to a cup of coffee or by becoming a Sponsor of this project :)

Donate
