Video Crawler

Project description

vidcrawler

Crawls major video sites like YouTube, Rumble, BitChute, and Brighteon for video content and outputs a JSON feed of all the videos that were found.

Platform Unit Tests


Scraper Tests


Note that BitChute doesn't like the GitHub runner's IP, so its scraper test fails with a 403 Forbidden.

API

Command line

vidcrawler --input_crawl_json "fetch_list.json" --output_json "out_list.json"

Python

import json
from vidcrawler import crawl_video_sites
crawl_list = [
    [
        "Computing Forever",  # Can be whatever you want.
        "bitchute",  # Must be "youtube", "rumble", "bitchute" (and others).
        "hybm74uihjkf"  # The channel id on the service.
    ]
]
output = crawl_video_sites(crawl_list)
print(json.dumps(output))

"source" and "channel_id" are used to generate the video-platform-specific URLs to fetch data from. The "channel name" is echoed back in the generated JSON feed, but does not affect the fetching process in any way.
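To illustrate the idea, here is a minimal sketch of how a crawler could map a (source, channel_id) pair to a channel URL. The URL templates below are assumptions based on the sites listed above; the actual templates used internally by vidcrawler may differ.

```python
# Hypothetical URL templates -- illustrative only, not vidcrawler's internals.
URL_TEMPLATES = {
    "youtube": "https://www.youtube.com/channel/{id}",
    "rumble": "https://rumble.com/c/{id}",
    "bitchute": "https://www.bitchute.com/channel/{id}/",
    "brighteon": "https://www.brighteon.com/channels/{id}",
    "odysee": "https://odysee.com/@{id}",
}

def channel_url(source: str, channel_id: str) -> str:
    """Build a platform-specific channel URL from a crawl-list entry."""
    return URL_TEMPLATES[source].format(id=channel_id)

print(channel_url("bitchute", "hybm74uihjkf"))
```

This is why the channel name in the crawl list can be anything: only the other two fields feed into URL construction.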

Testing

Install vidcrawler and the command vidcrawler_test will become available.

> pip install vidcrawler
> vidcrawler_test

Example input fetch_list.json

[
    [
        "Health Ranger Report",
        "brighteon",
        "hrreport"
    ],
    [
        "Sydney Watson",
        "youtube",
        "UCSFy-1JrpZf0tFlRZfo-Rvw"
    ],
    [
        "Computing Forever",
        "bitchute",
        "hybm74uihjkf"
    ],
    [
        "ThePeteSantilliShow",
        "rumble",
        "ThePeteSantilliShow"
    ],
    [
        "Macroaggressions",
        "odysee",
        "Macroaggressions"
    ]
]
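Before handing a fetch list to the crawler, it can be useful to sanity-check that every entry has the expected [channel_name, source, channel_id] shape. The sketch below is a standalone helper, not part of vidcrawler's API; the set of known sources is taken from the example list above.

```python
import json

# Sources that appear in the example fetch list above; extend as needed.
KNOWN_SOURCES = {"youtube", "rumble", "bitchute", "brighteon", "odysee"}

def validate_crawl_list(raw: str) -> list:
    """Parse a fetch-list JSON string and check each entry's shape."""
    entries = json.loads(raw)
    for entry in entries:
        if len(entry) != 3:
            raise ValueError(f"expected 3 fields, got {entry!r}")
        _name, source, _channel_id = entry
        if source not in KNOWN_SOURCES:
            raise ValueError(f"unknown source: {source!r}")
    return entries

raw = '[["Computing Forever", "bitchute", "hybm74uihjkf"]]'
print(validate_crawl_list(raw))
```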

Example Output:

[
  {
    "channel_name": "ThePeteSantilliShow",
    "title": "The damage this caused is now being totaled up",
    "date_published": "2022-05-17T05:00:11+00:00",
    "date_lastupdated": "2022-05-17T05:17:18.540084",
    "channel_url": "https://www.youtube.com/channel/UCXIJgqnII2ZOINSWNOGFThA",
    "source": "youtube.com",
    "url": "https://www.youtube.com/watch?v=bwqBudCzDrQ",
    "duration": 254,
    "description": "",
    "img_src": "https://i3.ytimg.com/vi/bwqBudCzDrQ/hqdefault.jpg",
    "iframe_src": "https://youtube.com/embed/bwqBudCzDrQ",
    "views": 1429
  },
  {
     "channel_name": "ThePeteSantilliShow",
     "title": "..."
  }
]
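Because every entry carries an ISO-8601 "date_published" timestamp, the output feed can be post-processed with the standard library alone. A small sketch, assuming entries shaped like the example above, that sorts newest-first:

```python
from datetime import datetime

# Sample entries in the shape shown above, trimmed to the relevant fields.
feed = [
    {"title": "older", "date_published": "2022-05-10T12:00:00+00:00", "views": 10},
    {"title": "newer", "date_published": "2022-05-17T05:00:11+00:00", "views": 1429},
]

# datetime.fromisoformat handles the "+00:00" offset used in the feed.
feed.sort(key=lambda v: datetime.fromisoformat(v["date_published"]), reverse=True)
print([v["title"] for v in feed])
```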

Releases

  • 1.0.10: Adds youtube-pull-channel, which pulls down all of a channel's videos as mp3s.
  • 1.0.9: Fixes crawler for rumble and minor fixes + linting fixes.
  • 1.0.8: Readme correction.
  • 1.0.7: Fixes Odysee scraper by including image/webp thumbnail format.
  • 1.0.4: Fixes local_now() to be local timezone aware.
  • 1.0.3: Bump.
  • 1.0.2: Updates testing.
  • 1.0.1: Improves command line.
  • 1.0.0: Initial release.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vidcrawler-1.0.10.tar.gz (32.5 kB)

Uploaded: Source

Built Distribution

vidcrawler-1.0.10-py2.py3-none-any.whl (42.1 kB)

Uploaded: Python 2, Python 3

File details

Details for the file vidcrawler-1.0.10.tar.gz.

File metadata

  • Download URL: vidcrawler-1.0.10.tar.gz
  • Upload date:
  • Size: 32.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.10

File hashes

Hashes for vidcrawler-1.0.10.tar.gz
Algorithm Hash digest
SHA256 e4f9b12f3dc8bb58754745ea10fbf43d2e47cf3631f3ed99d9fa6a8f784cde42
MD5 a562b5cf0bb675eea431ed7296be87d8
BLAKE2b-256 6e4b08bf29c590abfb9b728fa25c7c93b9f1cfde790efb9c8f4b2bdbaedb0b27

See more details on using hashes here.

File details

Details for the file vidcrawler-1.0.10-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for vidcrawler-1.0.10-py2.py3-none-any.whl
Algorithm Hash digest
SHA256 d43bf9f91b850f5ac01b956ac9b184e5042f233d3414e7a75506363d393316de
MD5 34fb49c4ccc148afc279370fe3a26dff
BLAKE2b-256 0eb0b91faef298faaa6c832f738c91eea463455c11aa21defec5b6718c69fb4e

See more details on using hashes here.
