Download any video from a specific URL.

Scrappo - Video Downloader Tool

Description

Scrappo downloads any video from a given URL, or from a file containing a list of URLs. Each URL must point directly to the video itself.

It is also possible to distinguish between series and movies. The difference between these two options is that a series is expected to have at least one season, so each season is separated into its own folder containing its related episodes.

Features

  • A .txt file containing a list of URLs can be given as input.
  • Download movies, series, or related content.
  • Series are separated into season folders.
  • Movies can also be separated into their own folders.
  • Videos that were already downloaded are skipped.
  • Errors with individual videos do not stop the program; they are reported at the end.
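The "already downloaded" skip can be sketched as a simple existence check before each download. This is a minimal illustration: `should_download` is a hypothetical helper (as is the `.mp4` extension), not scrappo's actual internals.

```python
from pathlib import Path

def should_download(output_dir: str, name: str, ext: str = ".mp4") -> bool:
    """Return True only when 'name' has not been downloaded yet.

    Hypothetical helper illustrating the skip behaviour; scrappo's real
    file naming and extensions may differ.
    """
    return not (Path(output_dir) / f"{name}{ext}").exists()
```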

Suggestions for new features are very welcome.

Future features

Nothing at the moment.

Prerequisites

Python 3 must be installed.

Installation

pip --no-cache-dir install scrappo

or,

pip3 --no-cache-dir install scrappo

Usage

Command (shortcut)         Command (full)   Required   Description
-u                         --urls           REQUIRED   A list of URLs, or the path to a .txt file containing a list of URLs.
-o                         --output         REQUIRED   The path to the folder in which the videos will be downloaded.
-t                         --type           REQUIRED   The type of the videos to download. Choices are "movies" or "series".
--separate/--no-separate   ---              OPTIONAL   If enabled, each movie is placed in its own folder.
--shutdown/--no-shutdown   ---              OPTIONAL   Enable or disable shutting down the computer when the program finishes.

Important

URLs .txt file

The file containing the list of URLs must follow certain rules.

nameOfVideo1:::https://someurl.withavideo.org//video1
nameOfVideo2:::https://someurl.withavideo.org//video2
https://someurl.withavideo.org//video3
https://someurl.withavideo.org//video4
nameOfVideo5:::https://someurl.withavideo.org//video5

https://someurl.withavideo.org//video6
nameOfVideo7:::https://someurl.withavideo.org//video7
  • A name for the downloaded file can be specified by inserting ':::' between the name and the URL.
  • If a URL has no ':::', the file name is assumed to be 'movie#' or 'episode#' (# being the number of the line the URL occupies in the file), depending on whether the type is 'movies' or 'series', respectively.
  • If the type is 'series', seasons must be separated by blank lines. In the example above, season 1 has 5 episodes and season 2 has 2 episodes. The episodes are placed in folders named after the corresponding season.
  • If the type is 'movies', blank lines are ignored.
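The rules above can be sketched with a small parser. This is an illustration only: `parse_url_file` is a hypothetical helper, and the assumption that '#' counts non-blank lines is mine, so scrappo's real parser may behave differently.

```python
def parse_url_file(text: str, kind: str) -> dict:
    """Group 'name:::url' lines into seasons, per the rules above.

    Returns {season_number: [(file_name, url), ...]}. For kind 'movies'
    blank lines are ignored, so everything stays under season 1.
    Hypothetical helper; not scrappo's actual API.
    """
    seasons = {}
    season, position = 1, 0
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            # A blank line starts a new season, but only for series and
            # only once the current season has at least one episode.
            if kind == "series" and seasons.get(season):
                season += 1
            continue
        position += 1  # assumed: '#' counts the URL's non-blank line number
        if ":::" in line:
            name, url = line.split(":::", 1)
        else:
            prefix = "movie" if kind == "movies" else "episode"
            name, url = f"{prefix}{position}", line
        seasons.setdefault(season, []).append((name, url))
    return seasons
```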

The following command will download all given URLs with the type 'movies'. The videos will be saved in the 'C:\Users\<username>\Desktop\movies' folder:

scrappo --type movies --output "C:\Users\<username>\Desktop\movies" --urls "nameOfVideo1:::https://someurl.withavideo.org//video1" "nameOfVideo2:::https://someurl.withavideo.org//video2"

The same naming option is also available when not using a file. The command above downloads two videos from two different sources; the resulting files are named 'nameOfVideo1' and 'nameOfVideo2', respectively.

The command below does the same, but separates each movie into its own folder. The folder name is the same as the video file name.

scrappo --type movies --output "C:\Users\<username>\Desktop\movies" --urls "nameOfVideo1:::https://someurl.withavideo.org//video1" "nameOfVideo2:::https://someurl.withavideo.org//video2" --separate

The following command will download all URLs contained in the 'C:\Users\<username>\Desktop\moviesToDownload.txt' file. The '--separate' argument can also be used here.

scrappo --type movies --output "C:\Users\<username>\Desktop\movies" --urls "C:\Users\<username>\Desktop\moviesToDownload.txt"

The command below will download all URLs contained in the 'C:\Users\<username>\Desktop\seriesToDownload.txt' file. Inside the 'C:\Users\<username>\Desktop\series' folder there will be at least one folder named 'season1' containing the related episodes. Keep in mind that the file needs blank lines to indicate the separation between seasons.

scrappo --type series --output "C:\Users\<username>\Desktop\series" --urls "C:\Users\<username>\Desktop\seriesToDownload.txt"

If the '--shutdown' argument is used, the device will shut down once all videos have been downloaded.

scrappo --type series --output "C:\Users\<username>\Desktop\series" --urls "C:\Users\<username>\Desktop\seriesToDownload.txt" --shutdown
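The shutdown step can be sketched as below. This is an assumption about how such a flag is typically implemented (invoking the OS shutdown command), not scrappo's confirmed internals; `finish` is a hypothetical hook.

```python
import platform
import subprocess

def shutdown_command() -> list:
    """Return the platform's shutdown command (illustrative sketch only)."""
    if platform.system() == "Windows":
        return ["shutdown", "/s", "/t", "0"]
    return ["shutdown", "-h", "now"]

def finish(shutdown: bool) -> None:
    # Hypothetical hook called after the last download completes.
    if shutdown:
        subprocess.run(shutdown_command(), check=False)
```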

Support

If any problem occurs, feel free to open an issue.

License

MIT

Status

Actively maintained.

Download files

Source distribution: scrappo-1.0.0.tar.gz (12.5 kB)

Built distribution: scrappo-1.0.0-py3-none-any.whl (14.7 kB, Python 3)
