xk media library
A wise philosopher once told me: "the future is autotainment".
Manage and curate large media libraries. An index for your archive. Primary usage is local filesystem but also supports some virtual constructs like tracking online video playlists (eg. YouTube subscriptions) and scheduling browser tabs.
Install
Linux is recommended, but Windows setup instructions are available.
pip install xklb
It should also work on macOS.
External dependencies
Required: ffmpeg
Some features work better with: mpv, firefox, fish
Getting started
Local media
1. Extract Metadata
For thirty terabytes of video the initial scan takes about four hours to complete.
After that, subsequent scans of the path (or any subpaths) are much quicker--only new files will be read by ffprobe.
library fsadd tv.db ./video/folder/
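Under the hood the database is plain SQLite, so any SQLite client can query the index once it is built. A minimal sketch, assuming a `media` table with `path` and `duration` columns (the real schema has many more columns; the demo data here is made up):

```python
import sqlite3

# Build a tiny stand-in database that mimics the shape of an xklb media table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE media (path TEXT, duration INTEGER)")
con.executemany(
    "INSERT INTO media VALUES (?, ?)",
    [("/video/folder/a.mkv", 1320), ("/video/folder/b.mkv", 5400)],
)

# Find the longest video in the index
longest = con.execute(
    "SELECT path FROM media ORDER BY duration DESC LIMIT 1"
).fetchone()[0]
print(longest)  # /video/folder/b.mkv
```

The same kind of query works against a real tv.db created by fsadd.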
2. Watch / Listen from local files
library watch tv.db # the default post-action is to do nothing
library watch tv.db --post-action delete # delete file after playing
library listen finalists.db -k ask_keep # ask whether to keep file after playing
To stop playing press Ctrl+C in either the terminal or mpv
Online media
1. Download Metadata
Download playlist and channel metadata. Break free of the YouTube algo~
library tubeadd educational.db https://www.youtube.com/c/BranchEducation/videos
And you can always add more later--even from different websites.
library tubeadd maker.db https://vimeo.com/terburg
To prevent mistakes the default configuration is to download metadata for only the most recent 20,000 videos per playlist/channel.
library tubeadd maker.db --dl-config playlistend=1000
Be aware that there are some YouTube Channels which have many items--for example the TEDx channel has about 180,000 videos. Some channels even have upwards of two million videos. More than you could likely watch in one sitting--maybe even one lifetime. On a high-speed connection (>500 Mbps), it can take up to five hours to download the metadata for 180,000 videos.
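The five-hour estimate is simple arithmetic--it works out to roughly ten metadata fetches per second:

```python
videos = 180_000          # size of the TEDx channel mentioned above
hours = 5                 # observed metadata download time
rate = videos / (hours * 3600)  # fetches per second
print(round(rate, 1))     # 10.0
```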
1a. Get new videos for saved playlists
Tubeupdate will go through the list of added playlists and fetch metadata for any videos not previously seen.
library tubeupdate tube.db
2. Watch / Listen from websites
library watch maker.db
To stop playing press Ctrl+C in either the terminal or mpv
Tabs: visit websites on a schedule
tabs is a way to organize your visits to URLs that you want to remember every once in a while.
The main benefit of tabs is that you can save a large number of tabs (say 500 monthly tabs) while only the minimum needed to stay on schedule (500/30, about 17 tabs) will open each day. Seventeen tabs per day seems manageable--500 all at once does not.
The use-case of tabs is websites that you know are going to change: subreddits, games, or tools that you want to use for a few minutes daily, weekly, monthly, quarterly, or yearly.
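The per-day math above can be sketched in a couple of lines (500 and 30 are just the example numbers from the paragraph):

```python
import math

saved_tabs = 500        # monthly tabs saved in the database
days_in_cycle = 30      # roughly one month
per_day = math.ceil(saved_tabs / days_in_cycle)  # tabs opened each day
print(per_day)  # 17
```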
1. Add your websites
library tabsadd tabs.db --frequency monthly --category fun \
https://old.reddit.com/r/Showerthoughts/top/?sort=top&t=month \
https://old.reddit.com/r/RedditDayOf/top/?sort=top&t=month
2. Add library tabs to cron
library tabs is meant to run once per day. Here is how you would configure it with crontab:
45 9 * * * DISPLAY=:0 library tabs /home/my/tabs.db
Or with systemd:
tee ~/.config/systemd/user/tabs.service
[Unit]
Description=xklb daily browser tabs
[Service]
Type=simple
RemainAfterExit=no
Environment="DISPLAY=:0"
ExecStart="/usr/bin/fish" "-c" "lb tabs /home/xk/lb/tabs.db"
tee ~/.config/systemd/user/tabs.timer
[Unit]
Description=xklb daily browser tabs timer
[Timer]
Persistent=yes
OnCalendar=*-*-* 9:58
[Install]
WantedBy=timers.target
systemctl --user daemon-reload
systemctl --user enable --now tabs.service
You can also invoke tabs manually:
library tabs tabs.db -L 1 # open one tab
Incremental surfing. Totally rad!
List all subcommands
$ library
xk media library subcommands (v1.29.012)
local media:
lb fsadd Create a local media database; Add folders
lb fsupdate Refresh database: add new files, mark deleted
lb listen Listen to local and online media
lb watch Watch local and online media
lb search Search text and subtitles
lb read Read books
lb view View images
lb bigdirs Discover folders which take up the most space
lb dedupe Deduplicate local db files
lb relmv Move files/folders while preserving relative paths
lb christen Cleanse files by giving them a new name
lb mv-list Reach a target free space by moving data across mount points
lb scatter Scatter files across multiple mountpoints (mergerfs balance)
lb merge-dbs Merge multiple SQLite files
lb copy-play-counts Copy play counts from multiple SQLite files
online media:
lb tubeadd Create a tube database; Add playlists
lb tubeupdate Fetch new videos from saved playlists
lb redditadd Create a reddit database; Add subreddits
lb redditupdate Fetch new posts from saved subreddits
downloads:
lb download Download media
lb redownload Redownload missing media
lb block Prevent downloading specific URLs
lb merge-online-local Merge local and online metadata
playback:
lb now Print what is currently playing
lb next Play next file
lb stop Stop all playback
lb pause Pause all playback
statistics:
lb history Show some playback statistics
lb playlists List added playlists
lb download-status Show download status
lb disk-usage Print mount usage
browser tabs:
lb tabsadd Create a tabs database; Add URLs
lb tabs Open your tabs for the day
lb surf Load browser tabs in a streaming way (stdin)
mining:
lb reddit-selftext db selftext external links -> db media table
lb pushshift Convert Pushshift jsonl.zstd -> reddit.db format (stdin)
lb hnadd Create a hackernews database (this takes a few days)
lb extract-links Extract links from lists of web pages
lb cluster-sort Lines -> sorted by sentence similarity groups (stdin)
lb nouns Unstructured text -> compound nouns (stdin)
Examples
Watch online media on your PC
wget https://github.com/chapmanjacobd/library/raw/main/examples/mealtime.tw.db
library watch mealtime.tw.db
Listen to online media on a chromecast group
wget https://github.com/chapmanjacobd/library/raw/main/examples/music.tl.db
library listen music.tl.db -ct "House speakers"
Hook into HackerNews
wget https://github.com/chapmanjacobd/hn_mining/raw/main/hackernews_only_direct.tw.db
library watch hackernews_only_direct.tw.db --random --ignore-errors
Organize via separate databases.
library fsadd --audio both.db ./audiobooks/ ./podcasts/
library fsadd --audio audiobooks.db ./audiobooks/
library fsadd --audio podcasts.db ./podcasts/ ./another/more/secret/podcasts_folder/
Find large folders to curate
lb bigdirs
If you are looking for candidate folders for curation (ie. you need space but don't want to buy another hard drive), the bigdirs subcommand was written for that purpose:
$ lb bigdirs fs/d.db
You may filter by folder depth (similar to QDirStat or WizTree):
$ lb bigdirs --depth=3 audio.db
There is also a flag to prioritize folders which have many deleted files (for example, you delete songs you don't like--now you can see who wrote those songs and delete all their other songs...):
$ lb bigdirs --sort-by deleted audio.db
Find candidates for freeing up space by moving to another mount point
lb mv-list
The program takes a mount point and an xklb database file. If you don't have a database file you can create one like this:
$ lb fsadd --filesystem d.db ~/d/
But this should definitely also work with xklb audio and video databases:
$ lb mv-list /mnt/d/ video.db
The program will print a table with a sorted list of folders which are good candidates for moving. Candidates are determined by how many files are in the folder (so you don't spend hours waiting for folders with millions of tiny files to copy over). The default is 4 to 4000--but it can be adjusted via the --lower and --upper flags.
...
├────────┼────┼───────────────────────────────────────────────────────┤
│ 4.0 GB │  7 │ /mnt/d/71_Mealtime_Videos/unsorted/Miguel_4K/         │
├────────┼────┼───────────────────────────────────────────────────────┤
│ 5.7 GB │ 10 │ /mnt/d/71_Mealtime_Videos/unsorted/Bollywood_Premium/ │
├────────┼────┼───────────────────────────────────────────────────────┤
│ 2.3 GB │  4 │ /mnt/d/71_Mealtime_Videos/chief_wiggum/               │
└────────┴────┴───────────────────────────────────────────────────────┘
6702 other folders not shown
Type "done" when finished
Type "more" to see more files
Paste a folder (and press enter) to toggle selection
Type "*" to select all files in the most recently printed table
Then it will give you a prompt:
Paste a path:
Wherein you can copy and paste paths you want to move from the table and the program will keep track for you.
Paste a path: /mnt/d/75_MovieQueue/720p/s11/
26 selected paths: 162.1 GB ; future free space: 486.9 GB
You can also press the up arrow or paste it again to remove it from the list:
Paste a path: /mnt/d/75_MovieQueue/720p/s11/
25 selected paths: 159.9 GB ; future free space: 484.7 GB
After you are done selecting folders you can press ctrl-d and it will save the list to a tmp file:
Paste a path: done
Folder list saved to /tmp/tmpa7x_75l8. You may want to use the following command to move files to an EMPTY folder target:
rsync -a --info=progress2 --no-inc-recursive --remove-source-files --files-from=/tmp/tmpa7x_75l8 -r --relative -vv --dry-run / jim:/free/real/estate/
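The candidate-selection bounds described above (4 to 4000 files per folder by default) amount to a simple filter. A sketch with a made-up folder-to-file-count mapping (the real program reads these counts from the database):

```python
# Hypothetical folder -> file-count mapping for illustration
folders = {
    "/mnt/d/a/": 2,          # too few files to be worth moving
    "/mnt/d/b/": 150,        # good candidate
    "/mnt/d/c/": 2_000_000,  # millions of tiny files: skipped
}

lower, upper = 4, 4000  # the documented defaults (--lower / --upper)
candidates = [f for f, n in folders.items() if lower <= n <= upper]
print(candidates)  # ['/mnt/d/b/']
```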
Scatter your data across disks with mergerfs
If you use mergerfs, you'll likely be interested in this
library scatter -h
usage: library scatter [--limit LIMIT] [--policy POLICY] [--sort SORT] --srcmounts SRCMOUNTS database relative_paths ...
Balance size
$ library scatter -m /mnt/d1:/mnt/d2:/mnt/d3:/mnt/d4/:/mnt/d5:/mnt/d6:/mnt/d7 ~/lb/fs/scatter.db subfolder/of/mergerfs/mnt
Current path distribution:
┌─────────┬────────────┬────────────┬─────────────┬──────────────┬───────────────┬──────────────┐
│ mount   │ file_count │ total_size │ median_size │ time_created │ time_modified │ time_scanned │
╞═════════╪════════════╪════════════╪═════════════╪══════════════╪═══════════════╪══════════════╡
│ /mnt/d1 │ 12793      │ 169.5 GB   │ 4.5 MB      │ Jan 27       │ Jul 19 2022   │ Jan 31       │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d2 │ 13226      │ 177.9 GB   │ 4.7 MB      │ Jan 27       │ Jul 19 2022   │ Jan 31       │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d3 │ 1          │ 717.6 kB   │ 717.6 kB    │ Jan 31       │ Jul 18 2022   │ yesterday    │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d4 │ 82         │ 1.5 GB     │ 12.5 MB     │ Jan 31       │ Apr 22 2022   │ yesterday    │
└─────────┴────────────┴────────────┴─────────────┴──────────────┴───────────────┴──────────────┘
Simulated path distribution:
5845 files should be moved
20257 files should not be moved
┌─────────┬────────────┬────────────┬─────────────┬──────────────┬───────────────┬──────────────┐
│ mount   │ file_count │ total_size │ median_size │ time_created │ time_modified │ time_scanned │
╞═════════╪════════════╪════════════╪═════════════╪══════════════╪═══════════════╪══════════════╡
│ /mnt/d1 │ 9989       │ 46.0 GB    │ 2.4 MB      │ Jan 27       │ Jul 19 2022   │ Jan 31       │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d2 │ 10185      │ 46.0 GB    │ 2.4 MB      │ Jan 27       │ Jul 19 2022   │ Jan 31       │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d3 │ 1186       │ 53.6 GB    │ 30.8 MB     │ Jan 27       │ Apr 07 2022   │ Jan 31       │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d4 │ 1216       │ 49.5 GB    │ 29.5 MB     │ Jan 27       │ Apr 07 2022   │ Jan 31       │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d5 │ 1146       │ 53.0 GB    │ 30.9 MB     │ Jan 27       │ Apr 07 2022   │ Jan 31       │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d6 │ 1198       │ 48.8 GB    │ 30.6 MB     │ Jan 27       │ Apr 07 2022   │ Jan 31       │
├─────────┼────────────┼────────────┼─────────────┼──────────────┼───────────────┼──────────────┤
│ /mnt/d7 │ 1182       │ 52.0 GB    │ 30.9 MB     │ Jan 27       │ Apr 07 2022   │ Jan 31       │
└─────────┴────────────┴────────────┴─────────────┴──────────────┴───────────────┴──────────────┘
### Move 1182 files to /mnt/d7 with this command: ###
rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmpmr1628ij / /mnt/d7
### Move 1198 files to /mnt/d6 with this command: ###
rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmp9yd75f6j / /mnt/d6
### Move 1146 files to /mnt/d5 with this command: ###
rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmpfrj141jj / /mnt/d5
### Move 1185 files to /mnt/d3 with this command: ###
rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmpqh2euc8n / /mnt/d3
### Move 1134 files to /mnt/d4 with this command: ###
rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmphzb0gj92 / /mnt/d4
Balance device inodes for specific subfolder
$ library scatter -m /mnt/d1:/mnt/d2 ~/lb/fs/scatter.db subfolder --group count --sort 'size desc'
Scatter the most recent 100 files
$ library scatter -m /mnt/d1:/mnt/d2 -l 100 -s 'time_modified desc' ~/lb/fs/scatter.db /
Scatter without mountpoints (limited functionality; only good for balancing fs inodes)
$ library scatter scatter.db /test/{0,1,2,3,4,5,6,7,8,9}
positional arguments:
database
relative_paths Paths to scatter, relative to the root of your mergerfs mount; any path substring is valid
options:
-h, --help show this help message and exit
--limit LIMIT, -L LIMIT, -l LIMIT, -queue LIMIT, --queue LIMIT
--policy POLICY, -p POLICY
--group GROUP, -g GROUP
--sort SORT, -s SORT Sort files before moving
--usage, -u Show disk usage
--verbose, -v
--srcmounts SRCMOUNTS, -m SRCMOUNTS
/mnt/d1:/mnt/d2
Pipe to mnamer
Rename poorly named files
pip install mnamer
mnamer --movie-directory ~/d/70_Now_Watching/ --episode-directory ~/d/70_Now_Watching/ \
--no-overwrite -b (library watch -p fd -s 'path : McCloud')
library fsadd ~/d/70_Now_Watching/
Music alarm clock (via termux crontab)
Wake up to your own music
30 7 * * * lb listen ./audio.db
Wake up to your own music only when you are not home (computer on local-only IP)
30 7 * * * timeout 0.4 nc -z 192.168.1.12 22 || lb listen --random
Wake up to your own music on your Chromecast speaker group only when you are home
30 7 * * * ssh 192.168.1.12 lb listen --cast --cast-to "Bedroom pair"
Pipe to lowcharts
$ lb watch -p f -col time_created | lowcharts timehist -w 80
Matches: 445183.
Each ∎ represents a count of 1896
[2022-04-13 03:16:05] [151689] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
[2022-04-19 07:59:37] [ 16093] ∎∎∎∎∎∎∎∎
[2022-04-25 12:43:09] [ 12019] ∎∎∎∎∎∎
[2022-05-01 17:26:41] [ 48817] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
[2022-05-07 22:10:14] [ 36259] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
[2022-05-14 02:53:46] [  3942] ∎∎
[2022-05-20 07:37:18] [  2371] ∎
[2022-05-26 12:20:50] [   517]
[2022-06-01 17:04:23] [  4845] ∎∎
[2022-06-07 21:47:55] [  2340] ∎
[2022-06-14 02:31:27] [   563]
[2022-06-20 07:14:59] [ 13836] ∎∎∎∎∎∎∎
[2022-06-26 11:58:32] [  1905] ∎
[2022-07-02 16:42:04] [  1269]
[2022-07-08 21:25:36] [  3062] ∎
[2022-07-15 02:09:08] [  9192] ∎∎∎∎
[2022-07-21 06:52:41] [ 11955] ∎∎∎∎∎∎
[2022-07-27 11:36:13] [ 50938] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
[2022-08-02 16:19:45] [ 70973] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
[2022-08-08 21:03:17] [  2598] ∎
BTW, for some cols like time_deleted you'll need to specify a where clause so they aren't filtered out:
$ lb watch -p f -col time_deleted -w time_deleted'>'0 | lowcharts timehist -w 80
Pipe to rsync
Move files to your phone via syncthing
I used to use rsync to move files because I want deletions to stick. I now use lb relmv. But this is still a good rsync example:
function mrmusic
rsync -a --remove-source-files --files-from=(
library lt ~/lb/audio.db -s /mnt/d/80_Now_Listening/ -p f \
--moved /mnt/d/80_Now_Listening/ /mnt/d/ | psub
) /mnt/d/80_Now_Listening/ /mnt/d/
rsync -a --remove-source-files --files-from=(
library lt ~/lb/audio.db -w play_count=0 -u random -L 1200 -p f \
--moved /mnt/d/ /mnt/d/80_Now_Listening/ | psub
) /mnt/d/ /mnt/d/80_Now_Listening/
end
Backfill
Backfill reddit databases with pushshift data
https://github.com/chapmanjacobd/reddit_mining/
for reddit_db in ~/lb/reddit/*.db
set subreddits (sqlite-utils $reddit_db 'select path from playlists' --tsv --no-headers | grep old.reddit.com | sed 's|https://old.reddit.com/r/\(.*\)/|\1|' | sed 's|https://old.reddit.com/user/\(.*\)/|u_\1|' | tr -d "\r")
cd ~/github/xk/reddit_mining/links/
for subreddit in $subreddits
if not test -e "$subreddit.csv"
echo "octosql -o csv \"select path,score,'https://old.reddit.com/r/$subreddit/' as playlist_path from `../reddit_links.parquet` where lower(playlist_path) = '$subreddit' order by score desc \" > $subreddit.csv"
end
end | parallel -j8
for subreddit in $subreddits
sqlite-utils upsert --pk path --alter --csv --detect-types $reddit_db media $subreddit.csv
end
library tubeadd --safe -i $reddit_db --playlist-db media
end
Datasette
Explore library databases in your browser:
pip install datasette
datasette tv.db
Usage
Add local media (fsadd)
$ library fsadd -h
usage: library fsadd [--audio | --video | --image | --text | --filesystem] -c CATEGORY [database] paths ...
The default database type is video:
library fsadd tv.db ./tv/
library fsadd --video tv.db ./tv/ # equivalent
You can also create audio databases. Both audio and video use ffmpeg to read metadata:
library fsadd --audio audio.db ./music/
Image uses ExifTool:
library fsadd --image image.db ./photos/
Text will try to read files and save the contents into a searchable database:
library fsadd --text text.db ./documents_and_books/
Create a text database and scan with OCR and speech-recognition:
library fsadd --text --ocr --speech-recognition ocr.db ./receipts_and_messages/
Create a video database and read internal/external subtitle files into a searchable database:
library fsadd --scan-subtitles tv.search.db ./tv/ ./movies/
Decode media to check for corruption (slow):
library fsadd --check-corrupt 100 tv.db ./tv/ # scan through 100 percent of each file to evaluate how corrupt it is (very slow)
library fsadd --check-corrupt 1 tv.db ./tv/ # scan through 1 percent of each file to evaluate how corrupt it is (takes about one second per file)
library fsadd --check-corrupt 5 tv.db ./tv/ # scan through 5 percent of each file to evaluate how corrupt it is (takes about ten seconds per file)
library fsadd --check-corrupt 5 --delete-corrupt 30 tv.db ./tv/ # scan 5 percent of each file to evaluate how corrupt it is, if 30 percent or more of those checks fail then the file is deleted
nb: the behavior of delete-corrupt changes between full and partial scan
library fsadd --check-corrupt 99 --delete-corrupt 1 tv.db ./tv/ # partial scan 99 percent of each file to evaluate how corrupt it is, if 1 percent or more of those checks fail then the file is deleted
library fsadd --check-corrupt 100 --delete-corrupt 1 tv.db ./tv/ # full scan each file to evaluate how corrupt it is, if there is _any_ corruption then the file is deleted
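The partial-scan thresholding described above can be sketched as follows. This is only a model of the documented behavior (the function name and inputs are made up for illustration), not xklb's actual implementation:

```python
def should_delete(failed_checks: int, total_checks: int, delete_threshold_pct: float) -> bool:
    """Delete when the failing fraction of sampled checks meets --delete-corrupt."""
    return (failed_checks / total_checks) * 100 >= delete_threshold_pct

# --check-corrupt 5 --delete-corrupt 30: 2 of 5 sampled segments failed (40%)
print(should_delete(2, 5, 30))  # True
print(should_delete(1, 5, 30))  # False (20% is below the 30% threshold)
```

Note that a full scan (--check-corrupt 100) is documented to behave differently: any corruption at all triggers deletion.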
Normally only relevant filetypes are included. You can scan all files with this flag:
library fsadd --scan-all-files mixed.db ./tv-and-maybe-audio-only-files/
# I use that with this to keep my folders organized:
library watch -w 'video_count=0 and audio_count>=1' -pf mixed.db | parallel mv {} ~/d/82_Audiobooks/
Remove path roots with --force
library fsadd audio.db /mnt/d/Youtube/
[/mnt/d/Youtube] Path does not exist
library fsadd --force audio.db /mnt/d/Youtube/
[/mnt/d/Youtube] Path does not exist
[/mnt/d/Youtube] Building file list...
[/mnt/d/Youtube] Marking 28932 orphaned metadata records as deleted
Add online media (tubeadd)
$ library tubeadd -h
usage: library tubeadd [--audio | --video] [-c CATEGORY] [database] playlists ...
Create a dl database / add links to an existing database
library tubeadd dl.db https://www.youdl.com/c/BranchEducation/videos
Add links from a line-delimited file
library tubeadd reddit.db --playlist-file ./my_yt_subscriptions.txt
Add metadata to links already in a database table
library tubeadd reddit.db --playlist-db media
You can also include a category for file organization
library tubeadd -c Mealtime dl.db (cat ~/.jobs/todo/71_Mealtime_Videos)
Files will be saved to <download prefix>/<tubeadd category>/
For example:
library tubeadd -c Cool ...
library download D:\'My Documents'\ ...
Media will be downloaded to 'D:\My Documents\Cool\'
Fetch extra metadata:
By default tubeadd adds media quickly, at the expense of fetching less metadata.
If you plan on using `library download` then it doesn't make sense to use `--extra`.
Downloading will add the extra metadata automatically to the database.
You can always fetch more metadata later via tubeupdate:
library tubeupdate tw.db --extra
Add reddit media (redditadd)
$ library redditadd -h
usage: library redditadd [--lookback N_DAYS] [--praw-site bot1] [database] paths ...
Fetch data for redditors and reddits:
library redditadd https://old.reddit.com/r/coolgithubprojects/ https://old.reddit.com/user/Diastro
If you have a file with a list of subreddits you can do this:
library redditadd --subreddits --db 96_Weird_History.db (cat ~/mc/96_Weird_History-reddit.txt)
Likewise for redditors:
library redditadd --redditors --db idk.db (cat ~/mc/shadow_banned.txt)
Create / Update a Hacker News database (hnadd)
$ library hnadd -h
usage: library hnadd [--oldest] database
Fetch latest stories first:
library hnadd hn.db -v
Fetching 154873 items (33212696 to 33367569)
Saving comment 33367568
Saving comment 33367543
Saving comment 33367564
...
Fetch oldest stories first:
library hnadd --oldest hn.db
Add tabs (tabsadd)
$ library tabsadd -h
usage: library tabsadd [--frequency daily weekly (monthly) quarterly yearly] [--category CATEGORY] [--no-sanitize] DATABASE URLS ...
Adding one URL:
library tabsadd -f monthly -c travel ~/lb/tabs.db https://old.reddit.com/r/Colombia/top/?sort=top&t=month
Depending on your shell you may need to escape the URL (add quotes)
If you use Fish shell know that you can enable features to make pasting easier:
set -U fish_features stderr-nocaret qmark-noglob regex-easyesc ampersand-nobg-in-token
Also I recommend turning Ctrl+Backspace into a super-backspace for repeating similar commands with long args:
echo 'bind \b backward-kill-bigword' >> ~/.config/fish/config.fish
Importing from a line-delimited file:
library tabsadd -f yearly -c reddit ~/lb/tabs.db (cat ~/mc/yearly-subreddit.cron)
Watch / Listen
$ library watch -h
usage: library watch [database] [optional args]
Control playback:
To stop playback press Ctrl-C in either the terminal or mpv
Create global shortcuts in your desktop environment by sending commands to mpv_socket:
echo 'playlist-next force' | socat - /tmp/mpv_socket
Override the default player (mpv):
library does a lot of things to try to automatically use your preferred media player
but if it doesn't guess right you can make it explicit:
library watch --player "vlc --vlc-opts"
Cast to chromecast groups:
library watch --cast --cast-to "Office pair"
library watch -ct "Office pair" # equivalent
If you don't know the exact name of your chromecast group run `catt scan`
Play media in order (similarly named episodes):
library watch --play-in-order
There are multiple strictness levels of --play-in-order:
library watch -O # equivalent
library watch -OO # above, plus ignores most filters
library watch -OOO # above, plus ignores fts and (include/exclude) filter during ordinal search
library watch -OOOO # above, plus starts search with parent folder
library watch --related # similar to -O but uses fts to find similar content
library watch -R # equivalent
library watch -RR # above, plus ignores most filters
library watch --cluster # cluster-sort to put similar paths closer together
All of these options can be used together but it will be a bit slow and the results might be mid-tier
as multiple different algorithms create a muddied signal (too many cooks in the kitchen):
library watch -RRCOO
Filter media by file siblings of parent directory:
library watch --sibling # only include files which have at least one sibling
library watch --solo # only include files which have no siblings
`--sibling` is just a shortcut for `--lower 2`; `--solo` is `--upper 1`
library watch --sibling --solo # you will always get zero records here
library watch --lower 2 --upper 1 # equivalent
You can be more specific via the `--upper` and `--lower` flags
library watch --lower 3 # only include files which have three or more siblings
library watch --upper 3 # only include files which have fewer than three siblings
library watch --lower 3 --upper 3 # only include files which have exactly three siblings
library watch --lower 12 --upper 25 -OOO # on my machine this launches My Mister 2018
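The sibling bounds amount to counting files per parent folder and keeping files whose folder count falls within [lower, upper]. A sketch with made-up paths:

```python
from collections import Counter
from pathlib import PurePosixPath

files = [
    "/tv/show_a/e1.mkv", "/tv/show_a/e2.mkv", "/tv/show_a/e3.mkv",
    "/tv/movie/film.mkv",
]

# Count how many files share each parent folder
per_folder = Counter(str(PurePosixPath(f).parent) for f in files)

def in_bounds(f, lower=1, upper=10**9):
    return lower <= per_folder[str(PurePosixPath(f).parent)] <= upper

solo = [f for f in files if in_bounds(f, upper=1)]      # like --solo
siblings = [f for f in files if in_bounds(f, lower=2)]  # like --sibling
print(solo)           # ['/tv/movie/film.mkv']
print(len(siblings))  # 3
```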
Play recent partially-watched videos (requires mpv history):
library watch --partial # play newest first
library watch --partial old # play oldest first
library watch -P o # equivalent
library watch -P p # sort by percent remaining
library watch -P t # sort by time remaining
library watch -P s # skip partially watched (only show unseen)
The default time used is "last-viewed" (ie. the most recent time you closed the video)
If you want to use the "first-viewed" time (ie. the very first time you opened the video)
library watch -P f # use watch_later file creation time instead of modified time
You can combine most of these options, though some will be overridden by others.
library watch -P fo # this means "show the oldest videos using the time I first opened them"
library watch -P pt # weighted remaining (percent * time remaining)
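The "weighted remaining" idea behind -P pt can be sketched as a sort key: percent remaining multiplied by time remaining, so short leftovers float to the top. The dict keys here are illustrative, not the real schema:

```python
videos = [
    {"path": "a.mkv", "duration": 3600, "position": 3300},  # 5 min left of 1 h
    {"path": "b.mkv", "duration": 600, "position": 60},     # 9 min left of 10 min
]

def weighted_remaining(v):
    time_left = v["duration"] - v["position"]
    percent_left = time_left / v["duration"]
    return percent_left * time_left  # small = nearly finished

videos.sort(key=weighted_remaining)
print([v["path"] for v in videos])  # ['a.mkv', 'b.mkv']
```

The nearly-finished hour-long video sorts ahead of the barely-started short one.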
Print instead of play:
library watch --print --limit 10 # print the next 10 files
library watch -p -L 10 # print the next 10 files
library watch -p # this will print _all_ the media. be cautious about `-p` on an unfiltered set
Printing modes
library watch -p # print as a table
library watch -p a # print an aggregate report
library watch -p b # print a bigdirs report (see library bigdirs -h for more info)
library watch -p f # print fields (defaults to path; use --cols to change)
# -- useful for piping paths to utilities like xargs or GNU Parallel
library watch -p d # mark deleted
library watch -p w # mark watched
Some printing modes can be combined
library watch -p df # print files for piping into another program and mark them as deleted within the db
library watch -p bf # print fields from bigdirs report
Check if you have downloaded something before
library watch -u duration -p -s 'title'
Print an aggregate report of deleted media
library watch -w time_deleted!=0 -p=a
โโโโโโโโโโโโโคโโโโโโโโโโโโโโโคโโโโโโโโโโคโโโโโโโโโโ
โ path โ duration โ size โ count โ
โโโโโโโโโโโโโชโโโโโโโโโโโโโโโชโโโโโโโโโโชโโโโโโโโโโก
โ Aggregate โ 14 days, 23 โ 50.6 GB โ 29058 โ
โ โ hours and 42 โ โ โ
โ โ minutes โ โ โ
โโโโโโโโโโโโโงโโโโโโโโโโโโโโโงโโโโโโโโโโงโโโโโโโโโโ
Total duration: 14 days, 23 hours and 42 minutes
Print an aggregate report of media that has no duration information (ie. online or corrupt local media)
library watch -w 'duration is null' -p=a
Print a list of filenames which have below 1280px resolution
library watch -w 'width<1280' -p=f
Print media you have partially viewed with mpv
library watch --partial -p
library watch -P -p # equivalent
library watch -P -p f --cols path,progress,duration # print CSV of partially watched files
library watch --partial -pa # print an aggregate report of partially watched files
View how much time you have watched
library watch -w play_count'>'0 -p=a
See how much video you have
library watch video.db -p=a
โโโโโโโโโโโโโคโโโโโโโโโโคโโโโโโโโโโคโโโโโโโโโโ
โ path โ hours โ size โ count โ
โโโโโโโโโโโโโชโโโโโโโโโโชโโโโโโโโโโชโโโโโโโโโโก
โ Aggregate โ 145769 โ 37.6 TB โ 439939 โ
โโโโโโโโโโโโโงโโโโโโโโโโงโโโโโโโโโโงโโโโโโโโโโ
Total duration: 16 years, 7 months, 19 days, 17 hours and 25 minutes
View all the columns
library watch -p -L 1 --cols '*'
Open ipython with all of your media
library watch -vv -p --cols '*'
ipdb> len(media)
462219
Set the play queue size:
By default the play queue is 120 items--long enough that you likely have not noticed
but short enough that the program is snappy.
If you want everything in your play queue you can use the aid of infinity.
Pick your poison (these all do effectively the same thing):
library watch -L inf
library watch -l inf
library watch --queue inf
library watch -L 99999999999999999999999
You may also want to restrict the play queue.
For example, when you only want 1000 random files:
library watch -u random -L 1000
Offset the play queue:
You can also offset the queue. For example if you want to skip one or ten media:
library watch --skip 10 # offset ten from the top of an ordered query
Repeat
library watch # listen to 120 random songs (DEFAULT_PLAY_QUEUE)
library watch --limit 5 # listen to FIVE songs
library watch -l inf -u random # listen to random songs indefinitely
library watch -s infinite # listen to songs from the band infinite
Constrain media by search:
Audio files have many tags to readily search through so metadata like artist,
album, and even mood are included in search.
Video files have less consistent metadata and so only paths are included in search.
library watch --include happy # only matches will be included
library watch -s happy # equivalent
library watch --exclude sad # matches will be excluded
library watch -E sad # equivalent
Search only the path column
library watch -O -s 'path : mad max'
library watch -O -s 'path : "mad max"' # add "quotes" to be more strict
Double spaces are parsed as one space
library watch -s ' ost' # will match OST and not ghost
library watch -s toy story # will match '/folder/toy/something/story.mp3'
library watch -s 'toy story' # will match more strictly '/folder/toy story.mp3'
You can search without -s but it must directly follow the database due to how argparse works
library watch my.db searching for something
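The quoting rules above boil down to substring matching: each unquoted word is a separate filter, while a quoted phrase (spaces included) must match contiguously. A simplified model of that behavior:

```python
def matches(path: str, *terms: str) -> bool:
    # Case-insensitive; every term must appear somewhere in the path.
    # A quoted phrase arrives as a single term with its spaces intact.
    return all(t.lower() in path.lower() for t in terms)

print(matches("/folder/toy/something/story.mp3", "toy", "story"))  # True
print(matches("/folder/toy/something/story.mp3", "toy story"))     # False
print(matches("/folder/toy story.mp3", "toy story"))               # True
print(matches("/music/ghost.mp3", " ost"))                         # False: no space before "ost"
```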
Constrain media by arbitrary SQL expressions:
library watch --where audio_count = 2 # media which have two audio tracks
library watch -w "language = 'eng'" # media which have an English language tag
(this could be audio _or_ subtitle)
library watch -w subtitle_count=0 # media that doesn't have subtitles
Constrain media to duration (in minutes):
library watch --duration 20
library watch -d 6 # 6 mins ±10 percent (ie. between 5 and 7 mins)
library watch -d-6 # less than 6 mins
library watch -d+6 # more than 6 mins
Duration can be specified multiple times:
library watch -d+5 -d-7 # should be similar to -d 6
If you want exact time use `where`
library watch --where 'duration=6*60'
Constrain media to file size (in megabytes):
library watch --size 20
library watch -S 6 # 6 MB ±10 percent (ie. between 5 and 7 MB)
library watch -S-6 # less than 6 MB
library watch -S+6 # more than 6 MB
Constrain media by time_created / time_played / time_deleted / time_modified:
library watch --created-within '3 days'
library watch --created-before '3 years'
Constrain media by throughput:
Bitrate information is not explicitly saved.
You can use file size and duration as a proxy for throughput:
library watch -w 'size/duration<50000'
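The size/duration proxy is plain integer division in SQL: bytes per second. A minimal sketch (toy schema, illustrative numbers):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE media (path TEXT, size INTEGER, duration INTEGER)")
db.executemany("INSERT INTO media VALUES (?, ?, ?)", [
    ("low.mp4",  10_000_000, 600),   # ~16.7 kB/s
    ("high.mp4", 60_000_000, 600),   # 100 kB/s
])

# same shape as: library watch -w 'size/duration<50000'
rows = [r[0] for r in db.execute(
    "SELECT path FROM media WHERE size/duration < 50000")]
print(rows)  # only the low-bitrate file survives the filter
```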
Constrain media to portrait orientation video:
library watch --portrait
library watch -w 'width<height' # equivalent
Constrain media to duration of videos which match any size constraints:
library watch --duration-from-size +700 -u 'duration desc, size desc'
Constrain media to online-media or local-media:
Not to be confused with local media that is merely offline (i.e. on a currently disconnected HDD)
library watch --online-media-only
library watch --online-media-only -i # and ignore playback errors (e.g. a deleted YouTube video)
library watch --local-media-only
Specify media play order:
library watch --sort duration # play shortest media first
library watch -u 'duration desc' # play longest media first
You can use multiple SQL ORDER BY expressions
library watch -u 'subtitle_count > 0 desc' # play media that has at least one subtitle first
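Sorting by a boolean SQL expression works because the comparison evaluates to 1 or 0, so `DESC` puts the matching rows first. A sketch with a toy table (the schema is illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE media (path TEXT, subtitle_count INTEGER, duration INTEGER)")
db.executemany("INSERT INTO media VALUES (?, ?, ?)", [
    ("nosub.mp4", 0, 100),
    ("sub.mp4", 2, 900),
])

# 'subtitle_count > 0' is 1 for subtitled media, so DESC floats it to the top
rows = [r[0] for r in db.execute(
    "SELECT path FROM media ORDER BY subtitle_count > 0 DESC, duration")]
print(rows)
```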
Post-actions -- choose what to do after playing:
library watch --post-action keep # do nothing after playing (default)
library watch -k delete # delete file after playing
library watch -k softdelete # mark deleted after playing
library watch -k ask_keep # ask whether to keep after playing
library watch -k ask_delete # ask whether to delete after playing
library watch -k move # move to "keep" dir after playing
library watch -k ask_move # ask whether to move to "keep" folder
The default location of the keep folder is ./keep/ (relative to the played media file)
You can change this by explicitly setting an *absolute* `keep-dir` path:
library watch -k ask_move --keep-dir /home/my/music/keep/
library watch -k ask_move_or_delete # ask after each whether to move to "keep" folder or delete
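The keep-folder behavior described above (a `./keep/` folder next to the played file unless an absolute `keep-dir` is given) can be sketched roughly like this. This is an illustration of the documented behavior, not xklb's actual code; the function name is made up:

```python
import shutil
import tempfile
from pathlib import Path

def move_to_keep(media_file: Path, keep_dir: str = "keep") -> Path:
    """Move a played file into the keep folder; by default the folder is
    created next to the file, and an absolute keep_dir overrides that."""
    dest_dir = Path(keep_dir)
    if not dest_dir.is_absolute():
        dest_dir = media_file.parent / keep_dir
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.move(str(media_file), str(dest_dir / media_file.name)))

root = Path(tempfile.mkdtemp())
f = root / "song.mp3"
f.write_text("x")
kept = move_to_keep(f)
print(kept.parent.name, kept.exists())
```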
Experimental options:
Duration to play (in seconds) while changing the channel
library watch --interdimensional-cable 40
library watch -4dtv 40
Playback multiple files at once
library watch --multiple-playback # one per display, or two if only one display is detected
library watch --multiple-playback 4 # play four media at once, divide by available screens
library watch -m 4 --screen-name eDP # play four media at once on specific screen
library watch -m 4 --loop --crop # play four cropped videos on a loop
library watch -m 4 --hstack # use hstack style
Search captions / subtitles
$ library search -h
usage: library search
Search text databases and subtitles
$ library search fts.db boil
7 captions
/mnt/d/70_Now_Watching/DidubeTheLastStop-720p.mp4
33:46 I brought a real stainless steel boiler
33:59 The world is using only stainless boilers nowadays
34:02 The boiler is old and authentic
34:30 - This boiler? - Yes
34:44 I am not forcing you to buy this boiler…
34:52 Who will give her a one liter stainless steel boiler for one Lari?
34:54 Glass boilers cost two
Search and open file
$ library search fts.db dashi --open
History
$ library history -h
usage: library history [--frequency daily weekly (monthly) yearly] [--limit LIMIT] DATABASE [(all) watching watched created modified deleted]
Explore history through different facets
$ library history video.db watched
Finished watching:
| time_period | duration_sum | duration_avg | size_sum | size_avg |
|---|---|---|---|---|
| 2022-11 | 4 days, 16 hours and 20 minutes | 55.23 minutes | 26.3 GB | 215.9 MB |
| 2022-12 | 23 hours and 20.03 minutes | 35.88 minutes | 8.3 GB | 213.8 MB |
| 2023-01 | 17 hours and 3.32 minutes | 15.27 minutes | 14.3 GB | 214.1 MB |
| 2023-02 | 4 days, 5 hours and 60 minutes | 23.17 minutes | 148.3 GB | 561.6 MB |
| 2023-03 | 2 days, 18 hours and 18 minutes | 11.20 minutes | 118.1 GB | 332.8 MB |
| 2023-05 | 5 days, 5 hours and 4 minutes | 45.75 minutes | 152.9 GB | 932.1 MB |
$ library history video.db created --frequency yearly
Created media:
| time_period | duration_sum | duration_avg | size_sum | size_avg |
|---|---|---|---|---|
| 2005 | 9.78 minutes | 1.95 minutes | 16.9 MB | 3.4 MB |
| 2006 | 7 hours and 10.67 minutes | 5 minutes | 891.1 MB | 10.4 MB |
| 2007 | 1 day, 17 hours and 33 minutes | 8.55 minutes | 5.9 GB | 20.3 MB |
| 2008 | 5 days, 16 hours and 10 minutes | 17.02 minutes | 20.7 GB | 43.1 MB |
| 2009 | 24 days, 2 hours and 56 minutes | 33.68 minutes | 108.4 GB | 105.2 MB |
| 2010 | 1 month, 1 days and 1 minutes | 35.52 minutes | 124.2 GB | 95.7 MB |
| 2011 | 2 months, 14 days, 1 hour and 22 minutes | 55.93 minutes | 222.0 GB | 114.9 MB |
| 2012 | 2 months, 22 days, 19 hours and 17 minutes | 45.50 minutes | 343.6 GB | 129.6 MB |
| 2013 | 3 months, 11 days, 21 hours and 48 minutes | 42.72 minutes | 461.1 GB | 131.7 MB |
| 2014 | 3 months, 7 days, 10 hours and 22 minutes | 46.80 minutes | 529.6 GB | 173.1 MB |
| 2015 | 2 months, 21 days, 23 hours and 36 minutes | 36.73 minutes | 452.7 GB | 139.2 MB |
| 2016 | 3 months, 26 days, 7 hours and 59 minutes | 39.48 minutes | 603.4 GB | 139.9 MB |
| 2017 | 3 months, 10 days, 2 hours and 19 minutes | 31.78 minutes | 543.5 GB | 117.5 MB |
| 2018 | 3 months, 21 days, 20 hours and 56 minutes | 30.98 minutes | 607.5 GB | 114.8 MB |
| 2019 | 5 months, 23 days, 2 hours and 30 minutes | 35.77 minutes | 919.7 GB | 129.7 MB |
| 2020 | 7 months, 16 days, 10 hours and 58 minutes | 26.15 minutes | 1.2 TB | 93.9 MB |
| 2021 | 7 months, 21 days, 9 hours and 40 minutes | 39.93 minutes | 1.3 TB | 149.9 MB |
| 2022 | 17 years, 3 months, 0 days and 21 hours | 19.62 minutes | 35.8 TB | 77.5 MB |
| 2023 | 15 years, 3 months, 24 days and 1 hours | 17.57 minutes | 27.6 TB | 60.2 MB |
- [Eng Sub] TVB Drama | The King Of Snooker 桌球天王 07/20 | Adam Cheng | 2009 #Chinesedrama -- 43.85 minutes, created yesterday
  https://www.youtube.com/watch?v=zntYD1yLrG8
- [Eng Sub] TVB Drama | The King Of Snooker 桌球天王 08/20 | Adam Cheng | 2009 #Chinesedrama -- 43.63 minutes, created yesterday
  https://www.youtube.com/watch?v=zQnSfoWrh-4
- [Eng Sub] TVB Drama | The King Of Snooker 桌球天王 06/20 | Adam Cheng | 2009 #Chinesedrama -- 43.60 minutes, created yesterday
  https://www.youtube.com/watch?v=Qiax1kFyGWU
- [Eng Sub] TVB Drama | The King Of Snooker 桌球天王 04/20 | Adam Cheng | 2009 #Chinesedrama -- 43.45 minutes, created yesterday
  https://www.youtube.com/watch?v=NT9C3PRrlTA
- [Eng Sub] TVB Drama | The King Of Snooker 桌球天王 02/20 | Adam Cheng | 2009 #Chinesedrama -- 43.63 minutes, created yesterday
  https://www.youtube.com/watch?v=MjpCiTawlTE
$ library history video.db deleted
Deleted media:
| time_period | duration_sum | duration_avg | size_sum | size_avg |
|---|---|---|---|---|
| 2023-04 | 1 year, 10 months, 3 days and 8 hours | 4.47 minutes | 1.6 TB | 7.4 MB |
| 2023-05 | 9 months, 26 days, 20 hours and 34 minutes | 30.35 minutes | 1.1 TB | 73.7 MB |
- Terminus (1987) -- 1 hour and 15.55 minutes, 0 subtitles, deleted yesterday
  /mnt/d/70_Now_Watching/Terminus_1987.mp4
- Commodore 64 Longplay [062] The Transformers (EU) -- 24.77 minutes, 2 subtitles, deleted yesterday
  /mnt/d/71_Mealtime_Videos/Youtube/World_of_Longplays/Commodore_64_Longplay_062_The_Transformers_EU_[1RRX7Kykb38].webm
...
Open tabs
$ library tabs -h
usage: library tabs DATABASE
Tabs is meant to run **once per day**. Here is how you would configure it with `crontab`:
45 9 * * * DISPLAY=:0 library tabs /home/my/tabs.db
If things aren't working, you can use `at` to simulate an environment similar to `cron`
echo 'fish -c "export DISPLAY=:0 && library tabs /full/path/to/tabs.db"' | at NOW
You can also invoke tabs manually:
library tabs -L 1 # open one tab
Print URLs
library tabs -w "frequency='yearly'" -p
| path | frequency | time_valid |
|---|---|---|
| https://old.reddit.com/r/Autonomia/top/?sort=top&t=year | yearly | Dec 31 1970 |
| https://old.reddit.com/r/Cyberpunk/top/?sort=top&t=year | yearly | Dec 31 1970 |
| https://old.reddit.com/r/ExperiencedDevs/top/?sort=top&t=year | yearly | Dec 31 1970 |
...
View how many yearly tabs you have:
library tabs -w "frequency='yearly'" -p a
| path | count |
|---|---|
| Aggregate | 134 |
Delete URLs
library tb -p -s cyber
| path | frequency | time_valid |
|---|---|---|
| https://old.reddit.com/r/cyberDeck/top/?sort=top&t=year | yearly | Dec 31 1970 |
| https://old.reddit.com/r/Cyberpunk/top/?sort=top&t=year | yearly | Aug 29 2023 |
| https://www.reddit.com/r/cyberDeck/ | yearly | Sep 05 2023 |
library tb -p -w "path='https://www.reddit.com/r/cyberDeck/'" --delete
Removed 1 metadata records
library tb -p -s cyber
| path | frequency | time_valid |
|---|---|---|
| https://old.reddit.com/r/cyberDeck/top/?sort=top&t=year | yearly | Dec 31 1970 |
| https://old.reddit.com/r/Cyberpunk/top/?sort=top&t=year | yearly | Aug 29 2023 |
Download media
$ library download -h
usage: library download database [--prefix /mnt/d/] --video | --audio
Download stuff in a random order.
library download dl.db --prefix ~/output/path/root/
Download stuff in a random order, limited to the specified playlist URLs.
library download dl.db https://www.youtube.com/c/BlenderFoundation/videos
Files will be saved to <lb download prefix>/<lb download category>/
For example:
library dladd Cool ...
library download D:\'My Documents'\ ...
Media will be downloaded to 'D:\My Documents\Cool\'
Print list of queued up downloads
library download --print
Print list of saved playlists
library playlists dl.db -p a
Print download queue groups
library download-status audio.db
| category | ie_key | duration | never_downloaded | errors |
|---|---|---|---|---|
| 81_New_Music | Soundcloud | | 10 | 0 |
| 81_New_Music | Youtube | 10 days, 4 hours and 20 minutes | 1 | 2555 |
| Playlist-less media | Youtube | 7.68 minutes | 99 | 1 |
Download Status (download-status)
$ library download-status -h
usage: library download-status [database]
Print download queue groups
library download-status video.db
| category | ie_key | duration | never_downloaded | errors |
|---|---|---|---|---|
| 71_Mealtime_Videos | Youtube | 3 hours and 2.07 minutes | 76 | 0 |
| 75_MovieQueue | Dailymotion | | 53 | 0 |
| 75_MovieQueue | Youtube | 1 day, 18 hours and 6 minutes | 30 | 0 |
| Dailymotion | Dailymotion | | 186 | 198 |
| Uncategorized | Youtube | 1 hour and 52.18 minutes | 1 | 0 |
| Vimeo | Vimeo | | 253 | 49 |
| Youtube | Youtube | 2 years, 4 months, 15 days and 6 hours | 51676 | 197 |
| Playlist-less media | Youtube | 4 months, 23 days, 19 hours and 33 minutes | 2686 | 7 |
Simulate the --safe flag
library download-status video.db --safe
Show only download attempts with errors
library download-status video.db --errors
Update local media (fsupdate)
$ library fsupdate -h
usage: library fsupdate database
Update each path previously saved:
library fsupdate database
Update online media (tubeupdate)
$ library tubeupdate -h
usage: library tubeupdate [--audio | --video] [-c CATEGORY] [database]
Fetch the latest videos for every playlist saved in your database
library tubeupdate educational.db
Or limit to specific categories...
library tubeupdate -c "Bob Ross" educational.db
Run with --optimize to add indexes (might speed up searching but the size will increase):
library tubeupdate --optimize examples/music.tl.db
Fetch extra metadata:
By default tubeupdate will quickly add media.
You can run with --extra to fetch more details (best-resolution width, height, subtitle tags, etc.):
library tubeupdate educational.db --extra https://www.youtube.com/channel/UCBsEUcR-ezAuxB2WlfeENvA/videos
Update reddit media (redditupdate)
$ library redditupdate -h
usage: library redditupdate [--audio | --video] [-c CATEGORY] [--lookback N_DAYS] [--praw-site bot1] [database]
Fetch the latest posts for every subreddit/redditor saved in your database
library redditupdate edu_subreddits.db
Convert pushshift data to reddit.db format
$ library pushshift -h
usage: library pushshift [database] < stdin
Download data (about 600GB jsonl.zst; 6TB uncompressed)
wget -e robots=off -r -k -A zst https://files.pushshift.io/reddit/submissions/
Load data from files via unzstd
unzstd --memory=2048MB --stdout RS_2005-07.zst | library pushshift pushshift.db
Or load multiple files (output is about 1.5TB of SQLite FTS-searchable data):
for f in psaw/files.pushshift.io/reddit/submissions/*.zst
echo "unzstd --memory=2048MB --stdout $f | library pushshift (basename $f).db"
library optimize (basename $f).db
end | parallel -j5
List playlists
$ library playlists -h
usage: library playlists [database] [--aggregate] [--fields] [--json] [--delete ...]
List of Playlists
library playlists
| ie_key | title | path |
|---|---|---|
| Youtube | Highlights of Life | https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n |
Aggregate Report of Videos in each Playlist
library playlists -p a
| ie_key | title | path | duration | count |
|---|---|---|---|---|
| Youtube | Highlights of Life | https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n | 53.28 minutes | 15 |
1 playlist
Total duration: 53.28 minutes
Print only playlist urls:
Useful for piping to other utilities like xargs or GNU Parallel.
library playlists -p f
https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n
Remove a playlist/channel and all linked videos:
library playlists --remove https://vimeo.com/canal180
Blocklist a channel
$ library block -h
usage: library block database [playlists ...]
Blocklist specific URLs (e.g. YouTube channels). With YouTube URLs this will block
videos from the playlist uploader
library block dl.db https://annoyingwebsite/etc/
Use with the --all-deleted-playlists flag to delete any previously downloaded files from the playlist uploader
library block dl.db --all-deleted-playlists https://annoyingwebsite/etc/
Show large folders (bigdirs)
$ library bigdirs -h
usage: library bigdirs DATABASE [--limit (4000)] [--depth (0)] [--sort-by "deleted" | "played"] [--size=+5MB]
See what folders take up space
library bigdirs video.db
library bigdirs audio.db
library bigdirs fs.db
Copy play history (copy-play-counts)
$ library copy-play-counts -h
usage: library copy-play-counts DEST_DB SOURCE_DB ... [--source-prefix x] [--target-prefix y]
Copy play count information between databases
library copy-play-counts audio.db phone.db --source-prefix /storage/6E7B-7DCE/d --target-prefix /mnt/d
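The prefix flags above exist because the same files often live under different mount points on each device. The remapping they imply can be sketched like this (the function name is made up; this is not xklb's actual code):

```python
def remap_path(path: str, source_prefix: str, target_prefix: str) -> str:
    """Translate a path from one device's mount point to another's,
    so play counts can be matched to the same file on both."""
    if path.startswith(source_prefix):
        return target_prefix + path[len(source_prefix):]
    return path

# same shape as: --source-prefix /storage/6E7B-7DCE/d --target-prefix /mnt/d
remapped = remap_path("/storage/6E7B-7DCE/d/music/a.mp3",
                      "/storage/6E7B-7DCE/d", "/mnt/d")
print(remapped)
```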
Dedupe music
$ library dedupe -h
usage: library dedupe [--audio | --id | --title | --filesystem] [--only-soft-delete] [--limit LIMIT] DATABASE
Dedupe your files
Re-optimize database
$ library optimize -h
usage: library optimize DATABASE [--force]
Optimize library databases
The --force flag is usually unnecessary, and it can take much longer
Re-download media (redownload)
$ library redownload -h
usage: library redownload DATABASE
If you have previously downloaded YouTube or other online media, but your
hard drive failed or you accidentally deleted something, and if that media
is still accessible from the same URL, this script can help to redownload
everything that was scanned-as-deleted between two timestamps.
List deletions:
$ library redownload news.db
Deletions:
| time_deleted | count |
|---|---|
| 2023-01-26T00:31:26 | 120 |
| 2023-01-26T19:54:42 | 18 |
| 2023-01-26T20:45:24 | 26 |
Showing most recent 3 deletions. Use -l to change this limit
Mark videos as candidates for download via specific deletion timestamp:
$ library redownload city.db 2023-01-26T19:54:42
- 697.7 MB, 1920x1080 @ 30 fps, 21.22 minutes; created Apr 13 2022, modified Mar 11 2022, downloaded Oct 19
  /mnt/d/76_CityVideos/PRAIA DE BARRA DE JANGADA CANDEIAS JABOATÃO RECIFE PE BRASIL AVENIDA BERNARDO VIEIRA DE MELO-4Lx3hheMPmg.mp4
...
...or between two timestamps inclusive:
$ library redownload city.db 2023-01-26T19:54:42 2023-01-26T20:45:24
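Because the deletion timestamps are ISO-8601 strings, an inclusive window like the one above is a plain `BETWEEN` over string comparison. A sketch with a toy table (the schema is illustrative, not xklb's actual one):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE media (path TEXT, time_deleted TEXT)")
db.executemany("INSERT INTO media VALUES (?, ?)", [
    ("a.mp4", "2023-01-26T00:31:26"),
    ("b.mp4", "2023-01-26T19:54:42"),
    ("c.mp4", "2023-01-26T20:45:24"),
])

# ISO-8601 strings sort lexically, so BETWEEN gives an inclusive time window
rows = [r[0] for r in db.execute(
    "SELECT path FROM media WHERE time_deleted BETWEEN ? AND ?",
    ("2023-01-26T19:54:42", "2023-01-26T20:45:24"))]
print(rows)
```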
Merge online and local data (merge-online-local)
$ library merge-online-local -h
usage: library merge-online-local DATABASE
If you have previously downloaded YouTube or other online media, you can dedupe
your database and combine the online and local media records as long as your
files have the youtube-dl / yt-dlp id in the filename.
Convert selftext links to media table (reddit-selftext)
$ library reddit-selftext -h
usage: library reddit-selftext DATABASE
Extract URLs from reddit selftext from the reddit_posts table to the media table
Merge SQLITE databases (merge-dbs)
$ library merge-dbs -h
usage: library merge-dbs DEST_DB SOURCE_DB ... [--only-target-columns] [--only-new-rows] [--upsert] [--pk PK ...] [--table TABLE ...]
Merge-DBs will insert new rows from source dbs to target db, table by table. If primary key(s) are provided,
and there is an existing row with the same PK, the default action is to delete the existing row and insert the new row
replacing all existing fields.
Upsert mode will update matching PK rows such that if a source row has a NULL field and
the destination row has a value then the value will be preserved instead of changed to the source row's NULL value.
Ignore mode (--only-new-rows) will insert only rows which don't already exist in the destination db
Test first by using temp databases as the destination db.
Try out different modes / flags until you are satisfied with the behavior of the program
library merge-dbs --pk path (mktemp --suffix .db) tv.db movies.db
Merge database data and tables
library merge-dbs --upsert --pk path video.db tv.db movies.db
library merge-dbs --only-target-columns --only-new-rows --table media,playlists --pk path audio-fts.db audio.db
library merge-dbs --pk id --only-tables subreddits reddit/81_New_Music.db audio.db
library merge-dbs --only-new-rows --pk playlist_path,path --only-tables reddit_posts reddit/81_New_Music.db audio.db -v
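The upsert semantics described above (a NULL field in the source must not clobber a value in the destination) map naturally onto SQLite's `ON CONFLICT ... DO UPDATE` with `COALESCE`. A minimal sketch with an illustrative schema, not xklb's actual implementation:

```python
import sqlite3

def upsert_row(db, row):
    # COALESCE keeps the destination's value whenever the source field is NULL,
    # matching the upsert behavior described above.
    db.execute("""
        INSERT INTO media (path, title, play_count) VALUES (?, ?, ?)
        ON CONFLICT(path) DO UPDATE SET
          title = COALESCE(excluded.title, media.title),
          play_count = COALESCE(excluded.play_count, media.play_count)
    """, row)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE media (path TEXT PRIMARY KEY, title TEXT, play_count INTEGER)")
upsert_row(db, ("/a.mp3", "Song A", 3))
upsert_row(db, ("/a.mp3", None, 5))   # NULL title must not clobber "Song A"
print(db.execute("SELECT title, play_count FROM media").fetchone())
```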
Sort lines by similarity (cluster-sort)
$ library cluster-sort -h
usage: library cluster-sort [input_path | stdin] [output_path | stdout]
Group lines of text into sorted output
Move files preserving parent folder hierarchy (relmv)
$ library relmv -h
usage: library relmv [--dry-run] SOURCE ... DEST
Move files/folders without losing hierarchy metadata
Move fresh music to your phone every Sunday:
# move last weeks' music back to their source folders
library relmv /mnt/d/80_Now_Listening/ /mnt/d/
# move new music for this week
library relmv (
library listen ~/lb/audio.db --local-media-only --where 'play_count=0' --random -L 600 -p f
) /mnt/d/80_Now_Listening/
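The idea behind relmv is to keep the part of the path below a common anchor, so the folder hierarchy survives the move. A rough sketch (the function name and anchor argument are made up for illustration):

```python
from pathlib import PurePosixPath

def rel_dest(source: PurePosixPath, anchor: PurePosixPath,
             dest: PurePosixPath) -> PurePosixPath:
    """Compute where a file lands when its hierarchy below `anchor`
    is preserved under `dest`."""
    return dest / source.relative_to(anchor)

p = rel_dest(PurePosixPath("/mnt/d/80_Music/jazz/a.mp3"),
             PurePosixPath("/mnt/d"),
             PurePosixPath("/mnt/d/80_Now_Listening"))
print(p)
```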
Automatic tab loader (surf)
$ library surf -h
usage: library surf [--count COUNT] [--target-hosts TARGET_HOSTS] < stdin
Streaming tab loader: press ctrl+c to stop.
Open tabs from a line-delimited file:
cat tabs.txt | library surf -n 5
You will likely want to use this setting in `about:config`
browser.tabs.loadDivertedInBackground = true
If you prefer GUI, check out https://unli.xyz/tabsender/
Clean filenames (christen)
$ library christen -h
usage: library christen DATABASE [--run]
Rename files to be somewhat normalized
Default mode is dry-run
library christen fs.db
To actually do stuff use the run flag
library christen audio.db --run
You can optionally replace all the spaces in your filenames with dots
library christen --dot-space video.db
You can expand all by running this in your browser console:
(() => {
  const readmeDiv = document.getElementById("readme");
  const detailsElements = readmeDiv.getElementsByTagName("details");
  for (let i = 0; i < detailsElements.length; i++) {
    detailsElements[i].setAttribute("open", "true");
  }
})();