# reddit-user-to-sqlite

Creates a SQLite database containing data pulled from Reddit about a single user: all of their comments and their posts.
## Install

The PyPI package is [`reddit-user-to-sqlite`](https://pypi.org/project/reddit-user-to-sqlite/). Install it globally using pipx:

```bash
pipx install reddit-user-to-sqlite
```
## Usage

The CLI exposes two commands: `user` and `archive`.

### user

Fetches all comments and posts for a specific user.

```bash
reddit-user-to-sqlite user your_username
reddit-user-to-sqlite user your_username --db my-reddit-data.db
```
#### Params

> Note: the argument order is reversed from most Dogsheep packages (which take `db_path` first). This method allows for use of a default db name, so I prefer it.

- `username`: a case-insensitive string. The leading `/u/` is optional (and ignored if supplied).
- (optional) `--db`: the path to a SQLite file, which will be created or updated as needed. Defaults to `reddit.db`.
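Once a run completes, the data is plain SQLite, so you can explore it programmatically. Here's a minimal sketch using Python's built-in `sqlite3` module. It assumes the default `reddit.db` plus the `comments` table and columns referenced in the Datasette config under "Viewing Data" below; the epoch-style `timestamp` ordering is an assumption based on the timestamp-rendering plugin used there.

```python
import sqlite3

# Minimal sketch: read recent comments from the default database.
# Table/column names (comments, timestamp, text) come from the Datasette
# config later in this README; everything else here is illustrative.
con = sqlite3.connect("reddit.db")
con.row_factory = sqlite3.Row

for row in con.execute(
    "SELECT timestamp, text FROM comments ORDER BY timestamp DESC LIMIT 5"
):
    print(row["timestamp"], row["text"][:80])

con.close()
```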
### archive

Reads the output of a Reddit GDPR archive and fetches additional info from the Reddit API (where possible). This allows you to store more than 1k posts/comments.

> FYI: this behavior is built on the assumption that the archive Reddit provides has the same format regardless of whether you select `GDPR` or `CCPA` as the request type. But, just to be on the safe side, I recommend selecting `GDPR` during the export process until I'm able to confirm.
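Going by the params below, invocation mirrors the `user` command; the export path here is a placeholder for wherever you unzipped Reddit's archive:

```bash
reddit-user-to-sqlite archive path/to/unzipped-export
reddit-user-to-sqlite archive path/to/unzipped-export --db my-reddit-data.db
```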
#### Params

> Note: the argument order is reversed from most Dogsheep packages (which take `db_path` first). This method allows for use of a default db name, so I prefer it.

- `archive_path`: the path to the (unzipped) archive directory on your machine. Don't rename or move the files that Reddit gives you.
- (optional) `--db`: the path to a SQLite file, which will be created or updated as needed. Defaults to `reddit.db`.
## Viewing Data

The resulting SQLite database pairs well with [Datasette](https://datasette.io/), a tool for viewing SQLite databases on the web. Below is my recommended configuration.

First, install `datasette`:

```bash
pipx install datasette
```

Then, add the recommended plugins (for rendering timestamps and markdown):

```bash
pipx inject datasette datasette-render-markdown datasette-render-timestamps
```

Finally, create a `metadata.json` file with the following:
```json
{
    "databases": {
        "reddit": {
            "tables": {
                "comments": {
                    "sort_desc": "timestamp",
                    "plugins": {
                        "datasette-render-markdown": {
                            "columns": ["text"]
                        },
                        "datasette-render-timestamps": {
                            "columns": ["timestamp"]
                        }
                    }
                },
                "posts": {
                    "sort_desc": "timestamp",
                    "plugins": {
                        "datasette-render-markdown": {
                            "columns": ["text"]
                        },
                        "datasette-render-timestamps": {
                            "columns": ["timestamp"]
                        }
                    }
                },
                "subreddits": {
                    "sort": "name"
                }
            }
        }
    }
}
```
Now, when you run:

```bash
datasette reddit.db --metadata metadata.json
```

you'll get nicely formatted output.
## Development

This section is for people making changes to this package.

While in a virtual environment, run the following:

```bash
pip install -e '.[test]'
```

This installs the package in editable (`-e`) mode and makes its dependencies available. You can now run `reddit-user-to-sqlite` to invoke the CLI.
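As a quick sanity check that the entry point is on your PATH (assuming the CLI supports the conventional `--help` flag):

```bash
reddit-user-to-sqlite --help
```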
### Running Tests

In your virtual environment, a simple `pytest` should run the unit test suite.
## Motivation

I got nervous when I saw Reddit's notification of upcoming API changes. To ensure I could always access the data I created, I wanted a backup in place before anything changed in a big way.
## FAQs

### Why do some of my posts say `[removed]` even though I can see them?

If a post is removed, only the mods and the user who posted it can see its text. Since this tool currently runs without any authentication, those removed posts can't be fetched via the API.

To fetch data about your own removed posts, use the `archive` command with your Reddit GDPR export. This will be addressed in a future release.
### Why is the database missing data returned by the Reddit API?

While most Dogsheep projects grab the raw JSON output of their source APIs, Reddit's API has a lot of junk in it, so I opted for a slimmed-down approach.

If there's a field missing that you think would be useful, feel free to open an issue!
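If you want to see exactly which fields *are* stored, the `sqlite3` command-line shell can print the schema directly (table names per the Datasette config above):

```bash
sqlite3 reddit.db '.schema comments'
sqlite3 reddit.db '.schema posts'
```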
### Does this tool refetch old data?

When running the `user` command, yes: it fetches up to 1k each of comments and posts and updates the local copies.

When running the `archive` command, no: to cut down on API requests, it only fetches data about comments/posts that aren't yet in the database (since the archive may include many items). The sketch after this answer illustrates the idea.

Both of these behaviors may change in the future to be more in line with Reddit's per-subreddit archiving guidelines.
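For illustration, the "skip what's already stored" check can be expressed as a small query against the existing tables. This is a hypothetical sketch of the idea, not the package's actual internals, and it assumes each table has an `id` column, which may not match the real schema:

```python
import sqlite3


def missing_ids(db_path: str, table: str, candidate_ids: list[str]) -> list[str]:
    """Return the candidate IDs not already present in `table`.

    Hypothetical helper illustrating the skip-existing behavior described
    above; assumes an `id` column, which may not match the real schema.
    """
    con = sqlite3.connect(db_path)
    existing = {row[0] for row in con.execute(f"SELECT id FROM {table}")}
    con.close()
    return [c for c in candidate_ids if c not in existing]


# e.g. only hit the API for comments the archive mentions but the db lacks:
# ids_to_fetch = missing_ids("reddit.db", "comments", ids_from_archive)
```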
## Releasing New Versions

> These notes are mostly for myself (or other contributors).

- ensure tests pass