Tools for downloading and preserving MediaWikis. We archive MediaWikis, from Wikipedia to the tiniest wikis.
Reason this release was yanked:
<=python3.11 compatible
Project description
wikiteam3
Countless MediaWikis are still waiting to be archived.
Image by @gledos
wikiteam3 is a fork of mediawiki-scraper.
Why we forked mediawiki-scraper
Originally, mediawiki-scraper was named wikiteam3, but the wikiteam upstream (py2 version) suggested that the name be changed to avoid confusion with the original wikiteam.
Half a year later, we had not seen any py3 porting progress in the original wikiteam, and mediawiki-scraper lacked "code" reviewers.
So we decided to go against that suggestion: we forked the project, named it back to wikiteam3, put the code here, and released it to PyPI.
Everything is still under the GPLv3 license.
Installation/Upgrade
pip install wikiteam3 --upgrade
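To check that the install worked, you can print the version and the built-in help (the exact version string will of course differ on your machine):
wikiteam3dumpgenerator --version
wikiteam3dumpgenerator --help
wikiteam3uploader --help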
[!NOTE] For public MediaWikis you don't need to install wikiteam3 locally. You can send an archive request (including the reason for the request, e.g. the wiki is about to shut down, or you need a wikidump to migrate to another wikifarm) to the wikiteam IRC channel, and an online voiced member will run a wikibot job for your request.
We also accept DokuWiki and PukiWiki archive requests.
- wikiteam IRC (webirc): https://webirc.hackint.org/#irc://irc.hackint.org/wikiteam
- wikiteam IRC logs: https://irclogs.archivete.am/wikiteam
Dumpgenerator usage
usage: wikiteam3dumpgenerator [-h] [-v] [--cookies cookies.txt] [--delay 1.5]
[--retries 5] [--path PATH] [--resume] [--force]
[--user USER] [--pass PASSWORD]
[--http-user HTTP_USER]
[--http-pass HTTP_PASSWORD] [--insecure]
[--verbose] [--api_chunksize 50] [--api API]
[--index INDEX] [--index-check-threshold 0.80]
[--xml] [--curonly] [--xmlapiexport]
[--xmlrevisions] [--xmlrevisions_page]
[--namespaces 1,2,3] [--exnamespaces 1,2,3]
[--images] [--bypass-cdn-image-compression]
[--image-timestamp-interval 2019-01-02T01:36:06Z/2023-08-12T10:36:06Z]
[--ia-wbm-booster {0,1,2,3}]
[--assert-max-pages 123]
[--assert-max-edits 123]
[--assert-max-images 123]
[--assert-max-images-bytes 123]
[--get-wiki-engine] [--failfast] [--upload]
[-g UPLOADER_ARGS]
[wiki]
options:
-h, --help show this help message and exit
-v, --version show program's version number and exit
--cookies cookies.txt
path to a cookies.txt file
--delay 1.5 adds a delay (in seconds) [NOTE: most HTTP servers
have a 5s HTTP/1.1 keep-alive timeout, you should
consider it if you wanna reuse the connection]
--retries 5 Maximum number of retries
--path PATH path to store wiki dump at
--resume resumes previous incomplete dump (requires --path)
--force download it even if Wikimedia site or a recent dump
exists in the Internet Archive
--user USER Username if MediaWiki authentication is required.
--pass PASSWORD Password if MediaWiki authentication is required.
--http-user HTTP_USER
Username if HTTP authentication is required.
--http-pass HTTP_PASSWORD
Password if HTTP authentication is required.
--insecure Disable SSL certificate verification
--verbose
--api_chunksize 50 Chunk size for MediaWiki API (arvlimit, ailimit, etc.)
wiki URL to wiki (e.g. http://wiki.domain.org), auto
detects API and index.php
--api API URL to API (e.g. http://wiki.domain.org/w/api.php)
--index INDEX URL to index.php (e.g.
http://wiki.domain.org/w/index.php), (not supported
with --images on newer(?) MediaWiki without --api)
--index-check-threshold 0.80
pass index.php check if result is greater than (>)
this value (default: 0.80)
Data to download:
What info download from the wiki
--xml Export XML dump using Special:Export (index.php).
(supported with --curonly)
--curonly store only the latest revision of pages
--xmlapiexport Export XML dump using API:revisions instead of
Special:Export, use this when Special:Export fails and
xmlrevisions not supported. (supported with --curonly)
--xmlrevisions Export all revisions from an API generator
(API:Allrevisions). MediaWiki 1.27+ only. (not
supported with --curonly)
--xmlrevisions_page [[! Development only !]] Export all revisions from an
API generator, but query page by page MediaWiki 1.27+
only. (default: --curonly)
--namespaces 1,2,3 comma-separated value of namespaces to include (all by
default)
--exnamespaces 1,2,3 comma-separated value of namespaces to exclude
--images Generates an image dump
Image dump options:
Options for image dump (--images)
--bypass-cdn-image-compression
Bypass CDN image compression. (CloudFlare Polish,
etc.) [WARNING: This will increase CDN origin traffic,
and not effective for all HTTP Server/CDN, please
don't use this blindly.]
--image-timestamp-interval 2019-01-02T01:36:06Z/2023-08-12T10:36:06Z
Only download images uploaded in the given time
interval. [format: ISO 8601 UTC interval] (only works
with api)
--ia-wbm-booster {0,1,2,3}
Download images from Internet Archive Wayback Machine
if possible, reduce the bandwidth usage of the wiki.
[0: disabled (default), 1: use earliest snapshot, 2:
use latest snapshot, 3: the closest snapshot to the
image's upload time]
Assertions:
What assertions to check before actually downloading, if any assertion
fails, program will exit with exit code 45. [NOTE: This feature requires
correct siteinfo API response from the wiki, and not working properly with
some wikis. But it's useful for mass automated archiving, so you can
schedule a re-run for HUGE wiki that may run out of your disk]
--assert-max-pages 123
Maximum number of pages to download
--assert-max-edits 123
Maximum number of edits to download
--assert-max-images 123
Maximum number of images to download
--assert-max-images-bytes 123
Maximum number of bytes to download for images [NOTE:
this assert happens after downloading images list]
Meta info:
What meta info to retrieve from the wiki
--get-wiki-engine returns the wiki engine
--failfast [lack maintenance] Avoid resuming, discard failing
wikis quickly. Useful only for mass downloads.
wikiteam3uploader params:
--upload (run `wikiteam3uploader` for you) Upload wikidump to
Internet Archive after successfully dumped
-g, --uploader-arg UPLOADER_ARGS
Arguments for uploader.
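For unattended mass archiving, the assertion flags described above can be combined with a check on exit code 45. A minimal sketch (the wiki URL and the limits below are only placeholders):
wikiteam3dumpgenerator http://wiki.domain.org --xml --images \
    --assert-max-pages 100000 --assert-max-images 50000
if [ $? -eq 45 ]; then echo "wiki exceeds the configured limits, skipping"; fi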
Downloading a wiki with complete XML history and images
wikiteam3dumpgenerator http://wiki.domain.org --xml --images
[!WARNING]
NTFS/Windows users please note: when using --images, some files may not be downloaded because NTFS does not allow characters such as *?"<>| in filenames. Pay attention to any "XXXXX could not be created by OS" errors in your errors.log. We will not add special handling for NTFS/EncFS "path too long/illegal filename" errors, and highly recommend using ext4/xfs/btrfs, etc., instead:
- Introducing an "illegal filename rename" mechanism would add complexity. WikiTeam (python2) had this before, but it caused more problems, so it was removed in WikiTeam3.
- It would cause confusion for the final user of the wikidump (usually the wiki site administrator).
- NTFS is not suitable for large-scale image dumps with millions of files in a single directory. (Windows background services will occasionally scan the whole disk; we think no one should be using Windows/NTFS for large-scale MediaWiki archiving.)
- Using another file system solves all of these problems.
Manually specifying api.php and/or index.php
If the script can't find the api.php and/or index.php paths on its own, you can provide them:
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --xml --images
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --index http://wiki.domain.org/w/index.php \
--xml --images
If you only want the XML histories, just use --xml. For only the images, just --images. For only the current version of every page, --xml --curonly.
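For example (the three invocations below reuse the placeholder URL from above):
wikiteam3dumpgenerator http://wiki.domain.org --xml
wikiteam3dumpgenerator http://wiki.domain.org --images
wikiteam3dumpgenerator http://wiki.domain.org --xml --curonly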
Resuming an incomplete dump
wikiteam3dumpgenerator \
--api http://wiki.domain.org/w/api.php --xml --images --resume --path /path/to/incomplete-dump
In the above example, --path is only necessary if the download path (wikidump dir) is not the default.
[!NOTE]
When resuming an incomplete dump, the configuration in config.json will override the CLI parameters. (Not all CLI parameters are ignored; check config.json for details.)
wikiteam3dumpgenerator will also ask you whether you want to resume if it finds an incomplete dump in the path where it is downloading.
Using wikiteam3uploader
usage: Upload wikidump to the Internet Archive. [-h] [-kf KEYS_FILE]
[-c {opensource,test_collection,wikiteam}]
[--dry-run] [-u]
[--bin-zstd BIN_ZSTD]
[--zstd-level {17,18,19,20,21,22}]
[--rezstd]
[--rezstd-endpoint URL]
[--bin-7z BIN_7Z]
[--parallel]
wikidump_dir
positional arguments:
wikidump_dir
options:
-h, --help show this help message and exit
-kf, --keys_file KEYS_FILE
Path to the IA S3 keys file. (first line: access key,
second line: secret key) [default:
~/.wikiteam3_ia_keys.txt]
-c, --collection {opensource,test_collection,wikiteam}
--dry-run Dry run, do not upload anything.
-u, --update Update existing item. [!! not implemented yet !!]
--bin-zstd BIN_ZSTD Path to zstd binary. [default: zstd]
--zstd-level {17,18,19,20,21,22}
Zstd compression level. [default: 17] If you have a
lot of RAM, recommend to use max level (22).
--rezstd [server-side recompression] Upload pre-compressed zstd
files to rezstd server for recompression with best
settings (which may eat 10GB+ RAM), then download
back. (This feature saves your lowend machine, lol)
--rezstd-endpoint URL
Rezstd server endpoint. [default: http://pool-
rezstd.saveweb.org/rezstd/] (source code:
https://github.com/yzqzss/rezstd)
--bin-7z BIN_7Z Path to 7z binary. [default: 7z]
--parallel Parallelize compression tasks
Requirements
[!NOTE]
Please make sure you have the following requirements before using wikiteam3uploader. You don't need to install them if you don't want to upload dumps to IA.
- an unbound localhost port 62954 (for the multi-process compression queue)
- 3GB+ RAM (~2.56GB for compressing)
- a 64-bit OS (required by the 2G wlog size)
- 7z (binary). Debian/Ubuntu: install p7zip-full. [!NOTE] Windows: install https://7-zip.org and add 7z.exe to PATH.
- zstd (binary), 1.5.5+ (recommended), v1.5.0-v1.5.4 (DO NOT USE), 1.4.8 (minimum). Install from https://github.com/facebook/zstd. [!NOTE] Windows: add zstd.exe to PATH.
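A quick pre-flight check on Linux might look like the following sketch (package names and the ss invocation vary by platform):
zstd --version                 # expect 1.5.5+, avoid v1.5.0-v1.5.4
7z i | head -n 2               # confirms the 7z binary is on PATH
ss -ltn | grep 62954 || echo "port 62954 is free"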
Uploader usage
[!NOTE]
Read wikiteam3uploader --help and do not forget to prepare ~/.wikiteam3_ia_keys.txt before using wikiteam3uploader.
wikiteam3uploader {YOUR_WIKI_DUMP_PATH}
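A more complete invocation might look like the following sketch; the dump directory name is only an example, and all flags are described in the help output above:
wikiteam3uploader ./wiki.domain.org_w-20230812-wikidump \
    -kf ~/.wikiteam3_ia_keys.txt -c opensource --zstd-level 19 --parallel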
Checking dump integrity
TODO: xml2titles.py
If you want to check the XML dump integrity, type this into your command line to count title, page and revision XML tags:
grep -E '<title(.*?)>' *.xml -c; grep -E '<page(.*?)>' *.xml -c; grep "</page>" *.xml -c; \
grep -E '<revision(.*?)>' *.xml -c; grep "</revision>" *.xml -c
You should see something similar to this (not the actual numbers) - the first three numbers should be the same and the last two should be the same as each other:
580
580
580
5677
5677
If your first three numbers or your last two numbers differ, then your XML dump is corrupt (it contains one or more unfinished </page> or </revision> tags). This is not common in small wikis, but large or very large wikis may fail here due to XML pages being truncated while exporting and merging. The solution is to remove the XML dump and download it again; this is a bit tedious, and it can fail again.
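Until xml2titles.py lands, the comparison can also be scripted. A minimal sketch, assuming your dump file is named wikidump.xml (adjust to your actual dump path):
titles=$(grep -Ec '<title(.*?)>' wikidump.xml)
pages=$(grep -Ec '<page(.*?)>' wikidump.xml)
pages_closed=$(grep -c '</page>' wikidump.xml)
revs=$(grep -Ec '<revision(.*?)>' wikidump.xml)
revs_closed=$(grep -c '</revision>' wikidump.xml)
if [ "$titles" -eq "$pages" ] && [ "$pages" -eq "$pages_closed" ] && [ "$revs" -eq "$revs_closed" ]; then
    echo "tag counts match, dump looks complete"
else
    echo "tag counts differ, dump may be truncated"
fi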
Import wikidump to MediaWiki / wikidump data tips
[!IMPORTANT]
In the article name, spaces and underscores are treated as equivalent and each is converted to the other in the appropriate context (underscore in URL and database keys, spaces in plain text). https://www.mediawiki.org/wiki/Manual:Title.php#Article_name
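As an illustration of that rule (plain shell, not part of wikiteam3), the two forms of a title convert into each other like this:
echo "Main Page" | sed 's/ /_/g'   # URL / database key form: Main_Page
echo "Main_Page" | sed 's/_/ /g'   # plain-text form: Main Page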
[!NOTE]
WikiTeam3 uses zstd to compress .xml and .txt files, and 7z to pack images (media files).
zstd is a very fast stream compression algorithm; you can use zstd -d to decompress a .zst file/stream.
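For example, to unpack a compressed XML dump before importing it (the file name below is only a placeholder):
zstd -d yourwiki-20230812-history.xml.zst          # writes yourwiki-20230812-history.xml
zstd -dc yourwiki-20230812-history.xml.zst | head  # or stream it without writing the file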
Contributors
WikiTeam is the Archive Team [GitHub] subcommittee on wikis. It was founded and originally developed by Emilio J. Rodríguez-Posada, a Wikipedia veteran editor and amateur archivist. Thanks to people who have helped, especially to: Federico Leva, Alex Buie, Scott Boyd, Hydriz, Platonides, Ian McEwen, Mike Dupont, balr0g and PiRSquared17.
Mediawiki-Scraper: The Python 3 initiative is currently being led by Elsie Hupp, with contributions from Victor Gambier, Thomas Karcher, Janet Cobb, yzqzss, NyaMisty, and Rob Kam.
WikiTeam3: Every archivist who has uploaded a wikidump to the Internet Archive.
File details
Details for the file wikiteam3-4.3.0.tar.gz.
File metadata
- Download URL: wikiteam3-4.3.0.tar.gz
- Upload date:
- Size: 4.6 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: pdm/2.17.3 CPython/3.12.4 Linux/6.9.12-amd64
File hashes
Algorithm | Hash digest
---|---
SHA256 | a0fd19c88b549942b819d503117df863576cf87b08da40fd2dae009e58204187
MD5 | 0f2fc752443a7a67df9ab9574744bfc3
BLAKE2b-256 | 2db1e1f037b60683094ca50639ec8b160ec64d2dea3d5cd8cfbafab1e214e3bf
File details
Details for the file wikiteam3-4.3.0-py3-none-any.whl.
File metadata
- Download URL: wikiteam3-4.3.0-py3-none-any.whl
- Upload date:
- Size: 578.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: pdm/2.17.3 CPython/3.12.4 Linux/6.9.12-amd64
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7a6d65a04c0a85cb0d229aea31b054f60f9f7ab4219a0481bd35577b4760130a
MD5 | 90cd798626c6b8dcbb28071f6f696d88
BLAKE2b-256 | f7df8af904e6d15627562d5238e05813e93a97fb59141805cb5c11bc15443440