📜 The Archive Query Log
Mining Millions of Search Result Pages of Hundreds of Search Engines from 25 Years of Web Archives.
Start now by running your custom analysis or experiment, scraping your own query log, or just looking at our example files.
Integrations
Running Experiments on the AQL
The data in the Archive Query Log is highly sensitive (even though, in principle, everything can be re-crawled from the Wayback Machine). For that reason, we use TIRA as the platform for custom analyses and experiments, which ensures that they cannot leak sensitive data (please get in touch if you have questions). In TIRA, you submit a Docker image that implements your experiment. Your software is then executed in a sandbox (without internet connection) to ensure that it does not leak sensitive information. After your software has finished, administrators review your submission and unblind it so that you can access the outputs.
Please refer to our dedicated TIRA tutorial as a starting point for your experiments.
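As a rough sketch of what a submission involves (the image name and mount paths below are placeholders, not TIRA's actual conventions; the tutorial defines those):
# Build a Docker image containing your experiment code (hypothetical name).
docker build -t my-aql-experiment .
# Simulate the sandbox locally: no network access, inputs mounted read-only,
# outputs written to a separate directory (all paths are placeholders).
docker run --rm --network none \
  -v "$(pwd)/aql-input:/input:ro" \
  -v "$(pwd)/output:/output" \
  my-aql-experiment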
Crawling
To run the CLI and crawl a query log on your own machine, please refer to the instructions for single-machine deployments. If you instead want to scale up and run the crawling pipelines on a cluster, please refer to the instructions for cluster deployments.
Single-Machine (PyPI/Docker)
To run the Archive Query Log CLI on your machine, you can either use our PyPI package or the Docker image. (If you absolutely need to, you can also install the Python CLI or build the Docker image from source.)
Installation (PyPI)
First, install Python 3.10 and pipx (pipx lets you install the AQL CLI in an isolated virtual environment). Then you can install the Archive Query Log CLI by running:
pipx install archive-query-log
Now you can run the Archive Query Log CLI:
aql --help
Installation (Python from source)
First install Python 3.10, and clone this repository. From inside the repository directory, create a virtual environment and activate it:
python3.10 -m venv venv/
source venv/bin/activate
Now you can install the Archive Query Log CLI by running:
pip install -e .
Note: The commands below use the syntax of the PyPI installation. To run the same commands with the local Python installation, replace aql with python -m archive_query_log, for example:
python -m archive_query_log --help
Installation (Docker)
You only need to install Docker.
Note: The commands below use the syntax of the PyPI installation. To run the same commands with the Docker installation, replace aql with docker run -it -v "$(pwd)"/config.override.yml:/workspace/config.override.yml ghcr.io/webis-de/archive-query-log, for example:
docker run -it -v "$(pwd)"/config.override.yml:/workspace/config.override.yml ghcr.io/webis-de/archive-query-log --help
Installation (Docker from source)
First install Docker, and clone this repository. From inside the repository directory, build the Docker image like this:
docker build -t aql .
Note: The commands below use the syntax of the PyPI installation. To run the same commands with the Docker installation, replace aql with docker run -it -v "$(pwd)"/config.override.yml:/workspace/config.override.yml aql, for example:
docker run -it -v "$(pwd)"/config.override.yml:/workspace/config.override.yml aql --help
Configuration
Crawling the Archive Query Log requires access to an Elasticsearch cluster. To configure access, add a config.override.yml file in the current directory with the following contents, replacing the placeholders with your actual credentials:
es:
  host: "<HOST>"
  port: 9200
  username: "<USERNAME>"
  password: "<PASSWORD>"
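Before crawling, you can verify that the credentials work with a quick request to the cluster (assuming HTTPS; adjust the scheme and port to your setup):
curl -u "<USERNAME>:<PASSWORD>" "https://<HOST>:9200/"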
Add an archive service
First, register a web archive to crawl from (e.g., the Internet Archive's Wayback Machine):
aql archives add
Add a search provider
Next, register a search provider (i.e., a search engine) whose result pages you want to mine:
aql providers add
Build source pairs
Combine the registered archives and providers into pairs of crawling sources:
aql sources build
Fetch captures
Finally, fetch the captures, i.e., the archived snapshots, for each source:
aql captures fetch
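After the fetch step has finished, a quick sanity check is to list the indices in the Elasticsearch cluster together with their document counts (the AQL index names depend on your setup):
curl -u "<USERNAME>:<PASSWORD>" "https://<HOST>:9200/_cat/indices?v"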
Cluster (Helm/Kubernetes)
Running the Archive Query Log on a cluster is recommended for large-scale crawls. We provide a Helm chart that automatically starts crawling and parsing jobs for you and stores the results in an Elasticsearch cluster.
Installation
Just install Helm and configure kubectl for your cluster.
Configuration
Crawling the Archive Query Log requires access to an Elasticsearch cluster. Configure the Elasticsearch credentials in a values.override.yaml file like this:
elasticsearch:
  host: "<HOST>"
  port: 9200
  username: "<USERNAME>"
  password: "<PASSWORD>"
Deployment
Let's deploy the Helm chart on the cluster (we're testing first with --dry-run to see if everything works):
helm upgrade --install --values helm/archive-query-log/values.override.yaml --dry-run archive-query-log helm/archive-query-log
If everything worked and the output looks good, you can remove the --dry-run flag to actually deploy the chart.
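Once the chart is deployed, you can check whether the crawling and parsing jobs have started by listing the release's pods (the label selector below follows common Helm chart conventions and may differ for this chart):
kubectl get pods -l app.kubernetes.io/instance=archive-query-log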
Uninstall
If you no longer need the chart, you can uninstall it:
helm uninstall archive-query-log
Citation
If you use the Archive Query Log dataset or the crawling code in your research, please cite the following paper describing the AQL and its use cases:
Heinrich Reimer, Sebastian Schmidt, Maik Fröbe, Lukas Gienapp, Harrisen Scells, Benno Stein, Matthias Hagen, and Martin Potthast. The Archive Query Log: Mining Millions of Search Result Pages of Hundreds of Search Engines from 25 Years of Web Archives. In Hsin-Hsi Chen et al., editors, 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023), pages 2848–2860, July 2023. ACM.
You can use the following BibTeX entry for citation:
@InProceedings{reimer:2023,
  author    = {{Jan Heinrich} Reimer and Sebastian Schmidt and Maik Fr{\"o}be and Lukas Gienapp and Harrisen Scells and Benno Stein and Matthias Hagen and Martin Potthast},
  booktitle = {46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023)},
  doi       = {10.1145/3539618.3591890},
  editor    = {Hsin{-}Hsi Chen and Wei{-}Jou (Edward) Duh and Hen{-}Hsen Huang and Makoto P. Kato and Josiane Mothe and Barbara Poblete},
  ids       = {potthast:2023u},
  isbn      = {9781450394086},
  month     = jul,
  numpages  = {13},
  pages     = {2848--2860},
  publisher = {ACM},
  site      = {Taipei, Taiwan},
  title     = {{The Archive Query Log: Mining Millions of Search Result Pages of Hundreds of Search Engines from 25 Years of Web Archives}},
  url       = {https://dl.acm.org/doi/10.1145/3539618.3591890},
  year      = {2023}
}
Development
Refer to the local Python installation instructions to set up the development environment and install the dependencies.
After implementing a new feature, check the code formatting, lint errors, static typing, and security, and run all unit tests with the following commands:
flake8 archive_query_log # Code format
pylint archive_query_log # LINT errors
mypy archive_query_log # Static typing
bandit -c pyproject.toml -r archive_query_log # Security
pytest archive_query_log # Unit tests
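To run all of these checks in one go before committing, you can chain them so that the sequence stops at the first failure:
flake8 archive_query_log && \
pylint archive_query_log && \
mypy archive_query_log && \
bandit -c pyproject.toml -r archive_query_log && \
pytest archive_query_log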
Add new tests for parsers
At the moment, our workflow for adding new tests for parsers goes like this:
- Select the number of tests to run per service and the number of services.
- Auto-generate the unit tests and download the WARCs with generate_tests.py.
- Run the tests.
- Each failing test opens a diff editor with the approval and a web browser tab with the Wayback URL.
- Use the browser dev tools to find the CSS paths of the query input field and the search results.
- Close the diffs and tabs and re-run the tests (see the pytest tip below).
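While iterating on a single parser, you can re-run just the tests for that service instead of the whole suite, e.g., with pytest's -k filter (the pattern below is hypothetical; use the test names that generate_tests.py actually generated):
pytest archive_query_log -k "example_service"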
Contribute
If you find that an important search provider is missing from this query log, please suggest it by creating an issue. We also gratefully accept pull requests that add search providers or new parser configurations!
If you're unsure about anything, post an issue, or contact us:
- heinrich.reimer@uni-jena.de
- s.schmidt@uni-leipzig.de
- maik.froebe@uni-jena.de
- lukas.gienapp@uni-leipzig.de
- harry.scells@uni-leipzig.de
- benno.stein@uni-weimar.de
- matthias.hagen@uni-jena.de
- martin.potthast@uni-leipzig.de
We're happy to help!
License
This repository is released under the MIT license.
Files in the data/ directory are exempt from this license.
If you use the AQL in your research, we'd be glad if you'd cite us.
Abstract
The Archive Query Log (AQL) is a previously unused, comprehensive query log collected at the Internet Archive over the last 25 years. Its first version includes 356 million queries, 166 million search result pages, and 1.7 billion search results across 550 search providers. Although many query logs have been studied in the literature, the search providers that own them generally do not publish their logs to protect user privacy and vital business data. Of the few query logs publicly available, none combines size, scope, and diversity. The AQL is the first to do so, enabling research on new retrieval models and (diachronic) search engine analyses. Provided in a privacy-preserving manner, it promotes open research as well as more transparency and accountability in the search industry.