Arxiv Terminal
An application for summarizing arXiv results within the terminal.
Arxiv Terminal is a command-line interface (CLI) tool for fetching, searching, and displaying papers from the arXiv preprint repository. The tool allows you to fetch papers from specified categories, search the fetched papers, and display their statistics.
Features
- Fetch paper abstracts from specified categories and save them in a local SQLite database.
- Show fetched papers and interactively open them for more detailed abstracts.
- Search fetched papers based on a query (currently supports pattern and LSA semantic search).
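As a rough sketch of what the local SQLite store could look like (the table and column names here are illustrative assumptions, not the tool's actual schema):

```python
import sqlite3

def init_db(path=":memory:"):
    """Open (or create) the local paper database. Schema is hypothetical."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS papers (
               arxiv_id TEXT PRIMARY KEY,
               title TEXT,
               abstract TEXT,
               category TEXT,
               published TEXT
           )"""
    )
    return conn

def save_paper(conn, paper):
    """Insert or update one fetched paper (dict keyed by column name)."""
    conn.execute(
        "INSERT OR REPLACE INTO papers "
        "VALUES (:arxiv_id, :title, :abstract, :category, :published)",
        paper,
    )
    conn.commit()
```

Keying on the arXiv identifier with `INSERT OR REPLACE` makes repeated fetches idempotent, so re-running `arxiv fetch` over an overlapping date range would not create duplicates.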
Contributors
A special call-out to ChatGPT (v4), which helped write and modify various code and documentation in this repository.
Installation
pip install arxivterminal
For local builds, you will need Poetry installed (see the Poetry user guide). After installing Poetry, you can clone and build this repo:
poetry install
poetry shell
arxiv <command>
# Build the wheels
poetry build
Usage
The CLI is invoked using the arxiv command, followed by one of the available commands:
- arxiv fetch [--num-days] [--categories]: Fetch papers from the specified categories and store them in the database.
- arxiv delete_all: Delete all papers from the database.
- arxiv show [--days-ago]: Show papers fetched from the specified number of days ago.
- arxiv stats: Show statistics of the papers stored in the database.
- arxiv search <query>: Search papers in the database based on a query.
Examples
Fetch papers from the "cs.AI" and "cs.CL" categories from the last 7 days:
arxiv fetch --num-days 7 --categories cs.AI,cs.CL
Delete all papers from the database:
arxiv delete_all
Show papers fetched in the last 7 days:
arxiv show --days-ago 7
Display statistics of the papers stored in the database:
arxiv stats
Show papers containing the phrase "deep learning":
arxiv search "deep learning"
Show papers containing the phrase "deep learning" using LSA matching:
arxiv search -e "deep learning"
LSA Search Model
Note: This approach is likely to be replaced in the future by a more robust methodology.
The LSA search model is largely adapted from the implementation featured in the scikit-learn user guide example. When used, the model is trained over the entire corpus of abstracts in the user's local database; it is persisted in the app cache folder and automatically reloaded on subsequent runs. During a search query, all abstracts from the database are encoded as n-dimensional vectors using the trained LSA model. The query is encoded the same way, and cosine similarity against each abstract vector is used to rank the top results.
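The pipeline above can be sketched roughly as follows. This is a minimal NumPy illustration of LSA retrieval (raw term counts plus a truncated SVD), not the tool's actual implementation, which follows the scikit-learn example:

```python
import numpy as np
from collections import Counter

def lsa_vectors(docs, k=2):
    """Project documents into a k-dimensional latent space via truncated SVD."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    idx = {w: i for i, w in enumerate(vocab)}
    # Term-frequency matrix: one row per document, one column per term.
    tf = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w, c in Counter(d.lower().split()).items():
            tf[r, idx[w]] = c
    # Keep only the top-k right singular vectors (the "latent topics").
    _, _, vt = np.linalg.svd(tf, full_matrices=False)
    return tf @ vt[:k].T, vt[:k], idx

def search(query, doc_vecs, vt_k, idx):
    """Rank documents by cosine similarity to the query in latent space."""
    q = np.zeros(vt_k.shape[1])
    for w in query.lower().split():
        if w in idx:
            q[idx[w]] += 1
    qv = vt_k @ q  # project the query with the same SVD basis
    sims = doc_vecs @ qv / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(qv) + 1e-12
    )
    return np.argsort(-sims)  # indices of documents, best match first
```

Because both abstracts and the query are projected with the same basis, abstracts sharing no latent structure with the query score near zero and fall to the bottom of the ranking.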
You may want to force a refresh of the underlying model after loading new papers. This can be done with the -f flag when performing a search:
arxiv search -e -f "deep learning"
Hashes for arxivterminal-0.2.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 17b60d2ceb0c8c2929ea69d296bad17814df7dee2809fb204d7005d3419a4413
MD5 | c37673090cf59e7ba89df285469bd394
BLAKE2b-256 | 7c48940ca7b6a047a57961759b5d0a452b0c403ffc008580c03dc9fa943accbd