AI-powered academic paper synthesis tool
Project description
LitAI
AI-powered literature review assistant that understands your research questions and automatically finds papers, extracts insights, and synthesizes findings - all through natural conversation.
Why LitAI?
LitAI accelerates your research by turning hours of paper reading into minutes of focused insights:
- Find relevant papers fast: Natural language search across millions of papers
- Extract key insights: AI reads papers and pulls out claims with evidence
- Synthesize findings: Ask questions across multiple papers and get cited answers
- Build your collection: Manage PDFs locally with automatic downloads from arXiv
Perfect for:
- Literature reviews for research papers
- Understanding a new field quickly
- Finding solutions to technical problems
- Discovering contradictions in existing work
- Building comprehensive reading lists
💡 Tip: Use the /questions command to see research unblocking questions organized by phase - from debugging experiments to contextualizing results.
Installation
Prerequisites
- Python 3.11 or higher
- OpenAI API key (get one at platform.openai.com/api-keys)
First install uv, then:
# Install litai globally
uv tool install litai-research
# Alternative: using pipx
pipx install litai-research
Updates
# Get latest stable updates
uv tool upgrade litai-research
# Alternative: using pipx
pipx upgrade litai-research
Development/Pre-release
For the latest features (may have bugs):
# Install pre-release version
uv tool install --prerelease=allow litai-research
# Upgrade to latest pre-release
uv tool upgrade --prerelease=allow litai-research
# Alternative: using pipx
pipx install --prerelease litai-research
pipx upgrade --prerelease litai-research
Configuration
Set your OpenAI API key as an environment variable:
export OPENAI_API_KEY=sk-...
Get your API key from platform.openai.com/api-keys
Note: For best results, use a highly capable model; stronger models are better at understanding complex research questions and at tool calling. LitAI defaults to GPT-5. You can switch to GPT-5-mini for faster, more affordable processing, or use any other model offered by OpenAI.
💡 Tip: You may be eligible for complimentary tokens by sharing data with OpenAI for model improvement. Learn more about the data sharing program.
Advanced Configuration
Configure LitAI using the /config command:
# Show current configuration
/config show
# Change model (defaults to gpt-5)
/config set llm.model gpt-5-mini # Use the faster, more affordable model
# Reset to defaults
/config reset
Configuration is stored in ~/.litai/config.json and persists across sessions.
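As an illustration, a config.json reflecting the settings above might look like the following. The key layout is an assumption inferred from the /config key names shown here (llm.model, tool_approval, display.list_columns); inspect your own ~/.litai/config.json for the authoritative structure.

```json
{
  "llm": {
    "model": "gpt-5"
  },
  "tool_approval": true,
  "display": {
    "list_columns": "title,authors,tags,notes"
  }
}
```

Editing the file directly is equivalent to running the corresponding /config set commands, but remember that changes only take effect after restarting LitAI.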
Getting Started
1. Launch LitAI
litai
2. Set Up Your Research Context (Recommended)
Provide context about your research to get more tailored responses:
/prompt
This opens your default editor with a template where you can describe:
- Research Context: Your area of study and current focus
- Background & Expertise: Your academic/professional background
- Specific Interests: Particular topics, methods, or problems you're investigating
- Preferences: How you prefer information to be presented or synthesized
Example research context:
## Research Context
I'm a PhD student researching efficient transformer architectures for edge deployment. Currently focusing on knowledge distillation and pruning techniques for large language models.
## Background & Expertise
- Strong background in deep learning and PyTorch
- Experience with model compression techniques
- Familiar with transformer architectures and attention mechanisms
## Specific Interests
- Structured pruning methods that maintain model accuracy
- Hardware-aware neural architecture search
- Quantization techniques for transformers
## Preferences
- When synthesizing papers, please highlight actual compression ratios achieved
- I prefer concrete numbers over vague claims
- Interested in both positive and negative results
Why this matters: This context gets automatically included in every AI conversation, helping LitAI understand your expertise level and tailor responses accordingly. Without it, LitAI treats every user the same way.
3. Understanding LitAI's Two Modes
Normal Mode - Build your research context:
normal ▸ "Find papers about attention mechanisms"
normal ▸ "Add the Transformer paper to my collection"
normal ▸ /papers # View your collection
normal ▸ /note 1 # Add personal notes
normal ▸ /tag 1 -a transformers # Organize with tags
Synthesis Mode - Ask questions and analyze:
normal ▸ /synthesize # Enter synthesis mode
synthesis ▸ "What are the key findings across my transformer papers?"
synthesis ▸ "How do attention mechanisms work?"
synthesis ▸ "Compare BERT vs GPT architectures"
synthesis ▸ "Go deeper on the mathematical foundations"
synthesis ▸ exit # Return to normal mode
The Workflow:
- Normal Mode: Search, collect, and organize papers
- Synthesis Mode: Ask research questions and get AI analysis
- Switch freely: /synthesize to enter, exit to return
4. Build Your Research Workflow
For New Research Areas:
- Normal Mode: "Find recent papers about [topic]" + "Add the most cited papers"
- Synthesis Mode: "What are the main approaches in this field?" + follow-up questions
For Literature Reviews:
- Normal Mode: Build collection, add notes (/note), organize with tags (/tag)
- Synthesis Mode: "Compare methodologies across my papers" + deep analysis questions
For Keeping Current:
- Normal Mode: /questions → See research-unblocking prompts by phase
- Synthesis Mode: Regular Q&A sessions to connect new papers to existing work
Key Insight: Normal mode = building context, Synthesis mode = asking questions
Features
🔍 Paper Discovery & Management
- Smart Search: Natural language queries across millions of papers via Semantic Scholar
- Intelligent Collection: Automatic duplicate detection and citation key generation
- PDF Integration: Automatic arXiv downloads with local storage
- Flexible Organization: Tags, notes, and configurable paper list views
- Import Support: BibTeX file import for existing libraries
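For reference, BibTeX import consumes standard entries like the one below (a minimal illustrative example; run /help to see the exact import command):

```bibtex
@inproceedings{vaswani2017attention,
  title     = {Attention Is All You Need},
  author    = {Vaswani, Ashish and Shazeer, Noam and others},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2017}
}
```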
🤖 AI-Powered Analysis
- Key Point Extraction: Automatically extract main claims with evidence
- Deep Synthesis: Interactive synthesis mode for collaborative exploration
- Context-Aware: Multiple context depths (abstracts, notes, key points, full text)
- Agent Notes: AI-generated insights and summaries for papers
- Research Context: Personal research profile for tailored responses
💬 Interactive Experience
- Natural Language Interface: Chat naturally about your research
- Command Autocomplete: Tab completion for all commands and file paths
- Vi Mode Support: Optional vi-style keybindings
- Session Management: Persistent conversations with paper selections
- Research Questions: Built-in prompts to unblock research at any phase
⚙️ Advanced Features
- Configurable Display: Customize paper list columns and layout
- Tool Approval System: Control AI tool usage in all modes (queries and synthesis)
- Comprehensive Logging: Debug and track all operations
- Multi-LLM Support: OpenAI and Anthropic models with auto-detection
Command Reference
Essential Commands
/find <query> # Search for papers
/add <numbers> # Add papers from search results
/papers [page] # List your collection (with pagination)
/synthesize # Enter interactive synthesis mode
/note <number> # Manage paper notes
/tag <number> -a <tags> # Add tags to papers
/prompt # Set up your research context (recommended)
/questions # Show research-unblocking prompts
/help # Show all commands
Papers Command Options
/papers --tags # Show all tags with counts
/papers --notes # Show papers with notes
/papers 2 # Show page 2 of collection
Research Context Commands
/prompt # Edit your research context (opens in editor)
/prompt view # Display your current research context
/prompt append "text" # Add text to your existing context
/prompt clear # Delete your research context
Configuration
/config show # Display current settings
/config set llm.model gpt-5-mini
/config set tool_approval false # Disable approval prompts (all modes)
/config set display.list_columns title,authors,tags,notes
Note: Configuration changes require restarting LitAI to take effect
Normal Mode vs Synthesis Mode
Normal Mode - Context building and management:
/find <query> # Search for papers
/add <numbers> # Add papers from search results
/papers [page] # List your collection
/note <number> # Add your personal notes
/tag <number> -a <tags> # Add tags to papers
/synthesize # Enter synthesis mode
Synthesis Mode - Question answering and analysis:
synthesis ▸ "What are the key insights from paper X?"
synthesis ▸ "How do these approaches compare?"
synthesis ▸ "Go deeper on the methodology"
synthesis ▸ "Add AI notes to paper 1" # Ask AI to generate analysis notes
synthesis ▸ /papers # Show full collection
synthesis ▸ /selected # Show papers in current session
synthesis ▸ /context key_points # Change context depth
synthesis ▸ /clear # Clear session (keep selected papers)
synthesis ▸ exit # Return to normal mode
Notes System
- Personal Notes (/note in normal mode): Your own thoughts and observations
- AI Notes (request in synthesis mode): Ask the AI to generate insights and summaries for papers
Data Storage
LitAI stores all data locally in ~/.litai/:
- litai.db - SQLite database with paper metadata and extractions
- pdfs/ - Downloaded PDF files
- logs/litai.log - Application logs for debugging
- config.json - User configuration
- user_prompt.txt - Personal research context
FAQ
Why do paper searches sometimes fail?
Semantic Scholar's public API can experience high load, leading to search failures. If you encounter frequent issues:
- Wait a few minutes and try again
- Consider requesting a free API key for higher rate limits: Semantic Scholar API Key Form
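If you call the Semantic Scholar API from your own scripts and hit the same overload errors, retrying with exponential backoff is a reasonable pattern. The sketch below is not LitAI's implementation; it only assumes the public Graph API search route and standard-library HTTP calls.

```python
import time
import urllib.error
import urllib.parse
import urllib.request

# Public Semantic Scholar Graph API search endpoint
SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def backoff_delays(attempts, base=1.0, cap=30.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ..., capped at `cap` seconds."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]

def search_with_retry(query, attempts=4):
    """Query the search endpoint, retrying only on overload responses (429/5xx)."""
    url = f"{SEARCH_URL}?query={urllib.parse.quote(query)}"
    last_error = None
    for delay in backoff_delays(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as exc:
            if exc.code not in (429, 500, 502, 503):
                raise  # a genuine client error; retrying won't help
            last_error = exc
            time.sleep(delay)  # wait before the next attempt
    raise RuntimeError("Semantic Scholar API still unavailable after retries") from last_error
```

With a free API key (sent in the x-api-key header), the higher rate limits make these retries largely unnecessary.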
License
This project is open source and available under the MIT License.
Acknowledgments
- Built with Semantic Scholar API
- Powered by OpenAI/Anthropic language models
Support
- Report issues: GitHub Issues
- Logs for debugging:
~/.litai/logs/litai.log
Project details
Release history
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file litai_research-0.1.3.tar.gz.
File metadata
- Download URL: litai_research-0.1.3.tar.gz
- Upload date:
- Size: 111.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c438785372ce953f9421a883d73d89f96bb984bc373339f2386f6021e4385082 |
| MD5 | 67d1575ca37339680bda3493cd8018c2 |
| BLAKE2b-256 | b6c685931b3816e1bcfdf233bbcb39290e808fead300ee9f89ff931bdaedd9d4 |
File details
Details for the file litai_research-0.1.3-py3-none-any.whl.
File metadata
- Download URL: litai_research-0.1.3-py3-none-any.whl
- Upload date:
- Size: 87.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 543bb18b84bb26580e250a52f74ab66e212f3a19b046656b51a52ae7eaac4918 |
| MD5 | 95be7e4f7ff1712515d98c800d34640c |
| BLAKE2b-256 | 3226feca55705f4f1fb7d5bca7c93ac4c36b605589c16da3a982a12d80f9a97b |