AI Research Assistant
An intelligent, LLM-powered research assistant built with LangGraph that conducts comprehensive research through multi-agent collaboration, structured interviews, and automated report generation.
Table of Contents
- Overview
- Key Features
- Architecture
- Project Structure
- Prerequisites
- Installation
- Configuration
- Quick Start
- Usage Examples
- Web Interface
- Testing
- CI/CD
- Development
- Contributing
- License
- Acknowledgments
- Support
Overview
The AI Research Assistant is a sophisticated research automation system that leverages large language models (LLMs) and multi-agent orchestration to conduct in-depth research on any topic. It simulates a team of expert analysts who collaboratively investigate a subject, conduct interviews, and produce comprehensive research reports.
What It Does
- Generates Expert Analysts: Creates a diverse team of AI analysts with specific expertise relevant to your research topic
- Conducts Structured Interviews: Each analyst performs deep-dive interviews using web search and knowledge retrieval
- Synthesizes Information: Aggregates insights from multiple perspectives into a cohesive narrative
- Produces Professional Reports: Generates well-structured, citation-rich research documents
Use Cases
- Academic Research: Literature reviews, topic exploration, comparative analysis
- Market Research: Industry analysis, competitive intelligence, trend identification
- Policy Analysis: Multi-stakeholder perspectives, impact assessment
- Technology Assessment: Emerging technologies, feasibility studies
- Knowledge Synthesis: Cross-domain insights, expert opinion compilation
Key Features
Multi-Agent Research System
- Dynamic Analyst Generation: Creates specialized analysts tailored to your research question
- Parallel Interview Execution: Conducts multiple research streams simultaneously for efficiency
- Human-in-the-Loop: Review and provide feedback on generated analysts before research begins
- Flexible Configuration: Adjust analyst count, interview depth, and research parameters
Advanced Search Integration
- Web Search: Tavily API integration for current information
- Wikipedia Integration: Encyclopedic knowledge base access
- Configurable Search Strategies: From minimal to comprehensive search modes
- Result Caching: Avoid redundant API calls and reduce costs
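A result cache can be as thin as memoizing the search call. The sketch below is purely illustrative and is not the project's actual implementation; function and parameter names are invented for the example:

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def cached_search(query: str, max_results: int = 5) -> tuple:
    """Hypothetical wrapper around a real Tavily/Wikipedia call.

    Returns a tuple (hashable) so identical queries hit the cache
    instead of spending another API call.
    """
    return tuple(f"result {i} for {query!r}" for i in range(max_results))

first = cached_search("solid-state batteries", 3)
second = cached_search("solid-state batteries", 3)  # served from cache
```

Because `lru_cache` keys on the arguments, repeating a query within a session costs nothing; a disable switch like the project's `search=no_cache` config would simply bypass the wrapper.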
Intelligent Report Generation
- Structured Output: Introduction, methodology, findings, analysis, and conclusions
- Citation Management: Automatic source tracking and reference formatting
- Markdown Export: Professional, readable report format
- Section-wise Generation: Parallel generation of report sections for speed
Production-Ready Architecture
- Type Safety: Full mypy strict mode type checking
- Comprehensive Testing: 90%+ test coverage with unit and integration tests
- Structured Logging: Detailed observability with structlog
- Error Handling: Robust retry mechanisms and circuit breakers
- Configuration Management: Hydra-based hierarchical configuration system
User-Friendly Interface
- Gradio Web App: Beautiful, interactive web interface
- Real-time Progress: Live updates on research progress
- Analyst Review: Interactive analyst approval workflow
- Report Download: One-click download of final reports
Security & Quality
- Automated CI/CD: GitHub Actions workflows for testing and deployment
- Security Scanning: 6 security tools for vulnerability detection
- Dependency Management: Automated updates via Dependabot
- Code Quality: Enforced via black, isort, flake8, and ruff
Architecture
System Design
The Research Assistant follows a multi-agent graph-based architecture built on LangGraph:
┌─────────────────────────────────────┐
│         Research Graph Flow         │
└─────────────────────────────────────┘

                 START
                   ↓
            create_analysts
                   ↓
        human_feedback (optional)
                   ↓
        ┌──────────┴──────────┐
        ↓                     ↓
conduct_interview      conduct_interview
   (Analyst 1)            (Analyst 2)
        └──────────┬──────────┘
                   ↓
   ┌───────────────┼───────────────┐
   ↓               ↓               ↓
write_introduction  write_sections  write_conclusion
   └───────────────┼───────────────┘
                   ↓
            finalize_report
                   ↓
                  END
Interview Graph (Per Analyst)
Each analyst executes a structured interview process:
START → ask_question
            ↓
      ┌─────┴─────┐
      ↓           ↓
  search_web   search_wikipedia   (parallel)
      └─────┬─────┘
            ↓
     answer_question
            ↓
route (continue OR complete)
            ↓
     save_interview
            ↓
           END
Core Components
1. State Management
- InterviewState: Manages individual analyst interview sessions
- ResearchGraphState: Coordinates overall research workflow
- Type-Safe: Pydantic models with runtime validation
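As an illustration, an analyst record validated with Pydantic might look like this. The field names are assumptions for the sketch; the real models live in `core/schemas.py`:

```python
from pydantic import BaseModel, ValidationError

class Analyst(BaseModel):
    name: str
    role: str
    affiliation: str
    description: str

analyst = Analyst(
    name="Dr. Ada Chen",
    role="Education Researcher",
    affiliation="Policy Institute",
    description="Focuses on LLM adoption in classrooms",
)

# Missing required fields are rejected at construction time,
# which is what "runtime validation" buys over a plain dict
try:
    Analyst(name="Incomplete")
except ValidationError:
    print("incomplete analyst rejected")
```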
2. Node Functions
- analyst_nodes.py: Analyst generation and management
- interview_nodes.py: Question asking, answering, and routing
- report_nodes.py: Report section generation and finalization
3. Graph Builders
- interview_graph.py: Individual analyst interview orchestration
- research_graph.py: Multi-analyst research coordination
4. Tools & Utilities
- search.py: Web and Wikipedia search integration
- logging.py: Structured logging with context
- retry.py: Retry mechanisms and circuit breakers
- formatting.py: Report formatting utilities
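The retry utility's core idea can be sketched as exponential backoff with jitter. This is an illustration only; `retry.py` and its circuit breaker may be implemented differently:

```python
import random
import time

def retry_with_backoff(fn, max_attempts: int = 3, base_delay: float = 0.5):
    """Call fn, retrying with exponential backoff and jitter on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: let the caller see the error
            # 0.5s, 1s, 2s, ... plus jitter to avoid synchronized retries
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

Wrapping flaky calls (LLM APIs, web search) this way smooths over transient failures without hiding persistent ones.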
Project Structure
research-assistant/
├── .github/                        # GitHub configuration
│   ├── workflows/                  # CI/CD workflows
│   │   ├── tests.yml               # Testing pipeline
│   │   ├── security.yml            # Security scanning
│   │   ├── release.yml             # Release automation
│   │   └── pr.yml                  # PR validation
│   ├── dependabot.yml              # Dependency updates
│   ├── SETUP_GUIDE.md              # CI/CD setup guide
│   └── CI_CD_DOCUMENTATION.md      # Full CI/CD docs
│
├── app/                            # Web application
│   ├── gradio_app.py               # Gradio web interface
│   ├── launch.sh                   # Unix launch script
│   └── requirements.txt            # UI dependencies
│
├── src/research_assistant/         # Main source code
│   ├── config/                     # Configuration management
│   │   ├── config.py               # Config loader
│   │   ├── default.yaml            # Default settings
│   │   ├── llm/                    # LLM configs
│   │   ├── search/                 # Search configs
│   │   ├── experiment/             # Experiment presets
│   │   └── topic/                  # Topic-specific configs
│   │
│   ├── core/                       # Core data structures
│   │   ├── schemas.py              # Pydantic models
│   │   └── state.py                # State definitions
│   │
│   ├── nodes/                      # Graph node functions
│   │   ├── analyst_nodes.py        # Analyst operations
│   │   ├── interview_nodes.py      # Interview logic
│   │   └── report_nodes.py         # Report generation
│   │
│   ├── graphs/                     # Graph builders
│   │   ├── interview_graph.py      # Single interview
│   │   └── research_graph.py       # Full research
│   │
│   ├── tools/                      # External integrations
│   │   └── search.py               # Search tools
│   │
│   ├── prompts/                    # LLM prompts
│   │   ├── analyst_prompts.py      # Analyst generation
│   │   ├── interview_prompts.py    # Interview questions
│   │   └── report_prompts.py       # Report writing
│   │
│   ├── types/                      # Type definitions
│   │   ├── protocols.py            # Type protocols
│   │   └── validation.py           # Runtime validation
│   │
│   └── utils/                      # Utility functions
│       ├── logging.py              # Structured logging
│       ├── retry.py                # Retry logic
│       ├── exceptions.py           # Custom exceptions
│       └── formatting.py           # Formatters
│
├── tests/                          # Test suite
│   ├── conftest.py                 # Shared fixtures
│   ├── unit/                       # Unit tests
│   │   ├── test_schemas.py
│   │   └── test_nodes.py
│   └── integration/                # Integration tests
│       └── test_graph_execution.py
│
├── pyproject.toml                  # Project metadata
├── environment.yaml                # Conda environment
├── Makefile                        # Common commands
├── pytest.ini                      # Pytest configuration
├── coverage.toml                   # Coverage settings
├── TESTING_GUIDE.md                # Testing documentation
├── TYPE_CHECKING_GUIDE.md          # Type checking guide
└── README.md                       # This file
Prerequisites
System Requirements
- Python: 3.10, 3.11, or 3.12
- Operating System: Linux, macOS, or Windows
- Memory: 4GB RAM minimum, 8GB recommended
- Storage: 1GB free space
Accounts & Authentication
You'll need API keys from the following services:
Required
- OpenAI API or Anthropic API: For LLM access
- Tavily API: For web search
  - Sign up at Tavily
  - Get an API key from the dashboard
  - Free tier: 1,000 searches/month
Optional
- LangSmith: For LLM tracing and debugging
  - Sign up at LangSmith
  - Free tier available
Python Dependencies
All dependencies are managed via pyproject.toml:
- LangChain: LLM orchestration framework
- LangGraph: Graph-based workflow engine
- Pydantic: Data validation and settings
- Gradio: Web interface framework
- Hydra: Configuration management
- structlog: Structured logging
Installation
Option 1: Quick Install (Recommended)
# Clone the repository
git clone https://github.com/yourusername/research-assistant.git
cd research-assistant
# Install with pip
pip install -e ".[dev]"
# Or with uv (faster)
uv pip install -e ".[dev]"
Option 2: Conda Environment
# Create environment from file
conda env create -f environment.yaml
# Activate environment
conda activate research-assistant
# Install package
pip install -e ".[dev]"
Option 3: Virtual Environment
# Create virtual environment
python -m venv venv
# Activate (Linux/Mac)
source venv/bin/activate
# Activate (Windows)
venv\Scripts\activate
# Install dependencies
pip install -e ".[dev]"
Verify Installation
# Check Python version
python --version # Should be 3.10+
# Verify installation
python -c "from src.research_assistant import __version__; print(__version__)"
# Run tests
pytest tests/ -v
# Check type safety
mypy src/
Configuration
Environment Variables
Create a .env file in the project root:
# Copy the example file
cp .env.example .env
# Edit with your API keys
nano .env
Required variables:
# LLM Provider (choose one)
OPENAI_API_KEY=sk-... # OpenAI API key
ANTHROPIC_API_KEY=sk-ant-... # Anthropic API key
# Search Provider
TAVILY_API_KEY=tvly-... # Tavily API key
# Optional: LangSmith Tracing
LANGCHAIN_TRACING_V2=true # Enable tracing
LANGCHAIN_API_KEY=ls__... # LangSmith API key
LANGCHAIN_PROJECT=research-assistant # Project name
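Before a run, it can help to verify the keys are actually set. The check below is a small illustration, not part of the project's API:

```python
import os

REQUIRED = ["TAVILY_API_KEY"]
LLM_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]

def missing_env(env=None) -> list:
    """Return a list of missing credentials; empty means ready to run."""
    env = os.environ if env is None else env
    missing = [key for key in REQUIRED if not env.get(key)]
    # At least one LLM provider key must be present
    if not any(env.get(key) for key in LLM_KEYS):
        missing.append(" or ".join(LLM_KEYS))
    return missing
```

Running `missing_env()` at startup gives a clear error message instead of a mid-research API failure.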
Configuration System
The project uses Hydra for hierarchical configuration management:
Base Configuration (config/default.yaml)
# Default research settings
max_analysts: 3
max_interview_turns: 2

report_structure:
  include_methodology: true
  include_citations: true

llm:
  model: gpt-4-turbo-preview
  temperature: 0.7
  max_tokens: 4000

search:
  max_results: 5
  include_domains: []
  exclude_domains: []
LLM Configurations (config/llm/)
- openai.yaml: OpenAI GPT-4 settings
- openai_gpt4_turbo.yaml: GPT-4 Turbo optimized
- anthropic.yaml: Claude 3 settings
- local.yaml: Local LLM configuration
- cheap.yaml: Budget-friendly options
Search Configurations (config/search/)
- default.yaml: Balanced search strategy
- comprehensive.yaml: Maximum depth and breadth
- minimal.yaml: Quick, surface-level searches
- no_cache.yaml: Disable caching for fresh results
Experiment Presets (config/experiment/)
- quick_test.yaml: Fast testing (1 analyst, 1 turn)
- comprehensive.yaml: Deep research (5 analysts, 3 turns)
- budget_friendly.yaml: Cost-optimized settings
- production.yaml: Production-ready defaults
- local_llm.yaml: Local LLM configuration
Usage with Overrides
# Use specific LLM
python -m research_assistant llm=anthropic
# Use comprehensive search
python -m research_assistant search=comprehensive
# Combine multiple configs
python -m research_assistant llm=cheap search=minimal experiment=quick_test
# Override specific values
python -m research_assistant max_analysts=5 max_interview_turns=3
# Topic-specific preset
python -m research_assistant topic=ai_safety
Quick Start
Basic Usage
from src.research_assistant.graphs.research_graph import run_research

# Define your research topic
topic = "The impact of large language models on education"

# Run research with defaults
final_report = run_research(
    topic=topic,
    max_analysts=3,
    max_interview_turns=2,
)

# Access the report
print(final_report["final_report"])
With Configuration
from src.research_assistant.config.config import load_config
from src.research_assistant.graphs.research_graph import create_research_system

# Load configuration
cfg = load_config(overrides=["llm=anthropic", "search=comprehensive"])

# Create research system
research_system = create_research_system(cfg)

# Execute research
result = research_system.invoke({
    "topic": "Quantum computing applications in drug discovery",
    "max_analysts": 4,
    "max_interview_turns": 3,
})
Streaming Updates
from src.research_assistant.graphs.research_graph import stream_research

# Stream research progress
for update in stream_research(
    topic="Renewable energy storage solutions",
    max_analysts=3,
):
    stage = update.get("stage", "unknown")
    progress = update.get("progress", 0)
    print(f"Stage: {stage} | Progress: {progress}%")
With Human Feedback
from src.research_assistant.graphs.research_graph import (
    create_research_system,
    continue_research,
)

# Create system with checkpointing
research_system = create_research_system(cfg, enable_interrupt=True)

# Start research (will interrupt after analyst generation)
result = research_system.invoke({"topic": "AI ethics"})

# Review analysts
analysts = result["analysts"]
for analyst in analysts:
    print(f"{analyst.name}: {analyst.role} - {analyst.affiliation}")

# Provide feedback
feedback = "Approve"  # or provide specific feedback

# Continue research (checkpointer is the saver the system was
# configured with when checkpointing was enabled)
final_result = continue_research(
    research_system,
    feedback=feedback,
    checkpointer=checkpointer,
)
Usage Examples
Example 1: Quick Research
# Using the CLI
python -m research_assistant \
--topic "Blockchain scalability solutions" \
--analysts 2 \
--turns 2 \
--output report.md
Example 2: Comprehensive Analysis
from src.research_assistant import ResearchAssistant

# Initialize assistant
assistant = ResearchAssistant(
    llm_config="gpt-4-turbo",
    search_config="comprehensive",
)

# Conduct in-depth research
report = assistant.research(
    topic="Climate change adaptation strategies in coastal cities",
    num_analysts=5,
    interview_depth=3,
    enable_human_review=True,
)

# Export report
assistant.export_report(report, "climate_adaptation_report.pdf")
Example 3: Academic Literature Review
from src.research_assistant.presets import academic_research

# Use academic preset
report = academic_research(
    research_question="What are the current limitations of transformer models?",
    focus_areas=["architecture", "training", "inference"],
    include_citations=True,
    citation_style="APA",
)
Example 4: Market Research
from src.research_assistant.presets import market_research

# Analyze market trends
report = market_research(
    industry="Electric Vehicles",
    regions=["North America", "Europe", "Asia"],
    timeframe="2020-2024",
    include_competitors=True,
)
Web Interface
Launch the Gradio App
# Simple launch
python app/gradio_app.py
# Or use launch script (Linux/Mac)
./app/launch.sh
# Windows
app\launch.bat
# With options
python app/gradio_app.py --port 8080 --share
Interface Features
The web interface provides:
1. Research Configuration
   - Topic input
   - Analyst count (1-10)
   - Interview depth (1-5)
   - Configuration presets
2. Analyst Review
   - View generated analysts
   - Provide feedback
   - Approve or regenerate
3. Real-time Progress
   - Progress bar (0-100%)
   - Stage descriptions
   - Per-analyst tracking
   - Duration estimates
4. Results Display
   - Formatted report preview
   - Download as Markdown
   - Share link
5. Help & Tips
   - Step-by-step guide
   - Best practices
   - Example topics
Access the Interface
Open your browser and navigate to:
http://localhost:7860
Testing
Run Tests
# Run all tests
pytest tests/
# Run with coverage
pytest tests/ --cov=src --cov-report=html
# Run specific test file
pytest tests/unit/test_schemas.py -v
# Run with markers
pytest tests/ -m "not slow"
# Run integration tests only
pytest tests/integration/ -v
Test Structure
tests/
├── conftest.py                   # Shared fixtures
├── unit/                         # Unit tests (fast, isolated)
│   ├── test_schemas.py           # Pydantic model tests
│   └── test_nodes.py             # Node function tests
└── integration/                  # Integration tests (slower)
    └── test_graph_execution.py   # End-to-end workflow tests
Coverage Goals
- Target: 80%+ coverage
- Current: 90%+ coverage
- Minimum: 70% for new code
View coverage report:
pytest tests/ --cov=src --cov-report=html
open htmlcov/index.html
Test Fixtures
Common fixtures available in conftest.py:
- mock_llm: Mocked LLM responses
- mock_search: Mocked search results
- sample_analyst: Pre-configured analyst
- sample_state: Pre-configured state
- temp_config: Temporary config file
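A mocked-LLM fixture along these lines keeps unit tests fast and offline. This is a sketch of the pattern, not the project's actual conftest.py:

```python
import pytest
from unittest.mock import MagicMock

@pytest.fixture
def mock_llm():
    """LLM stand-in that returns a canned response instead of calling an API."""
    llm = MagicMock()
    llm.invoke.return_value = "mocked answer"
    return llm

def test_node_sees_llm_interface(mock_llm):
    # Any node handed this fixture sees a normal .invoke() interface
    assert mock_llm.invoke("any prompt") == "mocked answer"
```

Tests receive the mock by naming the fixture as a parameter, so node logic can be exercised without API keys or network access.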
See TESTING_GUIDE.md for comprehensive testing documentation.
CI/CD
Automated Workflows
GitHub Actions workflows automatically:
Tests (tests.yml)
- Run linting (flake8, black, isort, ruff)
- Execute tests on Python 3.10, 3.11, and 3.12
- Generate coverage reports
- Type check with mypy
Security (security.yml)
- Scan for vulnerabilities (Safety, pip-audit)
- Detect secrets (TruffleHog)
- Analyze code security (Bandit, Semgrep, CodeQL)
- Check license compliance
Release (release.yml)
- Create GitHub releases
- Publish to PyPI
- Build Docker images
- Generate changelogs
PR Validation (pr.yml)
- Auto-label PRs
- Post coverage comments
- Enforce conventional commits
- Review dependencies
Local CI Commands
# Run all checks locally
make test-all
# Individual checks
make lint
make format
make type-check
make test
make coverage
# Fix issues
make format-fix
CI/CD Documentation
- Setup Guide: 15-minute setup
- Full Documentation: Complete reference
- Workflow Diagrams: Visual overview
Development
Setting Up Development Environment
# Clone repository
git clone https://github.com/yourusername/research-assistant.git
cd research-assistant
# Install development dependencies
pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
# Verify setup
make verify
Development Workflow
1. Create a branch

   git checkout -b feat/your-feature

2. Make changes
   - Write code following the style guide
   - Add tests for new features
   - Update documentation

3. Run checks

   make lint
   make test
   make type-check

4. Commit changes

   git commit -m "feat: add new feature"

   Follow Conventional Commits.

5. Push and create a PR

   git push origin feat/your-feature
Code Style
- Formatting: black (line length: 100)
- Import sorting: isort
- Linting: flake8, ruff
- Type checking: mypy (strict mode)
- Docstrings: Google style
Run formatters:
make format-fix
Adding New Features
1. Add a New Node
# src/research_assistant/nodes/your_nodes.py
from src.research_assistant.core.state import YourState

def your_node(state: YourState) -> dict:
    """Your node description.

    Args:
        state: The current state.

    Returns:
        Updated state dict.
    """
    # Your logic here
    return {"key": "value"}
2. Update Graph
# src/research_assistant/graphs/your_graph.py
from langgraph.graph import StateGraph

from src.research_assistant.core.state import YourState
from src.research_assistant.nodes.your_nodes import your_node

def build_your_graph():
    """Build and compile the graph for your workflow."""
    graph = StateGraph(YourState)
    graph.add_node("your_node", your_node)
    # Add edges...
    return graph.compile()
3. Add Tests
# tests/unit/test_your_nodes.py
from src.research_assistant.nodes.your_nodes import your_node

def test_your_node(mock_llm):
    state = {"initial": "state"}
    result = your_node(state)
    assert result["key"] == "expected"
4. Update Documentation
- Add docstrings
- Update README if needed
- Add usage examples
Debugging
Enable Debug Logging
import os
os.environ["LOG_LEVEL"] = "DEBUG"
from src.research_assistant.utils.logging import get_logger
logger = get_logger(__name__)
LangSmith Tracing
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=your_key
export LANGCHAIN_PROJECT=your_project
View traces at smith.langchain.com
Graph Visualization
from src.research_assistant.graphs.research_graph import create_research_system
system = create_research_system(cfg)
system.get_graph().draw_mermaid_png(output_file="graph.png")
Contributing
We welcome contributions! Here's how you can help:
Types of Contributions
- Bug reports: Open an issue with details
- Feature requests: Describe your idea in an issue
- Documentation: Improve or fix documentation
- Code: Submit pull requests
- Tests: Add or improve test coverage
- Translations: Help translate the interface
Contribution Process
- Fork the repository
- Create a feature branch
git checkout -b feat/amazing-feature
- Make your changes
- Follow code style guidelines
- Add tests for new features
- Update documentation
- Run checks
make test-all - Commit using conventional commits
git commit -m "feat: add amazing feature"
- Push to your fork
git push origin feat/amazing-feature
- Open a Pull Request
- Describe your changes
- Reference any related issues
- Wait for review
Commit Message Convention
Follow Conventional Commits:
- feat: New feature
- fix: Bug fix
- docs: Documentation changes
- style: Code style changes
- refactor: Code refactoring
- test: Test additions/changes
- chore: Maintenance tasks
- ci: CI/CD changes
Examples:
feat(search): add DuckDuckGo search provider
fix(report): correct citation formatting
docs: update installation instructions
test: add integration tests for interview graph
Code Review Process
- All PRs require at least one approval
- CI checks must pass
- Code coverage should not decrease
- Follow style guidelines
Community Guidelines
- Be respectful and inclusive
- Provide constructive feedback
- Help others learn and grow
- Follow the Code of Conduct
License
This project is licensed under the MIT License - see the LICENSE file for details.
MIT License Summary
Permitted:
- Commercial use
- Modification
- Distribution
- Private use
Limitations:
- No liability
- No warranty
Conditions:
- License and copyright notice must be included
Acknowledgments
Frameworks & Libraries
- LangChain: LLM application framework
- LangGraph: Graph-based workflow orchestration
- Pydantic: Data validation library
- Gradio: Web interface framework
- Hydra: Configuration management
Services
- OpenAI / Anthropic: LLM providers
- Tavily: Web search API
- LangSmith: Tracing and debugging
Inspiration
- Research assistant patterns from academic literature
- Multi-agent system designs from AI research
- LangChain community examples and tutorials
Contributors
Thanks to all contributors who have helped improve this project!
Support
Documentation
- Testing Guide: Comprehensive testing documentation
- Type Checking Guide: Type safety documentation
- CI/CD Setup: CI/CD configuration guide
- API Reference: API documentation (coming soon)
Getting Help
- Bug reports: Open an issue
- Feature requests: Request a feature
- Questions: GitHub Discussions
- Email: your.email@example.com
Community
- Discord: Join our server
- Twitter: @your_handle
- YouTube: Tutorial videos
FAQ
Q: How much does it cost to run?
A: Costs depend on your LLM provider and usage; budget-friendly configs are available.

Q: Can I use local LLMs?
A: Yes! Use the llm=local configuration.

Q: How long does research take?
A: Roughly 2-5 minutes for 2-3 analysts, 5-8 minutes for 5 analysts.

Q: Can I customize the report format?
A: Yes, modify the report templates in src/research_assistant/prompts/report_prompts.py.

Q: Is there a hosted version?
A: Not yet, but you can deploy it yourself using Docker or cloud platforms.
Status & Roadmap
Current Status
Active Development (Version 0.1.0)
Note: This project is in active development. Features, APIs, and documentation are subject to change. For questions, issues, or contributions, please open an issue or pull request on GitHub.
Recent Updates
- Multi-agent research system
- Gradio web interface
- Comprehensive test suite
- CI/CD pipeline
- Type safety with mypy
- Structured logging
Roadmap
v0.2.0 (Next Release)
- PDF export support
- Report templates
- Cost tracking
- Performance metrics dashboard
v0.3.0
- Database integration for history
- RAG enhancement
- Multi-language support
- Advanced citation management
v1.0.0
- Production-ready release
- Docker deployment
- API server
- Cloud hosting guide
Contributing to Roadmap
Have ideas? Open a feature request!
Ready to Get Started?
1. Install the package

   pip install -e ".[dev]"

2. Set up your environment

   cp .env.example .env  # Add your API keys to .env

3. Launch the web interface

   python app/gradio_app.py

4. Start researching!
Built with ❤️ using LangChain and LangGraph