
A package for creating ML research assistant models through paper dataset creation and model fine-tuning

Project description

PaperTuner

PaperTuner is a Python package for creating research assistant models by processing academic papers and fine-tuning language models to provide methodology guidance and research approaches.

Features

  • Automated extraction of research papers from arXiv
  • Section extraction to identify problem statements, methodologies, and results
  • Generation of high-quality question-answer pairs for research methodology
  • Fine-tuning of language models with GRPO (Group Relative Policy Optimization)
  • Integration with Hugging Face for dataset and model sharing

Installation

pip install papertuner

Basic Usage

As a Command-Line Tool

1. Create a dataset from research papers

# Set up your environment variables
export GEMINI_API_KEY="your-api-key"
export HF_TOKEN="your-huggingface-token"  # Optional, for uploading to HF

# Run the dataset creation
papertuner-dataset --max-papers 100

2. Train a model

# Train using the created or an existing dataset
papertuner-train --model "Qwen/Qwen2.5-3B-Instruct" --dataset "densud2/ml_qa_dataset"

As a Python Library

from papertuner import ResearchPaperProcessor, ResearchAssistantTrainer

# Create a dataset
processor = ResearchPaperProcessor(
    api_key="your-api-key",
    hf_repo_id="your-username/dataset-name"
)
papers = processor.process_papers(max_papers=10)

# Train a model
trainer = ResearchAssistantTrainer(
    model_name="Qwen/Qwen2.5-3B-Instruct",
    lora_rank=64,
    output_dir="./model_output"
)
results = trainer.train("your-username/dataset-name")

# Test the model
question = "How would you design a transformer model for time series forecasting?"
response = trainer.run_inference(
    results["model"],
    results["tokenizer"],
    question,
    results["lora_path"]
)
print(response)

Configuration

You can configure the tool using environment variables or when initializing the classes:

  • GEMINI_API_KEY: API key for generating QA pairs
  • HF_TOKEN: Hugging Face token for uploading datasets and models
  • HF_REPO_ID: Hugging Face repository ID for the dataset
  • PAPERTUNER_DATA_DIR: Custom directory for storing data (default: ~/.papertuner/data)
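These variables can also be set from Python before the package is imported. A minimal sketch using only the four documented variables; the fallback lookup shown for PAPERTUNER_DATA_DIR is an assumption about how the documented default would typically be resolved, not the package's actual code:

```python
import os

# Set the documented environment variables before importing papertuner.
os.environ["GEMINI_API_KEY"] = "your-api-key"            # QA-pair generation
os.environ["HF_TOKEN"] = "your-huggingface-token"        # optional: HF uploads
os.environ["HF_REPO_ID"] = "your-username/dataset-name"  # target dataset repo
os.environ["PAPERTUNER_DATA_DIR"] = "/tmp/papertuner-data"

# A typical way a package resolves such a variable against its documented default:
data_dir = os.environ.get(
    "PAPERTUNER_DATA_DIR",
    os.path.expanduser("~/.papertuner/data"),
)
print(data_dir)  # /tmp/papertuner-data
```

Values passed directly to the class constructors (as in the Python library example above) take the place of the corresponding environment variables.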

License

MIT License

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.



Download files

Download the file for your platform.

Source Distribution

papertuner-0.0.7.tar.gz (19.4 kB)

Uploaded Source

Built Distribution


papertuner-0.0.7-py3-none-any.whl (20.1 kB)

Uploaded Python 3

File details

Details for the file papertuner-0.0.7.tar.gz.

File metadata

  • Download URL: papertuner-0.0.7.tar.gz
  • Upload date:
  • Size: 19.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for papertuner-0.0.7.tar.gz:

  • SHA256: d78b77898062ddbe5a4ad6b87c7f8c38117119bc9161c9dec17372f2cb08ed48
  • MD5: 6f5b46466f80a11ef28da34511937ed6
  • BLAKE2b-256: 104ed771f0d42634e124543ed13c41a31e4774244423c561e99b93d318d2123c

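To verify a downloaded archive against the published digests, you can compute the hash locally. A minimal sketch using Python's standard hashlib; the filename is the one listed above, and the final check is left commented out since it requires the file to be present:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "d78b77898062ddbe5a4ad6b87c7f8c38117119bc9161c9dec17372f2cb08ed48"
# assert sha256_of("papertuner-0.0.7.tar.gz") == expected
```

Alternatively, pip can enforce hashes at install time via a requirements file with `--require-hashes`.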

Provenance

The following attestation bundles were made for papertuner-0.0.7.tar.gz:

Publisher: release.yaml on Lyra-Lab/papertuner

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file papertuner-0.0.7-py3-none-any.whl.

File metadata

  • Download URL: papertuner-0.0.7-py3-none-any.whl
  • Upload date:
  • Size: 20.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for papertuner-0.0.7-py3-none-any.whl:

  • SHA256: 050073828e42ed747d89e083afbc7ad5cc77a893d782196fce523bbf9e518cb0
  • MD5: 7b3d085d8314a75f156733444c85dc0c
  • BLAKE2b-256: f25a68b0b3ebbdd2f23ddc21d88a411d79256af16efb21e6412d07303fb5c9be


Provenance

The following attestation bundles were made for papertuner-0.0.7-py3-none-any.whl:

Publisher: release.yaml on Lyra-Lab/papertuner

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
