A Python library for extracting references from documents
Project description
ScrapeBiblio: PDF Reference Extraction and Verification Library
Powered by Scrapegraphai
This library is designed to extract references from a PDF file, check them against the Semantic Scholar database, and save the results to a Markdown file.
Overview
The library performs the following steps:
- Extract Text from PDF: Reads the content of a PDF file and extracts the text.
- Split Text into Chunks: Splits the extracted text into smaller chunks to manage large texts efficiently.
- Extract References: Uses the OpenAI API to extract references from the text.
- Save References: Saves the extracted references to a Markdown file.
- Check References in Semantic Scholar: (Optional) Checks if the extracted references are present in the Semantic Scholar database.
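The chunking step above can be sketched as follows. Note that `chunk_text` and the 4,000-character chunk size are illustrative assumptions for this sketch, not the library's actual internals:

```python
def chunk_text(text: str, chunk_size: int = 4000) -> list[str]:
    """Split extracted PDF text into fixed-size chunks so each fits
    comfortably in a model context window."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

Each chunk is then passed to the reference-extraction step independently, and the per-chunk results are merged before writing `references.md`.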
Installation and Setup
To install the library and its dependencies, run:
pip install scrapebiblio
Ensure you have a .env file in the root directory of your project with the following content:
OPENAI_API_KEY="YOUR_OPENAI_KEY"
SEMANTIC_SCHOLARE_API_KEY="YOUR_SEMANTIC_SCHOLAR_KEY"
Usage
To use the library, ensure you have the required environment variables set and run the script. The extracted references will be saved to a Markdown file named references.md.
Example
Here is an example of how to use the library:
import logging
import os
from dotenv import load_dotenv
from biblio.find_reference import process_pdf
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
load_dotenv()
def main():
    """
    Main function that processes a PDF, extracts text, and saves the references.
    """
    pdf_path = 'test/558779153.pdf'
    references_output_path = 'references.md'
    openai_api_key = os.getenv('OPENAI_API_KEY')
    semantic_scholar_api_key = os.getenv('SEMANTIC_SCHOLARE_API_KEY')

    if not openai_api_key:
        raise EnvironmentError("OPENAI_API_KEY environment variable not set.")
    if not semantic_scholar_api_key:
        raise EnvironmentError("SEMANTIC_SCHOLARE_API_KEY environment variable not set.")

    logging.debug("Starting PDF processing...")
    process_pdf(pdf_path, references_output_path, openai_api_key, semantic_scholar_api_key)
    logging.debug("Processing completed.")

if __name__ == "__main__":
    main()
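The optional Semantic Scholar check can also be performed directly against the public Graph API. The sketch below uses only the standard library; `build_request` and `reference_found` are illustrative helpers written for this example, not part of scrapebiblio's API:

```python
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_request(title, api_key=None, limit=1):
    """Build a urllib Request for a Semantic Scholar title search."""
    query = urllib.parse.urlencode({"query": title, "limit": limit})
    req = urllib.request.Request(f"{SEARCH_URL}?{query}")
    if api_key:
        # The Graph API accepts an optional API key via this header.
        req.add_header("x-api-key", api_key)
    return req

def reference_found(title, api_key=None):
    """Return True if Semantic Scholar reports at least one match for the title."""
    with urllib.request.urlopen(build_request(title, api_key), timeout=10) as resp:
        return json.load(resp).get("total", 0) > 0
```

Unauthenticated requests are rate-limited, so supplying the API key from your `.env` file is recommended for batch checks.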
Contributing
We welcome contributions to this project. If you would like to contribute, please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bugfix.
- Make your changes.
- Submit a pull request with a detailed description of your changes.
License
This project is licensed under the MIT License. See the LICENSE file for more information.
File details
Details for the file scrapebiblio-1.1.0.tar.gz.
File metadata
- Download URL: scrapebiblio-1.1.0.tar.gz
- Upload date:
- Size: 1.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3c9ebe62194f8aaf061ea72c0acd6beac770e4f8ee1982f5ab4a0c82a0d1c6e1 |
| MD5 | 330651f5e9370fe8a3ca555a24579473 |
| BLAKE2b-256 | 8428cb886d935779593bd5d2ed1ab430d9fb7edeab4df154908e4aa63f9ab204 |
File details
Details for the file scrapebiblio-1.1.0-py3-none-any.whl.
File metadata
- Download URL: scrapebiblio-1.1.0-py3-none-any.whl
- Upload date:
- Size: 10.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 613e0e914cac44430792e8fef821cb39a4e493024ddab21380310ed6dfcdb1da |
| MD5 | 0e2ce076fa861d2563309f811dac8b6c |
| BLAKE2b-256 | 0fc12cc0c8dbda00e296af7d9ba0f247b46197ce90d04442b088291b702c849f |
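To verify a downloaded file against the SHA256 digests listed above, the standard-library `hashlib` module suffices; `sha256_of` is a small helper written for this example, and the file path shown is hypothetical:

```python
import hashlib

def sha256_of(path: str, block_size: int = 8192) -> str:
    """Compute the SHA256 hex digest of a file, reading it in blocks
    so large archives do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return h.hexdigest()

# Compare against the digest from the table above, e.g.:
# sha256_of("scrapebiblio-1.1.0.tar.gz") == "3c9ebe6219...a0d1c6e1"
```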