
Extract CO2 emissions data from PDF sustainability reports using LLMs

Project description

information-extraction-pilot

Information-extraction-pilot is a retrieval-augmented generation (RAG) pipeline that surfaces CO₂ emissions data from corporate sustainability reports. It embeds PDF pages, ranks relevant context, and prompts a large language model to extract Scope 1–3 emissions into structured tables for downstream analysis.

Background

This pilot began as the team’s submission for the 2024 ClimateNLP workshop at ACL. The repository now serves as the maintained codebase for automating emissions extraction, while retaining the project’s research lineage.

This repository is organized as follows:

  • data: source data to be analyzed and the gold standard dataset
  • output: pipeline results
  • prompt: prompt templates and queries
  • src: pipeline source code
  • tests: automated checks for the pilot

Setup

Python environment

It is recommended to run the code in a virtual environment using Python 3.11 or later.

If you are using pip, create and activate the environment, then install all dependencies:

python3.11 -m venv co2_info_extraction
source co2_info_extraction/bin/activate
pip install -r requirements.txt

Other dependencies

The Python package pdf2image is a wrapper around Poppler, so Poppler must be installed separately. See https://pypi.org/project/pdf2image/ for platform-specific instructions.
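
For example, Poppler can typically be installed through the system package manager (package names may differ on other platforms):

sudo apt-get install poppler-utils   # Debian/Ubuntu
brew install poppler                 # macOS (Homebrew)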

Azure Authentication

This repository uses Azure modules, so you need access to an Azure subscription. The code relies on the presence of an .env file that stores your credentials; configure your own authentication workflow through environment variables in that file.
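
The exact variable names depend on your own authentication workflow and which Azure services you use. As an illustration only, if the LLM is served through Azure OpenAI, an .env entry often looks like the following (these names are assumptions, not prescribed by the repository):

# Illustrative only -- adjust names and values to your authentication setup
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-api-key>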

Azure Databricks

Furthermore, the repository uses MLflow for experiment tracking. To set up access to the MLflow Tracking Server on Azure Databricks, you need to create a personal access token. Follow these steps (an example .env layout is shown after the list):

  1. Log into Azure.
  2. Search for gist-mlflow-tracking-server to find the respective Databricks instance.
  3. Copy the URL that contains azuredatabricks.net and save it in the .env file as the DATABRICKS_HOST variable.
  4. Save the variable MLFLOW_TRACKING_URI with the value databricks to the .env file.
  5. Launch the workspace and click on your initials in the upper right corner.
  6. Navigate to Settings > User > Developer > Access tokens and click on Manage. Generate a new access token and save it in the .env file as the DATABRICKS_TOKEN variable. Be aware that it can take some time for the token to become active, so you may see 401 authentication errors when first running the code; these should resolve after a short wait.
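
After completing these steps, the Databricks-related entries in your .env file should look roughly like this (values are placeholders):

DATABRICKS_HOST=https://adb-<workspace-id>.<n>.azuredatabricks.net
MLFLOW_TRACKING_URI=databricks
DATABRICKS_TOKEN=<your-personal-access-token>

The Azure credentials from the previous section live in the same file.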

Running main.py

The script uses three dataclasses to manage configurations: MlflowParams, ConfigParams, and ExperimentParams. These can be customized directly in main.py or through external configuration files integrated into config.py.

Key Parameters

The following parameters can be updated through the helpers.update_dataclass() function. A sketch of how these fields might map onto the dataclasses follows the parameter lists below.

ConfigParams:

  • gold_standard: Currently supports gist_2025 (default)

  • filename_list: List of filenames that will be input into the pipeline, can be adjusted manually or via the function helpers.get_file_paths

ExperimentParams:

  • emb_model: Name of the embedding model.

  • llm_model: Name of the LLM to use.

  • prompt_type: Type of prompt (default or custom_gaia).

  • search_query: Query passed to the pipeline.

  • year_min and year_max: Filters for data based on year.
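
As a rough sketch only (the authoritative definitions, including defaults and any additional fields, live in main.py and config.py), the fields above might map onto the dataclasses like this:

from dataclasses import dataclass, field

@dataclass
class ConfigParams:
    gold_standard: str = "gist_2025"                          # currently the only supported gold standard
    filename_list: list[str] = field(default_factory=list)    # PDFs fed into the pipeline

@dataclass
class ExperimentParams:
    emb_model: str = ""               # embedding model name
    llm_model: str = ""               # LLM name
    prompt_type: str = "default"      # "default" or "custom_gaia"
    search_query: str = ""            # query passed to the pipeline
    year_min: int | None = None       # lower bound of the year filter
    year_max: int | None = None       # upper bound of the year filter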

Running the Script

Standard Execution

To run the pipeline, execute:

python main.py

Customizing Parameters

Modify the parameters in main.py by updating the relevant dataclass instances. For example:

helpers.update_dataclass(config_params, {
    'filename_list': ['./data/pdfs/apple_2021_en.pdf'],
})
helpers.update_dataclass(experiment_params, {
    'prompt_type': 'custom_gaia',
    'search_query': "What are the carbon emissions for the last 10 years?",
})

Logging and Debugging

  • Set the desired log level in the logging.basicConfig() call, e.g., logging.DEBUG for verbose logs (see the snippet below).

  • Outputs and errors will appear in the console.
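
A minimal example, assuming the call sits near the top of main.py before the pipeline runs:

import logging
logging.basicConfig(level=logging.DEBUG)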

Download files

Download the file for your platform.

Source Distribution

climatextract-0.1.1.tar.gz (60.3 kB)


Built Distribution


climatextract-0.1.1-py3-none-any.whl (63.0 kB)


File details

Details for the file climatextract-0.1.1.tar.gz.

File metadata

  • Download URL: climatextract-0.1.1.tar.gz
  • Upload date:
  • Size: 60.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.11

File hashes

Hashes for climatextract-0.1.1.tar.gz

  • SHA256: 839fb50c16c43197fc3e87d6a07e628b6962a53f222fffdff530fb11934a9cc0
  • MD5: 978276d0c7270520c4583c140146c84b
  • BLAKE2b-256: b464a60a2d4fa217de2c98b6c5b9c0e39836b6b22f86bb9082745978bf03f855


File details

Details for the file climatextract-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: climatextract-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 63.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.11

File hashes

Hashes for climatextract-0.1.1-py3-none-any.whl

  • SHA256: c706b6944acbe26892e95a5fe3b9015533d9daa2f0f712622b3e1abfacad1c25
  • MD5: 89e003af84b6c14cdb9aa90651093433
  • BLAKE2b-256: 6a027b3370d2a1783638996e77035efcdfed04960fcb4e649a496c7a8112905b

