Extract CO2 emissions data from PDF sustainability reports using LLMs

Project description

information-extraction-pilot

Information-extraction-pilot is a retrieval-augmented generation (RAG) pipeline that surfaces CO₂ emissions data from corporate sustainability reports. It embeds PDF pages, ranks relevant context, and prompts a large language model to extract Scope 1–3 emissions into structured tables for downstream analysis.
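To make the retrieval step concrete, here is a toy, self-contained illustration of ranking pages against a query. It uses bag-of-words counts in place of the neural embedding model the pipeline actually configures (see emb_model below), and none of these helpers are exported by this package:

# Toy illustration of the "rank relevant context" step; not the package's code.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

pages = {
    1: "board of directors remuneration policy",
    2: "scope 1 emissions were 1.2 Mt CO2e in 2021",
}
query = embed("scope 1 CO2 emissions")
best_page = max(pages, key=lambda p: cosine(embed(pages[p]), query))
print(best_page)  # -> 2; this page would be passed to the LLM prompt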

Background

This pilot began as the team’s submission for the 2024 ClimateNLP workshop at ACL. The repository now serves as the maintained codebase for automating emissions extraction, while retaining the project’s research lineage.

This repository is organized as follows:

  • data: source data to be analyzed and the gold standard dataset
  • output: pipeline results
  • prompt: prompt templates and queries
  • src: pipeline source code
  • tests: automated checks for the pilot

Setup

Python environment

It is recommended to run the code in a virtual environment using at least Python 3.11:

If you are using pip, run

python3.11 -m venv co2_info_extraction
source co2_info_extraction/bin/activate
pip install -r requirements.txt

to install all dependencies.

Other dependencies

The Python package pdf2image is a wrapper around Poppler, so you will need to install Poppler separately. See https://pypi.org/project/pdf2image/ for platform-specific instructions.
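For example (standard package names; check the pdf2image documentation for your platform):

sudo apt-get install poppler-utils    # Debian/Ubuntu
brew install poppler                  # macOS (Homebrew)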

Azure Authentication

This repository uses Azure services, so you need access to an Azure subscription. The code relies on the presence of an .env file that stores your credentials; configure your own authentication workflow with environment variables accordingly.
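A minimal sketch of how credentials can be read from the .env file, assuming python-dotenv is installed. The Azure variable names here are placeholders; only the Databricks variables documented below are fixed by this repository:

# Load credentials from .env into the process environment.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory
# AZURE_OPENAI_API_KEY / AZURE_OPENAI_ENDPOINT are placeholder names;
# substitute whatever variables your Azure setup requires.
api_key = os.environ["AZURE_OPENAI_API_KEY"]
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]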

Azure Databricks

Furthermore, the repository uses MLflow for experiment tracking. To set up access to the MLflow Tracking Server on Azure Databricks, you need to create a personal access token. Follow these steps:

  1. Log into Azure.
  2. Search for gist-mlflow-tracking-server to find the respective Databricks instance.
  3. Copy the URL which contains azuredatabricks.net and save it in the .env file as DATABRICKS_HOST variable.
  4. Save the variable MLFLOW_TRACKING_URI with the value databricks to the .env file.
  5. Launch the workspace and click on your initial in the upper right corner.
  6. Navigate to Settings > User > Developer > Access tokens and click Manage. Generate a new access token and save it in the .env file as the DATABRICKS_TOKEN variable. Be aware that the token takes some time to become active, so you may see 401 authentication errors when first running the code; these should resolve on their own shortly.
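Once the token is active, a quick way to check the connection (a minimal sketch; the experiment path is a placeholder):

# Verify the Databricks-hosted tracking server is reachable.
import mlflow
from dotenv import load_dotenv

load_dotenv()  # exposes DATABRICKS_HOST, DATABRICKS_TOKEN, MLFLOW_TRACKING_URI
mlflow.set_tracking_uri("databricks")  # matches MLFLOW_TRACKING_URI above
mlflow.set_experiment("/Users/you@example.com/co2-extraction")  # placeholder path
with mlflow.start_run():
    mlflow.log_param("smoke_test", True)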

Running main.py

The script uses three dataclasses to manage configurations: MlflowParams, ConfigParams, and ExperimentParams. These can be customized directly in main.py or through external configuration files integrated into config.py.

Key Parameters

The following parameters can be updated through the helpers.update_dataclass() function.

ConfigParams:

  • gold_standard: Currently supports gist_2025 (default)

  • filename_list: List of filenames that will be input into the pipeline, can be adjusted manually or via the function helpers.get_file_paths

ExperimentParams:

  • emb_model: Name of the embedding model.

  • llm_model: Name of the LLM to use.

  • prompt_type: Type of prompt (default or custom_gaia).

  • search_query: Query passed to the pipeline.

  • year_min and year_max: Filters for data based on year.
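For orientation, the shapes involved might look roughly like this. The field defaults and the update_dataclass body are illustrative guesses based on the parameter list above, not the repository's actual code; consult main.py and config.py for the real definitions:

from dataclasses import dataclass, field, fields

@dataclass
class ConfigParams:
    gold_standard: str = "gist_2025"
    filename_list: list[str] = field(default_factory=list)

@dataclass
class ExperimentParams:
    emb_model: str = ""
    llm_model: str = ""
    prompt_type: str = "default"        # or "custom_gaia"
    search_query: str = ""
    year_min: int | None = None
    year_max: int | None = None

def update_dataclass(instance, updates: dict):
    # One plausible implementation: set only fields that already exist.
    valid = {f.name for f in fields(instance)}
    for key, value in updates.items():
        if key not in valid:
            raise KeyError(f"{type(instance).__name__} has no field {key!r}")
        setattr(instance, key, value)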

Running the Script

Standard Execution

To run the pipeline, execute:

python main.py

Customizing Parameters

Modify the parameters in main.py by updating the relevant dataclass instances. For example:

helpers.update_dataclass(config_params, {
    'filename_list': ['./data/pdfs/apple_2021_en.pdf'],
})
helpers.update_dataclass(experiment_params, {
    'prompt_type': 'custom_gaia',
    'search_query': "What are the carbon emissions for the last 10 years?",
})

Logging and Debugging

  • Set the desired log level in the logging.basicConfig() call, e.g., logging.DEBUG for verbose logs.

  • Outputs and errors will appear in the console.
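
For example, near the top of main.py (the format string is only a suggestion):

import logging

logging.basicConfig(
    level=logging.DEBUG,  # switch to logging.INFO for quieter runs
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)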

Download files

Download the file for your platform.

Source Distribution

climatextract-0.2.1.tar.gz (61.5 kB)

Uploaded Source

Built Distribution

climatextract-0.2.1-py3-none-any.whl (64.2 kB)

Uploaded Python 3

File details

Details for the file climatextract-0.2.1.tar.gz.

File metadata

  • Download URL: climatextract-0.2.1.tar.gz
  • Size: 61.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.11

File hashes

Hashes for climatextract-0.2.1.tar.gz:

  • SHA256: db1459c9d78b193c9cac292e6d0ccfbf78ab3ed12372dd0becde4f0b53c30c6f
  • MD5: 5623c30508aba1c28a153c647ebfb091
  • BLAKE2b-256: 3504483c285e1debfbba2aac854ff0afb91b2332632ac09a13bd0ad4ca591da6


File details

Details for the file climatextract-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: climatextract-0.2.1-py3-none-any.whl
  • Size: 64.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.11

File hashes

Hashes for climatextract-0.2.1-py3-none-any.whl:

  • SHA256: bc9e9b00fbe6b27d887ba10532f4cbe2f01a326ec85bdad1ae73539e9c682b37
  • MD5: 8783bafa73f18c5c2871910bca3a82d8
  • BLAKE2b-256: 42b2927d2b1b614b7e0e359b4873b5c15a6722ece748ea61fac7b57042e2906e

