AMF (Argument Mining Framework)

AMF is a comprehensive toolkit designed to streamline and unify various argument mining modules into a single platform. By leveraging the Argument Interchange Format (AIF), AMF enables seamless communication between different components, including segmenters, turnators, argument relation identifiers, and argument scheme classifiers.

Table of Contents

  1. Overview
  2. Installation
  3. Components
  4. Usage
  5. API Reference
  6. License

Overview

AMF provides a modular approach to argument mining, integrating various components into a cohesive framework. The main features include:

  • Argument Segmentation: Identifies and segments arguments within argumentative text.
  • Turnation: Determines dialogue turns within conversations.
  • Argument Relation Identification: Identifies the relationships (e.g., support, contradiction, rephrase) between argument units.
  • Argument Scheme Classification: Classifies arguments based on predefined schemes.

Installation

Prerequisites

Ensure you have Python installed on your system. AMF is compatible with Python 3.6 and above.
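
You can confirm which interpreter is on your path with:

python --version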

Step 1: Create a Virtual Environment

It's recommended to create a virtual environment to manage dependencies:

python -m venv amf-env

Activate the virtual environment:

  • Windows:
    .\amf-env\Scripts\activate
    
  • macOS/Linux:
    source amf-env/bin/activate
    

Step 2: Install Dependencies

With the virtual environment activated, install AMF using pip:

pip install argument_mining_framework

This command will install the latest version of AMF along with its dependencies.

Components

Argument Segmentor

The Argument Segmentor component is responsible for detecting and segmenting arguments within text.

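A minimal sketch of driving the segmenter is shown below. It reuses the load_amf_component loader and the get_turns/get_segments calls from the Full Workflow example later in this README; the import path and the sample text are assumptions, not additional documented API.

from src.loader.task_loader import load_amf_component

# The segmenter is fed turn-annotated output, so the Turnator is loaded as well,
# mirroring the Full Workflow example below.
turninator = load_amf_component('turninator')()
segmenter = load_amf_component('segmenter')()

text = "Speaker A: Vaccines save lives. Speaker B: But distribution is the hard part."
turns = turninator.get_turns(text, True)    # split the raw text into dialogue turns
segments = segmenter.get_segments(turns)    # segment each turn into argument units
print(segments)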

Turnator

The Turnator identifies and segments dialogue turns, facilitating the analysis of conversations and interactions within texts. This module is particularly useful for dialogue-based datasets.

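For dialogue transcripts, a minimal sketch (based on the get_turns call in the Full Workflow example below; the loader import path is an assumption about your setup) looks like this:

from src.loader.task_loader import load_amf_component

turninator = load_amf_component('turninator')()

# The second argument enables the default settings, as in the Full Workflow example.
dialogue = "Host: Is the rollout on track? Guest: Broadly yes, but supply is still tight."
turns = turninator.get_turns(dialogue, True)
print(turns)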

Argument Relation Identifier

This component identifies and categorizes the relationships between argument units.


Argument Scheme Classifier

The Argument Scheme Classifier categorizes arguments based on predefined schemes, enabling structured argument analysis.


Usage

Argument Relation Prediction Example

Below is an example of how to use the ArgumentRelationPredictor class to generate an argument map from an xAIF input. In the AIF node list, "I" nodes carry propositional content and "L" nodes carry the corresponding locutions:

from amf import ArgumentRelationPredictor
import json

# Initialize Predictor
predictor = ArgumentRelationPredictor(model_type="dialogpt", variant="vanilla")

# Example XAIF structure
xaif = {
    "AIF": {
        "nodes": [
            {"nodeID": "1", "text": "THANK YOU", "type": "I", "timestamp": "2016-10-31 17:17:34"},
            {"nodeID": "2", "text": "COOPER : THANK YOU", "type": "L", "timestamp": "2016-11-10 18:34:23"},
            # Add more nodes as needed
        ],
        "edges": [
            {"edgeID": "1", "fromID": "1", "toID": "20", "formEdgeID": "None"},
            {"edgeID": "2", "fromID": "20", "toID": "3", "formEdgeID": "None"}
            # Add more edges as needed
        ],
        "locutions": [],
        "participants": []
    },
    "text": "people feel that they have been treated disrespectfully..."
}

# Convert XAIF structure to JSON string
xaif_json = json.dumps(xaif)

# Predict argument relations
result_map = predictor.argument_map(xaif_json)
print(result_map)

Full Workflow Example

In this section, we demonstrate how to use multiple components of the AMF framework in a complete argument mining workflow. This example shows how to process a text input through the Turninator, Segmenter, Propositionalizer, and Argument Relation Predictor components.

from src.loader.task_loader import load_amf_component


def process_pipeline(input_data):
    """Process input data through the entire pipeline."""
    # Initialize components
    turninator = load_amf_component('turninator')()
    segmenter = load_amf_component('segmenter')()
    propositionalizer = load_amf_component('propositionalizer')()
    argument_relation = load_amf_component('argument_relation', "dialogpt", "vanilla")
    visualiser = load_amf_component('visualiser')()

    # Step 1: Turninator
    turninator_output = turninator.get_turns(input_data, True)
    print(f'Turninator output: {turninator_output}')

    # Step 2: Segmenter
    segmenter_output = segmenter.get_segments(turninator_output)
    print(f'Segmenter output: {segmenter_output}')

    # Step 3: Propositionalizer
    propositionalizer_output = propositionalizer.get_propositions(segmenter_output)
    print(f'Propositionalizer output: {propositionalizer_output}')

    # Step 4: Argument Relation Prediction
    argument_map_output = argument_relation.get_argument_map(propositionalizer_output)
    print(f'Argument relation prediction output: {argument_map_output}')

    # Additional Analysis
    print("Get all claims:")
    print(argument_relation.get_all_claims(argument_map_output))
    print("===============================================")

    print("Get evidence for claim:")
    print(argument_relation.get_evidence_for_claim(
        "But this isn’t the time for vaccine nationalism", argument_map_output))
    print("===============================================")

    print("Visualise the argument map")
    visualiser.visualise(argument_map_output)


def main():
    # Sample input data
    input_data = (
        """Liam Halligan: Vaccines mark a major advance in human achievement since the """
        """enlightenment into the 19th Century and Britain’s been at the forefront of """
        """those achievements over the years and decades. But this isn’t the time for """
        """vaccine nationalism. I agree we should congratulate all the scientists, those """
        """in Belgium, the States, British scientists working in international teams here """
        """in the UK, with AstraZeneca.\n"""
        """Fiona Bruce: What about the logistical capabilities? They are obviously """
        """forefront now, now we’ve got a vaccine that’s been approved. It’s good -- I’m """
        """reassured that the British Army are going to be involved. They’re absolute world """
        """experts at rolling out things, complex logistic capabilities. This is probably """
        """going to be the biggest logistical exercise that our armed forces have undertaken """
        """since the Falklands War, which I’m old enough to remember, just about. So, as a """
        """neutral I’d like to see a lot of cross-party cooperation, and I’m encouraged with """
        """Sarah’s tone, everybody wants to see us getting on with it now. They don’t want """
        """to see competition on whose vaccine is best. There will be some instances where """
        """the Pfizer vaccine works better, another where you can’t have cold refrigeration, """
        """across the developing world as well, a cheaper vaccine like the AstraZeneca works """
        """better. Let’s keep our fingers crossed and hope we make a good job of this."""
    )

    process_pipeline(input_data)

if __name__ == "__main__":
    main()

Detailed Breakdown of the Workflow

  1. Turninator Component:

    • The Turninator processes the input text to identify and segment dialogue turns. This is particularly useful for dialogue datasets.
    • The get_turns method is used to perform the segmentation. The boolean parameter indicates whether the default settings should be applied.
    turninator_output = turninator.get_turns(input_data, True)
    print(f'Turninator output: {turninator_output}')
    
  2. Segmenter Component:

    • The Segmenter takes the output from the Turninator and further segments the text into argument segments.
    • This component is designed to handle various text formats and identify distinct argumentative segments within them.
    segmenter_output = segmenter.get_segments(turninator_output)
    print(f'Segmenter output: {segmenter_output}')
    
  3. Propositionalizer Component:

    • The Propositionalizer takes the segmented text and constructs propositions.
    propositionalizer_output = propositionalizer.get_propositions(segmenter_output)
    print(f'Propositionalizer output: {propositionalizer_output}')
    
  4. Argument Relation Predictor:

    • The Argument Relation Predictor analyzes the propositions and identifies the argument relations between them, such as support, contradiction, and rephrase.
    argument_map_output = argument_relation.get_argument_map(propositionalizer_output)
    print(f'Argument relation prediction output: {argument_map_output}')
    

Customization and Advanced Usage

  • Model Selection: You can customize the model_type and variant parameters in the ArgumentRelationPredictor to use different models or configurations, depending on your specific requirements.
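
For example (a sketch only: "dialogpt" and "vanilla" are the values used throughout this README; any other model_type or variant names would be assumptions about what your installation provides):

from amf import ArgumentRelationPredictor
from src.loader.task_loader import load_amf_component

# The settings shown in the Usage examples above; swap in whichever
# model_type / variant combination your installation supports.
predictor = ArgumentRelationPredictor(model_type="dialogpt", variant="vanilla")

# Equivalent selection through the component loader used in the Full Workflow example.
argument_relation = load_amf_component('argument_relation', "dialogpt", "vanilla")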

API Reference

For detailed information on all available methods and parameters, please refer to the API Documentation.

License

AMF is licensed under the MIT License. For more details, please refer to the LICENSE file.

