
A package for analyzing Wikipedia deletion discussions.

Project description

Wide-Analysis: Suite for Wikipedia deletion discussion analysis

Introduction

Wide-Analysis is a suite of tools for analyzing Wikipedia deletion discussions. It is designed to help researchers and practitioners understand the dynamics of deletion discussions and to develop tools that support the decision-making process in Wikipedia. The suite includes tools for collecting, processing, and analyzing deletion discussions. The package contains the following functionalities:

  • Data Collection and preprocessing: Collecting deletion discussions from Wikipedia and preparing a dataset. This can be done at the article-title level or over a date range.
  • Model-based functionalities: The suite includes a set of language-model-based tasks, such as:
    • Outcome Prediction: Predicting the outcome of a deletion discussion, i.e., the decision reached in the discussion (e.g., keep, delete, merge), determined from the complete discussion.
    • Stance Detection: Identifying the stance of the participants towards the deletion decision (determined from each individual comment in the discussion).
    • Policy Prediction: Predicting the policy most relevant to each participant's comment (determined from each individual comment in the discussion).
    • Sentiment Prediction: Predicting the sentiment of the participants towards the deletion decision (determined from each individual comment in the discussion).
    • Offensive Language Detection: Detecting offensive language in the participants' comments (determined from each individual comment in the discussion).

Get started 🚀

You can install the package from PyPI using the following command:

pip install wide-analysis

After the installation, you can import the package and start using the functionalities.
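
For example, the two entry points used throughout this documentation, data_collect for building datasets and analyze for the model-based tasks, can be imported together:

# data_collect builds datasets; analyze runs the model-based tasks described below.
from wide_analysis import data_collect, analyze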

Create dataset

The dataset creation functionalities return a dataframe. The data collection command takes the following parameters:

  • mode : str
    • The mode of data collection. It can be 'title', 'date_range', 'date', or 'wide_2023'.
  • start_date : str
    • The start date of the data collection. It should be in the format 'YYYY-MM-DD' (for example, '2021-01-01').
  • end_date : str
    • The end date of the data collection. It should be in the format 'YYYY-MM-DD' (for example, '2021-01-01'). If left empty, the data collection will be done for a single date (start_date).
  • url : str (optional)
    • The URL of the deletion discussion; only needed for title-based extraction.
  • title : str
    • The title of the Wikipedia article. Only needed for title-based extraction, for example: 'COVID-19_pandemic_in_India'.
  • output_path : str
    • The path to save the dataset. The dataset will be saved as a CSV file. If not provided, the dataset will be returned as a dataframe.

The dataset can be created in four ways:

  • Wide-analysis Dataset: If 'wide_2023' is selected as the mode parameter, the data will be collected from the existing Wide-analysis dataset available on Hugging Face ('hsuvaskakoty/wide_analysis') and the function will return a Hugging Face dataset.
from wide_analysis import data_collect
data = data_collect.collect(mode = 'wide_2023', 
                            start_date=None, 
                            end_date=None, 
                            url=None, 
                            title=None, 
                            output_path=None)

Example: To collect the existing 'wide_2023' dataset, the following command can be used:

from wide_analysis import data_collect
data = data_collect.collect(mode = 'wide_2023', 
                            start_date=None, 
                            end_date=None, 
                            url=None, 
                            title=None, 
                            output_path=None)

This will return the existing dataset available on Hugging Face ('hsuvaskakoty/wide_analysis'):

Dataset loaded successfully as huggingface dataset
The dataset has the following columns: {'train': ['text', 'label'], 'validation': ['text', 'label'], 'test': ['text', 'label']}
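
A minimal sketch of inspecting the returned object, assuming it behaves like a standard Hugging Face DatasetDict with the splits and columns listed above:

from wide_analysis import data_collect

# Sketch: load the wide_2023 dataset and peek at the first training example.
# Assumes a standard Hugging Face DatasetDict with 'train'/'validation'/'test'
# splits and 'text'/'label' columns, as shown above.
data = data_collect.collect(mode='wide_2023',
                            start_date=None,
                            end_date=None,
                            url=None,
                            title=None,
                            output_path=None)
print(data['train'].num_rows)           # number of training discussions
print(data['train'][0]['text'][:200])   # first 200 characters of the first discussion
print(data['train'][0]['label'])        # its outcome label
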
  • Article level: Collecting deletion discussions for a specific article.
from wide_analysis import data_collect
data = data_collect.collect(mode = 'title', 
                            start_date='YYYY-MM-DD', 
                            end_date=None, 
                            url='URL for the title', 
                            title='article title', 
                            output_path='save_path' or None)

Example: To collect the deletion discussions for the article 'Raisul Islam Ador' for the date '2024-07-18', the following command can be used:

from wide_analysis import data_collect
data = data_collect.collect(mode = 'title', 
                            start_date='2024-07-18', 
                            end_date=None, 
                            url='https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador', 
                            title='Raisul Islam Ador', 
                            output_path= None)

This will return a dataframe with the data for the title 'Raisul Islam Ador' for the date '2024-07-18'. If output_path is provided, the dataframe will also be saved as a CSV file at the provided path. The output looks like the following:

Date | Title | URL | Discussion | Label | Confirmation
2024-07-18 | Raisul Islam Ador | URL to article | Deletion discussion text here | speedy delete | Please do not modify it.
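
When output_path is None the result is a regular pandas DataFrame, so it can also be inspected and saved manually; a minimal sketch, assuming the column names shown in the table above (the file name is illustrative):

# Sketch: inspect and save the DataFrame returned by the article-level collection.
print(data.columns.tolist())    # expected: ['Date', 'Title', 'URL', 'Discussion', 'Label', 'Confirmation']
print(data.iloc[0]['Label'])    # e.g. 'speedy delete'
data.to_csv('raisul_islam_ador.csv', index=False)  # manual save; passing output_path does something similar
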
  • Date range level: Collecting deletion discussions for a specific date range.
from wide_analysis import data_collect
data = data_collect.collect(mode = 'date_range', 
                            start_date='YYYY-MM-DD', 
                            end_date='YYYY-MM-DD', 
                            url=None, 
                            title=None, 
                            output_path='save_path' or None)

Example: To collect the deletion discussions within the date range '2024-07-18' to '2024-07-20', the following command can be used:

from wide_analysis import data_collect
data = data_collect.collect(mode = 'date_range', 
                            start_date='2024-07-18', 
                            end_date='2024-07-20', 
                            url=None, 
                            title=None, 
                            output_path= None)

This will return a dataframe with the data for the articles nominated within the date range '2024-07-18' to '2024-07-20'. The output has the same format as the article-level data collection, just with more rows covering each date within the range.
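
A short sketch of summarizing such a multi-day collection with pandas, assuming the same 'Date' and 'Label' columns as in the article-level output above:

# Sketch: summarize a date-range collection (assumes 'Date' and 'Label' columns).
print(data['Date'].nunique())                  # number of distinct log days collected
print(data.groupby(['Date', 'Label']).size())  # outcome distribution per day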

  • Date level: Collecting deletion discussions for a specific date.
from wide_analysis import data_collect
data = data_collect.collect(mode = 'date', 
                            start_date='YYYY-MM-DD', 
                            end_date=None, 
                            url=None, 
                            title=None, 
                            output_path= None)

Example: To collect the deletion discussions for the date '2024-07-18', the following command can be used:

from wide_analysis import data_collect
data = data_collect.collect(mode = 'date', 
                            start_date='2024-07-18', 
                            end_date=None, 
                            url=None, 
                            title=None, 
                            output_path= None)

This will return a dataframe with the data for the articles nominated on '2024-07-18'. The output has the same format as the article-level data collection, just with more rows, one for each article nominated on that date.

Model based functionalities

We train a set of models and leverage pretrained task-specific models from Hugging Face for the following tasks: Outcome Prediction, Stance Detection, Policy Prediction, Sentiment Prediction, and Offensive Language Detection. These functionalities return the prediction(s) for the requested task along with their probability scores. The model-based functionalities take the following parameters:

  • inp: 'str'
    • The URL or text of the Wikipedia article deletion discussion.
  • mode: 'str'
    • The mode of the input. It can be 'url' or 'text'. If 'url' is selected, the input should be the URL of the Wikipedia deletion discussion. If 'text' is selected, the input should be the text of the deletion discussion in the format 'Title: Deletion discussion text', where Title is the title of the article and the rest is the discussion text. Default is 'url'.
  • task: 'str'
    • The task to be performed. It can be 'outcome', 'stance', 'policy', 'sentiment', or 'offensive'.

It is worth noting that the model-based functionalities are only available for article-level data collection. We also provide an explanation feature for the outcome prediction task, which returns an explanation of the model's prediction generated with an OpenAI GPT model. You will need your own API key for this feature to work.
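
As an illustration (not part of the package itself), the five tasks can be run over the same discussion in a simple loop:

from wide_analysis import analyze

# Sketch: run every model-based task on one discussion URL and collect the results.
url = 'https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador'
results = {task: analyze(inp=url, mode='url', task=task)
           for task in ['outcome', 'stance', 'policy', 'sentiment', 'offensive']}
print(results['outcome'])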

Outcome Prediction

Apart from the input parameters, the outcome prediction function also contains the following parameters:

  • openai_access_token: 'str'
    • The API key for the OpenAI GPT-4o-mini model. If explanation is True, the API key is required. Default is None.
  • explanation: 'bool'
    • If True, it will return the explanation of the prediction made by the model. Default is False.
from wide_analysis import analyze
predictions = analyze(inp='URL/text of the article',
                      mode='url or text',
                      task='outcome',
                      openai_access_token=None,
                      explanation=False)

Example: To predict the outcome of the deletion discussion for the article 'Raisul Islam Ador' using discussion url, the following command can be used:

from wide_analysis import analyze
predictions = analyze(inp='https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador',
                mode= 'url', 
                task='outcome',
                openai_access_token=None,
                explanation=False)

OR if using text:

from wide_analysis import analyze
text_input = 'Raisul Islam Ador: None establish his Wikipedia:Notability. The first reference is almost identical in wording to his official web site.CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.' #sample input text
predictions = analyze(inp=text_input, 
                    mode= 'text', 
                    task='outcome', 
                    openai_access_token=None, 
                    explanation=False)

Both will return the following output:

{'prediction': 'speedy delete', 'probability': 0.99}
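
The returned dictionary can be used directly, for example to act only on high-confidence predictions; a small sketch using the keys shown above (the 0.9 threshold is arbitrary):

# Sketch: keep only confident outcome predictions (threshold is illustrative).
if predictions['probability'] >= 0.9:
    print(f"Predicted outcome: {predictions['prediction']} "
          f"(confidence {predictions['probability']:.2f})")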

To predict the outcome of the deletion discussion for the article 'Raisul Islam Ador' with explanation, the following command can be used:

from wide_analysis import analyze
predictions = analyze(inp='https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador',
                    mode='url', 
                    task='outcome',
                    openai_access_token='<OPENAI KEY>',
                    explanation=True)

Returns:

{'prediction': 'speedy delete', 
'probability': 0.99, 
'explanation': 'The article does not establish the notability of the subject. The references are not reliable and the article is not well written. '}
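
To avoid hard-coding the key, it can be read from an environment variable before being passed in; a small sketch (the variable name OPENAI_API_KEY is illustrative):

import os
from wide_analysis import analyze

# Sketch: take the OpenAI key from an environment variable instead of hard-coding it.
predictions = analyze(inp='https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador',
                      mode='url',
                      task='outcome',
                      openai_access_token=os.environ.get('OPENAI_API_KEY'),
                      explanation=True)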

Stance Detection

from wide_analysis import analyze
predictions = analyze(inp='URL/text of the article', mode='url or text', task='stance')

Example: To predict the stance of the participants in the deletion discussion for the article 'Raisul Islam Ador', the following command can be used:

from wide_analysis import analyze
predictions = analyze(inp='https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador',mode = 'url', task='stance')

OR if using text:

from wide_analysis import analyze
text_input = 'Raisul Islam Ador: None establish his Wikipedia:Notability. The first reference is almost identical in wording to his official web site.CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.' #sample input text
predictions = analyze(inp=text_input, mode= 'text', task='stance')

Both will return the following output:

[{'sentence': 'None establish his Wikipedia:Notability .  ', 'stance': 'delete', 'score': 0.9950249791145325}, 
{'sentence': 'The first reference is almost identical in wording to his official web site.  ', 'stance': 'delete', 'score': 0.7702090740203857}, 
{'sentence': 'CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.  ', 'stance': 'delete', 'score': 0.9993199110031128}]
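
Because the output is a list of per-sentence dictionaries, the detected stances can be tallied into a rough vote count; a minimal sketch using the keys shown above:

from collections import Counter

# Sketch: tally the per-sentence stance labels.
stance_counts = Counter(p['stance'] for p in predictions)
print(stance_counts)  # e.g. Counter({'delete': 3})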

Policy Prediction

from wide_analysis import analyze
predictions = analyze(inp='URL/text of the article',mode='url or text', task='policy')

Example: To predict the policy that is most relevant to the comments of the participants in the deletion discussion for the article 'Raisul Islam Ador', the following command can be used:

from wide_analysis import analyze
predictions = analyze(inp='https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador',mode = 'url', task='policy')

OR if using text:

from wide_analysis import analyze
text_input = 'Raisul Islam Ador: None establish his Wikipedia:Notability. The first reference is almost identical in wording to his official web site.CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.' #sample input text
predictions = analyze(inp=text_input, mode= 'text', task='policy')

Both will return the following output:

[{'sentence': 'None establish his Wikipedia:Notability .  ', 'policy': 'Wikipedia:Notability', 'score': 0.8100407719612122}, 
{'sentence': 'The first reference is almost identical in wording to his official web site.  ', 'policy': 'Wikipedia:Notability', 'score': 0.6429345607757568}, 
{'sentence': 'CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.  ', 'policy': 'Wikipedia:Criteria for speedy deletion', 'score': 0.9400111436843872}]
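
A small sketch that records, for each policy the model predicts, the highest confidence with which it appears (keys as shown above):

# Sketch: keep the highest score seen for each predicted policy.
best_score = {}
for p in predictions:
    best_score[p['policy']] = max(best_score.get(p['policy'], 0.0), p['score'])
print(best_score)  # e.g. {'Wikipedia:Notability': 0.81, 'Wikipedia:Criteria for speedy deletion': 0.94}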

Sentiment Prediction

from wide_analysis import analyze
predictions = analyze(inp='URL/text of the article',mode='url or text', task='sentiment')

Example: To predict the sentiment of the participants in the deletion discussion for the article 'Raisul Islam Ador' with url, the following command can be used:

from wide_analysis import analyze
predictions = analyze(inp='https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador', mode='url', task='sentiment')

OR if using text:

from wide_analysis import analyze
text_input = 'Raisul Islam Ador: None establish his Wikipedia:Notability. The first reference is almost identical in wording to his official web site.CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.' #sample input text
predictions = analyze(inp=text_input, mode= 'text', task='sentiment')

Both will return the following output:

[{'sentence': 'None establish his Wikipedia:Notability .  ', 'sentiment': 'negative', 'score': 0.515991747379303},
 {'sentence': 'The first reference is almost identical in wording to his official web site.  ', 'sentiment': 'neutral', 'score': 0.9082792401313782}, 
 {'sentence': 'CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.  ', 'sentiment': 'neutral', 'score': 0.8958092927932739}]
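
The per-sentence sentiment labels can likewise be reduced to a simple summary, for instance the share of negative sentences (keys as shown above):

# Sketch: fraction of sentences labelled negative in the discussion.
negative_share = sum(p['sentiment'] == 'negative' for p in predictions) / len(predictions)
print(f"Negative sentences: {negative_share:.0%}")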

Offensive Language Detection

from wide_analysis import analyze
predictions = analyze(inp='URL/text of the article',mode='url or text', task='offensive')

Example: To detect offensive language in the comments of the participants in the deletion discussion for the article 'Raisul Islam Ador', the following command can be used:

from wide_analysis import analyze
predictions = analyze(inp='https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Log/2024_July_15#Raisul_Islam_Ador',mode='url', task='offensive')

OR if using text:

from wide_analysis import analyze
text_input = 'Raisul Islam Ador: None establish his Wikipedia:Notability. The first reference is almost identical in wording to his official web site.CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.' #sample input text
predictions = analyze(inp=text_input, mode= 'text', task='offensive')

Both will return the following output:

[{'sentence': 'None establish his Wikipedia:Notability .  ', 'offensive_label': 'non-offensive', 'score': 0.8752073645591736}, 
{'sentence': 'The first reference is almost identical in wording to his official web site.  ', 'offensive_label': 'non-offensive', 'score': 0.9004920721054077},
{'sentence': 'CambridgeBayWeather (solidly non-human), Uqaqtuq (talk) , Huliva 20:06, 15 July 2024 (UTC) [ reply ] Delete , if not a CSD under G11.  ', 'offensive_label': 'non-offensive', 'score': 0.9054554104804993}]
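
Finally, flagged sentences can be pulled out for manual review; a minimal sketch assuming the positive label is 'offensive' (the 0.5 threshold is illustrative):

# Sketch: collect sentences flagged as offensive (label name and threshold are assumptions).
flagged = [p['sentence'] for p in predictions
           if p['offensive_label'] == 'offensive' and p['score'] > 0.5]
print(flagged or 'No offensive sentences detected.')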
