
Python version of Andrew Welter's Hatebase wrapper, based on DanielJDufour's implementation

Project description

hatebase

Python version of Andrew Welter's Hatebase wrapper, using version 4.2 of the Hatebase API.

Install

pip install hatebase

Requirements

pip install requests

HatebaseAPI functions

Initialize HatebaseAPI class

from hatebase import HatebaseAPI
key = 'yourHatebaseAPIKeyString'
hatebase = HatebaseAPI({"key": key})
# for more details, set debug to True
hatebase = HatebaseAPI({"key": key, "debug": True})

HatebaseAPI getVocabulary

# set filters for vocabulary query
filters = {"language": "eng"}
format = "json"

response = hatebase.getVocabulary(filters=filters, format=format)

# get some details from response
vocablist = response["result"]
results = response["number_of_results"]
pages = response["number_of_pages"]
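For a quick look at what came back, something like the following works (a sketch; it assumes each entry in "result" is a dict with a "term" field, which is not shown above):

# print the first few vocabulary terms (assumes a "term" field per entry)
for entry in vocablist[:10]:
    print(entry["term"])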

HatebaseAPI getVocabularyDetails

format = "json"
# vocab_id identifies a single vocabulary entry, e.g. taken from a getVocabulary result
details_filters = {'vocabulary_id': vocab_id}

response = hatebase.getVocabularyDetails(filters=details_filters, format=format)
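A minimal end-to-end sketch tying the two calls together, assuming each getVocabulary result entry exposes a "vocabulary_id" field that getVocabularyDetails accepts:

from hatebase import HatebaseAPI

hatebase = HatebaseAPI({"key": "yourHatebaseAPIKeyString"})

# fetch the first page of English vocabulary
vocab = hatebase.getVocabulary(filters={"language": "eng"}, format="json")

# look up details for the first entry (assumes a "vocabulary_id" field per entry)
vocab_id = vocab["result"][0]["vocabulary_id"]
details = hatebase.getVocabularyDetails(filters={'vocabulary_id': vocab_id}, format="json")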

HatebaseAPI getSightings

filters = {'is_about_nationality': '1', 'language': 'eng', 'country_id': 'US'}
format = "json"

response = hatebase.getSightings(filters=filters, format=format)
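As with getVocabulary, the parsed response carries the matching entries plus paging metadata (assuming the sightings endpoint returns the same top-level keys as getVocabulary):

# get some details from response
sightings = response["result"]
results = response["number_of_results"]
pages = response["number_of_pages"]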

HatebaseAPI analyze

# TBD

HatebaseAPI getAnalysis

# TBD

Examples

Get All the Hate Speech in English About Nationality in the US

from hatebase import HatebaseAPI

key = "yourHatebaseAPIKeyString"
hatebase = HatebaseAPI({"key": key})
filters = {'is_about_nationality': '1', 'language': 'eng', 'country_id': 'US'}
format = "json"
json_response = hatebase.getSightings(filters=filters, format=format)
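Note that the call above only returns the first page. A sketch for collecting every page, assuming the sightings response exposes the same "number_of_pages" key as getVocabulary:

# continuing from the example above
all_sightings = []
for page in range(1, json_response["number_of_pages"] + 1):
    filters["page"] = str(page)
    response = hatebase.getSightings(filters=filters, format=format)
    all_sightings.extend(response["result"])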

Get All Arabic Vocabulary

import pandas as pd
from hatebase import HatebaseAPI

key = "yourHatebaseAPIKeyString"
hatebase = HatebaseAPI({"key": key})
filters = {"language": "ara"}
format = "json"
# initialize list for all vocabulary entry dictionaries
ara_vocab = []
response = hatebase.getVocabulary(filters=filters, format=format)
pages = response["number_of_pages"]
# fill the vocabulary list with all entries of all pages
# this might take some time...
for page in range(1, pages+1):
    filters["page"] = str(page) 
    response = hatebase.getVocabulary(filters=filters, format=format)
    ara_vocab.append(response["result"])

# build a pandas df from all vocabulary entries
# (DataFrame.append was removed in pandas 2.0, so concatenate the pages instead)
df_ara_vocab = pd.concat([pd.DataFrame(page) for page in ara_vocab], ignore_index=True)
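A quick sanity check on the assembled frame (no field names assumed here; the columns come straight from the API response):

# how many entries and which fields did we get?
print(df_ara_vocab.shape)
print(df_ara_vocab.columns.tolist())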

For more documentation on the API, see https://github.com/hatebase/Hatebase-API-Docs.

Testing

To test the package, run:

python -m unittest hatebase.tests.test

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

hatebase-1.0.3.tar.gz (5.8 kB)

File details

Details for the file hatebase-1.0.3.tar.gz.

File metadata

  • Download URL: hatebase-1.0.3.tar.gz
  • Size: 5.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.5.0.1 requests/2.24.0 requests-toolbelt/0.9.1 tqdm/4.50.0 CPython/3.8.5

File hashes

Hashes for hatebase-1.0.3.tar.gz:

  • SHA256: 9507023ced7c132c4c1f7d57cf8303ceb5583403a71ceb5a59a78b50719d9fdd
  • MD5: ff5cbbefd7e7d3ba814f74ff838a0546
  • BLAKE2b-256: 1772ab0f76df0c2cefe94a13b0e23eadfed2d346f138dfb11145682b6c34b8d4

