
Custom implementation of tfidf for imbalanced datasets

Project description

Weighted-Class-Tfidf


Inspiration behind WCBTFIDF

Standard tfidf models select features (the number is set by the max_features parameter) using term frequency alone. This creates problems on imbalanced datasets: the selected words come mostly from the majority class, so the minority class ends up under-represented in the matrix that tfidf returns.

Solution

To tackle this problem, WCBTFIDF breaks the tfidf process down class-wise. Let us walk through an example to understand what WCBTFIDF does under the hood.

Assume a dataset with two labels, 0 and 1, where label 0 appears in 80% of the records and label 1 in 20%.

If we run standard tfidf on this data (with, say, 300 features), it picks the top 300 words by frequency across both classes. There is a very high chance that the selected words come mostly from class 0, and we run the risk of severely under-representing class 1.

What wcbtfidf does is first calculate a weight for each label. The weight here refers to how many features it should select from that class.

Since class 0 is present in 80% of the records, wcbtfidf will pick 240 features from class 0 and 60 features from class 1.

So essentially we run tfidf class-wise on the 0 and 1 labels with max features set to 240 and 60 respectively.
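The two steps above (proportional budgets, then a per-class fit) can be sketched with scikit-learn's TfidfVectorizer. This is a minimal illustration, not the package's actual code; the toy corpus, variable names, and the 6-feature budget are all made up for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: label 0 is the majority class, label 1 the minority.
texts = ["the movie was great fun", "a great great film",
         "the plot was dull and slow", "terrible boring movie"]
labels = [0, 0, 0, 1]

max_features = 6  # total feature budget across both classes

# Step 1: allocate features per class in proportion to class frequency.
# (With an 80/20 split and 300 features this would give 240 and 60.)
counts = {lbl: labels.count(lbl) for lbl in set(labels)}
weights = {lbl: round(max_features * n / len(labels))
           for lbl, n in counts.items()}

# Step 2: fit one tfidf model per class, capped at that class's budget.
class_vocabs = {}
for lbl, budget in weights.items():
    class_texts = [t for t, l in zip(texts, labels) if l == lbl]
    vec = TfidfVectorizer(max_features=budget)
    vec.fit(class_texts)
    class_vocabs[lbl] = set(vec.vocabulary_)  # vocabulary_ maps word -> index
```

The per-class vocabularies collected in `class_vocabs` are what get merged in the next step.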

After doing that, we combine the vocabularies from both classes into a single list. This is straightforward since tfidf exposes a vocabulary_ attribute that stores the vocab.

Finally, this combined vocab is used as a fixed vocabulary in another tfidf model that is run on the entire dataset. By fixing the vocab for the final tfidf we ensure that scoring happens only over this set of words.
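The merge-and-refit step can be sketched as follows, again using scikit-learn's TfidfVectorizer (its vocabulary parameter accepts a fixed word list). The per-class vocabularies below are hard-coded stand-ins for what the class-wise fits would produce; they are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Per-class vocabularies from the class-wise fits (illustrative values).
vocab_class0 = {"movie", "great", "was", "the"}
vocab_class1 = {"terrible", "boring"}

# Merge the class-wise vocabularies into one fixed vocabulary.
combined_vocab = sorted(vocab_class0 | vocab_class1)

texts = ["the movie was great fun", "a great great film",
         "the plot was dull and slow", "terrible boring movie"]

# Fit the final tfidf model on the whole corpus, restricted to the
# merged vocabulary, so scoring happens only over these words.
final_vec = TfidfVectorizer(vocabulary=combined_vocab)
X = final_vec.fit_transform(texts)
# X has one row per document and one column per word in combined_vocab;
# words outside the fixed vocabulary (e.g. "fun") are simply ignored.
```

Because the vocabulary is fixed, the minority-class words ("terrible", "boring" here) are guaranteed a column in the final matrix, which is the whole point of the class-wise selection.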

To put it simply, the 300 features chosen by wcbtfidf are a better representation of the overall data than the features chosen by a standard tfidf model.

RESULTS

In the experiments conducted, wcbtfidf performed better than standard tfidf models. The results are available in a notebook under the demos folder.

Data Sources

IMDB Dataset

Toxic Classification Dataset

Sentiment140 Dataset

Article Link

Click here

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

Weighted Class Tfidf-1.0.0.tar.gz (4.5 kB)

Uploaded Source

Built Distribution

Weighted_Class_Tfidf-1.0.0-py3-none-any.whl (4.4 kB)

Uploaded Python 3

File details

Details for the file Weighted Class Tfidf-1.0.0.tar.gz.

File metadata

  • Download URL: Weighted Class Tfidf-1.0.0.tar.gz
  • Upload date:
  • Size: 4.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.5

File hashes

Hashes for Weighted Class Tfidf-1.0.0.tar.gz
Algorithm Hash digest
SHA256 b95af10438856997f73dfa4f519273d2a3d0f57a78097c20042e036449e46a6b
MD5 f091be39ed433da3e658feffe8042cdb
BLAKE2b-256 8e62b2f5e21b7d1de67738ffbd707999e516e4376780cc69b9f61bd64c4743b8

See more details on using hashes here.

File details

Details for the file Weighted_Class_Tfidf-1.0.0-py3-none-any.whl.

File metadata

File hashes

Hashes for Weighted_Class_Tfidf-1.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 4ddbe8d7f3c48372cfa57ea53566aba07b6e8aae03b093ebd166e77aac9d2b0b
MD5 a7d00fb0b40be779c2193e664d7a88ce
BLAKE2b-256 26ca5cceef8502a247e3663f7557e427d13b3a5e009afc9e3df77ed633000c40

