
Scrapes Twitter based on a query and runs NLTK VADER sentiment and cosine-similarity models

Project description

Basic Twitter NLP

Description: A simple set of commands for working with Twitter and text sentiment analysis
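The cosine-similarity part of the analysis can be sketched generically with plain Python (this is an illustrative word-count implementation, not the package's internal code; the VADER side comes from NLTK's `nltk.sentiment.vader.SentimentIntensityAnalyzer`):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    # Dot product over the words the two texts share
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two tweets sharing 2 of 3 words score ~0.67
cosine_similarity("good morning twitter", "good morning world")  # ≈ 0.67
```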

NEEDED: A Twitter Developer account for a bearer token (see Twitter Developer)

Getting Started


Build the database to store tweets and analysis. This creates the 5 tables needed for the process.

db_init(con)
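For a sense of what this step does, a `db_init`-style helper typically issues `CREATE TABLE IF NOT EXISTS` statements against the connection. The schema below is purely illustrative (only the `queries` table's `query_idx` column is documented above; everything else is a hypothetical sketch, not the package's actual five tables):

```python
import sqlite3

def demo_db_init(con: sqlite3.Connection) -> None:
    """Illustrative only: create example tables the way db_init might."""
    con.execute(
        "CREATE TABLE IF NOT EXISTS queries ("
        "query_idx INTEGER PRIMARY KEY, twitter_query TEXT)"
    )
    # Hypothetical tweets table to hold scraped text keyed to a query
    con.execute(
        "CREATE TABLE IF NOT EXISTS tweets ("
        "tweet_id TEXT PRIMARY KEY, query_idx INTEGER, text TEXT)"
    )
    con.commit()

con = sqlite3.connect(":memory:")  # in-memory DB for the demo
demo_db_init(con)
```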

Add queries. Accepts any query allowed by your Twitter API access level. See the Twitter Query Help Guide.

add_query(twitter_query, con)

Run the process. This runs the scrape and NLP pipeline and stores the results in the tables.

run_tw_nlp(con, client, query=None, inc_rt=False)

con : SQLite3 connection con = sqlite3.connect('DATABASE_NAME.db')

client : Tweepy client authenticated with your Twitter bearer token client = tweepy.Client(bearer_token='bearer')

query : query index number, found in the "query_idx" column of the queries table

inc_rt : whether to include retweets in the text analysis

Download files

Download the file for your platform.

Source Distribution

basictwitternlp-0.1.1.tar.gz (5.8 kB)

Uploaded Source

Built Distribution

basictwitternlp-0.1.1-py3-none-any.whl (6.0 kB)

Uploaded Python 3
