Scrape Twitter based on a query, then run NLTK VADER sentiment analysis and a cosine-similarity model
Project description
Basic Twitter NLP
Description: A simple set of commands for working with Twitter and text sentiment analysis
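For orientation, cosine similarity compares two texts as word-count vectors; this is a minimal standard-library sketch of the general technique, not the package's own implementation:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts, using bag-of-words count vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over the words the two texts share.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("great launch today", "the launch was great"))  # ≈ 0.577
```

Identical texts score 1.0 and texts with no shared words score 0.0, which is what makes the measure useful for grouping similar tweets.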
NEEDED: A Twitter Developer account, for the bearer token
Getting Started
Build the database to store tweets and analysis. This creates the 5 tables needed for the process
db_init(con)
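Once `db_init(con)` has run, a standard `sqlite_master` query will confirm the tables exist. The sketch below uses a throwaway in-memory database with one illustrative table, since the real schema comes from `db_init`:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in; use your 'DATABASE_NAME.db' in practice
# Illustrative table only -- db_init(con) creates the real five tables.
con.execute("CREATE TABLE queries (query_idx INTEGER PRIMARY KEY, twitter_query TEXT)")

# List every table in the database -- a quick check that initialization succeeded.
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['queries']
```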
Add queries. This accepts any query allowed by your Twitter API access level; see the Twitter Query Help Guide
add_query(twitter_query, con)
Run the process. This runs the scrape and NLP pipeline and stores the results in the tables
run_tw_nlp(con, client, query=None, inc_rt=False)
con : SQLite3 connection con = sqlite3.connect('DATABASE_NAME.db')
client : Tweepy client built with your Twitter bearer token client = tweepy.Client(bearer_token='bearer')
query : query index number, found in the "query_idx" column of the queries table
inc_rt : whether to include retweets in the text analysis
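To find the index to pass as `query`, you can read the queries table directly. This sketch fakes the table in memory; the `twitter_query` column name is an assumption based on `add_query`'s parameter, while `query_idx` and the table name are as documented above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Stand-in for the table that db_init / add_query would populate.
# 'twitter_query' as a column name is an assumption from add_query's parameter name.
con.execute("CREATE TABLE queries (query_idx INTEGER PRIMARY KEY, twitter_query TEXT)")
con.execute("INSERT INTO queries (query_idx, twitter_query) VALUES (1, 'python lang:en')")

row = con.execute(
    "SELECT query_idx FROM queries WHERE twitter_query = ?", ("python lang:en",)
).fetchone()
print(row[0])  # 1
# run_tw_nlp(con, client, query=row[0], inc_rt=False)  # then pass the index here
```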
Hashes for basictwitternlp-0.1.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | ba300224a043c6afa2a3336c0a2a549402ca9e07a346a1abc896ae99482992ad
MD5 | ca0a003480984da2cb2dbf4bfceec775
BLAKE2b-256 | f958f482223a7648d72d6aaad9e5acfc085824e716729fdf27d97c503ef3f229