Scrape Twitter based on a query and run NLTK VADER sentiment analysis and a cosine-similarity model
Project description
Basic Twitter NLP
Description: A simple set of commands for working with Twitter and text sentiment analysis
NEEDED: A Twitter Developer account for a bearer token
Getting Started
Build the database to store tweets and analysis. This creates the five tables needed for the process:
db_init(con)
Add queries. This accepts any query allowed by your Twitter API access level (see the Twitter Query Help Guide):
add_query(twitter_query, con)
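Each query stored this way gets an index in the `queries` table, which is what `run_tw_nlp` takes as its `query` argument. The sketch below uses a minimal stand-in table to show that mapping; the real schema is created by `db_init`, so everything here except the `query_idx` column name (which comes from the docs above) is illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Minimal stand-in for the queries table that db_init creates.
# The real schema comes from the package; query_idx is the documented
# column, twitter_query is an illustrative name for the stored text.
con.execute(
    "CREATE TABLE queries (query_idx INTEGER PRIMARY KEY, twitter_query TEXT)"
)
con.execute(
    "INSERT INTO queries (twitter_query) VALUES (?)",
    ("python -is:retweet lang:en",),
)
con.commit()

# Look up the index you would later pass to run_tw_nlp(query=...)
for query_idx, twitter_query in con.execute(
    "SELECT query_idx, twitter_query FROM queries"
):
    print(query_idx, twitter_query)  # → 1 python -is:retweet lang:en
```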
Run the process. This runs the scrape and NLP pipeline and stores the results in the tables:
run_tw_nlp(con, client, query=None, inc_rt=False)
con : SQLite3 connection con = sqlite3.connect('DATABASE_NAME.db')
client : Tweepy client created with your Twitter bearer token client = tweepy.Client(bearer_token='bearer')
query : query index number, found in the "query_idx" column of the queries table
inc_rt : whether to include retweets in the text analysis
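Putting the three steps together, a minimal end-to-end sketch might look like the following. The import path is assumed from the package name, the query string is an example, and the bearer-token placeholder guards the sketch so nothing runs against the API until real credentials are filled in:

```python
import sqlite3

BEARER = "YOUR_BEARER_TOKEN"  # from your Twitter Developer account

con = sqlite3.connect("tweets.db")  # SQLite database file

# Only hit the Twitter API once a real bearer token is filled in.
if BEARER != "YOUR_BEARER_TOKEN":
    import tweepy
    # Import path assumed from the package name; adjust if it differs.
    from basictwitternlp import db_init, add_query, run_tw_nlp

    client = tweepy.Client(bearer_token=BEARER)

    db_init(con)                                    # create the 5 tables
    add_query("python -is:retweet lang:en", con)    # store a query
    run_tw_nlp(con, client, query=1, inc_rt=False)  # scrape + NLP for query_idx 1

con.close()
```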
Hashes for basictwitternlp-0.1.0-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 772b47e19b7acffd097681a966d41c730fff9167c49bf95edef78482e3bdea28 |
| MD5 | d2f98b32e9e8bd0d0be54763959d3ec6 |
| BLAKE2b-256 | f1ba4ef167f6a27d93f72104e7c80f47547420ba55ab96b5f552bb8f4e4f5cf1 |