Twitter scraper selenium
Python package to scrape Twitter's front-end easily with Selenium.
Table of Contents
- Prerequisites
- Installation
- Usage
- Using scraper with proxy
- Privacy
- LICENSE
Installation
Installing from the source
Download the source code or clone it with:
git clone https://github.com/shaikhsajid1111/twitter-scraper-selenium
Open a terminal inside the downloaded folder and run:
python3 setup.py install
Installing from PyPI
pip3 install twitter-scraper-selenium
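To confirm that the package was installed correctly, you can try importing it. A minimal check; note that the scraper drives a real browser, so Firefox or Chrome must also be installed on the machine:

# If this import succeeds, twitter-scraper-selenium is available in the current environment
from twitter_scraper_selenium import scrap_profile, scrap_keyword
print("twitter-scraper-selenium imported successfully")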
Usage
To scrape a profile's tweets:
In JSON format:
from twitter_scraper_selenium import scrap_profile
microsoft = scrap_profile(twitter_username="microsoft", output_format="json", browser="firefox", tweets_count=10)
print(microsoft)
Output:
{
"1430938749840629773": {
"tweet_id": "1430938749840629773",
"username": "Microsoft",
"name": "Microsoft",
"profile_picture": "https://twitter.com/Microsoft/photo",
"replies": 29,
"retweets": 58,
"likes": 453,
"is_retweet": false,
"retweet_link": "",
"posted_time": "2021-08-26T17:02:38+00:00",
"content": "Easy to use and efficient for all \u2013 Windows 11 is committed to an accessible future.\n\nHere's how it empowers everyone to create, connect, and achieve more: https://msft.it/6009X6tbW ",
"hashtags": [],
"mentions": [],
"images": [],
"videos": [],
"tweet_url": "https://twitter.com/Microsoft/status/1430938749840629773",
"link": "https://blogs.windows.com/windowsexperience/2021/07/01/whats-coming-in-windows-11-accessibility/?ocid=FY22_soc_omc_br_tw_Windows_AC"
},...
}
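The JSON output above is returned as a string, so it can be loaded into an ordinary dictionary for further processing. A minimal sketch, assuming the return value is a JSON-formatted string keyed by tweet_id as shown above:

import json
from twitter_scraper_selenium import scrap_profile

# Scrape the profile and parse the returned JSON string into a dictionary keyed by tweet_id
raw = scrap_profile(twitter_username="microsoft", output_format="json", browser="firefox", tweets_count=10)
tweets = json.loads(raw)

for tweet_id, tweet in tweets.items():
    print(tweet_id, tweet["posted_time"], tweet["likes"])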
In CSV format:
from twitter_scraper_selenium import scrap_profile
scrap_profile(twitter_username="microsoft",output_format="csv",browser="firefox",tweets_count=10,filename="microsoft",directory="/home/user/Downloads")
Output:
tweet_id | username | name | profile_picture | replies | retweets | likes | is_retweet | retweet_link | posted_time | content | hashtags | mentions | images | videos | post_url | link |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1430938749840629773 | Microsoft | Microsoft | https://twitter.com/Microsoft/photo | 64 | 75 | 521 | False | | 2021-08-26T17:02:38+00:00 | Easy to use and efficient for all – Windows 11 is committed to an accessible future. Here's how it empowers everyone to create, connect, and achieve more: https://msft.it/6009X6tbW | [] | [] | [] | [] | https://twitter.com/Microsoft/status/1430938749840629773 | https://blogs.windows.com/windowsexperience/2021/07/01/whats-coming-in-windows-11-accessibility/?ocid=FY22_soc_omc_br_tw_Windows_AC |
...
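The generated CSV can be read back with Python's standard csv module. A minimal sketch, assuming the call above wrote /home/user/Downloads/microsoft.csv and that the columns match the header shown above:

import csv

# Read the CSV produced by scrap_profile and print basic engagement numbers per tweet
with open("/home/user/Downloads/microsoft.csv", newline="", encoding="utf-8") as csv_file:
    for row in csv.DictReader(csv_file):
        print(row["tweet_id"], row["likes"], row["retweets"])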
scrap_profile() arguments (an example combining several of them follows the table):
Argument | Argument Type | Description |
---|---|---|
twitter_username | String | Twitter username of the account. |
browser | String | Which browser to use for scraping. Only Chrome and Firefox are supported; the default is Firefox. |
proxy | String | Optional parameter, to route scraping through a proxy. If the proxy requires authentication, the format is username:password@host:port. |
tweets_count | Integer | Number of posts to scrape. Default is 10. |
output_format | String | The output format, either JSON or CSV. Default is JSON. |
filename | String | If output_format is set to CSV, the name of the output file. If not passed, the filename defaults to the username. |
directory | String | If output_format is set to CSV, the directory where the CSV file will be saved. If not passed, the CSV file is saved in the current working directory. |
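A call that combines several of these arguments could look like the following. This is only a sketch; the proxy address, filename, and directory are placeholder values:

from twitter_scraper_selenium import scrap_profile

# Placeholder proxy, filename and directory values -- adjust them to your setup
scrap_profile(twitter_username="microsoft",
              browser="chrome",
              tweets_count=20,
              output_format="csv",
              filename="microsoft_tweets",
              directory="/tmp",
              proxy="127.0.0.1:8080")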
Keys of the output (a short sketch using these keys follows the table):
Key | Type | Description |
---|---|---|
tweet_id | String | Post identifier (an integer cast to a string) |
username | String | Username of the profile |
name | String | Name of the profile |
profile_picture | String | Profile picture link |
replies | Integer | Number of replies to the tweet |
retweets | Integer | Number of retweets of the tweet |
likes | Integer | Number of likes of the tweet |
is_retweet | Boolean | Is the tweet a retweet? |
retweet_link | String | If the tweet is a retweet, the link to the original tweet; otherwise an empty string |
posted_time | String | Time when the tweet was posted, in ISO 8601 format |
content | String | Content of the tweet as text |
hashtags | Array | Hashtags present in the tweet, if any |
mentions | Array | Mentions present in the tweet, if any |
images | Array | Image links, if images are present in the tweet |
videos | Array | Video links, if videos are present in the tweet |
tweet_url | String | URL of the tweet |
link | String | External website link, if one is present inside the tweet |
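Since every tweet in the output carries these keys, simple aggregations are straightforward. A short sketch, assuming the JSON output has already been parsed into a dictionary named tweets (as in the earlier parsing example):

# Total likes across the scraped tweets and the URLs of tweets containing images
total_likes = sum(tweet["likes"] for tweet in tweets.values())
tweets_with_images = [tweet["tweet_url"] for tweet in tweets.values() if tweet["images"]]

print("Total likes:", total_likes)
print("Tweets with images:", tweets_with_images)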
To scrape tweets using a keyword:
In JSON format:
from twitter_scraper_selenium import scrap_keyword
# Scrape 10 posts for the keyword "india", posted between 30th August and 31st August
india = scrap_keyword(keyword="india", browser="firefox",
tweets_count=10,output_format="json" ,until="2021-08-31", since="2021-08-30")
print(india)
Output:
{
"1432493306152243200": {
"tweet_id": "1432493306152243200",
"username": "TOICitiesNews",
"name": "TOI Cities",
"profile_picture": "https://twitter.com/TOICitiesNews/photo",
"replies": 0,
"retweets": 0,
"likes": 0,
"is_retweet": false,
"posted_time": "2021-08-30T23:59:53+00:00",
"content": "Paralympians rake in medals, India Inc showers them with rewards",
"hashtags": [],
"mentions": [],
"images": [],
"videos": [],
"tweet_url": "https://twitter.com/TOICitiesNews/status/1432493306152243200",
"link": "https://t.co/odmappLovL?amp=1"
},...
}
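As with scrap_profile, the keyword results can be parsed and filtered. A short sketch, assuming the return value is a JSON string as printed above, that collects the accounts which tweeted about the keyword in the given date range:

import json
from twitter_scraper_selenium import scrap_keyword

raw = scrap_keyword(keyword="india", browser="firefox", tweets_count=10,
                    output_format="json", until="2021-08-31", since="2021-08-30")
tweets = json.loads(raw)

# Unique usernames that posted about the keyword
usernames = {tweet["username"] for tweet in tweets.values()}
print(usernames)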
In CSV format:
from twitter_scraper_selenium import scrap_keyword
scrap_keyword(keyword="india", browser="firefox",
tweets_count=10, until="2021-08-31", since="2021-08-30",output_format="csv",filename="india")
Output:
tweet_id | username | name | profile_picture | replies | retweets | likes | is_retweet | posted_time | content | hashtags | mentions | images | videos | tweet_url | link |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1432493306152243200 | TOICitiesNews | TOI Cities | https://twitter.com/TOICitiesNews/photo | 0 | 0 | 0 | False | 2021-08-30T23:59:53+00:00 | Paralympians rake in medals, India Inc showers them with rewards | [] | [] | [] | [] | https://twitter.com/TOICitiesNews/status/1432493306152243200 | https://t.co/odmappLovL?amp=1 |
...
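If pandas is available in your environment (it is not a dependency of this package), the keyword CSV can be loaded into a DataFrame for analysis. A minimal sketch, assuming the call above wrote india.csv to the current working directory:

import pandas as pd

# Load the CSV written by scrap_keyword and show the most-liked tweets first
df = pd.read_csv("india.csv")
print(df.sort_values("likes", ascending=False)[["username", "likes", "tweet_url"]].head())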
scrap_keyword() arguments (an example combining the date range, proxy, and CSV options follows the table):
Argument | Argument Type | Description |
---|---|---|
keyword | String | Keyword to search on Twitter. |
browser | String | Which browser to use for scraping. Only Chrome and Firefox are supported; the default is Firefox. |
until | String | Optional parameter. End date for the search, in YYYY-MM-DD format. |
since | String | Optional parameter. Start date for the search, in YYYY-MM-DD format. |
proxy | String | Optional parameter, to route scraping through a proxy. If the proxy requires authentication, the format is username:password@host:port. |
tweets_count | Integer | Number of posts to scrape. Default is 10. |
output_format | String | The output format, either JSON or CSV. Default is JSON. |
filename | String | If output_format is set to CSV, the name of the output file. If not passed, the filename defaults to the keyword. |
directory | String | If output_format is set to CSV, the directory where the CSV file will be saved. If not passed, the CSV file is saved in the current working directory. |
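A call that uses the date range together with a proxy and CSV output could look like the following. This is only a sketch; the proxy address and directory are placeholder values:

from twitter_scraper_selenium import scrap_keyword

# Placeholder proxy and directory values -- adjust them to your setup
scrap_keyword(keyword="india",
              browser="chrome",
              tweets_count=50,
              since="2021-08-01",
              until="2021-08-31",
              output_format="csv",
              filename="india_august",
              directory="/tmp",
              proxy="127.0.0.1:8080")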
Keys of the output
Key | Type | Description |
---|---|---|
tweet_id | String | Post identifier (an integer cast to a string) |
username | String | Username of the profile |
name | String | Name of the profile |
profile_picture | String | Profile picture link |
replies | Integer | Number of replies to the tweet |
retweets | Integer | Number of retweets of the tweet |
likes | Integer | Number of likes of the tweet |
is_retweet | Boolean | Is the tweet a retweet? |
posted_time | String | Time when the tweet was posted, in ISO 8601 format |
content | String | Content of the tweet as text |
hashtags | Array | Hashtags present in the tweet, if any |
mentions | Array | Mentions present in the tweet, if any |
images | Array | Image links, if images are present in the tweet |
videos | Array | Video links, if videos are present in the tweet |
tweet_url | String | URL of the tweet |
link | String | External website link, if one is present inside the tweet |
Using scraper with proxy
Just pass the proxy argument to the function.
from twitter_scraper_selenium import scrap_keyword
scrap_keyword(keyword="#india", browser="firefox", tweets_count=10, output_format="csv", filename="india",
              proxy="66.115.38.247:5678")  # proxy in IP:PORT format
Proxy that requires authentication:
from twitter_scraper_selenium import scrap_profile
microsoft_data = scrap_profile(twitter_username="microsoft", browser="chrome", tweets_count=10, output_format="json",
                               proxy="sajid:pass123@66.115.38.247:5678")  # username:password@IP:PORT
print(microsoft_data)
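If the proxy credentials are kept in environment variables, the username:password@host:port string can be assembled at runtime. A small sketch; the variable names PROXY_USER, PROXY_PASS, PROXY_HOST, and PROXY_PORT are hypothetical:

import os
from twitter_scraper_selenium import scrap_profile

# Build the username:password@host:port proxy string from (hypothetical) environment variables
proxy = "{user}:{password}@{host}:{port}".format(
    user=os.environ["PROXY_USER"],
    password=os.environ["PROXY_PASS"],
    host=os.environ["PROXY_HOST"],
    port=os.environ["PROXY_PORT"],
)

data = scrap_profile(twitter_username="microsoft", browser="firefox",
                     tweets_count=10, output_format="json", proxy=proxy)
print(data)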
Privacy
This scraper only scrapes public data available to an unauthenticated user and does not have the capability to scrape anything private.
LICENSE
MIT