Project description
twscrape
Twitter GraphQL and Search API implementation with SNScrape data models.
Install
pip install twscrape
Or development version:
pip install git+https://github.com/vladkens/twscrape.git
Features
- Support for both the Search & GraphQL Twitter API
- Async/await functions (multiple scrapers can run in parallel; see the sketch after this list)
- Login flow (including retrieving the verification code from email)
- Saving/restoring account sessions
- Raw Twitter API responses & SNScrape models
- Automatic account switching to smooth out Twitter API rate limits
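Because every method is async, several scrapers can share one pool and run concurrently. A minimal sketch of that pattern, assuming accounts are already added and logged in as shown in Usage below (the queries and user id are illustrative, reused from that example):

import asyncio
from twscrape import AccountsPool, API, gather

async def scrape_concurrently():
    api = API(AccountsPool())  # accounts assumed already added & logged in

    # three independent queries in parallel; the pool spreads requests
    # across accounts and switches when one hits a rate limit
    searches, tweets, followers = await asyncio.gather(
        gather(api.search("python", limit=20)),
        gather(api.user_tweets(2244994945, limit=20)),
        gather(api.followers(2244994945, limit=20)),
    )
    print(len(searches), len(tweets), len(followers))

asyncio.run(scrape_concurrently())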
Usage
import asyncio
from twscrape import AccountsPool, API, gather
from twscrape.logger import set_log_level

async def main():
    pool = AccountsPool()  # or AccountsPool("path-to.db") - default is `accounts.db`
    await pool.add_account("user1", "pass1", "user1@example.com", "email_pass1")
    await pool.add_account("user2", "pass2", "user2@example.com", "email_pass2")

    # log in to all new accounts
    await pool.login_all()

    api = API(pool)

    # search api (latest tab)
    await gather(api.search("elon musk", limit=20))  # list[Tweet]

    # graphql api
    tweet_id, user_id, user_login = 20, 2244994945, "twitterdev"

    await api.tweet_details(tweet_id)  # Tweet
    await gather(api.retweeters(tweet_id, limit=20))  # list[User]
    await gather(api.favoriters(tweet_id, limit=20))  # list[User]

    await api.user_by_id(user_id)  # User
    await api.user_by_login(user_login)  # User
    await gather(api.followers(user_id, limit=20))  # list[User]
    await gather(api.following(user_id, limit=20))  # list[User]
    await gather(api.user_tweets(user_id, limit=20))  # list[Tweet]
    await gather(api.user_tweets_and_replies(user_id, limit=20))  # list[Tweet]

    # note 1: `limit` is optional; the default is -1 (no limit)
    # note 2: all methods have a `raw` version, e.g.:

    async for tweet in api.search("elon musk"):
        print(tweet.id, tweet.user.username, tweet.rawContent)  # tweet is a `Tweet` object

    async for rep in api.search_raw("elon musk"):
        print(rep.status_code, rep.json())  # rep is an `httpx.Response` object

    # change log level (default is INFO)
    set_log_level("DEBUG")

    # Tweet & User models can be converted to a regular dict or JSON, e.g.:
    doc = await api.user_by_id(user_id)  # User
    doc.dict()  # -> python dict
    doc.json()  # -> json string

if __name__ == "__main__":
    asyncio.run(main())
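Since each model serializes via `.json()`, scraped tweets can be streamed straight to a JSON Lines file. A minimal sketch assuming the API shown above; the query and the `tweets.jsonl` filename are illustrative:

import asyncio
from twscrape import AccountsPool, API

async def dump_tweets():
    api = API(AccountsPool())  # accounts assumed already set up
    with open("tweets.jsonl", "w") as fp:
        async for tweet in api.search("elon musk", limit=100):
            fp.write(tweet.json() + "\n")  # one Tweet document per line

asyncio.run(dump_tweets())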
Note on rate limits:
- Search API – 250 requests per account / 15 minutes
- GraphQL API – 500 requests per account per operation / 15 minutes

For example, a pool of four accounts gives up to 1,000 search requests per 15-minute window (4 × 250), since twscrape switches accounts automatically as each one hits its limit.
Models
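The Tweet and User classes follow the SNScrape data models. A minimal sketch of the fields already used in the Usage example above (only attributes shown there are assumed):

import asyncio
from twscrape import AccountsPool, API

async def show_models():
    api = API(AccountsPool())  # accounts assumed already set up
    tweet = await api.tweet_details(20)  # Tweet
    print(tweet.id, tweet.user.username, tweet.rawContent)
    user = await api.user_by_login("twitterdev")  # User
    print(user.dict())  # plain python dict
    print(user.json())  # JSON string

asyncio.run(show_models())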
Related
- SNScrape – a scraper for social networking services (SNS)
Project details
Download files
Source Distribution
twscrape-0.1.1.tar.gz (129.6 kB)
Built Distribution
twscrape-0.1.1-py3-none-any.whl (18.0 kB)
File details
Details for the file twscrape-0.1.1.tar.gz.
File metadata
- Download URL: twscrape-0.1.1.tar.gz
- Upload date:
- Size: 129.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4edd1877c5e03cf98f1c628b3b3950ba782c1a1889b3b1847b577e9299076fbc |
| MD5 | ef7dc17fb8d17d2642841bba6c0fcfb7 |
| BLAKE2b-256 | da654dcc12c4ad03a7f80bff970f5300fe9e6296d3ce3b3c5d77b30a2c00c765 |
File details
Details for the file twscrape-0.1.1-py3-none-any.whl.
File metadata
- Download URL: twscrape-0.1.1-py3-none-any.whl
- Upload date:
- Size: 18.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.11.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7055f948bace380adcd5ba20313e34f714cefb7e6dc523da2044ec75b10fbc51 |
| MD5 | 104ca27aedf5d9b88cf36ee2e52f3c06 |
| BLAKE2b-256 | 4482bf10d939a3e360486488a51c8bfebbf5fc32f641e24713927c943cbd7e7c |