twscrape

Twitter GraphQL and Search API implementation with SNScrape data models.

Install

pip install twscrape

Or development version:

pip install git+https://github.com/vladkens/twscrape.git

Features

  • Supports both the Search and GraphQL Twitter APIs
  • Async/await functions (multiple scrapers can run in parallel)
  • Login flow (including receiving the verification code by email)
  • Saving/restoring account sessions
  • Raw Twitter API responses & SNScrape models
  • Automatic account switching to smooth over Twitter API rate limits

Usage

Since this project works through the authorized Twitter API, accounts need to be added. You can register and add accounts yourself, or find providers that sell ready-made accounts.

The email password is needed so the login confirmation code can be retrieved from the account's mailbox automatically (via the IMAP protocol).

Data models:

import asyncio
from twscrape import AccountsPool, API, gather
from twscrape.logger import set_log_level

async def main():
    pool = AccountsPool()  # or AccountsPool("path-to.db") - default is `accounts.db` 
    await pool.add_account("user1", "pass1", "user1@example.com", "email_pass1")
    await pool.add_account("user2", "pass2", "user2@example.com", "email_pass2")

    # log in to all new accounts
    await pool.login_all()

    api = API(pool)

    # search api (latest tab)
    await gather(api.search("elon musk", limit=20))  # list[Tweet]

    # graphql api
    tweet_id, user_id, user_login = 20, 2244994945, "twitterdev"

    await api.tweet_details(tweet_id)  # Tweet
    await gather(api.retweeters(tweet_id, limit=20))  # list[User]
    await gather(api.favoriters(tweet_id, limit=20))  # list[User]

    await api.user_by_id(user_id)  # User
    await api.user_by_login(user_login)  # User
    await gather(api.followers(user_id, limit=20))  # list[User]
    await gather(api.following(user_id, limit=20))  # list[User]
    await gather(api.user_tweets(user_id, limit=20))  # list[Tweet]
    await gather(api.user_tweets_and_replies(user_id, limit=20))  # list[Tweet]

    # note 1: limit is optional, default is -1 (no limit)
    # note 2: all methods have `raw` version e.g.:

    async for tweet in api.search("elon musk"):
        print(tweet.id, tweet.user.username, tweet.rawContent)  # tweet is `Tweet` object

    async for rep in api.search_raw("elon musk"):
        print(rep.status_code, rep.json())  # rep is `httpx.Response` object

    # change log level, default info
    set_log_level("DEBUG")

    # Tweet & User model can be converted to regular dict or json, e.g.:
    doc = await api.user_by_id(user_id)  # User
    doc.dict()  # -> python dict
    doc.json()  # -> json string

if __name__ == "__main__":
    asyncio.run(main())

CLI

Get help on CLI commands

# show all commands
twscrape

# help on a specific command
twscrape search --help

Add accounts & login

First add accounts from file:

# twscrape add_accounts <file_path> <line_format>
# line_format should contain the "username", "password", "email", "email_password" tokens
# the token delimiter must match the one used in the file
twscrape add_accounts accounts.txt username:password:email:email_password
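The `line_format` argument is just a template naming the token order; splitting each file line with it amounts to something like the following sketch (the function name is hypothetical, not the CLI's internal code):

```python
# Illustration of how <line_format> maps a delimited line to named fields.
def parse_account_line(line: str, line_format: str, delimiter: str = ":") -> dict:
    keys = line_format.split(delimiter)
    values = line.split(delimiter)
    if len(keys) != len(values):
        raise ValueError(f"expected {len(keys)} fields, got {len(values)}")
    return dict(zip(keys, values))

row = parse_account_line(
    "user1:pass1:user1@example.com:email_pass1",
    "username:password:email:email_password",
)
print(row["username"], row["email"])  # user1 user1@example.com
```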

Then call login:

twscrape login_accounts

Accounts and their sessions will be saved, so they can be reused for future requests.

Get list of accounts and their statuses

twscrape accounts

# Output:
# ───────────────────────────────────────────────────────────────────────────────────
# username  logged_in  active  last_used            total_req  error_msg
# ───────────────────────────────────────────────────────────────────────────────────
# user1     True       True    2023-05-20 03:20:40  100        None
# user2     True       True    2023-05-20 03:25:45  120        None
# user3     False      False   None                 120        Login error

Use different accounts file

Useful if using a different set of accounts for different actions

twscrape --db test-accounts.db <command>

Search commands

twscrape search "QUERY" --limit=20
twscrape tweet_details TWEET_ID
twscrape retweeters TWEET_ID --limit=20
twscrape favoriters TWEET_ID --limit=20
twscrape user_by_id USER_ID
twscrape user_by_login USERNAME
twscrape followers USER_ID --limit=20
twscrape following USER_ID --limit=20
twscrape user_tweets USER_ID --limit=20
twscrape user_tweets_and_replies USER_ID --limit=20

The default output goes to the console (stdout), one document per line, so it can be redirected to a file.

twscrape search "elon musk lang:es" --limit=20 > data.txt
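Because the file holds one JSON document per line (JSONL), it can be read back line by line. A small sketch, using an in-memory sample in place of `data.txt` (the `id` and `rawContent` field names follow the SNScrape model used earlier):

```python
import json

# One JSON document per line, as produced by the CLI redirect above.
sample = '{"id": 1, "rawContent": "hola"}\n{"id": 2, "rawContent": "adios"}\n'

tweets = [json.loads(line) for line in sample.splitlines() if line.strip()]
print(len(tweets), tweets[0]["rawContent"])  # 2 hola
```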

By default, parsed data is returned. The original API responses can be retrieved with --raw:

twscrape search "elon musk lang:es" --limit=20 --raw

Limitations

API rate limits (per account):

  • Search API – 250 req / 15 min
  • GraphQL API – has individual rate limits per operation (in most cases this is 500 req / 15 min)

API data limits:

  • user_tweets & user_tweets_and_replies – can return at most ~3200 tweets
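Since the rate limits are per account and the pool rotates accounts automatically, overall throughput scales with pool size. A back-of-the-envelope sketch (the function is illustrative, not part of twscrape):

```python
# Rough capacity estimate: per-account limit times accounts, per hour.
def search_capacity_per_hour(accounts: int, per_window: int = 250, window_min: int = 15) -> int:
    windows_per_hour = 60 // window_min  # four 15-minute windows per hour
    return accounts * per_window * windows_per_hour

print(search_capacity_per_hour(3))  # 3000 search requests/hour with 3 accounts
```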

