
Automatically scrape OnlyFans

Project description

This is a fork of onlyfans-scraper with additional features and fixes.

What should work

  • scraping options such as downloading content, liking, and unliking posts

Other options might not work currently. If your auth details are not correct, the latest version will force you to enter a proper configuration.

Notes

Note: the guide is still a little incomplete, so it might not be up to date with the changes I have made. I hope to go through it and make the necessary changes soon.

The new db branch has some changes that will be coming to the main branch soon: https://github.com/excludedBittern8/ofscraper/tree/db

A feature to speed up repeated scraping of models will be added.

DISCLAIMERS:

  1. This tool is not affiliated, associated, or partnered with OnlyFans in any way. We are not authorized, endorsed, or sponsored by OnlyFans. All OnlyFans trademarks remain the property of Fenix International Limited.
  2. This is a theoretical program only and is for educational purposes. If you choose to use it, then it may or may not work. You solely accept full responsibility and indemnify the creator, hosts, contributors, and all other involved persons from any and all responsibility.
    Description

    A command-line program to download media and to process other batch operations, such as liking and unliking posts.


    Installation

    Python 3.9 or 3.10 is recommended.

    Windows:

    pip install ofscraper
    

    or

    pip install git+https://github.com/excludedBittern8/ofscraper
    

    If you're on macOS/Linux, then do this instead:

    pip3 install ofscraper
    

    or

    pip3 install git+https://github.com/excludedBittern8/ofscraper
    

    Setup

    Before you can fully use it, you need to fill out some fields in an auth.json file. This file will be created for you when you run the program for the first time.

    These are the fields:

    {
        "auth": {
            "app-token": "",
            "sess": "",
            "auth_id": "",
            "auth_uniq_": "",
            "user_agent": "",
            "x-bc": ""
        }
    }
    

    It's really not that bad. I'll show you in the next sections how to get these bits of info.

    Step One: Creating the 'auth.json' File

    You first need to run the program in order for the auth.json file to be created. To run it, simply type ofscraper in your terminal and hit enter. Because you don't have an auth.json file, the program will create one for you and then ask you to enter some information. Now we need to get that information.
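
    As a rough picture of that first-run behavior, here is a minimal sketch; the helper name and file location are assumptions for illustration, not ofscraper's actual code:

    # A minimal sketch of the first-run behavior described above; the
    # helper name and file location are assumptions, not ofscraper's code.
    import json
    from pathlib import Path

    AUTH_FIELDS = ["app-token", "sess", "auth_id", "auth_uid_", "user_agent", "x-bc"]

    def ensure_auth_file(path="auth.json"):
        p = Path(path)
        if not p.exists():
            # Write the empty template shown in the Setup section above
            template = {"auth": {field: "" for field in AUTH_FIELDS}}
            p.write_text(json.dumps(template, indent=4))
        return json.loads(p.read_text())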

    Step Two: Getting Your Auth Info

    If you've already used DIGITALCRIMINAL's OnlyFans script, you can simply copy and paste the auth information from there to here.

    Go to your notification area on OnlyFans. Once you're there, open your browser's developer tools. If you don't know how to do that, consult the following chart:

    Operating System    Keys
    macOS               alt + cmd + i
    Windows             ctrl + shift + i
    Linux               ctrl + shift + i

    Once you have your browser's developer tools open:

    1. Click on the Network tab at the top of the browser tools.
    2. Click on the XHR sub-tab inside of the Network tab.
    3. Refresh the page while you have your browser's developer tools open. After the page reloads, you should see a section titled init appear.
    4. Click on init; a large sidebar should appear. Make sure you're in the Headers section.

    After that, scroll down until you see a subsection called Request Headers. You should then see three important fields inside of the Request Headers subsection: Cookie, User-Agent, and x-bc.

    Inside of the Cookie field, you will see a couple of important bits:

    • sess=
    • auth_id=
    • auth_uid_=

    Your auth_uid_ will only appear if you have 2FA (two-factor authentication) enabled. Also, keep in mind that your auth_uid_ will have numbers after the final underscore and before the equal sign (that's your auth_id).

    You need everything after the equal sign and everything before the semi-colon for all of those bits.
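
    As a concrete illustration, here is a minimal Python sketch of pulling those values out of a raw Cookie header. This is not part of ofscraper, and the cookie string below is a made-up example:

    # Split a Cookie header into name/value pairs. The header below is a
    # made-up example, not real auth data.
    raw_cookie = "sess=abc123; auth_id=456789; auth_uid_456789=xyz987"

    cookies = {}
    for part in raw_cookie.split(";"):
        name, _, value = part.strip().partition("=")
        # keep everything after the equal sign and before the semi-colon
        cookies[name] = value

    print(cookies["sess"])     # abc123
    print(cookies["auth_id"])  # 456789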

    Once you've copied the value for your sess cookie, go back to the program, paste it in, and hit enter. Now go back to your browser, copy the auth_id value, and paste it into the program and hit enter. Then go back to your browser, copy the auth_uid_ value, and paste it into the program and hit enter (leave this blank if you don't use 2FA!!!).

    Once you do that, the program will ask for your user agent. You should be able to find your user agent in a field called User-Agent below the Cookie field. Copy it and paste it into the program and hit enter.

    After it asks for your user agent, it will ask for your x-bc token. You should also be able to find this in the Request Headers section.

    You're all set and you can now use ofscraper.

    Usage

    Whenever you want to run the program, all you need to do is type ofscraper in your terminal:

    ofscraper
    

    That's it. It's that simple.

    Once the program launches, all you need to do is follow the on-screen directions. The first time you run it, it will ask you to fill out your auth.json file (directions for that in the section above).

    You will need to use your arrow keys to select an option.

    If you choose to download content, you will have three options: having a list of all of your subscriptions printed, manually entering a username, or scraping all accounts that you're subscribed to.

    Liking/Unliking Posts

    You can also use this program to like all of a user's posts or remove your likes from their posts. Just select either option during the main menu screen and enter their username.

    This program will like posts at a rate of around one post per second. This rate may be reduced in the future, but OnlyFans is strict about how quickly you can like posts.

    At the moment, you can only like ~1000 posts per day. That's not our restriction; that's OnlyFans's restriction. So choose wisely.
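
    To make the throttling concrete, here is a minimal sketch; like_one is a hypothetical callable for liking a single post, and this is an illustration rather than ofscraper's actual code:

    # A minimal sketch of client-side throttling: roughly one like per
    # second, stopping at the daily cap described above. `like_one` is a
    # hypothetical callable, not part of ofscraper's real API.
    import time

    DAILY_LIMIT = 1000    # OnlyFans-imposed daily cap
    DELAY_SECONDS = 1.0   # around one post per second

    def like_posts(post_ids, like_one):
        for count, post_id in enumerate(post_ids, start=1):
            if count > DAILY_LIMIT:
                print("Daily like limit reached; stopping.")
                break
            like_one(post_id)
            time.sleep(DELAY_SECONDS)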

    Migrating Databases

    If you've used DIGITALCRIMINAL's script, you might've liked how his script prevented duplicates from being downloaded each time you ran it on a user. This is done through database files.

    This program also uses a database file to prevent duplicates. In order to make it easier for users to transition from his program to this one, this program will migrate the data from those databases for you (only IDs and filenames).

    In order to use it, select the last option (Migrate an old database) and enter the path to the directory that contains the database files (Posts.db, Archived.db, etc.).

    For example, if you have a directory that looks like the following:

    Users
    |__ home
        |__ .sites
            |__ OnlyFans
                |__ melodyjai
                    |__ Metadata
                        |__ Archived.db
                        |__ Messages.db
                        |__ Posts.db
    

    Then the path you enter should be /Users/home/.sites/OnlyFans/melodyjai/Metadata. The program will detect the .db files in the directory and then ask you for the username to whom those .db files belong. The program will then move the relevant data over.
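
    For a rough picture of what that migration step does, here is a minimal sketch, assuming a hypothetical medias table with an id column; the real .db schemas and ofscraper's actual migration code may differ:

    # A minimal sketch of locating the .db files and collecting stored IDs.
    # The table and column names are assumptions for illustration only.
    import sqlite3
    from pathlib import Path

    def read_ids(metadata_dir):
        ids = set()
        for db_path in Path(metadata_dir).glob("*.db"):
            conn = sqlite3.connect(db_path)
            try:
                rows = conn.execute("SELECT id FROM medias")  # assumed schema
                ids.update(row[0] for row in rows)
            finally:
                conn.close()
        return ids

    # e.g. read_ids("/Users/home/.sites/OnlyFans/melodyjai/Metadata")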

    Bugs/Issues/Suggestions

    If you run into trouble, try the Discord; careful though, we do bite. If you open an issue for any of the following, you will be banned from opening future issues. These are not issues; they are operator error.

    1. Status Down - This means that your auth details are bad; keep trying.
    2. ofscraper command not found - This means that the install directory has not been added to your PATH. You will have to look this up on your own with Google.
    3. 404 page not found, or any other 404 error - The post or profile can't be found. The user has been suspended or deleted, or the post was removed and isn't completely deleted yet. There is no fix for this other than unsubscribing from the user. Do not open an issue for it.

    Honestly, unless you're one of my subscribers or support the project in some form, your suggestions are generally ignored.



Release history

This version: 1.70

Download files

Download the file for your platform.

Source Distribution

ofscraper-1.70.tar.gz (36.2 kB)


Built Distribution

ofscraper-1.70-py3-none-any.whl (44.7 kB)


File details

Details for the file ofscraper-1.70.tar.gz.

File metadata

  • Download URL: ofscraper-1.70.tar.gz
  • Upload date:
  • Size: 36.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.1 CPython/3.10.6 Linux/6.1.14-1-liquorix-amd64

File hashes

Hashes for ofscraper-1.70.tar.gz

Algorithm    Hash digest
SHA256       36258cec21331aca2ea09e0f05b7aa4c866e361cd604d4eddc3a05bf7d8e2148
MD5          5e2f5d808b4a14020fd60050645e9abe
BLAKE2b-256  38dea252c4b0c797b0868b50e45d8bb4ad9e09abc0e2fba186ef2aedb451710b

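If you want to check a downloaded file against the digests listed above, here is a minimal sketch using Python's standard hashlib module; the file name is assumed to match the listing:

    # Compute a file's SHA256 digest for comparison with the published value.
    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # sha256_of("ofscraper-1.70.tar.gz") should equal the SHA256 value above.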

File details

Details for the file ofscraper-1.70-py3-none-any.whl.

File metadata

  • Download URL: ofscraper-1.70-py3-none-any.whl
  • Upload date:
  • Size: 44.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.1 CPython/3.10.6 Linux/6.1.14-1-liquorix-amd64

File hashes

Hashes for ofscraper-1.70-py3-none-any.whl

Algorithm    Hash digest
SHA256       d0dfe86bdc1b8ea58aea6d72e64ad7fc785a273ef3fc13d23c0a1ef5bd13f447
MD5          f669f40b66200eba75525e7d22619ed1
BLAKE2b-256  bb3d932e29a2e27525a270e9dcf359119ac9f34c7f53fce31851d86e537e22ed

