
automatically scrape onlyfans

Project description

A fork of onlyfans-scraper, optimized to be more feature-complete with DC's OnlyFans script. In fact, with the right settings, transitioning between the two scripts should be an easy enough process.

In addition, there are numerous filtering features to control exactly which types of content you want to scrape: https://github.com/datawhores/ofscraper/blob/main/CHANGES.md

DISCLAIMERS:

  1. This tool is not affiliated, associated, or partnered with OnlyFans in any way. We are not authorized, endorsed, or sponsored by OnlyFans. All OnlyFans trademarks remain the property of Fenix International Limited.
  2. This is a theoretical program only and is for educational purposes. If you choose to use it, it may or may not work. You solely accept full responsibility and indemnify the creator, hosts, contributors, and all other involved persons from any and all responsibility.
    1.81 Changes:

    • sync key names across config (old keys will still work)
    • change username to model_username in metadata
    • change site_name to sitename in metadata
    • remove --purchased args
    • add purchase and pinned as post types
    • add letter-count argument
      • This counts letters rather than words for textlength
    • added testing (see the example after this list)
      • to install, run poetry install --with test
      • run with poetry run pytest
    • print path for each file
    • responsetype mapping in config
      • This allows, for example, keeping all messages or paid posts in the same folder, while also allowing long-time users to keep their current structure
    • added a Post class to serve as a 'single source of truth' for all responsetypes
    • removed unused functions like download_paid that are no longer needed
    • prompts and config functions have been revamped
    • fixed bugs caused by duplicate uploads on pages
    • additional bug fixes
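
    A minimal sketch of running the new test suite, using the commands listed above (it assumes poetry is already installed):

    # requires poetry; installs the test dependencies, then runs the tests
    poetry install --with test
    poetry run pytest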

    Description:

    Command-line program to download media and to perform other batch operations, such as liking and unliking posts.


    Installation

    Python 3.9 or 3.10 is recommended.

    Windows:

    pip install ofscraper
    

    or

    pip install git+https://github.com/datawhores/ofscraper.git 
    

    macOS/Linux

    pip3 install ofscraper
    

    or

    pip3 install git+https://github.com/datawhores/ofscraper.git 
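
    If your default Python is not one of the recommended versions, a minimal sketch of installing into an isolated environment (assuming python3.10 is on your PATH):

    # the environment name ofscraper-env is arbitrary
    python3.10 -m venv ofscraper-env
    source ofscraper-env/bin/activate
    pip install ofscraper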
    

    Authentication

    You'll need to retrieve your auth information:

    https://github.com/datawhores/ofscraper/wiki/Auth

    Usage

    Whenever you want to run the program, all you need to do is type ofscraper in your terminal.

    Basic usage is just to run the command below:

    ofscraper
    


    Select "Download content from a user" is all your need to get started with downloading content

    Liking/Unliking

    It is also possible to batch like or unlike posts by choosing the appropriate option in the menu. Note that there is a limit (currently 1000) on how many posts you can like per day.


    Selecting specific users

    The fuzzy search system can be a little confusing; see:

    https://github.com/datawhores/ofscraper/wiki/Fuzzy-Search

    Other menu options

    See: https://github.com/datawhores/ofscraper/wiki/Menu-Options

    Command line

    While the menu is easy to use and convenient, you may want more automation.

    The best way to do this is through the command-line system. This allows you to skip the menu and, for example, scrape a provided list of accounts.

    https://github.com/datawhores/ofscraper/wiki/command-line-args
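
    As a purely illustrative sketch (the flag shown is an assumption; check the command-line args wiki linked above for the actual options), a non-interactive run targeting a specific account might look something like:

    # illustrative only: flag names may differ; see the command-line args wiki
    ofscraper --username model1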

    Customization

    https://github.com/datawhores/ofscraper/wiki/Customizing-save-path
    https://github.com/datawhores/ofscraper/wiki/Config-Options

    Issues

    Open an issue in this repo, or mention your issue in the Discord:

    https://discord.gg/wN7uxEVHRK

    Common

    Status Down

    This typically means that your auth information is not correct, or OnlyFans has signed you out.

    404 Issue

    This could mean that the content you are trying to scrape is no longer present. It can also indicate that the model has deleted their account and it is no longer accessible on the platform.

    Request taking a long time

    If a request fails, ofscraper will pause and try again a few times. This can lead to certain runs taking longer at points.

    Known Limitations

    • 1000 likes is the max per day

    Migrating from DC script

    You will need to change the settings so that the metadata option is compatible with your current folders. Additionally, you might want to set save_path, dir_path, and filename so they produce similar output paths.

    The metadata path from DC's script is used for the duplicate check, so as long as you set the right path, files won't be downloaded a second time.

    https://github.com/datawhores/ofscraper/wiki/Migrating-from-DC-script
    https://github.com/datawhores/ofscraper/wiki/Config-Options
    https://github.com/datawhores/ofscraper/wiki/Customizing-save-path

    Ask in the Discord or open an issue if you need help with what to change to accomplish this.

    Discord

    https://discord.gg/wN7uxEVHRK

    Support

    buymeacoffee.com/datawhores

    BTC: bc1qcnnetrgmtr86rmqvukl2e24f5upghpcsqqsy87

    ETH: 0x1d427a6E336edF6D95e589C16cb740A1372F6609


Release history

This version

1.81

Download files

Download the file for your platform.

Source Distribution

ofscraper-1.81.tar.gz (36.2 kB)


Built Distribution

ofscraper-1.81-py3-none-any.whl (45.3 kB)


File details

Details for the file ofscraper-1.81.tar.gz.

File metadata

  • Download URL: ofscraper-1.81.tar.gz
  • Upload date:
  • Size: 36.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.1 CPython/3.10.6 Linux/6.1.14-1-liquorix-amd64

File hashes

Hashes for ofscraper-1.81.tar.gz

  • SHA256: 6a0f395a5f1d2d0728c483c8acc82b4add88d48871501d4c166d8ad5f25c103e
  • MD5: 3e8a322c07a31337e09b59c995e4a6b5
  • BLAKE2b-256: 1acee60263e5a61c4a6d8654ca4a90a7a4b7c269c8a5f5d1fac0cc34e40d8bba

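As a quick sketch of verifying a downloaded file against the hashes above (assuming pip and sha256sum are available on your system), you could fetch the sdist without installing it and compare digests:

# on macOS use: shasum -a 256 ofscraper-1.81.tar.gz
pip download ofscraper==1.81 --no-deps --no-binary :all: -d .
sha256sum ofscraper-1.81.tar.gz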

File details

Details for the file ofscraper-1.81-py3-none-any.whl.

File metadata

  • Download URL: ofscraper-1.81-py3-none-any.whl
  • Upload date:
  • Size: 45.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.1 CPython/3.10.6 Linux/6.1.14-1-liquorix-amd64

File hashes

Hashes for ofscraper-1.81-py3-none-any.whl

  • SHA256: f3abffd7af55fe206ad98f7bfd79423c70844b36d68eaeb8e850724b27950a98
  • MD5: 1e329ae901943baf0913dabc4f9e15ce
  • BLAKE2b-256: 7dc856c0311694be859aa150151fcdc58e347e83417b513fd469fbe1e70ac51b

