
Professional-grade Facebook data extraction tool with Nuitka compilation support


Mashrur Facebook Scraper - Ultra Simple

Made by Mashrur Rahman

Ultra Simple - Just 5 Parameters

from mashrur_facebook_scraper import scrape_facebook_posts

posts = scrape_facebook_posts("email", "password", "page_url", num_posts, "output_file.json")

Installation

pip install mashrur-facebook-scraper

Usage

Create your scraper file:

from mashrur_facebook_scraper import scrape_facebook_posts

posts = scrape_facebook_posts(
    "your_email@example.com",
    "your_password",
    "https://www.facebook.com/indianexpress",
    5,
    "my_data.json"
)

print(f"Scraped {len(posts)} posts!")

Function Parameters

Parameter          Type   Description
email              str    Your Facebook email address
password           str    Your Facebook password
page_url           str    URL of the Facebook page to scrape
num_posts          int    Number of posts to scrape
output_filename    str    Name of the output JSON file

Output Format

The scraper generates clean JSON data with the following structure:

[
  {
    "post_id": "123456789",
    "url": "https://facebook.com/posts/123456789",
    "content": "Post content text here...",
    "user_username_raw": "Page Name",
    "date_posted": "2025-01-15T10:30:00Z",
    "likes": 1250,
    "num_comments": 45,
    "num_shares": 12,
    "media_urls": ["https://facebook.com/image1.jpg"],
    "hashtags": ["#example", "#hashtag"],
    "post_type": "Post",
    "is_sponsored": false
  }
]
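Once the scraper has written its JSON file, the records can be post-processed with the standard library alone. A minimal sketch that aggregates engagement counts, assuming only the field names shown in the sample above (`likes`, `num_comments`, `num_shares`); `summarize_posts` is an illustrative helper, not part of the package:

```python
import json

def summarize_posts(posts):
    """Aggregate basic engagement stats from a list of scraped post dicts."""
    return {
        "posts": len(posts),
        "likes": sum(p.get("likes", 0) for p in posts),
        "comments": sum(p.get("num_comments", 0) for p in posts),
        "shares": sum(p.get("num_shares", 0) for p in posts),
    }

# A sample record in the output format shown above
sample = json.loads(
    '[{"post_id": "123456789", "likes": 1250, "num_comments": 45, "num_shares": 12}]'
)
print(summarize_posts(sample))
# {'posts': 1, 'likes': 1250, 'comments': 45, 'shares': 12}
```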

Advanced Examples

Batch Processing Multiple Pages

from mashrur_facebook_scraper import scrape_facebook_posts

pages = [
    "https://www.facebook.com/cnn",
    "https://www.facebook.com/bbc",
    "https://www.facebook.com/reuters"
]

for page in pages:
    page_name = page.split('/')[-1]
    filename = f"{page_name}_posts.json"

    posts = scrape_facebook_posts(
        email="your_email@example.com",
        password="your_password",
        page_url=page,
        num_posts=20,
        output_filename=filename
    )

    print(f"Scraped {len(posts)} posts from {page_name}")
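The `page.split('/')[-1]` trick above produces an empty name when a URL ends with a trailing slash. A slightly more defensive filename helper, sketched here with `urllib.parse` (`output_filename_for` is an assumption for illustration, not part of the package's API):

```python
from urllib.parse import urlparse

def output_filename_for(page_url):
    """Derive a JSON filename from the last non-empty path segment of a page URL."""
    path = urlparse(page_url).path
    segments = [s for s in path.split("/") if s]
    page_name = segments[-1] if segments else "page"
    return f"{page_name}_posts.json"

print(output_filename_for("https://www.facebook.com/cnn/"))  # cnn_posts.json
```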

Error Handling

from mashrur_facebook_scraper import scrape_facebook_posts

try:
    posts = scrape_facebook_posts(
        email="your_email@example.com",
        password="your_password",
        page_url="https://www.facebook.com/invalidpage",
        num_posts=10,
        output_filename="invalid_page.json"
    )
except ValueError as e:
    print(f"Input error: {e}")
except Exception as e:
    print(f"Scraping error: {e}")
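Transient failures (timeouts, login hiccups) often clear up on a second attempt. One pattern is to wrap the scrape call in a generic retry helper; the sketch below wraps any zero-argument callable and demonstrates it with a stub rather than the scraper itself, since retry behavior is not something the package documents:

```python
import time

def with_retries(fn, attempts=3, delay=1.0):
    """Call fn(), retrying up to `attempts` times with a fixed delay between tries."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as e:
            last_error = e
            if attempt < attempts:
                time.sleep(delay)
    raise last_error

# Demo with a stub that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("temporary failure")
    return "ok"

print(with_retries(flaky, attempts=3, delay=0))  # ok
```

In real use, `fn` would be a `lambda` closing over the scrape arguments, e.g. `with_retries(lambda: scrape_facebook_posts(...))`.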

Requirements

  • Python 3.7 or higher
  • Chrome browser installed
  • Valid Facebook credentials
  • Stable internet connection
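The requirements above can be checked programmatically before starting a run. A minimal preflight sketch; the Chrome executable names are platform-dependent assumptions, and this only checks what is discoverable from Python (it cannot validate credentials or connectivity):

```python
import shutil
import sys

def preflight(min_version=(3, 7), chrome_names=("google-chrome", "chrome", "chromium")):
    """Return a list of human-readable problems; an empty list means the checks passed."""
    problems = []
    if sys.version_info < min_version:
        problems.append(f"Python {min_version[0]}.{min_version[1]}+ required")
    # shutil.which searches PATH for the first matching executable name
    if not any(shutil.which(name) for name in chrome_names):
        problems.append("Chrome browser not found on PATH")
    return problems

for problem in preflight():
    print(f"WARNING: {problem}")
```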


License

Proprietary - All rights reserved to Mashrur Rahman
