
Professional-grade Facebook data extraction tool - Cython compiled for code protection

Project description

Mashrur Facebook Scraper - Ultra Simple

Made by Mashrur Rahman

Ultra Simple - Just 5 Parameters

from mashrur_facebook_scraper import scrape_facebook_posts

posts = scrape_facebook_posts("email", "password", "page_url", num_posts, "output_file.json")

Installation

pip install mashrur-facebook-scraper

Usage

Create your scraper file:

from mashrur_facebook_scraper import scrape_facebook_posts

posts = scrape_facebook_posts(
    "your_email@example.com",
    "your_password",
    "https://www.facebook.com/indianexpress",
    5,
    "my_data.json"
)

print(f"Scraped {len(posts)} posts!")

Function Parameters

Parameter        Type  Description
email            str   Your Facebook email
password         str   Your Facebook password
page_url         str   Facebook page URL
num_posts        int   Number of posts to scrape
output_filename  str   Output JSON filename

Output Format

The scraper generates clean JSON data with the following structure:

[
  {
    "post_id": "123456789",
    "url": "https://facebook.com/posts/123456789",
    "content": "Post content text here...",
    "user_username_raw": "Page Name",
    "date_posted": "2025-01-15T10:30:00Z",
    "likes": 1250,
    "num_comments": 45,
    "num_shares": 12,
    "media_urls": ["https://facebook.com/image1.jpg"],
    "hashtags": ["#example", "#hashtag"],
    "post_type": "Post",
    "is_sponsored": false
  }
]
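
Once written, the file can be post-processed with Python's standard json module. A short sketch that loads records in the documented schema and totals engagement (the sample data below is illustrative, not real scraper output):

```python
import json

# A sample record matching the documented output schema (values are illustrative).
sample = '''[
  {
    "post_id": "123456789",
    "url": "https://facebook.com/posts/123456789",
    "content": "Post content text here...",
    "user_username_raw": "Page Name",
    "date_posted": "2025-01-15T10:30:00Z",
    "likes": 1250,
    "num_comments": 45,
    "num_shares": 12,
    "media_urls": ["https://facebook.com/image1.jpg"],
    "hashtags": ["#example", "#hashtag"],
    "post_type": "Post",
    "is_sponsored": false
  }
]'''

posts = json.loads(sample)

# Total engagement (likes + comments + shares) across all posts.
engagement = sum(p["likes"] + p["num_comments"] + p["num_shares"] for p in posts)
print(engagement)  # 1307
```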

Advanced Examples

Batch Processing Multiple Pages

from mashrur_facebook_scraper import scrape_facebook_posts

pages = [
    "https://www.facebook.com/cnn",
    "https://www.facebook.com/bbc",
    "https://www.facebook.com/reuters"
]

for page in pages:
    page_name = page.rstrip('/').split('/')[-1]
    filename = f"{page_name}_posts.json"

    posts = scrape_facebook_posts(
        email="your_email@example.com",
        password="your_password",
        page_url=page,
        num_posts=20,
        output_filename=filename
    )

    print(f"Scraped {len(posts)} posts from {page_name}")
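
Deriving a filename from the URL's last path segment can be made a bit more robust with urllib.parse. A sketch (page_slug is a hypothetical helper, not part of the package):

```python
from urllib.parse import urlparse

def page_slug(url: str) -> str:
    """Return the last non-empty path segment of a Facebook page URL."""
    path = urlparse(url).path
    segments = [s for s in path.split("/") if s]
    return segments[-1] if segments else "page"

print(page_slug("https://www.facebook.com/cnn"))   # cnn
print(page_slug("https://www.facebook.com/bbc/"))  # bbc
```

Unlike a plain string split, this handles trailing slashes and URLs with no path at all.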

Error Handling

from mashrur_facebook_scraper import scrape_facebook_posts

try:
    posts = scrape_facebook_posts(
        email="your_email@example.com",
        password="your_password",
        page_url="https://www.facebook.com/invalidpage",
        num_posts=10,
        output_filename="output.json"
    )
except ValueError as e:
    print(f"Input error: {e}")
except Exception as e:
    print(f"Scraping error: {e}")
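
Transient failures (network drops, rate limits) are common when scraping, so retrying can help. A generic retry wrapper is sketched below; it is not part of the package, and the commented usage line is an assumption about how you would combine it with the scraper:

```python
import time

def retry(func, attempts=3, delay=1.0):
    """Call func(), retrying up to `attempts` times on any exception."""
    last_exc = None
    for i in range(attempts):
        try:
            return func()
        except Exception as exc:
            last_exc = exc
            if i < attempts - 1:
                time.sleep(delay)
    raise last_exc

# Hypothetical usage with the scraper:
# posts = retry(lambda: scrape_facebook_posts(email, password, url, 10, "out.json"))
```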

Requirements

  • Python 3.7 or higher
  • Chrome browser installed
  • Valid Facebook credentials
  • Stable internet connection
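
The Python requirement can be verified programmatically before running the scraper; the browser, credentials, and connection still need to be checked manually. A minimal sketch (check_python is a hypothetical helper, not part of the package):

```python
import sys

def check_python(min_version=(3, 7)):
    """Raise if the running interpreter is older than min_version."""
    if sys.version_info < min_version:
        raise RuntimeError(f"Python {min_version[0]}.{min_version[1]}+ required")
    return True
```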

Support

License

Proprietary - All rights reserved to Mashrur Rahman



Download files

Download the file for your platform.

Source Distribution

mashrur_facebook_scraper-2.0.2.tar.gz (15.8 kB)

Uploaded: Source

Built Distribution


mashrur_facebook_scraper-2.0.2-cp312-cp312-win_amd64.whl (135.7 kB)

Uploaded: CPython 3.12, Windows x86-64

File details

Details for the file mashrur_facebook_scraper-2.0.2.tar.gz.

File metadata

  • Download URL: mashrur_facebook_scraper-2.0.2.tar.gz
  • Upload date:
  • Size: 15.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.7

File hashes

Hashes for mashrur_facebook_scraper-2.0.2.tar.gz
Algorithm Hash digest
SHA256 522c06a477a179023f3ce718acddb3e82301498ef705ee21ab394b2940eb864a
MD5 20f5548a8178cd7a03829a1fcdec5930
BLAKE2b-256 abb7626d8821f450c6f25f1179261d9b1c3340e90c5b7a0015aff25103973e2d
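
These published digests can be re-checked locally before installing. A minimal sketch using the standard hashlib module (sha256_of is a hypothetical helper, and the commented comparison assumes the sdist has been downloaded to the current directory):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "522c06a477a179023f3ce718acddb3e82301498ef705ee21ab394b2940eb864a"
# print(sha256_of("mashrur_facebook_scraper-2.0.2.tar.gz") == expected)
```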


File details

Details for the file mashrur_facebook_scraper-2.0.2-cp312-cp312-win_amd64.whl.


File hashes

Hashes for mashrur_facebook_scraper-2.0.2-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 11db74ed6d42de402164c2d62ef6e0d785f2f122396b88ff0fd4eac16862555d
MD5 a94bbd16b2d4a17c02087b7b53019ea8
BLAKE2b-256 4b48bbd523fab9fb611b2c4163b8d492cef344cea657fe0b0a3ef08d434b68b6

