Professional-grade Facebook data extraction tool - Cython compiled for code protection

Project description

Mashrur Facebook Scraper - Ultra Simple

Made by Mashrur Rahman

Ultra Simple - Just 5 Parameters

from mashrur_facebook_scraper import scrape_facebook_posts

posts = scrape_facebook_posts("email", "password", "page_url", num_posts, "output_file.json")

Installation

pip install mashrur-facebook-scraper

Usage

Create your scraper file:

from mashrur_facebook_scraper import scrape_facebook_posts

posts = scrape_facebook_posts(
    "your_email@example.com",
    "your_password",
    "https://www.facebook.com/indianexpress",
    5,
    "my_data.json"
)

print(f"Scraped {len(posts)} posts!")

Function Parameters

Parameter        Type  Description
email            str   Your Facebook email
password         str   Your Facebook password
page_url         str   Facebook page URL
num_posts        int   Number of posts to scrape
output_filename  str   Output JSON filename

Output Format

The scraper generates clean JSON data with the following structure:

[
  {
    "post_id": "123456789",
    "url": "https://facebook.com/posts/123456789",
    "content": "Post content text here...",
    "user_username_raw": "Page Name",
    "date_posted": "2025-01-15T10:30:00Z",
    "likes": 1250,
    "num_comments": 45,
    "num_shares": 12,
    "media_urls": ["https://facebook.com/image1.jpg"],
    "hashtags": ["#example", "#hashtag"],
    "post_type": "Post",
    "is_sponsored": false
  }
]
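Each entry in the output file is a plain dict with the keys shown above, so post-processing is straightforward. A minimal sketch (the sample data below is made up to mirror the schema; in a real run you would load it with `json.load(open("my_data.json"))`):

```python
# Sample posts mirroring the output schema (made-up values for illustration).
sample_posts = [
    {"post_id": "123456789", "likes": 1250, "num_comments": 45, "is_sponsored": False},
    {"post_id": "987654321", "likes": 300, "num_comments": 10, "is_sponsored": True},
]

# Rank posts by total engagement (likes + comments), highest first.
ranked = sorted(sample_posts, key=lambda p: p["likes"] + p["num_comments"], reverse=True)

# Keep only organic (non-sponsored) posts.
organic = [p for p in ranked if not p["is_sponsored"]]

print([p["post_id"] for p in ranked])  # → ['123456789', '987654321']
```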

Advanced Examples

Batch Processing Multiple Pages

from mashrur_facebook_scraper import scrape_facebook_posts

pages = [
    "https://www.facebook.com/cnn",
    "https://www.facebook.com/bbc",
    "https://www.facebook.com/reuters"
]

for page in pages:
    page_name = page.rstrip('/').split('/')[-1]  # strip a trailing slash so the name isn't empty
    filename = f"{page_name}_posts.json"

    posts = scrape_facebook_posts(
        email="your_email@example.com",
        password="your_password",
        page_url=page,
        num_posts=20,
        output_filename=filename
    )

    print(f"Scraped {len(posts)} posts from {page_name}")
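The per-page files written by the loop above can then be merged into a single dataset. A sketch, assuming the `<page_name>_posts.json` naming from the loop (the combined filename is chosen so it does not itself match the glob pattern):

```python
import glob
import json

# Collect every per-page output file and concatenate the post lists.
all_posts = []
for path in sorted(glob.glob("*_posts.json")):
    with open(path, encoding="utf-8") as f:
        all_posts.extend(json.load(f))

# Write one combined file (name deliberately avoids the *_posts.json pattern).
with open("combined.json", "w", encoding="utf-8") as f:
    json.dump(all_posts, f, indent=2)

print(f"Merged {len(all_posts)} posts")
```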

Error Handling

from mashrur_facebook_scraper import scrape_facebook_posts

try:
    posts = scrape_facebook_posts(
        email="your_email@example.com",
        password="your_password",
        page_url="https://www.facebook.com/invalidpage",
        num_posts=10,
        output_filename="output.json"
    )
except ValueError as e:
    print(f"Input error: {e}")
except Exception as e:
    print(f"Scraping error: {e}")
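Transient failures (timeouts, flaky connections) often succeed on a second attempt, while bad input never will. A sketch of a retry wrapper building on the distinction above — `scrape_with_retry` is a hypothetical helper, not part of the package:

```python
import time

def scrape_with_retry(scrape_fn, attempts=3, delay=5, **kwargs):
    """Call scrape_fn(**kwargs), retrying on transient errors."""
    for attempt in range(1, attempts + 1):
        try:
            return scrape_fn(**kwargs)
        except ValueError:
            raise  # bad input: retrying won't help
        except Exception as exc:
            if attempt == attempts:
                raise  # out of attempts, propagate the last error
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)
```

Use it as `posts = scrape_with_retry(scrape_facebook_posts, email="...", password="...", page_url="...", num_posts=10, output_filename="out.json")`.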

Requirements

  • Python 3.7 or higher
  • Chrome browser installed
  • Valid Facebook credentials
  • Stable internet connection

License

Proprietary - All rights reserved to Mashrur Rahman

Download files


Source Distribution

mashrur_facebook_scraper-2.0.4.tar.gz (16.0 kB, source)

Built Distribution


mashrur_facebook_scraper-2.0.4-cp312-cp312-win_amd64.whl (136.0 kB, CPython 3.12, Windows x86-64)

File details

Details for the file mashrur_facebook_scraper-2.0.4.tar.gz.

File metadata

  • Download URL: mashrur_facebook_scraper-2.0.4.tar.gz
  • Size: 16.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.7

File hashes

Hashes for mashrur_facebook_scraper-2.0.4.tar.gz
Algorithm    Hash digest
SHA256       6cf36892ffdaced34a1b87f63c44069c5c5e82c12816bda710b28f1106d46eae
MD5          ca08134cb25da3529ed72c652c62c93a
BLAKE2b-256  55b6952e1b0dcb8212ad340beb4241b7f1b76eafe18aa332aee037a2af067ffd

File details

Details for the file mashrur_facebook_scraper-2.0.4-cp312-cp312-win_amd64.whl.

File hashes

Hashes for mashrur_facebook_scraper-2.0.4-cp312-cp312-win_amd64.whl
Algorithm    Hash digest
SHA256       df36c8f52da3005a36236de61df604fa82960477728fb64c516113da72555cbb
MD5          b681b09cc12dd9babc1ba335b8bd73ac
BLAKE2b-256  28221ba9e5a023b546505db33d09438145d5535c486452243b7135624222a231
