
A Python library for crawling LinkedIn posts, GitHub repositories, and websites.

Project description

omnicrawls

omnicrawls is a Python library that simplifies data extraction from several online platforms: LinkedIn, GitHub repositories, and general websites. With it, you can crawl posts from LinkedIn profiles, file data from GitHub repositories, and content from any website.

Features

  • LinkedIn Crawler: Extracts posts from LinkedIn profiles.
  • GitHub Crawler: Retrieves file data from GitHub repositories.
  • Website Scraper: Scrapes and retrieves content from specified websites.

Output Formats

  • LinkedIn: Returns a list of post strings in the format ["post1", "post2", "post3", ...].
  • GitHub: Returns a dictionary mapping filenames to their contents in the format {"filename": "data", "filename2": "data", ...}.
  • Website: Returns the scraped page content as a single string.
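For reference, the three return shapes above can be handled like this. This is a minimal sketch using sample values in the documented formats, not live crawler output:

```python
# Sample values matching the documented output formats (not real crawler output).
linkedin_posts = ["post1", "post2", "post3"]             # LinkedIn: list of post strings
github_files = {"README.md": "data", "main.py": "data"}  # GitHub: filename -> contents
website_text = "scraped page content..."                 # Website: one plain string

# Typical handling for each shape:
for post in linkedin_posts:
    print(post)

for filename, contents in github_files.items():
    print(f"{filename}: {len(contents)} characters")

print(website_text.split()[:3])  # e.g. tokenize the scraped string
```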

Installation

You can install the omnicrawls library via pip:

pip install omnicrawls

Usage

Here’s a quick example demonstrating how to use the library to extract data from GitHub, LinkedIn, and general websites.

from omnicrawls import GithubCrawler, LinkedInCrawler, WebsiteScraper

# Extract data from GitHub repository
def githubtest():
    github = GithubCrawler()  # Initialize the GitHub Crawler
    link = input("Enter your GitHub repo link: ")
    output = github.extract(link)  # Extract data using the main method
    print(output)

# Extract data from a website
def websitetest():
    website = WebsiteScraper()  # Initialize the Website Scraper
    link = input("Enter website link: ")
    output = website.extract(link)  # Extract data from the website
    print(output)

# Extract data from LinkedIn profile
def linkedintest():
    email = input("Enter your LinkedIn username: ")
    password = input("Enter your LinkedIn password: ")
    link = input("Enter your LinkedIn profile link: ")
    linkedin = LinkedInCrawler(email, password)  # Initialize the LinkedIn Crawler
    output = linkedin.extract(link)  # Extract data from the LinkedIn profile
    print(output)

if __name__ == "__main__":
    print("1. Test GitHub")
    githubtest()
    print("2. Test LinkedIn")
    linkedintest()
    print("3. Test Website")
    websitetest()
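Since GitHub extraction returns a filename-to-contents dictionary, a common follow-up is writing those files to disk. Here is a hedged sketch; `save_repo_files` is a hypothetical helper written for this README, not part of omnicrawls:

```python
import os

def save_repo_files(files: dict, dest: str) -> list:
    """Write a {filename: contents} dict (the documented GithubCrawler
    output shape) into dest, returning the paths written."""
    os.makedirs(dest, exist_ok=True)
    written = []
    for filename, contents in files.items():
        # Flatten any path separators so every entry stays inside dest.
        path = os.path.join(dest, filename.replace("/", "_"))
        with open(path, "w", encoding="utf-8") as f:
            f.write(contents)
        written.append(path)
    return written
```

Used together with the crawler, this would look like `save_repo_files(github.extract(link), "repo_dump")`.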

Class and Method Descriptions

# GithubCrawler:
  GithubCrawler(): initializes a new GitHub crawler instance.
  extract(link): takes a GitHub repository link and returns a dictionary of file data from it.

# LinkedInCrawler:
  LinkedInCrawler(email, password): initializes a new LinkedIn crawler with login credentials.
  extract(link): takes a LinkedIn profile link and returns a list of posts from that profile.

# WebsiteScraper:
  WebsiteScraper(): initializes a new website scraper instance.
  extract(link): takes a website link and returns the scraped content of that page as a string.
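Prompting for a LinkedIn password with input(), as in the usage example, echoes it to the terminal; reading credentials from the environment is a common alternative. A sketch, assuming the LinkedInCrawler(email, password) constructor shown above; the variable names LINKEDIN_EMAIL and LINKEDIN_PASSWORD are illustrative, not something the library defines:

```python
import os

def credentials_from_env() -> tuple:
    """Read LinkedIn credentials from environment variables.
    LINKEDIN_EMAIL / LINKEDIN_PASSWORD are illustrative names."""
    email = os.environ.get("LINKEDIN_EMAIL")
    password = os.environ.get("LINKEDIN_PASSWORD")
    if not email or not password:
        raise RuntimeError("Set LINKEDIN_EMAIL and LINKEDIN_PASSWORD first")
    return email, password

# Usage (requires omnicrawls installed and valid credentials):
# from omnicrawls import LinkedInCrawler
# email, password = credentials_from_env()
# posts = LinkedInCrawler(email, password).extract(profile_link)
```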

Example Outputs

  • LinkedIn Output:

    ["post1", "post2", "post3"]
    
  • GitHub Output:

    {"filename": "data", "filename2": "data"}
    
  • Website Output:

    data......
    

Contributing

Contributions are welcome! If you have suggestions for improvements or new features, feel free to create an issue or submit a pull request.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Contact

For any inquiries, please reach out to Vinayak Pratap Rana.


Download files

Download the file for your platform.

Source Distribution

omnicrawls-0.1.0.tar.gz (5.1 kB)

Uploaded Source

Built Distribution


omnicrawls-0.1.0-py3-none-any.whl (6.1 kB)

Uploaded Python 3

File details

Details for the file omnicrawls-0.1.0.tar.gz.

File metadata

  • Download URL: omnicrawls-0.1.0.tar.gz
  • Upload date:
  • Size: 5.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for omnicrawls-0.1.0.tar.gz

  • SHA256: ba00959f5823e4875b79f78c0b61aa8a8f8f6db80b80aa9bbbea66af8e180178
  • MD5: 96c866619ae075d9d28c6131109aacd2
  • BLAKE2b-256: f3440744f9f9908344f3035c13765a4710e87370686f12276c9fc9c5f67d28c5

See more details on using hashes here.
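To check a downloaded file against the SHA256 digest listed above, hashlib from the Python standard library is enough. A minimal sketch:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published on PyPI:
# expected = "ba00959f5823e4875b79f78c0b61aa8a8f8f6db80b80aa9bbbea66af8e180178"
# assert sha256_of("omnicrawls-0.1.0.tar.gz") == expected
```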

File details

Details for the file omnicrawls-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: omnicrawls-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 6.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for omnicrawls-0.1.0-py3-none-any.whl

  • SHA256: b0f9416324e2916df921591af46b03b008a9ae6f7530000334f05bfcc2e3cc5c
  • MD5: 7fbd85d368a3dafbc3cb2aedcb0861fb
  • BLAKE2b-256: 91a8c51d7b9b413e25654b8b153cadc169cac4f0989269cabd30b2137e6ad97f

