
A Python library for crawling LinkedIn data, crawling GitHub repositories, and scraping websites.

Project description

omnicrawls

omnicrawls is a Python library designed to simplify data extraction from various online platforms, including LinkedIn, GitHub repositories, and general websites. With it, you can collect posts from LinkedIn, file data from GitHub repositories, and content from any website.

NOTE: For the first login, set your LinkedIn email and password in a .env file in the root directory of your project. After that, a linkedin_cookies.pkl file is created in the project root and reused for subsequent logins. The .env file should look like this:

EMAIL=your_email_here
PASSWORD=your_password_here

Features

  • LinkedIn Crawler: Extracts data from LinkedIn profiles.
  • GitHub Crawler: Retrieves file data from GitHub repositories.
  • Website Scraper: Scrapes and retrieves content from specified websites.

Output Formats

  • LinkedIn: Returns a dictionary of the requested profile data, e.g. {'Posts': ["post1", "post2", ...]}.
  • GitHub: Returns a dictionary mapping filenames to their contents, in the format {"filename": "data", "filename2": "data", ...} (see the sketch after this list).
  • Website: Returns the scraped page content as a plain string.
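
For instance, the GitHub result can be consumed like any ordinary Python dictionary. A minimal sketch, assuming the return shape described above (the repository URL is a placeholder):

from omnicrawls import GithubCrawler

github = GithubCrawler()
# Placeholder URL -- substitute a real repository link.
files = github.extract("https://github.com/user/repo")

# Iterate over the {"filename": "data", ...} mapping described above.
for filename, data in files.items():
    print(f"{filename}: {len(data)} characters")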

Installation

You can install the omnicrawls library via pip:

pip install omnicrawls

Usage

Here’s a quick example demonstrating how to use the library to extract data from GitHub, LinkedIn, and general websites.

from omnicrawls import GithubCrawler
from omnicrawls import LinkedInCrawler
from omnicrawls import WebsiteScraper

# Extract data from GitHub repository
def githubtest():
    github = GithubCrawler()  # Initialize the GitHub Crawler
    link = input("Enter your GitHub repo link: ")
    output = github.extract(link)  # Extract data using the main method
    print(output)

# Extract data from a website
def websitetest():
    website = WebsiteScraper()  # Initialize the Website Scraper
    link = input("Enter website link: ")
    output = website.extract(link)  # Extract data from the website
    print(output)


# Extract data from LinkedIn profile
def linkedintest():
    from dotenv import load_dotenv
    load_dotenv()
    link = input("define your linkedin profile url: ") # or just set url without input
    data_types = ["Posts"]  #["Posts","Experience","Education","Basic Profile","all"] you can enter one or more of these types and if want all then use "all"
    linkedin = LinkedInCrawler()
    output = linkedin.extract(link, data_types)
    print(output)
    
    # First login requires EMAIL and PASSWORD in a .env file in the project
    # root; afterwards linkedin_cookies.pkl is created there and reused for
    # subsequent logins (see the NOTE above).


if __name__ == "__main__":
    print("1. Test GitHub")
    githubtest()
    print("2. Test LinkedIn")
    linkedintest()
    print("3. Test Website")
    websitetest()

Class and Method Descriptions

# GithubCrawler:
  GithubCrawler(): Initializes a new crawler instance.
  extract(link): Takes a GitHub repository link and retrieves the data of its files.

# LinkedInCrawler:
  LinkedInCrawler(email, password): Initializes a new crawler instance; credentials can also be supplied via the .env file (see the sketch below).
  extract(link, data_types): Takes a LinkedIn profile link and extracts the requested data types (posts, basic profile, experience, education) from that profile.
  NOTE: For the first login, set your LinkedIn email and password in a .env file in the project root; afterwards, linkedin_cookies.pkl is created there and reused for subsequent logins.
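
The usage example above relies entirely on the .env file, but the constructor also accepts credentials directly. A minimal sketch of passing them explicitly, here read from environment variables (the profile URL is a placeholder):

import os
from dotenv import load_dotenv
from omnicrawls import LinkedInCrawler

load_dotenv()  # loads EMAIL and PASSWORD from the project-root .env file

# Pass credentials explicitly via the LinkedInCrawler(email, password)
# constructor described above.
linkedin = LinkedInCrawler(os.getenv("EMAIL"), os.getenv("PASSWORD"))
output = linkedin.extract("https://www.linkedin.com/in/example", ["Posts"])
print(output)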

# WebsiteScraper:
  WebsiteScraper(): Initializes a new scraper instance.
  extract(link): Takes a website link and scrapes the content of that page.

Example Outputs

  • LinkedIn Output:

    {'Posts': ["post1", "post2", "post3", ...]}
    
  • GitHub Output:

    {"filename": "data", "filename2": "data"}
    
  • Website Output:

    data......
    

Contributing

Contributions are welcome! If you have suggestions for improvements or new features, feel free to create an issue or submit a pull request.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Contact

For any inquiries, please reach out to Vinayak Pratap Rana.


Download files

Download the file for your platform.

Source Distribution

omnicrawls-2.1.tar.gz (6.0 kB)

Uploaded Source

Built Distribution


omnicrawls-2.1-py3-none-any.whl (7.0 kB)

Uploaded Python 3

File details

Details for the file omnicrawls-2.1.tar.gz.

File metadata

  • Download URL: omnicrawls-2.1.tar.gz
  • Upload date:
  • Size: 6.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for omnicrawls-2.1.tar.gz

  • SHA256: af44b762f756fa01ef2b1a42c896cae63f3861fbb9d671bb63e5a1858cdbcaf1
  • MD5: 7ef18d2a2fcb642888e89e90a89c04d5
  • BLAKE2b-256: 80c717ce98260ef10c9d1289c1c7c48b100c55a7a5d6845bb276c974055a4e8a

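You can check a downloaded file against the digests above with Python's standard hashlib module. A minimal sketch, assuming omnicrawls-2.1.tar.gz sits in the current directory:

import hashlib

EXPECTED_SHA256 = "af44b762f756fa01ef2b1a42c896cae63f3861fbb9d671bb63e5a1858cdbcaf1"

# Hash the downloaded archive and compare against the published digest.
with open("omnicrawls-2.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, f"SHA256 mismatch: {digest}"
print("SHA256 verified")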

File details

Details for the file omnicrawls-2.1-py3-none-any.whl.

File metadata

  • Download URL: omnicrawls-2.1-py3-none-any.whl
  • Upload date:
  • Size: 7.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.5

File hashes

Hashes for omnicrawls-2.1-py3-none-any.whl

  • SHA256: a43fe1f8d1155446c3e5e6344b22b6ba76211078394937624d9a2d3114c3133c
  • MD5: e820805ecdb5ed8cebf4c9943f438a39
  • BLAKE2b-256: e73f9ea51d0062518fd647f3d71cfddfa44da2e2ff6814dc66abd6f80770bedc

