LinkedInWebScraper
A Python library for scraping LinkedIn job postings.
LinkedInWebScraper is a Python library designed to simplify and automate the process of scraping job postings from LinkedIn. It provides a reusable, user-friendly solution that streamlines the entire scraping process from data extraction to cleaning and storage.
Why Did I Create It?
Web scraping can be highly effective for gathering data, but setting up a custom scraper for each use case can be time-consuming and error-prone.
LinkedInWebScraper was developed to tackle these challenges by offering a streamlined solution that saves time, simplifies workflows, and enhances productivity.
Key Objectives of the Library:
- Simplify the Setup: Eliminate the need for boilerplate code, allowing users to focus on gathering and analyzing data.
- Enhance Reusability: Provide a flexible architecture that users can customize to fit their specific needs without rewriting the same code.
- Facilitate Automation: Automate the process of scraping job postings across multiple locations and keywords, enabling large-scale data collection with minimal effort.
Built Upon Reliable Libraries
LinkedInWebScraper leverages several powerful Python libraries to ensure robust functionality and performance:
- BeautifulSoup: Parses and navigates HTML content to extract meaningful job data, such as job titles, companies, locations, and descriptions.
- Requests: Handles HTTP requests, providing a simple, reliable way to make GET requests to LinkedIn's job search pages with customized headers to avoid detection.
- Pandas: Structures and analyzes the scraped data, enabling users to manipulate large volumes of job postings, clean the data, and export it in formats such as CSV or JSON.
- OpenAI API: Processes and classifies job titles and descriptions, leveraging language models to categorize job postings intelligently and improve filtering accuracy.
Together, these libraries form the backbone of LinkedInWebScraper, ensuring performance, reliability, and adaptability for a wide range of data scraping needs.
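To illustrate the BeautifulSoup side of this stack, here is a minimal parsing sketch. The HTML snippet is a simplified, hypothetical job card, not LinkedIn's actual markup, and the CSS class names are assumptions for the example:

```python
from bs4 import BeautifulSoup

# Simplified, hypothetical job-card markup (not LinkedIn's real structure)
html = """
<li class="job-card">
  <h3 class="title">Data Scientist</h3>
  <span class="company">Acme Corp</span>
  <span class="location">San Francisco, CA</span>
</li>
"""

soup = BeautifulSoup(html, "html.parser")

# Pull each field out of the card by its class selector
job = {
    "title": soup.select_one(".title").get_text(strip=True),
    "company": soup.select_one(".company").get_text(strip=True),
    "location": soup.select_one(".location").get_text(strip=True),
}
print(job)
```

The same pattern (select an element, strip its text) is the core of most HTML-based scrapers, whatever the actual selectors turn out to be.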
Overview of LinkedInWebScraper's Architecture
High-Level Structure
LinkedInWebScraper follows a modular architecture based on Object-Oriented Programming (OOP), with each module responsible for a specific part of the scraping process. This design ensures flexibility, maintainability, and scalability as features or customizations are added.
Main Components
- LinkedInJobScraper class: The core class that orchestrates the entire scraping workflow, coordinating the scraping, cleaning, classification, and enrichment of job postings.
- JobScraper class: Handles interaction with LinkedIn's job search pages, including URL generation, pagination, and parsing job postings from HTML.
- JobDataCleaner class: Processes raw job data scraped from LinkedIn, ensuring that the data is structured, consistent, and ready for analysis.
- JobTitleClassifier class: Uses the OpenAI API to classify job titles and categorize postings by field, such as data science or software engineering.
- JobScraperConfig & JobScraperConfigFactory classes: Manage configuration settings for the scraper, such as keywords, location, distance, and job type.
- JobDescriptionProcessor class: Processes job descriptions, ensuring they are clean and standardized before being added to the dataset.
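The way these components might compose can be sketched as a simple pipeline. Everything below is illustrative, the class and method names are stand-ins, not the library's actual API:

```python
# Hypothetical sketch of the scrape -> clean pipeline shape
class Scraper:
    def fetch(self):
        # In the real library this would hit LinkedIn; here it returns stub rows
        return [{"title": " data scientist ", "company": "Acme"}]

class Cleaner:
    def clean(self, rows):
        # Normalize whitespace and casing on every string field
        return [
            {k: v.strip().title() if isinstance(v, str) else v for k, v in row.items()}
            for row in rows
        ]

class Pipeline:
    def __init__(self, scraper, cleaner):
        self.scraper = scraper
        self.cleaner = cleaner

    def run(self):
        # Each stage hands its output to the next
        return self.cleaner.clean(self.scraper.fetch())

result = Pipeline(Scraper(), Cleaner()).run()
print(result)  # [{'title': 'Data Scientist', 'company': 'Acme'}]
```

Because each stage only depends on the previous stage's output, any one of them (the cleaner, the classifier) can be swapped or extended without touching the rest, which is the payoff of the modular design described below.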
Design Decisions
Modularity
One of the central design principles behind LinkedInWebScraper is modularity. Each class and function has a well-defined responsibility, ensuring that the codebase is easy to understand, update, and extend.
Error Handling
Web scraping is inherently prone to errors, and LinkedInWebScraper incorporates robust error-handling mechanisms.
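One common mechanism for this kind of robustness is a retry wrapper around network calls. The sketch below is a generic pattern, not the library's actual implementation; the `flaky` function stands in for an HTTP request that fails transiently:

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    """Call fn() up to `attempts` times, re-raising the last error if all fail."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)  # back off before retrying
    raise last_exc

# Stand-in for a request that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky)
print(result, calls["n"])  # "ok" on the third attempt
```

In a real scraper the delay would typically grow between attempts (exponential backoff) to avoid hammering the server.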
Scalability
LinkedInWebScraper was designed to handle large-scale data scraping efficiently.
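Large-scale collection across multiple keywords and locations amounts to running one scrape per combination. A minimal sketch, assuming each (position, location) pair drives its own scraper run (the field names are illustrative):

```python
from itertools import product

keywords = ["Data Scientist", "ML Engineer"]
locations = ["San Francisco", "New York"]

# One configuration per (keyword, location) combination
configs = [
    {"position": kw, "location": loc}
    for kw, loc in product(keywords, locations)
]
print(len(configs))  # 2 keywords x 2 locations = 4 runs
```

Scaling up then means growing the two lists; the combinatorial loop stays the same.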
Installation
To install the LinkedInWebScraper library, run:

```bash
pip install LinkedInWebScraper
```
Usage Example
Here’s an example of how to use LinkedInWebScraper to scrape job postings:
```python
from LinkedInWebScraper import LinkedInJobScraper, JobScraperConfig

# Define scraper configuration
config = JobScraperConfig(
    position="Data Scientist",
    location="San Francisco",
    remote="REMOTE",
)

# Initialize the scraper
scraper = LinkedInJobScraper(config=config)

# Scrape job data
job_data = scraper.run()

# View the results
print(job_data.head())
```
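Since `head()` is called on the result above, `run()` presumably returns a pandas DataFrame, in which case the scraped data can be exported directly. The frame below is stand-in data for illustration:

```python
import pandas as pd

# Stand-in for the DataFrame a scraper run would produce
job_data = pd.DataFrame([
    {"title": "Data Scientist", "company": "Acme", "location": "Remote"},
])

# Persist results for later analysis
job_data.to_csv("jobs.csv", index=False)

reloaded = pd.read_csv("jobs.csv")
print(reloaded.shape)  # (1, 3): one posting, three columns
```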
Contributing
Contributions are welcome! Please feel free to submit issues or pull requests.
License
This project is licensed under the MIT License.
Contact
For any questions or inquiries, please reach out!
- LinkedIn: ricardogarciaramirez
- Email: rgr.5882@gmail.com
- Medium: @rgr5882
- X (Twitter): ricardogr_dsc
- Kaggle: ricardogr07