
Scrape publicly available jobs on LinkedIn using a headless browser

Project description

linkedin-jobs-scraper

Scrape publicly available jobs on LinkedIn using a headless browser. For each job, the following fields are extracted: job_id, link, apply_link, title, company, place, description, description_html, date, seniority_level, job_function, employment_type, industries.


Installation

Install package:

pip install linkedin-jobs-scraper

Usage

import logging
from linkedin_jobs_scraper import LinkedinScraper
from linkedin_jobs_scraper.events import Events, EventData
from linkedin_jobs_scraper.query import Query, QueryOptions, QueryFilters
from linkedin_jobs_scraper.filters import RelevanceFilters, TimeFilters, TypeFilters, ExperienceLevelFilters

# Change root logger level (default is WARN)
logging.basicConfig(level=logging.INFO)


def on_data(data: EventData):
    print('[ON_DATA]', data.title, data.company, data.date, data.link, len(data.description))


def on_error(error):
    print('[ON_ERROR]', error)


def on_end():
    print('[ON_END]')


scraper = LinkedinScraper(
    chrome_executable_path=None,  # Custom Chrome executable path (e.g. /foo/bar/bin/chromedriver)
    chrome_options=None,  # Custom Chrome options here
    headless=True,  # Overrides headless mode only if chrome_options is None
    max_workers=1,  # How many threads will be spawned to run queries concurrently (one Chrome driver for each thread)
    slow_mo=1,  # Slow down the scraper to avoid 'Too many requests (429)' errors
)

# Add event listeners
scraper.on(Events.DATA, on_data)
scraper.on(Events.ERROR, on_error)
scraper.on(Events.END, on_end)

queries = [
    Query(
        options=QueryOptions(
            optimize=True,  # Blocks requests for resources like images and stylesheets
            limit=27  # Limit the number of jobs to scrape
        )
    ),
    Query(
        query='Engineer',
        options=QueryOptions(
            locations=['United States'],
            optimize=False,
            limit=5,
            filters=QueryFilters(
                company_jobs_url='https://www.linkedin.com/jobs/search/?f_C=1441%2C17876832%2C791962%2C2374003%2C18950635%2C16140%2C10440912&geoId=92000000',  # Filter by companies
                relevance=RelevanceFilters.RECENT,
                time=TimeFilters.MONTH,
                type=[TypeFilters.FULL_TIME, TypeFilters.INTERNSHIP],
                experience=None,
            )
        )
    ),
]

scraper.run(queries)
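
Since results arrive through the DATA event, a common pattern is to accumulate them and persist everything once the END event fires. A minimal sketch, reusing the imports from the example above and registering these listeners in place of the printing ones (jobs.csv and the chosen columns are illustrative, not part of the library):

import csv

results = []  # accumulate EventData objects as they arrive

def on_data(data: EventData):
    results.append(data)

def on_end():
    # Persist a few of the extracted fields once scraping is done
    with open('jobs.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['title', 'company', 'place', 'date', 'link'])
        for job in results:
            writer.writerow([job.title, job.company, job.place, job.date, job.link])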

Anonymous vs authenticated session

By default the scraper runs in anonymous mode (no authentication required). In some environments (e.g. AWS or Heroku) this may not be possible, and you may face the following error message:

Scraper failed to run in anonymous mode, authentication may be necessary for this environment.

In that case the only option is to run using an authenticated session. These are the required steps:

  1. Log in to LinkedIn using an account of your choice.
  2. Open the Chrome developer tools.
  3. Go to the Application tab, then in the left panel select Storage -> Cookies -> https://www.linkedin.com. In the main view, locate the row named li_at and copy the content of the Value column.
  4. Set the environment variable LI_AT_COOKIE to the value obtained in step 3, then run your application as normal. Example:

LI_AT_COOKIE=<your li_at cookie value here> python your_app.py
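
If you prefer not to pass the cookie on the command line, you can also set it from code before the scraper starts. A minimal sketch, assuming the LI_AT_COOKIE variable is read from the environment at scraper start-up:

import os

# Assumption: the scraper picks up LI_AT_COOKIE from the environment when it
# starts, so setting it here has the same effect as exporting it in the shell.
os.environ['LI_AT_COOKIE'] = '<your li_at cookie value here>'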

Rate limiting

You may experience the following rate limiting warning during execution:

[429] Too many requests. You should probably increase scraper "slow_mo" value or reduce concurrency.

This means you are exceeding the number of requests per second allowed by the server (this is especially true for authenticated sessions, where rate limits are much stricter). You can overcome this by:

  • Trying a higher value for the slow_mo parameter (this slows down scraper execution; see the sketch below).
  • Reducing the value of max_workers to limit concurrency. Using no more than one worker in authenticated mode is recommended.
  • Trying proxy mode, if you are running in anonymous mode.
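
A conservative configuration along these lines might look as follows (slow_mo=2 is just an illustrative starting point, not a recommended constant):

scraper = LinkedinScraper(
    chrome_executable_path=None,
    chrome_options=None,
    headless=True,
    max_workers=1,  # single Chrome driver, no concurrent queries
    slow_mo=2,  # slow down actions; increase further if 429 warnings persist
)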

Proxy mode [experimental]

It is also possible to pass a list of proxies to the scraper:

scraper = LinkedinScraper(
    chrome_executable_path=None,
    chrome_options=None,
    headless=True,
    max_workers=1,
    slow_mo=1,
    proxies=[
        'http://localhost:6666',
        'http://localhost:7777',        
    ]
)

How does it work? Every request from the browser is intercepted and executed by a Python library instead, using one of the provided proxies in round-robin fashion. The response is then returned to the browser. In case of a proxy error, the request is executed from the browser directly (a warning is logged to stdout).
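
Round-robin simply means the proxies are used in turn, wrapping back to the first after the last. An illustration of the selection order only (not the library's internals):

from itertools import cycle

# Each request takes the next proxy in the list, wrapping around at the end.
proxies = ['http://localhost:6666', 'http://localhost:7777']
rotation = cycle(proxies)

for _ in range(4):
    print(next(rotation))  # 6666, 7777, 6666, 7777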

WARNING: proxy mode is currently not supported when using an authenticated session.

Filters

It is possible to customize queries with the following filters:

  • RELEVANCE:
    • RELEVANT
    • RECENT
  • TIME:
    • DAY
    • WEEK
    • MONTH
    • ANY
  • TYPE:
    • FULL_TIME
    • PART_TIME
    • TEMPORARY
    • CONTRACT
    • INTERNSHIP
  • EXPERIENCE LEVEL:
    • INTERNSHIP
    • ENTRY_LEVEL
    • ASSOCIATE
    • MID_SENIOR
    • DIRECTOR

See the following example for more details:

from linkedin_jobs_scraper.query import Query, QueryOptions, QueryFilters
from linkedin_jobs_scraper.filters import RelevanceFilters, TimeFilters, TypeFilters, ExperienceLevelFilters


query = Query(
    query='Engineer',
    options=QueryOptions(
        locations=['United States'],
        optimize=False,
        limit=5,
        filters=QueryFilters(            
            relevance=RelevanceFilters.RECENT,
            time=TimeFilters.MONTH,
            type=[TypeFilters.FULL_TIME, TypeFilters.INTERNSHIP],
            experience=[ExperienceLevelFilters.INTERNSHIP, ExperienceLevelFilters.MID_SENIOR],
        )
    )
)

Company Filter

It is also possible to filter by company using the public company jobs URL on LinkedIn. To find this URL:

  1. Log in to LinkedIn using an account of your choice.
  2. Go to the LinkedIn page of the company you are interested in (e.g. https://www.linkedin.com/company/google).
  3. Click on Jobs in the left menu.
  4. Scroll down and locate the See all jobs or See jobs button.
  5. Right-click it and copy the link address (or follow the link and copy it from the address bar).
  6. Paste the link address in code as follows:

query = Query(    
    options=QueryOptions(        
        filters=QueryFilters(
            # Paste link below
            company_jobs_url='https://www.linkedin.com/jobs/search/?f_C=1441%2C17876832%2C791962%2C2374003%2C18950635%2C16140%2C10440912&geoId=92000000',        
        )
    )
)

Logging

The package logger can be retrieved using the namespace li:scraper. Its default level is INFO. The logger level can be changed using the environment variable LOG_LEVEL or in code; both are shown below:
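
From the shell (same pattern as the LI_AT_COOKIE example above; your_app.py stands in for your entry point):

LOG_LEVEL=DEBUG python your_app.py

Or in code: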

import logging

# Change root logger level (default is WARN)
logging.basicConfig(level=logging.DEBUG)

# Change package logger level
logging.getLogger('li:scraper').setLevel(logging.DEBUG)

# Optional: change level to other loggers
logging.getLogger('urllib3').setLevel(logging.WARN)
logging.getLogger('selenium').setLevel(logging.WARN)

License

MIT License

If you like the project and want to contribute, you can donate something here!


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

linkedin-jobs-scraper-1.7.0.tar.gz (23.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

linkedin_jobs_scraper-1.7.0-py3-none-any.whl (30.4 kB)

Uploaded Python 3

File details

Details for the file linkedin-jobs-scraper-1.7.0.tar.gz.

File metadata

  • Download URL: linkedin-jobs-scraper-1.7.0.tar.gz
  • Upload date:
  • Size: 23.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/53.0.0 requests-toolbelt/0.9.1 tqdm/4.57.0 CPython/3.9.2

File hashes

Hashes for linkedin-jobs-scraper-1.7.0.tar.gz:

  • SHA256: bbe0d8fed140283873248ed4d9a9cae844b04e35cbfaf6c0444f9374c1f98901
  • MD5: 64ca7a5f4ce4b351fac5411b218fb1bd
  • BLAKE2b-256: 9b50a74e7e244343374a993c72791485f05754142019695b1c92a3d96556b31c
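
If you download the archive manually, you can verify it against the SHA256 digest above. A minimal sketch (the local file name assumes the archive was saved under its original name):

import hashlib

# Compare the archive's actual SHA256 digest with the published value above.
expected = 'bbe0d8fed140283873248ed4d9a9cae844b04e35cbfaf6c0444f9374c1f98901'

with open('linkedin-jobs-scraper-1.7.0.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print('OK' if digest == expected else 'MISMATCH')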


File details

Details for the file linkedin_jobs_scraper-1.7.0-py3-none-any.whl.

File metadata

  • Download URL: linkedin_jobs_scraper-1.7.0-py3-none-any.whl
  • Upload date:
  • Size: 30.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/53.0.0 requests-toolbelt/0.9.1 tqdm/4.57.0 CPython/3.9.2

File hashes

Hashes for linkedin_jobs_scraper-1.7.0-py3-none-any.whl:

  • SHA256: f9594240135fc2e24c08a0f0f4a12a4348e7d948cce80f80e30965a0bbaad4a5
  • MD5: 6315e092e36d7e8323ce9261f96b88cd
  • BLAKE2b-256: 6e7578d237aab24a6c5c26095dc81edb6cdd9944d6b21ab86245a77d903433a8

