linkedin-jobs-scraper
Scrape publicly available jobs on LinkedIn using a headless browser. For each job, the following fields are extracted:
- job_id
- link
- apply_link
- title
- company
- company_link
- company_img_link
- place
- description
- description_html
- date
- insights
An equivalent npm package is also available.
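As a rough sketch, each record can be thought of as having the following shape. This is illustrative only: the JobRecord name below is hypothetical, the actual class delivered to callbacks is EventData from linkedin_jobs_scraper.events (see Usage), and the type of insights is an assumption.
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the record shape, using the field names listed above.
@dataclass
class JobRecord:
    job_id: str
    link: str
    apply_link: str
    title: str
    company: str
    company_link: str
    company_img_link: str
    place: str
    description: str
    description_html: str
    date: str
    insights: List[str]  # assumption: a list of short insight strings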
Table of Contents
- Requirements
- Installation
- Usage
- Anonymous vs authenticated session
- Rate limiting
- Proxy mode
- Filters
- Company filter
- Logging
- License
Requirements
- Chrome or Chromium
- Chromedriver (latest version tested: 108.0.5359.71)
- Python >= 3.6
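To check what is installed locally (the browser binary name varies by platform, e.g. google-chrome or chromium):
chromedriver --version
google-chrome --version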
Installation
Install package:
pip install linkedin-jobs-scraper
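To pin the exact release documented on this page (2.0.3, per the hashes below):
pip install linkedin-jobs-scraper==2.0.3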
Usage
import logging
from linkedin_jobs_scraper import LinkedinScraper
from linkedin_jobs_scraper.events import Events, EventData, EventMetrics
from linkedin_jobs_scraper.query import Query, QueryOptions, QueryFilters
from linkedin_jobs_scraper.filters import RelevanceFilters, TimeFilters, TypeFilters, ExperienceLevelFilters, \
OnSiteOrRemoteFilters
# Change root logger level (default is WARN)
logging.basicConfig(level=logging.INFO)
# Fired once for each successfully processed job
def on_data(data: EventData):
print('[ON_DATA]', data.title, data.company, data.company_link, data.date, data.link, data.insights,
len(data.description))
# Fired once for each page (25 jobs)
def on_metrics(metrics: EventMetrics):
print('[ON_METRICS]', str(metrics))
def on_error(error):
print('[ON_ERROR]', error)
def on_end():
print('[ON_END]')
scraper = LinkedinScraper(
chrome_executable_path=None, # Custom Chrome executable path (e.g. /foo/bar/bin/chromedriver)
chrome_options=None, # Custom Chrome options here
headless=True, # Overrides headless mode only if chrome_options is None
max_workers=1, # How many threads will be spawned to run queries concurrently (one Chrome driver for each thread)
slow_mo=0.5, # Slow down the scraper to avoid 'Too many requests 429' errors (in seconds)
page_load_timeout=40 # Page load timeout (in seconds)
)
# Add event listeners
scraper.on(Events.DATA, on_data)
scraper.on(Events.METRICS, on_metrics)  # register the metrics handler defined above
scraper.on(Events.ERROR, on_error)
scraper.on(Events.END, on_end)
queries = [
Query(
options=QueryOptions(
limit=27 # Limit the number of jobs to scrape.
)
),
Query(
query='Engineer',
options=QueryOptions(
locations=['United States', 'Europe'],
apply_link=True, # Try to extract apply link (easy applies are skipped). If set to True, scraping is slower because an additional page must be navigated. Defaults to False.
skip_promoted_jobs=True, # Skip promoted jobs. Defaults to False.
limit=5,
filters=QueryFilters(
company_jobs_url='https://www.linkedin.com/jobs/search/?f_C=1441%2C17876832%2C791962%2C2374003%2C18950635%2C16140%2C10440912&geoId=92000000',
# Filter by companies.
relevance=RelevanceFilters.RECENT,
time=TimeFilters.MONTH,
type=[TypeFilters.FULL_TIME, TypeFilters.INTERNSHIP],
on_site_or_remote=[OnSiteOrRemoteFilters.REMOTE],
experience=[ExperienceLevelFilters.MID_SENIOR]
)
)
),
]
scraper.run(queries)
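As a usage sketch, the DATA callback can also persist each job, for example to a CSV file. The handler name and the jobs.csv path below are arbitrary examples; the fields come from the EventData field list above.
import csv

def on_data_to_csv(data: EventData):
    # Append one row per scraped job (jobs.csv is an arbitrary example path)
    with open('jobs.csv', 'a', newline='') as f:
        csv.writer(f).writerow([data.job_id, data.title, data.company, data.place, data.date, data.link])

scraper.on(Events.DATA, on_data_to_csv)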
Anonymous vs authenticated session
⚠ WARNING: due to lack of time, the anonymous session strategy is no longer maintained. If you want to keep supporting this feature and become a project maintainer, feel free to PM me.
By default the scraper runs in anonymous mode (no authentication required). In some environments (e.g. AWS or Heroku) this may not be possible, and you may see the following error message:
Scraper failed to run in anonymous mode, authentication may be necessary for this environment.
In that case the only option available is to run using an authenticated session. These are the steps required:
- Login to LinkedIn using an account of your choice.
- Open Chrome developer tools.
- Go to the Application tab, then from the left panel select Storage -> Cookies -> https://www.linkedin.com. In the main view, locate the row named li_at and copy the content of the Value column.
- Set the environment variable LI_AT_COOKIE with the value obtained in the previous step, then run your application as usual. Example:
LI_AT_COOKIE=<your li_at cookie value here> python your_app.py
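Equivalently, the cookie can be set from Python before the scraper is started (a sketch; replace the placeholder with your own cookie value):
import os

# Set the cookie before instantiating/running LinkedinScraper,
# equivalent to exporting LI_AT_COOKIE in the shell.
os.environ['LI_AT_COOKIE'] = '<your li_at cookie value here>'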
Rate limiting
You may experience the following rate limiting warning during execution:
[429] Too many requests. You should probably increase scraper "slow_mo" value or reduce concurrency.
This means you are exceeding the number of requests per second allowed by the server (this is especially true when using authenticated sessions, where rate limits are much stricter). You can overcome this by:
- Trying a higher value for the slow_mo parameter (this will slow down scraper execution).
- Reducing the value of max_workers to limit concurrency. I recommend using no more than one worker in authenticated mode.
- Trying proxy mode, if you are using anonymous mode.
The right value for the slow_mo parameter largely depends on the rate-limiting settings on LinkedIn servers (and these can vary over time). For the time being, I suggest a value of at least 1.3 in anonymous mode and 0.4 in authenticated mode.
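A minimal configuration sketch with these suggested values (anonymous mode assumed; use slow_mo=0.4 and keep a single worker for authenticated sessions):
scraper = LinkedinScraper(
    chrome_executable_path=None,
    chrome_options=None,
    headless=True,
    max_workers=1,  # low concurrency to stay under the rate limits
    slow_mo=1.3,    # suggested minimum delay (seconds) for anonymous mode
    page_load_timeout=40
)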
Proxy mode [experimental]
It is also possible to pass a list of proxies to the scraper:
scraper = LinkedinScraper(
chrome_executable_path=None,
chrome_options=None,
headless=True,
max_workers=1,
slow_mo=1,
proxies=[
'http://localhost:6666',
'http://localhost:7777',
]
)
How does it work? Every request from the browser is intercepted and executed by a Python library instead, using one of the provided proxies in round-robin fashion. The response is then returned to the browser. In case of a proxy error, the request is executed directly by the browser (and a warning is logged to stdout).
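Conceptually, the rotation works like the following sketch (illustrative only; the library performs this internally for each intercepted request):
from itertools import cycle

# Round-robin over the configured proxies: each request takes the next one.
proxy_pool = cycle(['http://localhost:6666', 'http://localhost:7777'])

def next_proxy() -> str:
    return next(proxy_pool)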
WARNING: proxy mode is currently not supported when using an authenticated session.
Filters
It is possible to customize queries with the following filters:
- RELEVANCE: RELEVANT, RECENT
- TIME: DAY, WEEK, MONTH, ANY
- TYPE: FULL_TIME, PART_TIME, TEMPORARY, CONTRACT
- EXPERIENCE LEVEL: INTERNSHIP, ENTRY_LEVEL, ASSOCIATE, MID_SENIOR, DIRECTOR
- REMOTE: REMOTE (supported only with authenticated session)
See the following example for more details:
from linkedin_jobs_scraper.query import Query, QueryOptions, QueryFilters
from linkedin_jobs_scraper.filters import RelevanceFilters, TimeFilters, TypeFilters, ExperienceLevelFilters, \
OnSiteOrRemoteFilters
query = Query(
query='Engineer',
options=QueryOptions(
locations=['United States'],
optimize=False,
apply_link=True,
skip_promoted_jobs=True,
limit=5,
filters=QueryFilters(
relevance=RelevanceFilters.RECENT,
time=TimeFilters.MONTH,
type=[TypeFilters.FULL_TIME, TypeFilters.INTERNSHIP],
experience=[ExperienceLevelFilters.INTERNSHIP, ExperienceLevelFilters.MID_SENIOR],
on_site_or_remote=[OnSiteOrRemoteFilters.REMOTE], # supported only with authenticated session
)
)
)
Company Filter
It is also possible to filter by company using the public company jobs URL on LinkedIn. To find this URL:
- Login to LinkedIn using an account of your choice.
- Go to the LinkedIn page of the company you are interested in (e.g. https://www.linkedin.com/company/google).
- Click on Jobs from the left menu.
- Scroll down and locate the See all jobs or See jobs button.
- Right click and copy the link address (or navigate to the link and copy it from the address bar).
- Paste the link address in code as follows:
query = Query(
options=QueryOptions(
filters=QueryFilters(
# Paste link below
company_jobs_url='https://www.linkedin.com/jobs/search/?f_C=1441%2C17876832%2C791962%2C2374003%2C18950635%2C16140%2C10440912&geoId=92000000',
)
)
)
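Note that the f_C parameter in the copied URL carries a comma-separated list of company IDs (the comma is URL-encoded as %2C). As a sketch, an equivalent URL could also be assembled programmatically; the IDs and geoId below are copied from the example above, and treating f_C this way is an assumption based on that URL:
from urllib.parse import urlencode

# Assumption: f_C holds comma-separated company IDs, geoId a location ID
# (both values below are taken verbatim from the example URL above).
company_ids = [1441, 17876832, 791962, 2374003, 18950635, 16140, 10440912]
params = {'f_C': ','.join(str(i) for i in company_ids), 'geoId': '92000000'}
company_jobs_url = 'https://www.linkedin.com/jobs/search/?' + urlencode(params)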
Logging
The package logger can be retrieved using the namespace li:scraper. Its default level is INFO.
The logger level can be changed using the environment variable LOG_LEVEL or in code:
import logging
# Change root logger level (default is WARN)
logging.basicConfig(level=logging.DEBUG)
# Change package logger level
logging.getLogger('li:scraper').setLevel(logging.DEBUG)
# Optional: change level to other loggers
logging.getLogger('urllib3').setLevel(logging.WARN)
logging.getLogger('selenium').setLevel(logging.WARN)
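For the environment-variable route, an invocation like the following should work (assuming LOG_LEVEL accepts standard Python logging level names):
LOG_LEVEL=DEBUG python your_app.py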
License
If you like the project and want to contribute, you can donate something here!
Hashes for linkedin-jobs-scraper-2.0.3.tar.gz
Algorithm | Hash digest
---|---
SHA256 | 3fd10c611fbaa88deb996ffccfa159992a6821f86e68215ae5c9c2a5717f7ddc
MD5 | 46c8cae2e8681de1f93366f688b35ad5
BLAKE2b-256 | f3c3676dda56c4f3867970e060f300a100908593e0933253de8e2175d915b574
Hashes for linkedin_jobs_scraper-2.0.3-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | ddae188ae4fae125c359cca6df1fbe71f88fcd0a044ff30352bff53eae35fa9d
MD5 | a1b7fadf4a01f5f26f4232bd34d6b834
BLAKE2b-256 | cc26c5264d1d4f099470291b8352e6080e2ec38ccd1cfee5895d40c91a5326e4