
proxycurl-py - The official Python client for Proxycurl API to scrape and enrich LinkedIn profiles

What is Proxycurl?

Proxycurl is an enrichment API to fetch fresh data on people and businesses. We are a fully-managed API that sits between your application and raw data, so that you can focus on building your application instead of worrying about building a web-scraping team and processing data at scale.

With Proxycurl, you can programmatically:

  • Enrich profiles on people and companies
  • Lookup people and companies
  • Lookup contact information on people and companies
  • Check if an email address is of a disposable nature
  • And more.

Visit Proxycurl's website for more details.

Before you install

You should understand that proxycurl-py was designed from the ground up with concurrency as a first-class citizen. To install proxycurl-py, you have to pick a concurrency model.

We support the following concurrency models:

  • asyncio
  • gevent
  • twisted

The right way to use Proxycurl API is to make API calls concurrently. In fact, making API requests concurrently is the only way to achieve a high rate of throughput. On the default rate limit, you can enrich up to 432,000 profiles per day. See this blog post for context.
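As a back-of-envelope sanity check on that figure, assuming requests are spread evenly across the day, the default limit works out to:

```python
# 432,000 enrichments/day on the default rate limit, spread evenly:
per_day = 432_000
per_minute = per_day // (24 * 60)  # requests per minute
per_second = per_minute / 60       # requests per second
print(per_minute, per_second)
```

That is, roughly 300 requests per minute sustained, which is only reachable by making API calls concurrently rather than one at a time.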

Installation and supported Python versions

proxycurl-py is available on PyPI and can be installed into your project with one of the following commands:

# install proxycurl-py with asyncio
$ pip install 'proxycurl-py[asyncio]'

# install proxycurl-py with gevent
$ pip install 'proxycurl-py[gevent]'

# install proxycurl-py with twisted
$ pip install 'proxycurl-py[twisted]'

proxycurl-py is tested on Python 3.7, 3.8 and 3.9.

Initializing proxycurl-py with an API Key

You can get an API key by registering an account with Proxycurl. The API Key can be retrieved from the dashboard.

To use Proxycurl with the API Key:

  • You can run your script with the PROXYCURL_API_KEY environment variable set.
  • Or, you can inject the API key into the environment at the top of your script. See proxycurl/config.py for an example.
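For instance, you can set the variable from Python before constructing the client. This is a minimal sketch; the key value below is a placeholder, not a real key:

```python
import os

# Placeholder key for illustration only; substitute the key
# retrieved from your Proxycurl dashboard.
os.environ["PROXYCURL_API_KEY"] = "your-api-key-here"

# Any Proxycurl() client constructed after this point picks the
# key up from the environment.
print(os.environ["PROXYCURL_API_KEY"])
```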

Usage with examples

I will be using proxycurl-py with the asyncio concurrency model to illustrate what you can do with Proxycurl and how the code looks with this library.

For examples with other concurrency models such as:

  • gevent, see examples/lib-gevent.py.
  • twisted, see examples/lib-twisted.

Enrich a Person Profile

Given a LinkedIn Member Profile URL, you can get the entire profile back in structured data with Proxycurl's Person Profile API Endpoint.

from proxycurl.asyncio import Proxycurl, do_bulk
import asyncio
import csv

proxycurl = Proxycurl()
person = asyncio.run(proxycurl.linkedin.person.get(
    url='https://www.linkedin.com/in/williamhgates/'
))
print('Person Result:', person)

Enrich a Company Profile

Given a LinkedIn Company Profile URL, enrich the URL with its full profile with Proxycurl's Company Profile API Endpoint.

company = asyncio.run(proxycurl.linkedin.company.get(
    url='https://www.linkedin.com/company/tesla-motors'
))
print('Company Result:', company)

Lookup a person

Given a first name and a company name or domain, lookup a person with Proxycurl's Person Lookup API Endpoint.

lookup_results = asyncio.run(proxycurl.linkedin.person.resolve(
    first_name="bill",
    last_name="gates",
    company_domain="microsoft"
))
print('Person Lookup Result:', lookup_results)

Lookup a company

Given a company name or a domain, lookup a company with Proxycurl's Company Lookup API Endpoint.

company_lookup_results = asyncio.run(proxycurl.linkedin.company.resolve(
    company_name="microsoft",
    company_domain="microsoft.com"
))
print('Company Lookup Result:', company_lookup_results)

Lookup a LinkedIn Profile URL from a work email address

Given a work email address, lookup a LinkedIn Profile URL with Proxycurl's Reverse Work Email Lookup Endpoint.

lookup_results = asyncio.run(proxycurl.linkedin.person.resolve_by_email(
    work_email="anthony.tan@grab.com"
))
print('Reverse Work Email Lookup Result:', lookup_results)

Enrich LinkedIn member profiles in bulk (from a CSV)

Given a CSV file with a list of LinkedIn member profile URLs, you can enrich the list in the following manner:

# PROCESS BULK WITH CSV
bulk_linkedin_person_data = []
with open('sample.csv', 'r') as file:
    reader = csv.reader(file)
    next(reader, None)  # skip the header row
    for row in reader:
        bulk_linkedin_person_data.append(
            (proxycurl.linkedin.person.get, {'url': row[0]})
        )
results = asyncio.run(do_bulk(bulk_linkedin_person_data))

print('Bulk:', results)
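The loop above expects sample.csv to have a header row (skipped by `next(reader, None)`) followed by one LinkedIn profile URL per row in the first column. A quick way to generate such a file (the header name here is illustrative):

```python
import csv

# Header row, then one LinkedIn profile URL per row, URL in the
# first column -- the shape the bulk-enrichment loop reads.
with open('sample.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['linkedin_profile_url'])
    writer.writerow(['https://www.linkedin.com/in/williamhgates/'])
```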

More asyncio examples

More asyncio examples can be found at examples/lib-asyncio.py

Rate limit and error handling

There is no need for you to handle rate limits (HTTP 429 errors); the library handles them automatically with exponential backoff.

However, you do need to handle other error codes yourself. Errors are raised in the form of ProxycurlException. The list of possible errors is in our API documentation.
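The handling pattern looks like the sketch below. Since ProxycurlException lives in the installed package, this sketch defines a stand-in class so it runs standalone; in real code you would import the exception from the library, and the raise would come from the API call itself:

```python
# Stand-in for the library's exception type so this sketch runs
# without proxycurl-py installed (in real code, import it from
# the package instead).
class ProxycurlException(Exception):
    pass

def enrich_person(url):
    # In real code: asyncio.run(proxycurl.linkedin.person.get(url=url))
    # Here we simulate a non-429 API error, e.g. a profile that
    # does not exist.
    raise ProxycurlException('404: profile not found')

try:
    person = enrich_person('https://www.linkedin.com/in/nonexistent/')
except ProxycurlException as e:
    # Rate limits (429) are retried internally by the library;
    # other errors like this one must be handled by your code.
    person = None
    print('Proxycurl error:', e)
```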

API Endpoints and their corresponding documentation

Here we list the available API endpoints and their corresponding library functions. Refer to each endpoint's API documentation for the required arguments that need to be passed to the function.

Function | Endpoint | API
linkedin.company.employee_count(**kwargs) | Employee Count Endpoint | Company API
linkedin.company.resolve(**kwargs) | Company Lookup Endpoint | Company API
linkedin.company.employee_list(**kwargs) | Employee Listing Endpoint | Company API
linkedin.company.get(**kwargs) | Company Profile Endpoint | Company API
linkedin.person.resolve_by_email(**kwargs) | Reverse Work Email Lookup Endpoint | Contact API
linkedin.person.lookup_email(**kwargs) | Work Email Lookup Endpoint | Contact API
linkedin.person.personal_contact(**kwargs) | Personal Contact Number Lookup Endpoint | Contact API
linkedin.person.personal_email(**kwargs) | Personal Email Lookup Endpoint | Contact API
linkedin.disposable_email(**kwargs) | Disposable Email Address Check Endpoint | Contact API
linkedin.company.find_job(**kwargs) | Job Listings Endpoint | Jobs API
linkedin.job.get(**kwargs) | Jobs Profile Endpoint | Jobs API
linkedin.person.resolve(**kwargs) | Person Lookup Endpoint | People API
linkedin.company.role_lookup(**kwargs) | Role Lookup Endpoint | People API
linkedin.person.get(**kwargs) | Person Profile Endpoint | People API
linkedin.school.get(**kwargs) | School Profile Endpoint | School API
linkedin.company.reveal | Reveal Endpoint | Reveal API
get_balance(**kwargs) | View Credit Balance Endpoint | Meta API

