
Python interface to the OpenAlex database

Project description

PyAlex - a Python wrapper for OpenAlex

PyAlex is a Python library for OpenAlex. OpenAlex is an index of hundreds of millions of interconnected scholarly papers, authors, institutions, and more. OpenAlex offers a robust, open, and free REST API to extract, aggregate, or search scholarly data. PyAlex is a lightweight and thin Python interface to this API. PyAlex tries to stay as close as possible to the design of the original service.

The following features of OpenAlex are currently supported by PyAlex:

  • Get single entities
  • Filter entities
  • Search entities
  • Group entities
  • Search filters
  • Select fields
  • Sample
  • Pagination
  • Autocomplete endpoint
  • N-grams
  • Authentication

We aim to cover the entire API, and we are looking for help. Pull requests are welcome.

Key features

  • Pipe operations - PyAlex can handle multiple operations in a sequence. This allows the developer to write understandable queries. See the sketch below and the code snippets.
  • Plaintext abstracts - OpenAlex doesn't include plaintext abstracts due to legal constraints. PyAlex can convert the inverted abstracts into plaintext abstracts on the fly.
  • Permissive license - OpenAlex data is CC0 licensed 🙌. PyAlex is published under the MIT license.
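
For instance, a filter, a sort, and the retrieval can be chained into a single pipe-style expression (a minimal sketch using only operations documented below):

from pyalex import Works

# chain filter, sort, and retrieval in one readable expression
Works().filter(publication_year=2020, is_oa=True).sort(cited_by_count="desc").get()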

Installation

PyAlex requires Python 3.8 or later.

pip install pyalex

Getting started

PyAlex offers support for all Entity Objects: Works, Authors, Sources, Institutions, Topics, Publishers, and Funders.

from pyalex import Works, Authors, Sources, Institutions, Topics, Publishers, Funders

The polite pool

The polite pool has much faster and more consistent response times. To get into the polite pool, you set your email:

import pyalex

pyalex.config.email = "mail@example.com"

Max retries

By default, PyAlex raises an error at the first failed request to the OpenAlex API. Set max_retries to a value higher than 0 to let PyAlex retry when an error occurs. retry_backoff_factor controls the delay between two retries, and retry_http_codes lists the HTTP error codes that trigger a retry. The defaults are shown below.

from pyalex import config

config.max_retries = 0
config.retry_backoff_factor = 0.1
config.retry_http_codes = [429, 500, 503]

Get single entity

Get a single Work, Author, Source, Institution, Concept, Topic, Publisher or Funder from OpenAlex by the OpenAlex ID, or by DOI or ROR.

Works()["W2741809807"]

# same as
Works()["https://doi.org/10.7717/peerj.4375"]

The result is a Work object, which is very similar to a dictionary. Find the available fields with .keys().
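
A quick way to inspect which fields are present (a minimal sketch; output abbreviated):

from pyalex import Works

w = Works()["W2741809807"]
print(list(w.keys()))  # e.g. ['id', 'doi', 'title', ...]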

For example, get the open access status:

Works()["W2741809807"]["open_access"]
{'is_oa': True, 'oa_status': 'gold', 'oa_url': 'https://doi.org/10.7717/peerj.4375'}

The above also works for Authors, Sources, Institutions, Concepts, and Topics:

Authors()["A2887243803"]
Authors()["https://orcid.org/0000-0002-4297-0502"]  # same

Get random

Get a random Work, Author, Source, Institution, Concept, Topic, Publisher or Funder.

Works().random()
Authors().random()
Sources().random()
Institutions().random()
Concepts().random()
Topics().random()
Publishers().random()
Funders().random()

Get abstract

Only for Works. Request a work from the OpenAlex database:

w = Works()["W3128349626"]

All attributes are available as documented under Works, plus abstract (only if abstract_inverted_index is not None). The human-readable abstract is created on the fly from the inverted index.

w["abstract"]
'Abstract To help researchers conduct a systematic review or meta-analysis as efficiently and transparently as possible, we designed a tool to accelerate the step of screening titles and abstracts. For many tasks—including but not limited to systematic reviews and meta-analyses—the scientific literature needs to be checked systematically. Scholars and practitioners currently screen thousands of studies by hand to determine which studies to include in their review or meta-analysis. This is error prone and inefficient because of extremely imbalanced data: only a fraction of the screened studies is relevant. The future of systematic reviewing will be an interaction with machine learning algorithms to deal with the enormous increase of available text. We therefore developed an open source machine learning-aided pipeline applying active learning: ASReview. We demonstrate by means of simulation studies that active learning can yield far more efficient reviewing than manual reviewing while providing high quality. Furthermore, we describe the options of the free and open source research software and present the results from user experience tests. We invite the community to contribute to open source projects such as our own that provide measurable and reproducible improvements over current practice.'

Please respect the legal constraints when using this feature.

Get lists of entities

results = Works().get()

For lists of entities, you can also count the number of records found instead of returning the results. This also works for search queries and filters.

Works().count()
# 10338153
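
Counting also composes with filters (a minimal sketch; the exact number changes as OpenAlex grows):

from pyalex import Works

# count the open access works published in 2020 without fetching the records
Works().filter(publication_year=2020, is_oa=True).count()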

For lists of entities, you can return the result as well as the metadata. By default, only the results are returned.

results, meta = Topics().get(return_meta=True)
print(meta)
{'count': 65073, 'db_response_time_ms': 16, 'page': 1, 'per_page': 25}

Filter records

Works().filter(publication_year=2020, is_oa=True).get()

which is identical to:

Works().filter(publication_year=2020).filter(is_oa=True).get()

Nested attribute filters

Some attribute filters are nested and separated with dots by OpenAlex. For example, filter on authorships.institutions.ror.

In case of nested attribute filters, use a dict to build the query.

Works() \
  .filter(authorships={"institutions": {"ror": "04pp8hn57"}}) \
  .get()

Search entities

OpenAlex reference: The search parameter

Works().search("fierce creatures").get()

Search filter

OpenAlex reference: The search filter

Authors().search_filter(display_name="einstein").get()
Works().search_filter(title="cubist").get()
Funders().search_filter(display_name="health").get()

Sort entity lists

OpenAlex reference: Sort entity lists.

Works().sort(cited_by_count="desc").get()

Select

OpenAlex reference: Select fields.

Works().filter(publication_year=2020, is_oa=True).select(["id", "doi"]).get()

Sample

OpenAlex reference: Sample entity lists.

Works().sample(100, seed=535).get()

Logical expressions

OpenAlex reference: Logical expressions

Inequality:

Sources().filter(works_count=">1000").get()

Negation (NOT):

Institutions().filter(country_code="!us").get()

Intersection (AND):

Works().filter(institutions={"country_code": ["fr", "gb"]}).get()

# same
Works().filter(institutions={"country_code": "fr"}).filter(institutions={"country_code": "gb"}).get()

Addition (OR):

Works().filter(institutions={"country_code": "fr|gb"}).get()

Paging

OpenAlex offers two methods for paging: basic (offset) paging and cursor paging. Both methods are supported by PyAlex.

Cursor paging (default)

Use the method paginate() to paginate results. Each returned page is a list of records, with a maximum of per_page records (default 25). By default, the paginate argument n_max is set to 10000. Use n_max=None to retrieve all results.

from pyalex import Authors

pager = Authors().search_filter(display_name="einstein").paginate(per_page=200)

for page in pager:
    print(len(page))
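
To retrieve every matching record instead of the first 10000, pass n_max=None (a sketch; this may issue many API requests):

pager_all = Authors().search_filter(display_name="einstein").paginate(per_page=200, n_max=None)

for page in pager_all:
    print(len(page))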

Looking for an easy method to iterate the records of a pager?

from itertools import chain
from pyalex import Authors

query = Authors().search_filter(display_name="einstein")

for record in chain(*query.paginate(per_page=200)):
    print(record["id"])

Basic paging

See limitations of basic paging in the OpenAlex documentation.

from pyalex import Authors

pager = Authors().search_filter(display_name="einstein").paginate(method="page", per_page=200)

for page in pager:
    print(len(page))

Autocomplete

OpenAlex reference: Autocomplete entities.

Autocomplete a string:

from pyalex import autocomplete

autocomplete("stockholm resilience centre")

Autocomplete a string to get a specific type of entities:

from pyalex import Institutions

Institutions().autocomplete("stockholm resilience centre")

You can also use the filters to autocomplete:

from pyalex import Works

r = Works().filter(publication_year=2023).autocomplete("planetary boundaries")

Get N-grams

OpenAlex reference: Get N-grams.

Works()["W2023271753"].ngrams()

Serialize

All results from PyAlex can be serialized. For example, save the results to a JSON file:

import json
from pathlib import Path
from pyalex import Work, Works

with open(Path("works.json"), "w") as f:
    json.dump(Works().get(), f)

with open(Path("works.json")) as f:
    works = [Work(w) for w in json.load(f)]

Code snippets

A list of awesome use cases of the OpenAlex dataset.

Cited publications (referenced works)

from pyalex import Works

# the work to extract the referenced works of
w = Works()["W2741809807"]

Works()[w["referenced_works"]]

Get works of a single author

from pyalex import Works

Works().filter(author={"id": "A2887243803"}).get()

Dataset publications in the global south

from pyalex import Works

# dataset-type works from global south institutions, grouped by country
w = Works() \
  .filter(institutions={"is_global_south": True}) \
  .filter(type="dataset") \
  .group_by("institutions.country_code") \
  .get()

Most cited publications in your organisation

from pyalex import Works

Works() \
  .filter(authorships={"institutions": {"ror": "04pp8hn57"}}) \
  .sort(cited_by_count="desc") \
  .get()

Experimental

Authentication

OpenAlex is currently experimenting with authenticated requests. Authenticate your requests with:

import pyalex

pyalex.config.api_key = "<MY_KEY>"

Alternatives

R users can use the excellent OpenAlexR library.

License

MIT

Contact

This library is a community contribution. The authors of this Python library aren't affiliated with OpenAlex.

Feel free to reach out with questions, remarks, and suggestions. The issue tracker is a good starting point. You can also email me at jonathandebruinos@gmail.com.
