
Low Level Client for Crossref Search API

Project description

habanero


This is a low level client for working with Crossref's search API. It's been given a generic name because other organizations are adopting (or will adopt) Crossref's search API, making it possible to interact with all of them from one client.

Crossref API docs: https://api.crossref.org/swagger-ui/index.html

Other Crossref API clients:

Crossref’s API issue tracker: https://gitlab.com/crossref/issues

habanero includes three modules you can import as needed (or import all):

Crossref - Crossref search API. The Crossref module includes methods matching Crossref API routes, and a few convenience methods for getting DOI agency and random DOIs:

  • works - /works route

  • members - /members route

  • prefixes - /prefixes route

  • funders - /funders route

  • journals - /journals route

  • types - /types route

  • licenses - /licenses route

  • registration_agency - get DOI minting agency

  • random_dois - get random set of DOIs

counts - citation counts. Includes the single citation_count method

cn - content negotiation. Includes the methods:

  • content_negotiation - get citations in a variety of formats

  • csl_styles - get CSL styles, used in the content_negotiation method

WorksContainer - A class for handling Crossref works. Pass it the output of works-returning methods on the Crossref class to more easily extract specific fields of works.
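
For example, a works result can be wrapped in WorksContainer to pull fields out of all returned works. This is a minimal sketch; the field accessors shown (e.g., doi) are assumptions, so check the habanero docs for the exact attributes:

from habanero import Crossref, WorksContainer

cr = Crossref()
x = cr.works(query = "ecology")

# wrap the raw works output to make field extraction easier
xc = WorksContainer(x)
xc.works  # the list of work records
xc.doi    # assumed accessor collecting the DOI field across works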

Note about searching:

You are using the Crossref search API described at https://api.crossref.org/swagger-ui/index.html. When you search with query terms, Crossref servers do not search the full text, or even the abstracts, of articles, but only what is available in the metadata that is returned to you; that is, they search article titles, authors, and so on. For some discussion of this, see https://gitlab.com/crossref/issues/-/issues/101

Rate limits

See the headers X-Rate-Limit-Limit and X-Rate-Limit-Interval for current rate limits.
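
One quick way to inspect the current values is a raw request against the API, for example with the requests library (a sketch outside of habanero; the header names come from above):

import requests

# rows=0 asks Crossref for no records, just the response envelope and headers
r = requests.get("https://api.crossref.org/works", params = {"query": "ecology", "rows": 0})
print(r.headers.get("X-Rate-Limit-Limit"))
print(r.headers.get("X-Rate-Limit-Interval"))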

The Polite Pool

To get into the polite pool it's now a good idea to include a mailto email address. See the docs for more information.
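
A minimal sketch (the email address is a placeholder); mailto support was added in 0.6.0, see the changelog below:

from habanero import Crossref

# requests made with a mailto address are routed to Crossref's polite pool
cr = Crossref(mailto = "name@example.com")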

Installation

Stable version

pip (or pip3) install habanero

Dev version

pip install git+https://github.com/sckott/habanero.git#egg=habanero

Or build it yourself locally

git clone https://github.com/sckott/habanero.git
cd habanero
make install

Usage

Initialize a client

from habanero import Crossref
cr = Crossref()

Works route

# query
x = cr.works(query = "ecology")
x['message']
x['message']['total-results']
x['message']['items']

# fetch data by DOI
cr.works(ids = '10.1371/journal.pone.0033693')
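
A few other works() options referenced in the changelog below, shown as a sketch (parameter names such as cursor_max are assumptions; see the docs for exact signatures):

# return only selected fields, and cap the number of results
cr.works(query = "ecology", select = ["DOI", "title"], limit = 5)

# deep paging with a cursor; cursor only works on the /works route
cr.works(query = "ecology", cursor = "*", cursor_max = 200)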

Members route

# ids here is the Crossref Member ID; 98 = Hindawi
cr.members(ids = 98, works = True)
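
The convenience methods listed in the module overview can be called on the same client; a minimal sketch (the DOI and sample size are illustrative, and the sample parameter name is an assumption):

# which registration agency minted a DOI
cr.registration_agency('10.1371/journal.pone.0033693')

# a random set of DOIs
cr.random_dois(sample = 5)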

Citation counts

from habanero import counts
counts.citation_count(doi = "10.1016/j.fbr.2012.01.001")

Content negotiation - get citations in many formats

from habanero import cn
cn.content_negotiation(ids = '10.1126/science.169.3946.635')
cn.content_negotiation(ids = '10.1126/science.169.3946.635', format = "citeproc-json")
cn.content_negotiation(ids = "10.1126/science.169.3946.635", format = "rdf-xml")
cn.content_negotiation(ids = "10.1126/science.169.3946.635", format = "text")
cn.content_negotiation(ids = "10.1126/science.169.3946.635", format = "text", style = "apa")
cn.content_negotiation(ids = "10.1126/science.169.3946.635", format = "bibentry")

Meta

Changelog

1.2.2 (2022-05-19)

  • Fixed class WorksContainer to work with cursor output of works results (e.g., cr.works(query, cursor="*")) (#106) thanks @IvanSterligov

1.2 (2022-03-27)

  • Added class WorksContainer to make handling works data easier (#101)

  • changed master branch to main in github development repository (#103)

  • exclude tests from install (#105)

1.0 (2021-11-12)

  • fixes to docs/contributing.rst and package level docs for habanero (#89) (#90) thanks @Daniel-Mietchen !

  • fix limit and offset internal handling for request and Request (#91) thanks @Bubblbu !

  • content_negotiation now throws a warning on 4xx/5xx status codes to allow for bad DOIs alongside good DOIs (#92)

  • add example to README for querying works by DOI (#93)

  • fail better when json is not returned; try json.loads and catch ValueError (JSONDecodeError is a subclass of ValueError) (#97)

  • funders, journals, members, prefixes, types and works gain a warn parameter to optionally throw a warning instead of an error if a DOI is not found; a not-found DOI with warn=True returns None (#69)

0.7.4 (2020-05-29)

  • query.title filter is deprecated, use query.bibliographic instead (#85)

0.7.2 (2019-12-12)

  • Crossref() class gains ua_string option to add an additional string to the user-agent sent with every request (#84)

0.7.0 (2019-11-08)

  • filter_names() and filter_details() altered to get metadata for works, members and funders filters; added examples to the members and funders methods for using filters (#67)

  • many typos fixed (#80) thanks @Radcliffe !

  • use of a progress bar is now possible when fetching works route data, only when doing deep paging, see progress_bar parameter (#77) (#82)

  • content_negotiation fixes: ids parameter is now required (has no default), and must be a str or list of str (#83)

  • no longer testing under Python 2

0.6.2 (2018-10-22)

  • changelog was missing from the pypi distribution, fixed now (#71)

  • fixed Crossref.registration_agency() method, borked it up on a previous change (#72)

  • set encoding on response text for content_negotiation() method to UTF-8 to fix encoding issues (#73)

  • fix Crossref.filter_names() method; no sort on dict_keys (#76)

0.6.0 (2017-10-20)

  • Added verification and docs for additional Crossref search filters (#62)

  • Big improvement to docs on readthedocs (#59)

  • Added mailto support (#68) (#63) and related added docs about polite pool (#66)

  • Added support for select parameter (#65)

  • Added all new /works route filters, and simplified filter option handling within library (#60)

0.5.0 (2017-07-20)

  • Now using vcrpy to mock all unit tests (#54)

  • Can now set your own base URL for content negotiation (#37)

  • Some field queries with works() were failing, but now seem to be working, likely due to fixes in Crossref API (#53)

  • style input to content_negotiation was fixed (#57) (#58) thanks @talbertc-usgs

  • Fix to content_negotiation when inputting a DOI as a unicode string (#56)

0.3.0 (2017-05-21)

  • Added more documentation for field queries, describing available fields that support field queries, and how to do field queries (#50)

  • sample parameter maximum value is 100 - has been for a while, but wasn’t updated in Crossref docs (#44)

  • Updated docs that facet parameter can be a string query in addition to a boolean (#49)

  • Documented new 10,000 max value for /works requests - that is, for the offset parameter - if you need more results than that use cursor (see https://github.com/CrossRef/rest-api-doc/blob/master/rest_api.md#deep-paging-with-cursors) (#47)

  • Added to docs a bit about rate limiting, their current values, that they can change, and how to show them in verbose curl responses (#45)

  • Now using https://doi.org for cn.content_negotiation - and the function gains a new parameter url to specify different base URLs for content negotiation (#36)

  • Fixes to kwargs and fix docs for what can be passed to kwargs (#41)

  • Duplicated names passed to filter were not working - fixed now (#48)

  • Raise proper HTTP errors when appropriate for cn.content_negotiation thanks @jmaupetit (#55)

0.2.6 (2016-06-24)

0.2.2 (2016-03-09)

  • fixed some example code that included non-working examples (#34)

  • fixed bug in registration_agency() method, works now! (#35)

  • removed redundant filter_names and filter_details bits in docs

0.2.0 (2016-02-10)

  • user-agent strings now passed in every http request to Crossref, including an X-USER-AGENT header in case the User-Agent string is lost (#33)

  • added a disclaimer to docs about what is actually searched when searching the Crossref API - that is, only what is returned in the API, so no full text or abstracts are searched (#32)

  • improved http error parsing - now passes on the hopefully meaningful error messages from the Crossref API (#31)

  • more tests added (#30)

  • habanero now supports cursor for deep paging. note that cursor only works with requests to the /works route (#18)

0.1.3 (2015-12-02)

  • Fix wheel file to be a universal to install on python2 and python3 (#25)

  • Added method csl_styles to get CSL styles for use in content negotiation (#27)

  • More documentation for content negotiation (#26)

  • Made note in docs that sample param ignored unless /works used (#24)

  • Made note in docs that funders without IDs don’t show up on the /funders route (#23)

0.1.1 (2015-11-17)

  • Fix readme

0.1.0 (2015-11-17)

  • Now compatible with Python 2x and 3x

  • agency() method changed to registration_agency()

  • New method citation_count() - get citation counts for DOIs

  • New method crosscite() - get a citation for DOIs, only supports simple text format

  • New method random_dois() - get a random set of DOIs

  • Now importing xml.dom to do small amount of XML parsing

  • Changed library structure, now with module system, separated into modules for the main Crossref search API (i.e., api.crossref.org) including higher level methods (e.g., registration_agency), content negotiation, and citation counts.

0.0.6 (2015-11-09)

  • First pypi release



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

habanero-1.2.2.tar.gz (47.6 kB)

Uploaded Source

Built Distribution

habanero-1.2.2-py3-none-any.whl (29.9 kB)

Uploaded Python 3

File details

Details for the file habanero-1.2.2.tar.gz.

File metadata

  • Download URL: habanero-1.2.2.tar.gz
  • Upload date:
  • Size: 47.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.7.1 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.2 CPython/3.9.12

File hashes

Hashes for habanero-1.2.2.tar.gz:

  • SHA256: 5e4ed00e811a350c03894d6691657e555fbdef417bfe723c2248020efac37641
  • MD5: 783b1dc37244defc0f6c6fa2c4aa9538
  • BLAKE2b-256: c0b91c17bba1251a9e42f51ed4249349abe4c6650fc4f0c9eb6ad4639bf843f1


File details

Details for the file habanero-1.2.2-py3-none-any.whl.

File metadata

  • Download URL: habanero-1.2.2-py3-none-any.whl
  • Upload date:
  • Size: 29.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.6.0 importlib_metadata/4.8.2 pkginfo/1.7.1 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.2 CPython/3.9.12

File hashes

Hashes for habanero-1.2.2-py3-none-any.whl:

  • SHA256: 29047c3a81455e03af3a8419899df2dfb4cd1abcc51ec612f70f833ada4acf5a
  • MD5: fc3a11f9aeac94aee949874f712be74e
  • BLAKE2b-256: e6304a4950e43fa95501b840e76d93f930061d9f52d182e01f28b759a659cac2

