
Low Level Client for Crossref Search API

Project description

habanero


This is a low-level client for working with Crossref's search API. It has been given a generic name because other organizations have adopted, or will adopt, Crossref's search API, making it possible to interact with all of them from one client.

Crossref API docs: https://github.com/CrossRef/rest-api-doc

Other Crossref API clients:

Crossref’s API issue tracker: https://gitlab.com/crossref/issues

habanero includes three modules that you can import as needed, or all at once (a combined import example follows this list):

Crossref - Crossref search API. The Crossref module includes methods matching Crossref API routes, and a few convenience methods for getting DOI agency and random DOIs:

  • works - /works route

  • members - /members route

  • prefixes - /prefixes route

  • funders - /funders route

  • journals - /journals route

  • types - /types route

  • licenses - /licenses route

  • registration_agency - get DOI minting agency

  • random_dois - get random set of DOIs

counts - citation counts. Includes the single citation_count method

cn - content negotiation. Includes the methods:

  • content_negotiation - get citations in a variety of formats

  • csl_styles - get CSL styles, used in the content_negotiation method
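
For example, all three modules can be imported together; a minimal sketch that uses only the methods listed above:

from habanero import Crossref, counts, cn

cr = Crossref()                                                # Crossref search API client
cr.works(query = "ecology")                                    # /works route
counts.citation_count(doi = "10.1016/j.fbr.2012.01.001")       # citation count for one DOI
cn.content_negotiation(ids = "10.1126/science.169.3946.635")   # formatted citation via content negotiation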

Note about searching:

You are using the Crossref search API described at https://github.com/CrossRef/rest-api-doc. When you search with query terms, the Crossref servers do not search full text, or even abstracts of articles, but only what is available in the data returned to you; that is, they search article titles, authors, and other metadata. For some discussion of this, see https://gitlab.com/crossref/issues/issues/101
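
Query terms can also be targeted at specific metadata fields. A minimal sketch, assuming works() accepts field-query keyword arguments such as query_title and query_author as described in the habanero docs:

from habanero import Crossref
cr = Crossref()
# match only against titles rather than all indexed metadata
cr.works(query_title = "ecology")
# combine a general query with an author field query
cr.works(query = "climate", query_author = "smith")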

Rate limits

See the headers X-Rate-Limit-Limit and X-Rate-Limit-Interval for current rate limits.
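
A minimal sketch for checking the current values, using the requests library directly (independent of habanero):

import requests

# any request to the API returns the rate-limit headers
r = requests.get("https://api.crossref.org/works", params = {"rows": 0})
print(r.headers.get("X-Rate-Limit-Limit"))      # requests allowed per interval
print(r.headers.get("X-Rate-Limit-Interval"))   # length of the interval, e.g. "1s"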

The Polite Pool

To get into the polite pool, it's a good idea to include a mailto email address. See the docs for more information.
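
For example, pass your email address when initializing the client (mailto support was added in 0.6.0; the address below is a placeholder):

from habanero import Crossref
# requests from this client include your email address,
# which routes them to the polite pool
cr = Crossref(mailto = "name@example.com")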

Installation

Stable version

pip install habanero

Dev version

pip install git+https://github.com/sckott/habanero.git#egg=habanero

# OR

git clone git@github.com:sckott/habanero.git
cd habanero
make install

Usage

Initialize a client

from habanero import Crossref
cr = Crossref()

Works route

x = cr.works(query = "ecology")
x['message']                    # the response payload
x['message']['total-results']   # total number of matching works
x['message']['items']           # the list of work records
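
For large result sets, deep paging with a cursor is available (see the changelog notes below about the 10,000 offset limit and the progress_bar parameter). A minimal sketch, assuming works() accepts cursor, cursor_max, and progress_bar parameters and returns a list of response pages when a cursor is used:

# page through results 100 at a time, up to 500 records in total
pages = cr.works(query = "ecology", cursor = "*", cursor_max = 500, limit = 100, progress_bar = True)
sum(len(p['message']['items']) for p in pages)   # total records fetched across pages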

Members route

cr.members(ids = 98, works = True)  # fetch works for member 98

Citation counts

from habanero import counts
counts.citation_count(doi = "10.1016/j.fbr.2012.01.001")  # returns the count as an integer

Content negotiation - get citations in many formats

from habanero import cn
cn.content_negotiation(ids = '10.1126/science.169.3946.635')
cn.content_negotiation(ids = '10.1126/science.169.3946.635', format = "citeproc-json")
cn.content_negotiation(ids = "10.1126/science.169.3946.635", format = "rdf-xml")
cn.content_negotiation(ids = "10.1126/science.169.3946.635", format = "text")
cn.content_negotiation(ids = "10.1126/science.169.3946.635", format = "text", style = "apa")
cn.content_negotiation(ids = "10.1126/science.169.3946.635", format = "bibentry")

Meta

Changelog

0.7.0 (2019-11-08)

  • filter_names() and filter_details() altered to get metadata for works, members, and funders filters; added examples of using filters to the members and funders methods (#67)

  • many typos fixed (#80), thanks @Radcliffe!

  • a progress bar can now be shown when fetching works route data, but only when doing deep paging; see the progress_bar parameter (#77) (#82)

  • content_negotiation fixes: ids parameter is now required (has no default), and must be a str or list of str (#83)

  • no longer testing under Python 2

0.6.2 (2018-10-22)

  • changelog was missing from the pypi distribution, fixed now (#71)

  • fixed Crossref.registration_agency() method, which was broken by a previous change (#72)

  • set encoding on response text for content_negotiation() method to UTF-8 to fix encoding issues (#73)

  • fix Crossref.filter_names() method; no sort on dict_keys (#76)

0.6.0 (2017-10-20)

  • Added verification and docs for additional Crossref search filters (#62)

  • Big improvement to docs on readthedocs (#59)

  • Added mailto support (#68) (#63), and added related docs about the polite pool (#66)

  • Added support for select parameter (#65)

  • Added all new /works route filters, and simplified filter option handling within library (#60)

0.5.0 (2017-07-20)

  • Now using vcrpy to mock all unit tests (#54)

  • Can now set your own base URL for content negotiation (#37)

  • Some field queries with works() were failing, but now seem to be working, likely due to fixes in Crossref API (#53)

  • style input to content_negotiation was fixed (#57) (#58), thanks @talbertc-usgs

  • Fix to content_negotiation when inputting a DOI as a unicode string (#56)

0.3.0 (2017-05-21)

  • Added more documentation for field queries, describing available fields that support field queries, and how to do field queries (#50)

  • sample parameter maximum value is 100 - has been for a while, but wasn’t updated in Crossref docs (#44)

  • Updated docs that facet parameter can be a string query in addition to a boolean (#49)

  • Documented new 10,000 max value for /works requests - that is, for the offset parameter - if you need more results than that use cursor (see https://github.com/CrossRef/rest-api-doc/blob/master/rest_api.md#deep-paging-with-cursors) (#47)

  • Added to docs a bit about rate limiting, their current values, that they can change, and how to show them in verbose curl responses (#45)

  • Now using https://doi.org for cn.content_negotiation - and the function gains a new parameter url to specify different base URLs for content negotiation (#36)

  • Fixes to kwargs and fix docs for what can be passed to kwargs (#41)

  • Duplicated names passed to filter were not working - fixed now (#48)

  • Raise proper HTTP errors when appropriate for cn.content_negotiation thanks @jmaupetit (#55)

0.2.6 (2016-06-24)

0.2.2 (2016-03-09)

  • fixed some example code that included non-working examples (#34)

  • fixed bug in registration_agency() method, works now! (#35)

  • removed redundant filter_names and filter_details bits in docs

0.2.0 (2016-02-10)

  • user-agent strings now passed in every HTTP request to Crossref, including an X-USER-AGENT header in case the User-Agent string is lost (#33)

  • added a disclaimer to docs about what is actually searched when searching the Crossref API - that is, only what is returned in the API, so no full text or abstracts are searched (#32)

  • improved http error parsing - now passes on the hopefully meaningful error messages from the Crossref API (#31)

  • more tests added (#30)

  • habanero now supports cursor for deep paging. Note that cursor only works with requests to the /works route (#18)

0.1.3 (2015-12-02)

  • Fix wheel file to be universal so it installs on both Python 2 and Python 3 (#25)

  • Added method csl_styles to get CSL styles for use in content negotiation (#27)

  • More documentation for content negotiation (#26)

  • Made note in docs that sample param ignored unless /works used (#24)

  • Made note in docs that funders without IDs don’t show up on the /funders route (#23)

0.1.1 (2015-11-17)

  • Fix readme

0.1.0 (2015-11-17)

  • Now compatible with Python 2.x and 3.x

  • agency() method changed to registration_agency()

  • New method citation_count() - get citation counts for DOIs

  • New method crosscite() - get a citation for DOIs, only supports simple text format

  • New method random_dois() - get a random set of DOIs

  • Now importing xml.dom to do small amount of XML parsing

  • Changed library structure; the package is now organized into modules: the main Crossref search API (i.e., api.crossref.org) including higher-level methods (e.g., registration_agency), content negotiation, and citation counts.

0.0.6 (2015-11-09)

  • First pypi release

