
API client for the Arkindex project

Project description

arkindex-client provides an API client to interact with Arkindex servers.


Install the client using pip:

pip install arkindex-client


To create a client and login using an email/password combo, use the ArkindexClient.login helper method:

from arkindex import ArkindexClient
cli = ArkindexClient()
cli.login('EMAIL', 'PASSWORD')

This helper method will save the authentication token in your API client, so that it is reused in later API requests.

If you already have an API token, you can create your client like so:

from arkindex import ArkindexClient
cli = ArkindexClient('YOUR_TOKEN')

Making requests

To perform a simple API request, you can use the request() method. The method takes an operation ID as its first argument, and the operation's parameters as keyword arguments.

You can open https://your.arkindex/api-docs/ to access the API documentation, which will describe the available API endpoints, including their operation ID and parameters.

corpus = cli.request('RetrieveCorpus', id='...')

The result will be a Python dict containing the result of the API request. If the request returns an error, an apistar.exceptions.ErrorResponse will be raised.

Dealing with pagination

The Arkindex client adds another helper method for paginated endpoints that deals with pagination for you: ArkindexClient.paginate. This method returns a ResponsePaginator instance, which is a classic Python iterator that does not perform any actual requests until absolutely needed: that is, until the next page must be loaded.

for element in cli.paginate('ListElements', corpus=corpus['id']):
    print(element)

Warning: Using list on a ResponsePaginator may load dozens of pages at once and put a significant load on the server. You can use len to get the total item count before fetching every page.
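The lazy-loading behaviour can be sketched as follows. This is a simplified illustration, not the real ResponsePaginator implementation; the fetch function and page size are made up for the demo:

```python
class LazyPaginator:
    """Simplified sketch of a paginator that only fetches a page when iteration reaches it."""

    def __init__(self, fetch_page):
        # fetch_page: callable taking a page number, returning a list of items
        # (an empty list means the dataset is exhausted)
        self.fetch_page = fetch_page
        self.requests_made = 0

    def __iter__(self):
        page = 0
        while True:
            self.requests_made += 1
            items = self.fetch_page(page)
            if not items:
                return
            yield from items
            page += 1

# Fake "server" holding 5 items, served 2 per page
data = ['a', 'b', 'c', 'd', 'e']

def fetch(page):
    return data[page * 2:(page + 1) * 2]

paginator = LazyPaginator(fetch)
it = iter(paginator)
first = next(it)  # only now is the first page requested
print(paginator.requests_made)  # 1: one page fetched, not the whole dataset
```

Iterating to the end (e.g. with list()) would request every page, which is why the warning above recommends checking len first on real endpoints.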

A call to paginate may trigger hundreds of sub-requests depending on the size of the dataset you're requesting. To cope with large datasets, and with network or performance issues, paginate supports a retries parameter that sets how many times each page may be requested. By default, the method will retry 5 times per page.
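The per-page retry behaviour amounts to something like the following sketch. This is a hand-rolled illustration, not the client's actual code; flaky_fetch simulates a transient failure:

```python
def fetch_with_retries(fetch, page, retries=5):
    """Illustration only: try to fetch one page, retrying on failure up to `retries` attempts."""
    last_exc = None
    for _ in range(retries):
        try:
            return fetch(page)
        except IOError as exc:
            last_exc = exc
    # All attempts failed for this page
    raise last_exc

# Simulated flaky endpoint: the first two calls fail, the third succeeds
calls = []

def flaky_fetch(page):
    calls.append(page)
    if len(calls) < 3:
        raise IOError('transient network error')
    return ['item']

result = fetch_with_retries(flaky_fetch, 0)
print(result, len(calls))  # succeeded on the third attempt
```

With retries exhausted, the real client either raises or, with allow_missing_data enabled, records the page as missing and moves on, as described below.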

For really big datasets, you may want to allow paginate to fail on some pages (errors happen). In that case, use the optional boolean parameter allow_missing_data (set to False by default).

Here is an example of pagination on a large dataset, allowing data loss, lowering retries and listing the missed pages (the argument values are illustrative):

elements = cli.paginate(
    'ListElements',
    corpus=corpus['id'],
    retries=3,
    allow_missing_data=True,
)
for element in elements:
    print(element)

print(f"Missing pages: {elements.missing}")

Using another server

By default, the API client points to the main Arkindex server. If you need or want to use this API client on another server, you can use the base_url keyword argument when setting up your API client:

cli = ArkindexClient(base_url='https://somewhere')

Handling errors

APIStar, the underlying API client we use, does most of the error handling. It will raise two types of exceptions:

apistar.exceptions.ErrorResponse: the request resulted in an HTTP 4xx or 5xx response from the server.

Client-side errors: any error that prevents the client from making the request or fetching the response: invalid endpoint names or URLs, unsupported content types, or unknown request parameters. See the exception messages for more info.

Since this API client retrieves the endpoints description from the server using the base URL, errors can occur during the retrieval and parsing of the API schema. If this happens, an arkindex.exceptions.SchemaError exception will be raised.

You can handle HTTP errors and fetch more information about them using the exception’s attributes:

from apistar.exceptions import ErrorResponse

try:
    corpus = cli.request('RetrieveCorpus', id='...')
except ErrorResponse as e:
    print(e.title)   # "400 Bad Request"
    print(e.status_code)  # 400
    print(e.result)  # Any kind of response body the server might give
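These attributes make it easy to sort errors before logging them. For instance, with a hypothetical describe_error helper (not part of the client) built from the exception's status_code and result:

```python
def describe_error(status_code, result):
    """Hypothetical helper: build a log message from an ErrorResponse's attributes."""
    if status_code >= 500:
        kind = 'server error'
    elif status_code >= 400:
        kind = 'client error'
    else:
        kind = 'unexpected status'
    return f'{kind} ({status_code}): {result}'

print(describe_error(400, {'detail': 'Invalid corpus ID'}))
```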

Note that by default, using repr() or str() on APIStar exceptions will not give any useful messages; a fix in APIStar is waiting to be merged. In the meantime, you can use Teklia’s APIStar fork:

pip install git+

This will provide support for repr() and str(), which will also enhance error messages on unhandled exceptions.


Download full logs for each Ponos task in a workflow

workflow = cli.request('RetrieveWorkflow', id='...')
for task in workflow['tasks']:
    with open(task['id'] + '.txt', 'w') as f:
        f.write(cli.request('RetrieveTaskLog', id=task['id']))


We use pre-commit with black to automatically format the Python source code of this project.

To be efficient, you should run pre-commit before committing (hence the name…).

To do that, run once:

pip install pre-commit
pre-commit install

The linting workflow will now run on modified files before committing, and will fix issues for you.

If you want to run the full workflow on all the files: pre-commit run -a.
