
Evalkit API Client Library
==========================

[Overview](#overview)
[Testing](#testing)
[Deployment](#deployment)
[Documentation](#documentation)
[Installation](#installation)
[Usage](#usage)
[Contributing](#contributing)
[References](#references)

Overview
--------

This is a client library for making requests to the EvaluationKit API.

Testing
-------

This project is tested with [tox](https://tox.readthedocs.io/en/latest/).

Run the `tox` command to run all checks and unit tests:
```
$ tox
```

By default, this project's tox runs:

* [flake8](http://flake8.pycqa.org/en/latest/)
* [mypy](https://github.com/python/mypy)
* [pytest](https://docs.pytest.org/en/latest/)
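
A minimal `tox.ini` along these lines would produce that behavior. This is a hypothetical sketch; the project's actual file may list different environments and dependencies:
```
[tox]
envlist = flake8, mypy, py3

[testenv]
deps = pytest
commands = pytest

[testenv:flake8]
deps = flake8
commands = flake8 evalkit_api_client

[testenv:mypy]
deps = mypy
commands = mypy evalkit_api_client
```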

To create test coverage reports:
```
$ tox -e cov
```

Deployment
----------

Deployment to PyPI is done with tox:
```
$ tox -e deploy
```
Make sure to bump the version in `setup.py` before deploying.

Documentation
-------------

This project has Sphinx documentation at the following URL:
https://lcary.github.io/canvas-lms-tools/

The EvaluationKit API documentation is also very useful.

Installation
------------

To install, use pip:
```
$ pip install evalkit_api_client
```

Or clone the repo and install from source:
```
$ git clone https://github.com/lcary/canvas-lms-tools.git
$ cd canvas-lms-tools/evalkit_api_client
$ python setup.py install
```

Usage
-----

Adding the client as a dependency in your project's `requirements.txt`
file is the intended way to use the client.
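
For example, a `requirements.txt` entry pinning a specific release (the version shown is illustrative):
```
evalkit_api_client==0.0.1a1
```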

#### REPL Example

```
$ python
>>> from evalkit_api_client.v1_client import EvalKitAPIv1
>>> url = 'https://sub-account.evaluationkit.com/api/v1'
>>> token = 'xxxxxxxxxxxxxxxxxxxTHISxISxNOTxAxREALxTOKENxxxxxxxxxxxxxxxxxxxxx'
>>> api = EvalKitAPIv1(url, token)
>>> projects = api.get_projects().json()
>>> len(projects['resultList'])  # number of projects in the sub-account
2
>>> for p in projects['resultList']:
...     print(p['id'], p['title'])
...
49400 Test Evaluation A
57600 Test Eval B
```

#### Script Example

This very simple example requires a few environment variables. The
API URL and token should be something like:
```
EVALKIT_API_URL=https://sub-account.evaluationkit.com/api/v1
EVALKIT_API_TOKEN=xxxxxxxxxxxxxxxxxxxTHISxISxNOTxAxREALxTOKENxxxxxxxxxxxxxxxxxxxxx
```

The recommended approach is to store credentials in a config file with limited
read permissions rather than in environment variables, but environment
variables keep this example short.
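
As a sketch of that config-file approach, the standard-library `configparser` works well. The filename, section name, and keys below are assumptions for illustration, not part of this library:

```python
from configparser import ConfigParser
from pathlib import Path

# Hypothetical config file; in practice this would live somewhere like
# ~/.evalkit.ini with restricted permissions (e.g. chmod 600).
Path('evalkit.ini').write_text(
    '[evalkit]\n'
    'api_url = https://sub-account.evaluationkit.com/api/v1\n'
    'api_token = not-a-real-token\n'
)

# Read the credentials back out of the file.
config = ConfigParser()
config.read('evalkit.ini')
url = config['evalkit']['api_url']
token = config['evalkit']['api_token']
print(url)
```

Keeping the token in a permission-restricted file avoids exposing it in the process environment or shell history.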

Once installed in your project via pip, use as follows:

```python
from os import environ
from pprint import pprint

from evalkit_api_client.v1_client import EvalKitAPIv1

url = environ.get('EVALKIT_API_URL')
token = environ.get('EVALKIT_API_TOKEN')

api = EvalKitAPIv1(url, token)
projects = api.get_projects()

print(projects.json())
```

#### EvalKitAPIv1

This library is meant to be imported into your code. The `EvalKitAPIv1` client
object requires an `api_url` argument and an `api_token` argument. The `api_url`
should typically be defined in a configuration file, and should be the full API
URL without the endpoint path, e.g. `https://sub.evaluationkit.com/api/v1/`. The `api_token`
should similarly be defined in a config file, and is the token generated for
a given sub-account in EvaluationKit.

Refer to the client interface [documentation](#documentation) for more information.
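
As one illustration of working with the parsed response, the helper below filters the `resultList` payload (the response shape shown in the REPL example) by a title keyword. The helper name and the local-filtering approach are illustrative, not part of the client:

```python
def find_projects(projects_json, keyword):
    """Return (id, title) pairs whose title contains keyword, case-insensitively."""
    return [(p['id'], p['title'])
            for p in projects_json['resultList']
            if keyword.lower() in p['title'].lower()]

# Response-shaped sample data, mirroring the REPL example above:
sample = {'resultList': [
    {'id': 49400, 'title': 'Test Evaluation A'},
    {'id': 57600, 'title': 'Test Eval B'},
]}
print(find_projects(sample, 'eval'))  # both titles contain "Eval"
```

With a live client this would be called as `find_projects(api.get_projects().json(), 'eval')`.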

Contributing
------------

#### Building Wheels

Building the wheel:
```
$ python setup.py bdist_wheel
```

#### Installing Wheels

How to install the client for testing:
```
$ pip uninstall evalkit_api_client || echo "Already uninstalled."
$ pip install --no-index --find-links=dist evalkit_api_client
```

Alternatively, install by specifying the full or relative path to the `.whl` file:
```
$ pip install --no-index /path/to/canvas-lms-tools/evalkit_api_client/dist/evalkit_api_client-<version>-py2.py3-none-any.whl
```

(You may need to `pip install wheel` first if you are installing from another
project. Consult [stack overflow](https://stackoverflow.com/questions/28002897/wheel-file-installation)
for more help.)

#### Sphinx Docs

Creating the docs:
```
$ cd docs
$ pip install -r requirements.txt
$ pip install evalkit_api_client
$ make html
$ open build/html/index.html
```

Deploying the docs to GitHub Pages:
```
$ git checkout master
$ git pull
$ git branch -D gh-pages
$ git checkout -b gh-pages
$ rm -rf ./*
$ touch .nojekyll
$ git checkout master evalkit_api_client/docs/
$ # < build the docs as above >
$ mv evalkit_api_client/docs/build/html/* ./
$ rm -rf evalkit_api_client
$ git add -A
$ git commit
$ git push -f origin gh-pages
```

For more info see the [GitHub Pages documentation](https://pages.github.com/),
the [Sphinx docs](http://www.sphinx-doc.org/en/master/contents.html),
or the following [script docs](http://www.willmcginnis.com/2016/02/29/automating-documentation-workflow-with-sphinx-and-github-pages/).

References
----------

This project was originally generated with the following cookiecutter template:
https://github.com/wdm0006/cookiecutter-pipproject

