expert.ai Natural Language API for Python
Python client for the expert.ai Natural Language API. Leverage Natural Language understanding from your Python apps.
Installation (contributor)
Clone the repository and run the following script:
$ cd nlapi-python
$ pip install -r requirements-dev.txt
As good practice, it's recommended to work in an isolated Python environment. Before building the package, you can create a virtual environment with the virtualenv package:
$ virtualenv expertai
$ source expertai/bin/activate
Usage
Before making requests to the API, you need to create an instance of the ExpertAiClient. Set your API credentials as environment variables:
export EAI_USERNAME=YOUR_USER
export EAI_PASSWORD=YOUR_PASSWORD
or from within the Python shell:
import os
os.environ["EAI_USERNAME"] = "YOUR_USER"
os.environ["EAI_PASSWORD"] = "YOUR_PASSWORD"
and then you can code as follows:
from expertai.client import ExpertAiClient
eai = ExpertAiClient()
Requests
From the client instance, you can call any endpoint (check the available endpoints below). For example, you can get named entities from a text document:
text = 'Facebook is looking at buying U.S. startup for $6 million'
language = 'en'

# get named entities
response = eai.specific_resource_analysis(body={"document": {"text": text}}, params={'language': language, 'resource': 'entities'})
or to classify it according to the IPTC Media Topics taxonomy:
text = 'Facebook is looking at buying U.S. startup for $6 million'
language = 'en'

# get the IPTC Media Topics classification
response = eai.iptc_media_topics_classification(body={"document": {"text": text}}, params={'language': language})
Responses
The response object returned by every endpoint call contains JSON data, as detailed in the Output reference:
For Named Entity extraction:
from pprint import pprint

pprint(response.json)
{
"content": "Facebook is looking at buying U.S. startup for $6 million",
"entities": [
{
"lemma": "6,000,000 dollar",
"positions": [
{
"end": 57,
"start": 47
}
],
"syncon": -1,
"type": "MON"
},
{
"lemma": "Facebook Inc.",
"positions": [
{
"end": 8,
"start": 0
}
],
"syncon": 288110,
"type": "COM"
}
],
"knowledge": [
{
"label": "organization.company",
"properties": [
{
"type": "DBpediaId",
"value": "dbpedia.org/page/Facebook,_Inc."
},
{
"type": "WikiDataId",
"value": "Q380"
}
],
"syncon": 288110
}
],
"language": "en",
"version": "sensei: 3.1.0; disambiguator: 15.0-QNTX-2016"
}
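The parsed JSON can be consumed like any Python dictionary. As a minimal sketch — using a hand-written excerpt of the response above in place of a live API call — you could list each entity's type and lemma:

```python
# Hand-written excerpt of the entity-extraction response shown above;
# with the real client you would use `data = response.json` instead.
data = {
    "entities": [
        {"lemma": "6,000,000 dollar", "syncon": -1, "type": "MON"},
        {"lemma": "Facebook Inc.", "syncon": 288110, "type": "COM"},
    ]
}

for entity in data["entities"]:
    # e.g. "MON: 6,000,000 dollar"
    print(f"{entity['type']}: {entity['lemma']}")
```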
For Document classification:
pprint(response.json)
{
"categories": [
{
"frequency": 64.63,
"hierarchy": [
"Economy, business and finance",
"Business information",
"Strategy and marketing",
"Merger or acquisition"
],
"id": "20000204",
"label": "Merger or acquisition",
"namespace": "iptc_en_1.0",
"positions": [
{
"end": 8,
"start": 0
},
{
"end": 29,
"start": 23
},
{
"end": 42,
"start": 35
}
],
"score": 1335,
"winner": true
}
],
"content": "Facebook is looking at buying U.S. startup for $6 million",
"language": "en",
"version": "sensei: 3.1.0; disambiguator: 14.5-QNTX-2016"
}
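Likewise, the classification response can be inspected to pick out the winning categories. A minimal sketch, again using a hand-written excerpt of the response above in place of `response.json`:

```python
# Hand-written excerpt of the classification response shown above;
# with the real client you would use `data = response.json` instead.
data = {
    "categories": [
        {
            "hierarchy": [
                "Economy, business and finance",
                "Business information",
                "Strategy and marketing",
                "Merger or acquisition",
            ],
            "id": "20000204",
            "label": "Merger or acquisition",
            "score": 1335,
            "winner": True,
        }
    ]
}

# Keep only the categories the classifier marked as winners.
winners = [c for c in data["categories"] if c["winner"]]
for category in winners:
    # e.g. "20000204: Economy, business and finance > ... > Merger or acquisition"
    print(f"{category['id']}: {' > '.join(category['hierarchy'])}")
```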
Available endpoints
These are all the endpoints of the API. For more information about each endpoint, check out the API documentation.
Document Analysis
Document Classification
Demo mode
You can find a demo script in the package to use as a starting point for developing your application:
python demo.py