The Wowool Portal Package
Introduction
The wowool-portal is a powerful and flexible Natural Language Processing (NLP) SDK designed to ease the integration of advanced NLP capabilities into your applications. This SDK provides a robust pipeline for processing text data and returns detailed annotations, including tokens and concepts, to help you extract meaningful insights from unstructured text.
With wowool-portal, you can easily use NLP domains that can handle a variety of tasks such as tokenization, named entity recognition, and concept extraction. The SDK is designed to be user-friendly and efficient, making it an ideal choice for developers and data scientists looking to enhance their applications with state-of-the-art NLP features.
Install
The package is published on PyPI, so you can install it directly with pip:
pip install wowool-portal
Quick Start
CLI
Before using the wowool-portal, you need to set the following environment variables for authentication:
- WOWOOL_PORTAL_USERNAME: Your portal username
- WOWOOL_PORTAL_PASSWORD: Your portal password
- WOWOOL_PORTAL_API_KEY: Your portal API key
You can set these environment variables in your terminal session with the following commands:
export WOWOOL_PORTAL_USERNAME="your_username"
export WOWOOL_PORTAL_PASSWORD="your_password"
export WOWOOL_PORTAL_API_KEY="your_api_key"
Replace "your_username", "your_password" and "your_api_key" with your actual portal credentials. Once these environment variables are set, you can start using the wowool-portal CLI wow to process your text data.
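As a quick sanity check before running the CLI, you can verify the variables are set from Python. This is only a convenience sketch; the variable names come from the list above:

```python
import os

# The three credentials the portal expects (names from the list above)
REQUIRED_VARS = ("WOWOOL_PORTAL_USERNAME", "WOWOOL_PORTAL_PASSWORD", "WOWOOL_PORTAL_API_KEY")

def missing_credentials(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_credentials()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All portal credentials are set.")
```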
Contact us at info@wowool.com to get your credentials.
Extracting Named Entities and Sentiments
To extract the named entities (NER) and sentiments from a text, you can use the wow command with the appropriate modules and input text. Here is an example:
wow -p english,entity,sentiment,sentiments.app -i "John Smith worked for IBM. He is a nice person."
This command processes the input text "John Smith worked for IBM. He is a nice person." and returns detailed annotations, including entities and sentiments. Note that in the sentiment text, "He" has been resolved to its referent, John Smith.
Example output:
app='wowool_analysis'
S:( 0, 26)
C:( 0, 26): Sentence
C:( 0, 10): Person,@(canonical='John Smith' family='Smith' gender='male' given='John' )
T:( 0, 4): John,{+giv, +init-cap, +init-token},[John:Prop-Std]
T:( 5, 10): Smith,{+fam, +init-cap},[Smith:Prop-Std]
T:( 11, 17): worked,[work:V-Past]
T:( 18, 21): for,[for:Prep-Std]
C:( 22, 25): Company,@(canonical='IBM' country='USA' sector='it' )
T:( 22, 25): IBM,{+all-cap},[IBM:Prop-Std]
T:( 25, 26): .,[.:Punct-Sent]
S:( 27, 47)
C:( 27, 47): Sentence
C:( 27, 46): PositiveSentiment
C:( 27, 29): SentimentObject
C:( 27, 29): Person,@(canonical='John Smith' family='Smith' gender='male' given='John' )
T:( 27, 29): He,{+3p, +init-cap, +init-token, +nom, +sg},[he:Pron-Pers]
T:( 30, 32): is,[be:V-Pres-Sg-be]
T:( 33, 34): a,[a:Det-Indef]
T:( 35, 39): nice,{+inf},[nice:Adj-Std]
T:( 40, 46): person,{+person},[person:Nn-Sg]
T:( 46, 47): .,[.:Punct-Sent]
app='wowool_sentiments'
{
"positive": 100.0,
"negative": 0.0,
"sentiments": [
{
"polarity": "positive",
"text": "John Smith be a nice person",
"begin_offset": 27,
"end_offset": 46,
"object": "John Smith"
}
]
}
In this output:
- S denotes a sentence.
- C denotes a concept, such as a Person or Company.
- T denotes a token, such as a word or punctuation mark.
- PositiveSentiment indicates a positive sentiment associated with the sentence.
- SentimentObject indicates the object of the sentiment.

This detailed level of annotation helps you understand the structure and meaning of the text, making it easier to extract valuable insights.
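The ( begin, end) pairs in the output are plain character offsets into the input string, so you can recover any annotation's surface text by slicing the original input:

```python
text = "John Smith worked for IBM. He is a nice person."

# Offsets from the example output above:
# C:( 0, 10) Person, C:( 22, 25) Company, C:( 27, 29) Person
print(text[0:10])   # John Smith
print(text[22:25])  # IBM
print(text[27:29])  # He
```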
API
This sample demonstrates how to use the API to process a text using a pipeline that includes English language processing and entity recognition. Here's a step-by-step explanation:
Extract NER entities
from wowool.portal import Pipeline
pipeline = Pipeline("english,entity")
doc = pipeline("John Smith worked for IBM. He is a nice person.")
print(doc)
print("-" * 80)
# Visit all the entities in the document
for entity in doc.entities:
    print(entity)
print("-" * 80)
# Visit all the sentences in the document
for sentence in doc.sentences:
    print(sentence)
# Visit all the sentences and then all annotations for each sentence
print("-" * 80)
for sentence in doc.sentences:
    for annotation in sentence:
        if annotation.is_concept:
            print(annotation.uri, annotation.text, annotation.begin_offset, annotation.end_offset)
Extract topics
This sample demonstrates how to use the API to process a text and request the topics from the document.
from wowool.portal import Pipeline
pipeline = Pipeline("english,entity,topics.app")
doc = pipeline("Van Kerkhove, who specializes in respiratory diseases, said that, while it was confirmed that this was a “new” coronavirus, it was still being investigated whether it was transmitted from an animal.")
print(doc.topics)
Extract Sentiments
from wowool.portal import Pipeline
import json
pipeline = Pipeline("english,entity,sentiment,sentiments.app")
doc = pipeline("John Smith worked for IBM. He is a nice person.")
sentiments = doc.results("wowool_sentiments")
print(json.dumps(sentiments, indent=2))
Example output:
{
"positive": 100.0,
"negative": 0.0,
"sentiments": [
{
"polarity": "positive",
"text": "John Smith be a nice person",
"begin_offset": 27,
"end_offset": 46,
"object": "John Smith"
}
]
}
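Since the sentiments result is plain JSON-compatible data, you can post-process it with standard Python. The sketch below operates on the example payload shown above:

```python
import json

# Example payload, copied from the output above
payload = json.loads("""{
  "positive": 100.0,
  "negative": 0.0,
  "sentiments": [
    {"polarity": "positive",
     "text": "John Smith be a nice person",
     "begin_offset": 27,
     "end_offset": 46,
     "object": "John Smith"}
  ]
}""")

# Group sentiment polarities by the object they refer to
by_object = {}
for sentiment in payload["sentiments"]:
    by_object.setdefault(sentiment["object"], []).append(sentiment["polarity"])

print(by_object)  # {'John Smith': ['positive']}
```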
Extract categories/themes
This sample demonstrates how to use the API to process a text and request the themes from the document.
from wowool.portal import Pipeline
pipeline = Pipeline("english,entity,semantic-themes,themes.app")
doc = pipeline("Van Kerkhove, who specializes in respiratory diseases, said that, while it was confirmed that this was a “new” coronavirus, it was still being investigated whether it was transmitted from an animal.")
print(doc.themes)
Pipeline
To process text you need to put together a pipeline. Pipelines are an easy way to generate the output you need by combining languages, domains and apps. In the example above, we used 'english' as the language and 'entity' and 'sentiment' as domains. We can also add an app, like 'topics', that will perform topic identification:
wow -p english,topics.app \
-i "NFT scams, toxic mines and lost life savings: the cryptocurrency dream is fading fast"
The app 'themes' will take care of finding a category for the text:
wow -p english,semantic-theme,topics.app,themes.app \
-i "Supermassive black hole at centre of Milky Way seen for first time"
You can use a snippet to test a rule, like the assassination rule below:
wow -p "english,entity,snippet( rule: {'kill' (Prop)+}=Assassination; ).app" \
-i "John Doe killed John Smith"
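A pipeline is specified as a comma-separated string: the language first, then domains, then apps carrying the ".app" suffix. If you compose pipelines dynamically, a tiny helper can assemble the spec. This is only a convenience sketch based on the format shown above, not part of the SDK:

```python
def build_pipeline(language, domains=(), apps=()):
    """Assemble a pipeline spec: language first, then domains, then '<name>.app' apps."""
    parts = [language, *domains, *(f"{app}.app" for app in apps)]
    return ",".join(parts)

print(build_pipeline("english", domains=["entity", "sentiment"], apps=["sentiments"]))
# english,entity,sentiment,sentiments.app
```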