An open source library for building end-to-end dialog systems and training chatbots.


We are still in an early alpha release.
In version 0.0.6, everything from the deeppavlov.skills package except deeppavlov.skills.pattern_matching_skill was moved to deeppavlov.models, so your imports might break.

DeepPavlov is an open-source conversational AI library built on TensorFlow and Keras. It is designed for:

  • development of production-ready chatbots and complex conversational systems,
  • NLP and dialog systems research.

Hello Bot in DeepPavlov

Import key components to build HelloBot.

from deeppavlov.core.agent import Agent, HighestConfidenceSelector
from deeppavlov.skills.pattern_matching_skill import PatternMatchingSkill

Create skills as pre-defined responses for user inputs containing specific keywords. Every skill returns a response and a confidence.

hello = PatternMatchingSkill(responses=['Hello world! :)'], patterns=["hi", "hello", "good day"])
bye = PatternMatchingSkill(['Goodbye world! :(', 'See you around.'], ["bye", "chao", "see you"])
fallback = PatternMatchingSkill(["I don't understand, sorry :/", 'I can say "Hello world!" 8)'])

The agent executes all skills and then takes the response from the skill with the highest confidence.

HelloBot = Agent([hello, bye, fallback], skills_selector=HighestConfidenceSelector())

Give the floor to the HelloBot!

print(HelloBot(['Hello!', 'Boo...', 'Bye.']))

Jupyter notebook with the HelloBot example.

Installation

  1. Currently we support only the Linux platform and Python 3.6 (Python 3.5 is not supported!)

  2. Create a virtual environment with Python 3.6:

    virtualenv env
    
  3. Activate the environment.

    source ./env/bin/activate
    
  4. Clone the repo and cd to the project root:

    git clone https://github.com/deepmipt/DeepPavlov.git
    cd DeepPavlov
    
  5. Install basic requirements:

    python setup.py develop
    

Demo

A demo of selected features is available at demo.ipavlov.ai.

Conceptual overview

Our goal is to provide AI-application developers and researchers with:

  • a set of pre-trained NLP models, pre-defined dialog system components (ML/DL/rule-based) and pipeline templates;
  • a framework for implementing and testing their own dialog models;
  • tools for application integration with adjacent infrastructure (messengers, helpdesk software etc.);
  • benchmarking environment for conversational models and uniform access to relevant datasets.

Key Concepts

  • Agent is a conversational agent communicating with users in natural language (text).
  • Skill fulfills the user's goal in some domain. Typically, this is accomplished by presenting information or completing a transaction (e.g. answering a question from an FAQ, booking tickets, etc.). However, for some tasks the success of the interaction is defined by continuous engagement (e.g. chit-chat).
  • Component is a reusable functional part of a Skill.
    • Rule-based Models cannot be trained.
    • Machine Learning Models can only be trained standalone.
    • Deep Learning Models can be trained independently or end-to-end when joined in a chain.
  • Skill Manager selects the Skill used to generate the response.
  • Chainer builds an agent/component pipeline from heterogeneous components (rule-based/ML/DL). It allows training and inference on the pipeline as a whole.

The smallest building block of the library is a Component. A Component stands for any kind of function in an NLP pipeline. It can be implemented as a neural network, a non-neural ML model or a rule-based system. Besides that, a Component can have a nested structure, i.e. a Component can include other Components.

Components can be joined into a Skill. A Skill solves a larger NLP task than a Component. However, in terms of implementation, Skills are not different from Components. The only restriction on Skills is that their input and output should both be strings. Therefore, Skills are usually associated with dialogue tasks.

An Agent is a multi-purpose dialogue system that comprises several Skills and can switch between them. For example, it can be a dialogue system that contains goal-oriented and chatbot skills and chooses which one to use for generating the answer depending on user input.

DeepPavlov is built on top of machine learning frameworks TensorFlow and Keras. Other external libraries can be used to build basic components.


Quick start

To use our pre-trained models, you should first install their requirements:

python -m deeppavlov install <path_to_config>

Then download the models and data for them:

python -m deeppavlov download <path_to_config>

or you can use the additional flag -d to automatically download all required models and data with any command such as interact, riseapi, etc.

Then you can interact with the models or train them with the following command:

python -m deeppavlov <mode> <path_to_config> [-d]
  • <mode> can be train, predict, interact, interactbot or riseapi
  • <path_to_config> should be a path to an NLP pipeline JSON config (e.g. deeppavlov/configs/ner/slotfill_dstc2.json) or the name, without the .json extension, of one of the config files provided in this repository (e.g. slotfill_dstc2); see the end-to-end example below
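
For example, to install requirements for, download, and then interact with the DSTC2 slot-filling model mentioned above:

python -m deeppavlov install slotfill_dstc2
python -m deeppavlov download slotfill_dstc2
python -m deeppavlov interact slotfill_dstc2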

For the interactbot mode you should specify the Telegram bot token with the -t parameter or in the TELEGRAM_TOKEN environment variable. Also, if you want custom /start and /help Telegram messages for the running model, you should:

  • Add a section to utils/telegram_utils/model_info.json with your custom Telegram messages
  • In the model config file, specify the metadata.labels.telegram_utils parameter with a name that refers to the added section of utils/telegram_utils/model_info.json

For the riseapi mode you should specify API settings (host, port, etc.) in the utils/server_utils/server_config.json configuration file. If provided, values from the model_defaults section override values for the same parameters from the common_defaults section. Model names in the model_defaults section should match the class names of the models' main components.

For predict you can specify the path to an input file with the -f or --input-file parameter; otherwise, data will be taken from stdin.
Every line of input text is used as a pipeline input parameter, so one example consists of as many lines as your pipeline expects input parameters.
You can also specify the batch size with the -b or --batch-size parameter.
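
For example, to predict intents for every line of a file in batches of 16 (the input file name here is only an illustration):

python -m deeppavlov predict intents_snips -f input.txt -b 16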

Features

Components

  • NER component: Based on a neural Named Entity Recognition network. The NER component reproduces the architecture from the paper Application of a Hybrid Bi-LSTM-CRF Model to the Task of Russian Named Entity Recognition, which is inspired by the Bi-LSTM+CRF architecture from https://arxiv.org/pdf/1603.01360.pdf.
  • Slot filling components: Based on fuzzy Levenshtein search to extract normalized slot values from text. The components either rely on NER results or perform a needle-in-a-haystack search.
  • Classification component: Component for classification tasks (intents, sentiment, etc.). Based on the shallow-and-wide Convolutional Neural Network architecture from Kim Y. Convolutional Neural Networks for Sentence Classification (2014) and others. The model allows multilabel classification of sentences.
  • Goal-oriented bot: Based on the Hybrid Code Networks (HCNs) architecture from Jason D. Williams, Kavosh Asadi, Geoffrey Zweig, Hybrid Code Networks: Practical and Efficient End-to-End Dialog Control with Supervised and Reinforcement Learning (2017). It predicts responses in a goal-oriented dialog. The model is customizable: embeddings, the slot filler and the intent classifier can be switched on and off on demand.
  • Seq2seq goal-oriented bot: A dialogue agent that predicts responses in a goal-oriented dialog and is able to handle multiple domains (the pretrained bot allows calendar scheduling, weather information retrieval, and point-of-interest navigation). The model is end-to-end differentiable and does not need to explicitly model dialogue state or belief trackers.
  • Automatic spelling correction component: Pipelines that use candidate search in a static dictionary and an ARPA language model to correct spelling errors.
  • Ranking component: Based on LSTM-based deep learning models for non-factoid answer selection. The model ranks responses or contexts from a database by their relevance to the given context.
  • Question Answering component: Based on R-NET: Machine Reading Comprehension with Self-Matching Networks. The model solves the task of finding an answer to a question in a given context (SQuAD task format).
  • Morphological tagging component: Based on the character-based approach to morphological tagging from Heigold et al., 2017. An Extensive Empirical Evaluation of Character-Based Morphological Tagging for 14 Languages. A state-of-the-art model for Russian and several other languages; it assigns morphological tags in UD format to sequences of words.

Skills

  • ODQA: An open domain question answering skill. The skill accepts free-form questions about the world and outputs an answer based on its Wikipedia knowledge.

Parameters Evolution

  • Parameters evolution for models: Implementation of parameters evolution for DeepPavlov models that requires only minor changes to a config file.

Embeddings

  • Pre-trained embeddings for the Russian language: Word vectors for the Russian language trained on joint Russian Wikipedia and Lenta.ru corpora.

Examples of some components

  • Run goal-oriented bot with Telegram interface:
python -m deeppavlov interactbot deeppavlov/configs/go_bot/gobot_dstc2.json -d -t <TELEGRAM_TOKEN>
  • Run goal-oriented bot with console interface:
python -m deeppavlov interact deeppavlov/configs/go_bot/gobot_dstc2.json -d
  • Run goal-oriented bot with REST API:
python -m deeppavlov riseapi deeppavlov/configs/go_bot/gobot_dstc2.json -d
  • Run slot-filling model with Telegram interface:
python -m deeppavlov interactbot deeppavlov/configs/ner/slotfill_dstc2.json -d -t <TELEGRAM_TOKEN>
  • Run slot-filling model with console interface:
python -m deeppavlov interact deeppavlov/configs/ner/slotfill_dstc2.json -d
  • Run slot-filling model with REST API:
python -m deeppavlov riseapi deeppavlov/configs/ner/slotfill_dstc2.json -d
  • Predict intents on every line in a file:
python -m deeppavlov predict deeppavlov/configs/intents/intents_snips.json -d --batch-size 15 < /data/in.txt > /data/out.txt

View the video demo of deploying a goal-oriented bot and a slot-filling model with a Telegram UI.

Tutorials

Jupyter notebooks and videos explaining how to use DeepPavlov for different tasks can be found in /examples/tutorials/.


Technical overview

Project modules

  • deeppavlov.core.commands: basic training and inference functions
  • deeppavlov.core.common: registration and class initialization functionality, class method decorators
  • deeppavlov.core.data: basic DatasetIterator, DatasetReader and Vocab classes
  • deeppavlov.core.layers: collection of commonly used layers for TF models
  • deeppavlov.core.models: abstract model classes and interfaces
  • deeppavlov.dataset_readers: concrete DatasetReader classes
  • deeppavlov.dataset_iterators: concrete DatasetIterator classes
  • deeppavlov.metrics: different metric functions
  • deeppavlov.models: concrete Model classes
  • deeppavlov.skills: Skill classes; skills are dialog models
  • deeppavlov.vocabs: concrete Vocab classes

Component config

An NLP pipeline config is a JSON file that contains one required element, chainer:

{
  "chainer": {
    "in": ["x"],
    "in_y": ["y"],
    "pipe": [
      ...
    ],
    "out": ["y_predicted"]
  }
}

Chainer is a core concept of the DeepPavlov library: it builds a pipeline from heterogeneous components (rule-based/ML/DL) and allows training or inference with the pipeline as a whole. Each component in the pipeline specifies its inputs and outputs as arrays of names, for example: "in": ["tokens", "features"] and "out": ["token_embeddings", "features_embeddings"], and you can chain outputs of one component with inputs of other components:

{
  "class": "deeppavlov.models.preproccessors.str_lower:StrLower",
  "in": ["x"],
  "out": ["x_lower"]
},
{
  "name": "nltk_tokenizer",
  "in": ["x_lower"],
  "out": ["x_tokens"]
},

Each Component in the pipeline must implement the __call__ method and have a name parameter, which is its registered codename, or a class parameter in the form of module_name:ClassName. It can also have any other parameters that repeat its __init__() method arguments. Default values of __init__() arguments will be overridden with the config values during the initialization of a class instance.
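
As an illustration, a minimal registered Component might look like the following sketch (the codename and class here are hypothetical; the base class and register decorator are the ones used in the DatasetReader example below):

from deeppavlov.core.common.registry import register
from deeppavlov.core.models.component import Component

@register('my_str_lower')  # hypothetical codename to use as "name" in configs
class MyStrLower(Component):
    def __init__(self, *args, **kwargs):
        pass

    def __call__(self, batch):
        # lowercase every utterance in the input batch
        return [utterance.lower() for utterance in batch]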

You can reuse components in the pipeline to process different parts of data with the help of id and ref parameters:

{
  "name": "nltk_tokenizer",
  "id": "tokenizer",
  "in": ["x_lower"],
  "out": ["x_tokens"]
},
{
  "ref": "tokenizer",
  "in": ["y"],
  "out": ["y_tokens"]
},

Training

There are two abstract classes for trainable components: Estimator and NNModel.
Estimators are fit once on any data, with no batching or early stopping, so fitting can safely be done at the time of pipeline initialization. The fit method has to be implemented for each Estimator; an example of an Estimator is Vocab. NNModel requires more complex training: it can only be trained in a supervised mode (as opposed to Estimator, which can be trained in both supervised and unsupervised settings), and training takes multiple epochs with periodic validation and logging. The train_on_batch method has to be implemented for each NNModel.
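
As a sketch, a trainable component could look like this (the import path is an assumption based on the project module list above, and the method bodies are placeholders):

from deeppavlov.core.common.registry import register
from deeppavlov.core.models.nn_model import NNModel  # assumed module path

@register('my_intent_model')  # hypothetical codename
class MyIntentModel(NNModel):
    def train_on_batch(self, x, y):
        # perform one training step on inputs x with ground truth y
        ...

    def __call__(self, batch):
        # return predictions for a batch of inputs
        ...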

Training is triggered by the deeppavlov.core.commands.train.train_model_from_config() function.
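
For example, assuming the function accepts a path to a pipeline config:

from deeppavlov.core.commands.train import train_model_from_config

train_model_from_config('deeppavlov/configs/intents/intents_snips.json')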

Train config

Estimators that are trained should also have the fit_on parameter, which contains a list of input parameter names. An NNModel should have the in_y parameter, which contains a list of ground truth answer names. For example:

[
  {
    "id": "classes_vocab",
    "name": "default_vocab",
    "fit_on": ["y"],
    "level": "token",
    "save_path": "vocabs/classes.dict",
    "load_path": "vocabs/classes.dict"
  },
  {
    "in": ["x"],
    "in_y": ["y"],
    "out": ["y_predicted"],
    "name": "intent_model",
    "save_path": "intents/intent_cnn",
    "load_path": "intents/intent_cnn",
    "classes_vocab": {
      "ref": "classes_vocab"
    }
  }
]

The config for training the pipeline should have three additional elements: dataset_reader, dataset_iterator and train:

{
  "dataset_reader": {
    "name": ...,
    ...
  },
  "dataset_iterator": {
    "name": ...,
    ...
  },
  "chainer": {
    ...
  },
  "train": {
    ...
  }
}

A simplified version of the training pipeline contains two elements: dataset and train. The dataset element can currently be used to train from classification data in CSV and JSON formats. You can find complete examples of how to use the simplified training pipeline in the intents_sample_csv.json and intents_sample_json.json config files.

Train Parameters

  • epochs — maximum number of epochs to train the NNModel, defaults to -1 (infinite)
  • batch_size — number of examples in one training batch
  • metrics — list of names of registered metrics to evaluate the model; the first metric in the list is used for early stopping
  • metric_optimization — whether to maximize or minimize the metric, defaults to maximize
  • validation_patience — how many times in a row the validation metric can fail to improve before early stopping, defaults to 5
  • val_every_n_epochs — how often to validate the pipeline, defaults to -1 (never)
  • log_every_n_batches, log_every_n_epochs — how often to calculate metrics on train data, defaults to -1 (never)
  • validate_best, test_best — flags to infer the best saved model on valid and test data, defaults to true
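
An illustrative train section combining these parameters (the values, including the metric name, are arbitrary examples rather than recommended defaults):

"train": {
  "epochs": 100,
  "batch_size": 64,
  "metrics": ["accuracy"],
  "metric_optimization": "maximize",
  "validation_patience": 5,
  "val_every_n_epochs": 1,
  "log_every_n_batches": 100,
  "validate_best": true,
  "test_best": true
}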

DatasetReader

The DatasetReader class reads data and returns it in a specified format. A concrete DatasetReader class should inherit from the base deeppavlov.core.data.dataset_reader.DatasetReader class and be registered with a codename:

from deeppavlov.core.common.registry import register
from deeppavlov.core.data.dataset_reader import DatasetReader

@register('dstc2_datasetreader')
class DSTC2DatasetReader(DatasetReader):
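
A slightly fuller sketch of a custom reader (the codename is hypothetical, and the exact return format of read is an assumption based on the data splits described in the DatasetIterator section below):

from deeppavlov.core.common.registry import register
from deeppavlov.core.data.dataset_reader import DatasetReader

@register('my_dataset_reader')
class MyDatasetReader(DatasetReader):
    def read(self, data_path):
        # return the data split into 'train', 'valid' and 'test' sets
        return {'train': [], 'valid': [], 'test': []}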

DatasetIterator

DatasetIterator forms the sets of data ('train', 'valid', 'test') needed for training and inference and divides them into batches. A concrete DatasetIterator class should be registered and can inherit from the deeppavlov.core.data.dataset_iterator.BasicDatasetIterator class. BasicDatasetIterator is not abstract and can also be used as a DatasetIterator directly.
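
By analogy with the DatasetReader example above, a custom iterator might be registered as follows (the import path is inferred from the project module list and is an assumption):

from deeppavlov.core.common.registry import register
from deeppavlov.core.data.dataset_iterator import BasicDatasetIterator

@register('my_dataset_iterator')
class MyDatasetIterator(BasicDatasetIterator):
    # override splitting or batching behaviour here if needed
    pass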

Inference

All components inherited from the deeppavlov.core.models.component.Component abstract class can be used for inference. The __call__() method should return the standard output of a component: for example, a tokenizer should return tokens, a NER model should return recognized entities, and a bot should return an utterance. The particular format of the returned data should be defined in __call__().

Inference is triggered by the deeppavlov.core.commands.infer.interact_model() function. There is no need for a separate JSON config for inference.
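
For example, assuming interact_model, like train_model_from_config, accepts a path to a pipeline config:

from deeppavlov.core.commands.infer import interact_model

interact_model('deeppavlov/configs/intents/intents_snips.json')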

REST API

Each library component or skill can be easily made available for inference as a REST web service. The general method is:

python -m deeppavlov riseapi <config_path> [-d]

(the optional -d flag downloads dependencies before the service starts)

Web service properties (host, port, model endpoint, GET request arguments) are provided in utils/server_utils/server_config.json. Properties from the common_defaults section are used by default unless they are overridden by component-specific properties provided in the model_defaults section of server_config.json. Component-specific properties are bound to a component by the server_utils label in the metadata/labels section of the component config. The value of the server_utils label in the component config should match a key from the model_defaults section of server_config.json.

For example, the metadata/labels/server_utils tag from go_bot/gobot_dstc2.json refers to the GoalOrientedBot section of server_config.json. Therefore, the model_endpoint parameter from common_defaults will be overridden with the same parameter from model_defaults/GoalOrientedBot.

Model argument names are provided as a list in the model_args_names parameter, where the argument order corresponds to the component API. When inferring a model via the REST API, JSON payload keys should match the component argument names from model_args_names. The default argument name for one-argument components is "context". Here are example POST request payloads for some of the library components:

One-argument components:

  • NER component: {"context":"Elon Musk launched his cherry Tesla roadster to the Mars orbit"}
  • Intent classification component: {"context":"I would like to go to a restaurant with Asian cuisine this evening"}
  • Automatic spelling correction component: {"context":"errror"}
  • Ranking component: {"context":"What is the average cost of life insurance services?"}
  • (Seq2seq) Goal-oriented bot: {"context":"Hello, can you help me to find and book a restaurant this evening?"}

Two-argument components:

  • Question Answering component: {"context":"After 1765, growing philosophical and political differences strained the relationship between Great Britain and its colonies.", "question":"What strained the relationship between Great Britain and its colonies?"}
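
For instance, such a request can be sent from Python (the host, port and endpoint below are placeholders; the actual values come from your server_config.json):

import requests

# POST to the model endpoint configured in utils/server_utils/server_config.json
response = requests.post(
    'http://localhost:5000/answer',  # placeholder host/port/model_endpoint
    json={'context': 'errror'},      # keys must match model_args_names
)
print(response.json())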

A Flasgger UI for API testing is provided at <host>:<port>/apidocs when running a component in riseapi mode.

License

DeepPavlov is Apache 2.0-licensed.

Support and collaboration

If you have any questions, bug reports or feature requests, please feel free to post on our GitHub Issues page. Please tag your issue with bug, feature request, or question. We'll also be glad to see your pull requests adding new datasets, models, embeddings, etc.

The Team

DeepPavlov is built and maintained by the Neural Networks and Deep Learning Lab at MIPT within the iPavlov project (part of the National Technology Initiative) and in partnership with Sberbank.
