
Tiny DSL to generate training dataset for NLU engines

Project description



Installation

From PyPI:

$ pip install pychatl


From source:

$ git clone https://github.com/atlassistant/chatl
$ cd chatl/python
$ python setup.py install


Or, as an editable install:

$ pip install -e .


From the terminal

usage: pychatl [-h] [--version] [-a ADAPTER] [-m MERGE] [--pretty]
             files [files ...]

Generates training dataset from a simple DSL.

positional arguments:
  files                 One or more DSL files to process

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  -a ADAPTER, --adapter ADAPTER
                        Name of the adapter to use
  -m MERGE, --merge MERGE
                        Options file to merge with the final result
  --pretty              Pretty output
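For example, assuming your grammar is saved in a file named forecast.dsl (a hypothetical name) and that the adapter is called snips, matching the Python module shown below, a dataset could be generated and written to a file like this:

```
$ pychatl forecast.dsl -a snips --pretty > dataset.json
```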

From the code

from pychatl import parse

result = parse("""
%[get_forecast]
  will it rain in @[city] @[dateStart]

@[city]
  paris
  ~[new york]

@[dateStart]
  at the end of the day

~[new york]
  ny
  nyc
""")
# Now you have a parsed dataset, so you may want to process it for a specific NLU engine

from pychatl.adapters import snips

snips_dataset = snips(result) # Or give options with `snips(result, language='en')`

# And now you have a dataset ready to be fed to snips-nlu!
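The exact dictionary returned by the adapter is determined by pychatl itself, but the Snips NLU dataset format it targets generally has the shape sketched below. This is an illustrative, hand-written sketch: the get_forecast intent, the utterance text, and the entity values are assumptions carried over from the example above, not actual adapter output.

```python
import json

# Hand-written sketch of a snips-nlu style dataset; field names follow the
# snips-nlu JSON dataset format, values are assumed for illustration only.
snips_dataset = {
    "language": "en",
    "intents": {
        "get_forecast": {
            "utterances": [
                {
                    # An utterance is a list of chunks: plain text chunks and
                    # slot chunks tagged with their entity and slot name.
                    "data": [
                        {"text": "will it rain in "},
                        {"text": "new york", "entity": "city", "slot_name": "city"},
                    ]
                }
            ]
        }
    },
    "entities": {
        "city": {
            # Entity values with their synonyms, as declared with ~[new york].
            "data": [{"value": "new york", "synonyms": ["ny", "nyc"]}],
            "use_synonyms": True,
            "automatically_extensible": True,
        }
    },
}

# The dataset serializes to plain JSON, ready to be saved and loaded by snips-nlu.
print(json.dumps(snips_dataset, indent=2))
```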


Testing

$ pip install -e .[test]
$ python -m nose --with-doctest --with-coverage --cover-package=pychatl

Project details

Source distribution: pychatl-2.0.4.tar.gz (17.1 kB)
