
A panoply of tools for parsing, lexical analysis, and semantic processing

Project description

What is this?

Parser/Scanner generator and run-time, all in Python, plus various other handy bits and bobs.

Why is it cool?

  • Literate form: Definitions are embedded in Markdown documents as code blocks.
  • Macros eliminate most of the tedium typical of a context-free grammar.
  • Productions are separate from action code, so you can see both the trees and the forest.
  • Grammar and Scanner in a single definition file.
  • JSON for tables means that, in principle, run-times in other languages should be straightforward (see the sketch just after this list).
  • Full LR(1) deterministic, and also generalized / non-deterministic modes supported.
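
For instance, the table file the translator emits (see Run, below) should be loadable from any language with a JSON parser. Here is a minimal sketch, assuming my_grammar.automaton has already been generated and is the JSON-serialized table set the bullet above describes; the exact layout of the tables inside is booze-tools' business, so treat this purely as an illustration that the artifact is plain JSON:

import json

# Load the JSON-serialized tables produced by `py -m boozetools my_grammar.md`.
# Which keys appear inside is defined by booze-tools; we only peek at the top level here.
with open("my_grammar.automaton", "r", encoding="utf-8") as fh:
    tables = json.load(fh)

print(type(tables))   # plain dicts/lists/strings/numbers -- portable to any runtime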

Getting Started:

Install

D:\>  pip install booze-tools

Learn

Look in the examples for documentation by example.

  • Gentle Introduction with json.md and macro_json.py. These have the best introductory commentary to walk you through getting started.
  • For a complete working example, check out calculator.md and calculator.py.
  • Then check out the other examples as they interest you.

Full documentation is gradually moving from the wiki page over to ReadTheDocs, but it has been a slow process.

Run

Translate a definition and generate a .automaton file:

D:\> py -m boozetools my_grammar.md

Get a full run-down of the command-line options:

D:\> py -m boozetools -h

What's New?

  • Certain files are re-organized for the 0.6.x series.
  • The project moves back to alpha stage for the time being.
  • The

What's Here?

For now there are four major components. Eventually there will be more. These are:

  • MiniParse -- Provides Minimal-LR(1)*, LALR(1), or Canonical-LR(1) parsing with operator-precedence grammar facilities (like Lemon, YACC, or Bison), error productions, and good-and-proper error recovery.

  • MiniScan -- Provides a DFA-based backtracking scanner (like Flex or Lex) with a few extra goodies.

  • MacroParse -- This is the crown jewel of the package right now. It:

    • provides for a separate document containing the definitions of both a scanner and parser.
    • supports error productions and error-recovery in the same manner as MiniParse.
    • uses Markdown format to make just such a document into a literate program.
    • enables a single such definition to be used for different applications on different host languages.
    • supports a macro language for simplifying otherwise-redundant parser specifications.
    • provides a suitable runtime library so the examples run and pass the tests.
    • can prepare parse and scan tables ahead of time (serialized to JSON) or just-in-time according to your needs.
    • can generate DOT graphs from grammars.
  • Support Library: generic bits and bobs that may also be useful in other contexts (a small concept sketch follows this list).

    • Strongly Connected Components
    • Transitive Closure
    • Visitor Pattern
    • Equivalence Classification
    • Hamming Distance
    • Breadth First Traversal
    • Various small array hacks
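
To give a flavour of the support library, here is a plain-Python sketch of what "Transitive Closure" means in this context. It is a concept illustration under names of my own choosing, not the package's actual interface:

# Concept sketch, NOT the booze-tools API: starting from a set of direct
# relations, keep adding implied pairs until nothing new appears.
def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# If A refers to B and B refers to C, then A transitively refers to C:
# the result contains ('A', 'C') in addition to the two original pairs.
print(transitive_closure({("A", "B"), ("B", "C")}))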

The "minimal-LR(1)" algorithm used here is meant to create a constructively minimal number of parser states while respecting precedence and associativity declarations in the usual way. That means it can split states but does not (yet) attempt to merge them again afterwards. It is strongly inspired by the IELR(1) algorithm, but it is NOT exactly that algorithm. As far as I can tell it is a new contribution. As such, I would appreciate feedback respecting your results with it.

Priorities?

  • These tools operate within a Python environment.
  • They have some features not found in other such tools.
  • Performance is accordingly not the top priority, but:
    • the profiler has been used to solve one or two problems,
    • if someone wants to play with the profiler they are welcome, and
    • contributions in that vein will be accepted as long as they are consistent with the higher priorities.

What Else?

There are unit tests. They're not vast and imposing, but they exercise the interface both directly and via the example code.

Bibliography:

I'll add links as I track them down.

Oh, by the way...

I'm NOT a crack-pot. Really I'm not.
