Flexible, ruleset-based tokenizer using regex.

Project description

lex2-py3


lex2 is a library for lexical analysis (also called tokenization). Strings are analyzed using regular expressions (regex) defined in user-provided rules. Additional features, such as a dynamic ruleset stack, provide a degree of flexibility at runtime.

The library is written in platform-independent, pure Python 3 and is portable (it uses no language-specific features), making it straightforward to port to other programming languages. Furthermore, the library is designed to let the end-user easily integrate any external regex engine of their choice through a simple, unified interface.
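As a rough sketch of the general technique (this is not lex2's implementation; the names below are illustrative), a ruleset-based tokenizer can be built on Python's built-in re module by trying each rule's pattern at the current offset and emitting the first match:

```python
import re
from typing import NamedTuple

class Rule(NamedTuple):
    id: str       # token identifier, e.g. "WORD"
    pattern: str  # regex pattern for this rule

def tokenize(text, ruleset):
    """Yield (rule_id, lexeme) pairs by trying each rule at the current offset."""
    compiled = [(rule.id, re.compile(rule.pattern)) for rule in ruleset]
    pos = 0
    while pos < len(text):
        if text[pos].isspace():  # skip whitespace between tokens
            pos += 1
            continue
        for rule_id, regex in compiled:
            m = regex.match(text, pos)
            if m:
                yield rule_id, m.group()
                pos = m.end()
                break
        else:
            raise ValueError(f"No rule matches at offset {pos}")

tokens = list(tokenize("fox jumps over 2 dogs.", [
    Rule("WORD",        r"[a-zA-Z]+"),
    Rule("NUMBER",      r"[0-9]+"),
    Rule("PUNCTUATION", r"[.,:;!?]"),
]))
```

Rules are tried in order, so earlier rules take priority when patterns overlap; a real lexer additionally tracks line/column positions and supports swapping rulesets at runtime.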

Getting Started

As per usual, you can install the library from the Python Package Index (PyPI) through pip:

pip install lex2

You can also choose to manually include the library in your project by cloning or downloading a snapshot of the repository from GitHub and copying the lex2 folder to your project's includes/libraries folder.

Usage of the library is relatively simple, as demonstrated by the short example below. For more in-depth examples and for using a different regex engine of your choice, see the documentation.

import lex2

# Define ruleset and prepare the lexer object instance
ruleset: lex2.RulesetType = [
    #        Identifier     Regex pattern
    lex2.Rule("WORD",        r"[a-zA-Z]+"),
    lex2.Rule("NUMBER",      r"[0-9]+"),
    lex2.Rule("PUNCTUATION", r"[.,:;!?\-]")
]
lexer: lex2.ILexer = lex2.make_lexer()(ruleset)

# Load input data by opening a file
lexer.open(r"C:/path/to/file.txt")
# Or by directly passing a string
lexer.load("The quick, brown fox jumps over 2 lazy dogs. \nMr. Jock, TV quiz PhD, bags few lynx.")

# Main tokenization loop
token: lex2.Token
while True:

    # Find the next token in the textstream
    try:
        token = lexer.get_next_token()
    except lex2.excs.EOF:
        break

    info = [
        "ln: {}".format(token.pos.ln + 1),
        "col: {}".format(token.pos.col + 1),
        token.id,
        token.data,
    ]
    print("{: <12} {: <15} {: <20} {: <20}".format(*info))

lexer.close()
>>> ln: 1        col: 1          WORD                 The
>>> ln: 1        col: 5          WORD                 quick
>>> ln: 1        col: 10         PUNCTUATION          ,
>>> ln: 1        col: 12         WORD                 brown
>>> ln: 1        col: 18         WORD                 fox
>>> ln: 1        col: 22         WORD                 jumps
>>> ln: 1        col: 28         WORD                 over
>>> ln: 1        col: 33         NUMBER               2
>>> ln: 1        col: 35         WORD                 lazy
>>> ln: 1        col: 40         WORD                 dogs
>>> ln: 1        col: 44         PUNCTUATION          .
>>> ln: 2        col: 1          WORD                 Mr
>>> ln: 2        col: 3          PUNCTUATION          .
>>> ln: 2        col: 5          WORD                 Jock
>>> ln: 2        col: 9          PUNCTUATION          ,
>>> ln: 2        col: 11         WORD                 TV
>>> ln: 2        col: 14         WORD                 quiz
>>> ln: 2        col: 19         WORD                 PhD
>>> ln: 2        col: 22         PUNCTUATION          ,
>>> ln: 2        col: 24         WORD                 bags
>>> ln: 2        col: 29         WORD                 few
>>> ln: 2        col: 33         WORD                 lynx
>>> ln: 2        col: 37         PUNCTUATION          .
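The dynamic ruleset stack mentioned earlier lets a lexer switch rules based on context (for example, inside quoted strings). A minimal sketch of the idea using only the standard library — independent of lex2's actual API, with illustrative names:

```python
import re

# Two contexts: default text, and the inside of double-quoted strings.
DEFAULT = [("QUOTE", r'"'), ("WORD", r"[a-zA-Z]+")]
STRING  = [("QUOTE", r'"'), ("CHARS", r'[^"]+')]

def tokenize(text):
    stack = [DEFAULT]  # ruleset stack; the top decides which rules are active
    pos = 0
    out = []
    while pos < len(text):
        if text[pos].isspace():
            pos += 1
            continue
        for token_id, pattern in stack[-1]:
            m = re.match(pattern, text[pos:])
            if m:
                out.append((token_id, m.group()))
                pos += m.end()
                if token_id == "QUOTE":  # a quote enters or leaves the string context
                    if stack[-1] is DEFAULT:
                        stack.append(STRING)
                    else:
                        stack.pop()
                break
        else:
            raise ValueError(f"No rule matches at offset {pos}")
    return out
```

Pushing a ruleset on entering a context and popping it on leaving keeps each context's rules small and unambiguous, instead of one monolithic ruleset covering every state.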

Development Dependencies

For development you will need the following dependencies:

  • Python:
    • Version 3.8+
    • Packages can be installed via requirements.txt, using the following command:
      pip install -r requirements.txt
      
  • Documentation (for generating diagrams via PlantUML):
    • Java
    • Graphviz

Contributing

The repository is hosted at deltarazero/lex2-py3 on GitHub. Contributions are always welcome; you can contribute in either of the following ways:

  • Submitting a pull request: to contribute your own changes to the repository. See "Proposing changes to your work with pull requests" for more information on pull requests using GitHub. Furthermore, please follow the guidelines below:

    • File an issue to notify the maintainers about what you're working on.
    • Fork the repo, develop and test your code changes, add docs/unit tests (if applicable).
    • Make sure that your commit messages clearly describe the changes.
    • Send a pull request, using the available template.

    For changes that address core functionality or would require breaking changes (i.e. for a major release), it's best to open an issue to discuss your proposal beforehand.

    Maintaining your own fork of the repository is discouraged. Instead, please submit pull requests and delete your fork afterwards (if applicable). This will make it less confusing for end-users to know which repository is the most up-to-date.

  • Submitting an issue: to report a problem with the library, request a new feature, or to discuss potential changes before a pull request is created. Ensure the issue was not already reported. Furthermore, please use one of the available issue templates if possible.

License

© 2020-2022 DeltaRazero. All rights reserved.

All included scripts, modules, etc. are licensed under the terms of the zlib license, unless stated otherwise in the respective files.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

lex2-1.1.1.tar.gz (27.6 kB view details)

Uploaded Source

Built Distributions

lex2-1.1.1-py3.8.egg (74.8 kB view details)

Uploaded Source

lex2-1.1.1-py3-none-any.whl (36.4 kB view details)

Uploaded Python 3

File details

Details for the file lex2-1.1.1.tar.gz.

File metadata

  • Download URL: lex2-1.1.1.tar.gz
  • Upload date:
  • Size: 27.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.10

File hashes

Hashes for lex2-1.1.1.tar.gz
Algorithm Hash digest
SHA256 0b2fe765592b359efd1c8f8ba66bc914b9901eaa0001d90b1679e75f2be9aae1
MD5 87e3166b322dd7b1f8f4e08c538ff437
BLAKE2b-256 4b49ad283136470ebabe873f236f05fd1179e18a44e0f131beabb2662a1592bd

See more details on using hashes here.
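If you download a distribution manually, you can check it against the SHA-256 digest listed above using Python's standard hashlib module:

```python
import hashlib

def sha256_of(path):
    """Return the hex SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result for the downloaded file (path is a placeholder)
# against the digest published on this page:
# sha256_of("lex2-1.1.1.tar.gz") == "0b2fe765..."
```

pip can also enforce this automatically: pin the requirement as `lex2==1.1.1 --hash=sha256:<digest>` in a requirements file and install with `pip install --require-hashes -r requirements.txt`.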

File details

Details for the file lex2-1.1.1-py3.8.egg.

File metadata

  • Download URL: lex2-1.1.1-py3.8.egg
  • Upload date:
  • Size: 74.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.10

File hashes

Hashes for lex2-1.1.1-py3.8.egg
Algorithm Hash digest
SHA256 a29f895a9fbb983da7099a0a72511b0e0cf31811757de3906c874950cc33d619
MD5 663eec9c1f4583579143c87211ee5fad
BLAKE2b-256 0af9aab1a5664e5d93d60f7fb74db675d472f41524f55f796dcd76e0ced5603f

File details

Details for the file lex2-1.1.1-py3-none-any.whl.

File metadata

  • Download URL: lex2-1.1.1-py3-none-any.whl
  • Upload date:
  • Size: 36.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.8.10

File hashes

Hashes for lex2-1.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 fbb01ea728d23e5773a424c9982f0787fb817ca2c51385663577abd2f2977bfd
MD5 783452734758fc366adee29e38b72471
BLAKE2b-256 548d3c178986c05f8e8f6f4961b65b795054854e2dce6fdb7d34f968eb1fdf3b
