tokenstream
A versatile token stream for handwritten parsers.
```python
from tokenstream import TokenStream

def parse_sexp(stream: TokenStream):
    """A basic S-expression parser."""
    with stream.syntax(brace=r"\(|\)", number=r"\d+", name=r"\w+"):
        brace, number, name = stream.expect(("brace", "("), "number", "name")

        if brace:
            return [parse_sexp(stream) for _ in stream.peek_until(("brace", ")"))]
        elif number:
            return int(number.value)
        elif name:
            return name.value

print(parse_sexp(TokenStream("(hello (world 42))")))  # ['hello', ['world', 42]]
```
Introduction
Writing recursive-descent parsers by hand can be quite elegant, but it's often more verbose than expected, especially when it comes to handling indentation and reporting proper syntax errors. This package provides a powerful general-purpose token stream that addresses these issues and more.
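To see what a stream like this does conceptually, note that a set of named patterns can be compiled into a single regular expression with named groups and scanned incrementally. This is a minimal stdlib-only sketch of that idea, not tokenstream's actual implementation:

```python
import re

def tokenize(source, **patterns):
    """Yield (token_type, value) pairs by scanning with one combined regex."""
    # Each keyword argument becomes a named group; earlier patterns take priority.
    combined = re.compile(
        "|".join(f"(?P<{name}>{pattern})" for name, pattern in patterns.items())
    )
    pos = 0
    while pos < len(source):
        match = combined.match(source, pos)
        if match is None:
            raise SyntaxError(f"Invalid token at position {pos}")
        yield match.lastgroup, match.group()
        pos = match.end()

tokens = list(tokenize("hello world", word=r"\w+", whitespace=r"\s+"))
print(tokens)  # [('word', 'hello'), ('whitespace', ' '), ('word', 'world')]
```

Unlike this sketch, the real stream lets you change the active patterns contextually and skips whitespace for you by default.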
Features
- Define the set of recognizable tokens dynamically with regular expressions
- Transparently skip over irrelevant tokens
- Expressive API for matching, collecting, peeking, and expecting tokens
- Clean error reporting with line numbers and column numbers
- Contextual support for indentation-based syntax
- Checkpoints for backtracking parsers
- Works well with Python 3.10+ match statements
Check out the examples directory for practical examples.
Installation
The package can be installed with pip.

```shell
pip install tokenstream
```
Getting started
You can define tokens with the syntax() method. The keyword arguments associate regular expression patterns with token types, and the method returns a context manager during which the specified tokens will be recognized.

```python
stream = TokenStream("hello world")

with stream.syntax(word=r"\w+"):
    print([token.value for token in stream])  # ['hello', 'world']
```
Check out the full API reference for more details.
Expecting tokens
The token stream is iterable and will yield all the extracted tokens one after the other. You can also retrieve tokens from the token stream one at a time by using the expect() method.

```python
stream = TokenStream("hello world")

with stream.syntax(word=r"\w+"):
    print(stream.expect().value)  # "hello"
    print(stream.expect().value)  # "world"
```
The expect() method lets you ensure that the extracted token matches a specified type, and will raise an exception otherwise.

```python
stream = TokenStream("hello world")

with stream.syntax(number=r"\d+", word=r"\w+"):
    print(stream.expect("word").value)    # "hello"
    print(stream.expect("number").value)  # UnexpectedToken: Expected number but got word 'world'
```
Filtering the stream
Newlines and whitespace are ignored by default. You can reject interspersed whitespace by intercepting the built-in newline and whitespace tokens.

```python
stream = TokenStream("hello world")

with stream.syntax(word=r"\w+"), stream.intercept("newline", "whitespace"):
    print(stream.expect("word").value)  # "hello"
    print(stream.expect("word").value)  # UnexpectedToken: Expected word but got whitespace ' '
```
The opposite of the intercept() method is ignore(). It lets you skip over tokens entirely, which makes handling comments easy.

```python
stream = TokenStream(
    """
    # this is a comment
    hello # also a comment
    world
    """
)

with stream.syntax(word=r"\w+", comment=r"#.+$"), stream.ignore("comment"):
    print([token.value for token in stream])  # ['hello', 'world']
```
Indentation
To enable indentation, use the indent() method. The stream will then yield balanced pairs of indent and dedent tokens when the indentation changes.

```python
source = """
hello
    world
"""
stream = TokenStream(source)

with stream.syntax(word=r"\w+"), stream.indent():
    stream.expect("word")
    stream.expect("indent")
    stream.expect("word")
    stream.expect("dedent")
```
To prevent some tokens from triggering unwanted indentation changes, use the skip argument.

```python
source = """
hello
    # some comment
    world
"""
stream = TokenStream(source)

with stream.syntax(word=r"\w+", comment=r"#.+$"), stream.indent(skip=["comment"]):
    stream.expect("word")
    stream.expect("comment")
    stream.expect("indent")
    stream.expect("word")
    stream.expect("dedent")
```
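The balanced indent/dedent pairs work much like Python's own tokenizer: a stack of active indentation widths is kept, entering a deeper level pushes onto the stack and emits an indent, and returning to a shallower level pops and emits dedents. This is a simplified stdlib sketch of that algorithm, not the library's implementation:

```python
def indent_events(lines):
    """Yield 'indent'/'dedent' events from changes in leading whitespace."""
    levels = [0]  # stack of active indentation widths
    for line in lines:
        if not line.strip():
            continue  # blank lines don't affect indentation
        width = len(line) - len(line.lstrip())
        if width > levels[-1]:
            levels.append(width)
            yield "indent"
        while width < levels[-1]:
            levels.pop()
            yield "dedent"
    # close any levels still open at end of input to keep pairs balanced
    while len(levels) > 1:
        levels.pop()
        yield "dedent"

print(list(indent_events(["hello", "    world", "done"])))  # ['indent', 'dedent']
```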
Checkpoints
The checkpoint() method returns a context manager that resets the stream to the current token at the end of the with statement, unless you call the returned commit() function, which keeps the state of the stream instead.

```python
stream = TokenStream("hello world")

with stream.syntax(word=r"\w+"):
    with stream.checkpoint():
        print([token.value for token in stream])  # ['hello', 'world']
    with stream.checkpoint() as commit:
        print([token.value for token in stream])  # ['hello', 'world']
        commit()
    print([token.value for token in stream])  # []
```
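The core mechanism behind checkpoints can be sketched with nothing but a saved cursor position and a context manager. This is the general idea used by backtracking parsers, not tokenstream's actual implementation:

```python
from contextlib import contextmanager

class Cursor:
    """A sequence of items with a position that checkpoints can rewind."""

    def __init__(self, items):
        self.items = items
        self.index = 0

    def next(self):
        item = self.items[self.index]
        self.index += 1
        return item

    @contextmanager
    def checkpoint(self):
        saved = self.index
        state = {"committed": False}

        def commit():
            state["committed"] = True

        try:
            yield commit
        finally:
            if not state["committed"]:
                self.index = saved  # roll back unless commit() was called

cursor = Cursor(["hello", "world"])

with cursor.checkpoint():
    cursor.next()  # consumes 'hello'...
# ...but the checkpoint rolled the position back

with cursor.checkpoint() as commit:
    cursor.next()  # consumes 'hello' again
    commit()       # keep the new position this time

print(cursor.next())  # 'world'
```

A backtracking parser uses this pattern to try one grammar rule, and fall back to an alternative from the same position if the first attempt fails.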
Match statements
Match statements make it very intuitive to process tokens extracted from the token stream. If you're using Python 3.10+, give it a try and see if you like it.
```python
from tokenstream import TokenStream, Token

def parse_sexp(stream: TokenStream):
    """A basic S-expression parser that uses Python 3.10+ match statements."""
    with stream.syntax(brace=r"\(|\)", number=r"\d+", name=r"\w+"):
        match stream.expect_any(("brace", "("), "number", "name"):
            case Token(type="brace"):
                return [parse_sexp(stream) for _ in stream.peek_until(("brace", ")"))]
            case Token(type="number") as number:
                return int(number.value)
            case Token(type="name") as name:
                return name.value
```
Contributing
Contributions are welcome. Make sure to first open an issue discussing the problem or the new feature before creating a pull request. The project uses poetry.

```shell
$ poetry install
```

You can run the tests with poetry run pytest.

```shell
$ poetry run pytest
```

The project must type-check with pyright. If you're using VSCode, the pylance extension should report diagnostics automatically. You can also install the type-checker locally with npm install and run it from the command-line.

```shell
$ npm run watch
$ npm run check
$ npm run verifytypes
```

The code follows the black code style. Import statements are sorted with isort.

```shell
$ poetry run isort tokenstream examples tests
$ poetry run black tokenstream examples tests
$ poetry run black --check tokenstream examples tests
```
License - MIT