Various parsers for ECMA standards.
calmjs.parse
A collection of parsers and helper libraries for understanding ECMAScript; a near feature complete fork of slimit. A CLI front-end for this package is shipped separately as crimp.
Introduction
For any kind of build system that operates with JavaScript code in conjunction with a module system, the ability to understand what modules a given set of sources require or provide is paramount. As the Calmjs project provides a framework that produces and consumes these module definitions, a comprehensive understanding of the given JavaScript sources is essential. This goal was originally achieved using slimit, a JavaScript minifier library that also provided a comprehensive parser class built using Python Lex-Yacc (i.e. ply).
However, as of mid-2017, it was noted that slimit had remained in a minimal state of maintenance for more than four years (its most recent release, 0.8.1, was made 2013-03-26), and a number of serious outstanding issues had been left unattended and unresolved for the duration of that time span. As the development of the Calmjs framework required those issues to be rectified as soon as possible, a decision was made to fork the parser portion of slimit, in order to cater to the interests of the Calmjs project at that moment in time.
The fork was initially cut from another fork of slimit (specifically lelit/slimit), as it had introduced and aggregated a number of bug fixes from various sources. To ensure better quality control and assurance, a number of problematic changes introduced by that fork were removed. New tests were also created to bring coverage to full, and issues reported on the slimit tracker were noted and formalized into test cases where applicable. Finally, grammar rules were updated to ensure better conformance with the ECMA-262 (ES5) specification.
The goal of calmjs.parse is to provide an API similar to the one slimit provided, but in a much more extensible manner and with more correctness checks in place. This does, however, result in some operations (such as the pretty printing of output) taking longer than they did under slimit.
A CLI front-end that makes use of this package is provided through crimp.
Installation
The following command may be executed to source the latest stable version of the calmjs.parse wheel from PyPI for installation into the current Python environment.
$ pip install calmjs.parse
As this package uses ply, it requires the generation of optimization modules for its lexer. The wheel distribution of calmjs.parse does not require this extra step, as it contains these modules pre-generated for ply up to version 3.11 (the latest version available at the time of the previous release); however, if the installed version of ply is beyond the supported version, the following caveats will apply.
If a more recent release of ply becomes available and the environment is upgraded to that version, those pre-generated modules may become incompatible, which may result in decreased performance and/or errors. This can be corrected through a manual optimization step if a newer version of calmjs.parse is not available, or ply may be downgraded back to version 3.11 if possible.
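Should a downgrade be necessary, pinning ply back to the last supported release is a standard pip invocation, for example:

$ pip install 'ply==3.11'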
Alternatively, install a more recent version of calmjs.parse wheel that has the most complete set of pre-generated modules built.
Once the package is installed, the installation may be tested, or the package used directly.
Manual installation and packaging requirements
This section is for developers and advanced users; it contains important information for package maintainers of OS distributions (e.g. Linux) that will help prevent a less than ideal experience for downstream users.
Development is still ongoing with calmjs.parse; for the latest features and bug fixes, the development version may be installed through git like so:
$ pip install ply setuptools # this MUST be done first; see below for reason
$ pip install -e git+https://github.com/calmjs/calmjs.parse.git#egg=calmjs.parse
Note that all dependencies MUST be pre-installed for the setup.py build step to run, otherwise the build step required to create the pre-generated modules will fail.
If ply isn’t installed:
$ python -m pip install -e .
...
running egg_info
...
WARNING: cannot find distribution for 'ply'; using default value,
assuming 'ply==3.11' for pre-generated modules
ERROR: cannot find pre-generated modules for the assumed 'ply'
version from above and/or cannot `import ply` to build generated
modules, aborting build; please either ensure that the source
archive containing the pre-generate modules is being used, or that
the python package 'ply' is installed and available for import
before attempting to use the setup.py to build this package; please
refer to the top level README for further details
If setuptools isn’t installed:
$ python -m pip install -e .
...
running egg_info
...
Traceback (most recent call last):
...
ModuleNotFoundError: No module named 'pkg_resources'
Naturally, the git repository may instead be cloned directly and python setup.py develop executed while inside the root of the source directory; again, both ply AND setuptools MUST already be available for import.
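For example, such a workflow might look like the following (using the repository URL listed under the Contribute section):

$ git clone https://github.com/calmjs/calmjs.parse.git
$ cd calmjs.parse
$ pip install ply setuptools
$ python setup.py develop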
As the git repository does NOT contain any pre-generated modules or code, the above message is likely to be seen by developers or distro maintainers on their first try at interacting with this software. However, the zip archives released on PyPI starting from version 1.3.0 do contain these modules fully pre-generated, thus they may be used as part of a standard installation step, i.e. without requiring ply to be available for import before using setup.py for any purpose. While the same warning message about ply being missing may still be shown, the pre-generated modules will allow the build step to proceed as normal.
Manual optimization
As lex and yacc require the generation of symbol tables, one way to optimize performance is to cache the results. For ply, this is done using an auto-generated module. However, the generated file is marked with a version number, as the results may be specific to the installed version of ply. In calmjs.parse this is handled by giving the generated modules a name specific to the version of ply and the major Python version, as both together result in subtle differences in the outputs and expectations of the auto-generated modules.
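As an illustration (a minimal sketch; the exact names produced depend on the installed versions), the module names in effect for the current environment may be listed through the same generate_tab_names helper used by the PyInstaller hook in the Troubleshooting section:

>>> from calmjs.parse.utils import generate_tab_names
>>> # under Python 3 with ply-3.11, as noted in the Troubleshooting
>>> # section, names like the following would be produced
>>> for name in generate_tab_names('calmjs.parse.parsers.es5'):
...     print(name)
calmjs.parse.parsers.lextab_es5_py3_ply3_11
calmjs.parse.parsers.yacctab_es5_py3_ply3_11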
Typically, this optimization process is automatic and a correct symbol table will be generated; however, there are cases where this will fail, so calmjs.parse provides a helper module and executable that can be optionally invoked to ensure that the correct encoding is used to generate that file. Another reason this may be necessary is to allow system administrators to perform the step on behalf of their end users, who may not have write privileges at that level.
To execute the optimizer from the shell, the provided helper script may be used like so:
$ python -m calmjs.parse.parsers.optimize
If warnings about tokens being defined but not used appear, they may be safely ignored.
This step is generally optional for users who installed this package from PyPI via a Python wheel, provided the caveats outlined in the installation section are addressed.
Testing the installation
To ensure that the calmjs.parse installation is functioning correctly, the built-in testsuite can be executed by the following:
$ python -m unittest calmjs.parse.tests.make_suite
If there are failures, please file an issue on the issue tracker with the full traceback, and/or the method of installation. Please also include applicable information about the environment, such as the version of this software, Python version, operating system environments, the version of ply that was installed, plus other information related to the issue at hand.
Usage
As this is a parser library, no executable shell commands are provided. There is however a helper callable object provided at the top level for immediate access to the parsing feature. It may be used like so:
>>> from calmjs.parse import es5
>>> program_source = '''
... // simple program
... var main = function(greet) {
... var hello = "hello " + greet;
... return hello;
... };
... console.log(main('world'));
... '''
>>> program = es5(program_source)
>>> # for a simple repr-like nested view of the ast
>>> program # equivalent to repr(program)
<ES5Program @3:1 ?children=[
<VarStatement @3:1 ?children=[
<VarDecl @3:5 identifier=<Identifier ...>, initializer=<FuncExpr ...>>
]>,
<ExprStatement @7:1 expr=<FunctionCall @7:1 args=<Arguments ...>,
identifier=<DotAccessor ...>>>
]>
>>> # automatic reconstruction of ast into source, without having to
>>> # call something like `.to_ecma()`
>>> print(program) # equivalent to str(program)
var main = function(greet) {
var hello = "hello " + greet;
return hello;
};
console.log(main('world'));
>>>
Please note the change in indentation as the default printer has its own indentation scheme. If comments are needed, the parser can be called using with_comments=True:
>>> program_wc = es5(program_source, with_comments=True)
>>> print(program_wc)
// simple program
var main = function(greet) {
var hello = "hello " + greet;
return hello;
};
console.log(main('world'));
>>>
Also note that there are limitations with the capturing of comments as documented in the Limitations section.
The parser classes are organized under the calmjs.parse.parsers module, with each language under its own module. A corresponding lexer class with the same name is also provided under the calmjs.parse.lexers module. For the moment, only ES5 support is implemented.
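For example (a minimal sketch; this assumes the Parser class exposes a slimit-like parse method, with the resulting node rendered through the same default pretty printer demonstrated above), the ES5 parser class may be used directly rather than through the top level helper:

>>> from calmjs.parse.parsers import es5 as es5_parsers
>>> parser = es5_parsers.Parser()  # constructed as in the Troubleshooting section
>>> tree = parser.parse('var x = 1;')  # parse method assumed to mirror the slimit API
>>> print(tree)
var x = 1;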
Pretty/minified printing
There is also a set of pretty printing helpers for turning the AST back into a string. These are available as functions or class constructors, and are produced by composing various lower level classes available in the calmjs.parse.unparsers and related modules.
There is a default short-hand helper for turning the previously produced AST back into a string; it can be manually invoked with certain parameters, such as the characters to use for indentation (note that the __str__ call implicitly invoked through print, as shown previously, is implemented through this):
>>> from calmjs.parse.unparsers.es5 import pretty_print
>>> print(pretty_print(program, indent_str=' '))
var main = function(greet) {
var hello = "hello " + greet;
return hello;
};
console.log(main('world'));
>>>
There is also one for printing without any unneeded whitespace, which works as a source minifier:
>>> from calmjs.parse.unparsers.es5 import minify_print
>>> print(minify_print(program))
var main=function(greet){var hello="hello "+greet;return hello;};...
>>> print(minify_print(program, obfuscate=True, obfuscate_globals=True))
var a=function(b){var a="hello "+b;return a;};console.log(a('world'));
Note that in the second example, the obfuscate_globals option was enabled only to demonstrate source obfuscation on the global scope; this is generally not an option that should be enabled on production library code that is meant to be reused by other packages (other sources referencing the original unobfuscated names will be unable to do so).
Alternatively, direct invocation on a raw string can be done through attributes of the base es5 object imported initially, which are provided under the same names as the functions above. Relevant keyword arguments will be diverted to the appropriate underlying functions, for example:
>>> # pretty print without comments being parsed
>>> print(es5.pretty_print(program_source))
var main = function(greet) {
var hello = "hello " + greet;
return hello;
};
console.log(main('world'));
>>> # pretty print with comments parsed
>>> print(es5.pretty_print(program_source, with_comments=True))
// simple program
var main = function(greet) {
var hello = "hello " + greet;
return hello;
};
console.log(main('world'));
>>> # minify print
>>> print(es5.minify_print(program_source, obfuscate=True))
var main=function(b){var a="hello "+b;return a;};console.log(main('world'));
Source map generation
For the generation of source maps, a lower level unparser instance can be constructed through one of the printer factory functions. Passing in an AST node will produce a generator that yields tuples containing the text fragment, plus other information which will aid in the generation of source maps. Helper functions from the calmjs.parse.sourcemap module can be used like so to write the regenerated source code to some stream, along with processing the results into a sourcemap file. An example:
>>> import json
>>> from io import StringIO
>>> from calmjs.parse.unparsers.es5 import pretty_printer
>>> from calmjs.parse.sourcemap import encode_sourcemap, write
>>> stream_p = StringIO()
>>> print_p = pretty_printer()
>>> rawmap_p, _, names_p = write(print_p(program), stream_p)
>>> sourcemap_p = encode_sourcemap(
... 'demo.min.js', rawmap_p, ['custom_name.js'], names_p)
>>> print(json.dumps(sourcemap_p, indent=2, sort_keys=True))
{
"file": "demo.min.js",
"mappings": "AAEA;IACI;IACA;AACJ;AACA;",
"names": [],
"sources": [
"custom_name.js"
],
"version": 3
}
>>> print(stream_p.getvalue())
var main = function(greet) {
...
Likewise, this works similarly for the minify printer, which provides the ability to create a minified output with unneeded whitespace removed and identifiers obfuscated with the shortest possible values.
Note that in the previous example, the second return value of the write function was not used and a custom value was passed in instead. This is simply because the program was generated from a string, and thus its sourcepath attribute was not assigned a usable value for populating the "sources" list in the resulting source map. For the following example, assign a value to that attribute on the program directly.
>>> from calmjs.parse.unparsers.es5 import minify_printer
>>> program.sourcepath = 'demo.js' # say this was opened there
>>> stream_m = StringIO()
>>> print_m = minify_printer(obfuscate=True, obfuscate_globals=True)
>>> sourcemap_m = encode_sourcemap(
... 'demo.min.js', *write(print_m(program), stream_m))
>>> print(json.dumps(sourcemap_m, indent=2, sort_keys=True))
{
"file": "demo.min.js",
"mappings": "AAEA,IAAIA,CAAK,CAAE,SAASC,CAAK,CAAE,CACvB,...,YAAYF,CAAI",
"names": [
"main",
"greet",
"hello"
],
"sources": [
"demo.js"
],
"version": 3
}
>>> print(stream_m.getvalue())
var a=function(b){var a="hello "+b;return a;};console.log(a('world'));
A high level API for working with named streams (i.e. opened files, or stream objects like io.StringIO with a name attribute assigned) is provided by the read and write functions from the io module. The following example shows how to use these functions to read from a stream and write the relevant items back out to the write-only streams:
>>> from calmjs.parse import io
>>> h4_program_src = open('/tmp/html4.js')
>>> h4_program_min = open('/tmp/html4.min.js', 'w+')
>>> h4_program_map = open('/tmp/html4.min.js.map', 'w+')
>>> h4_program = io.read(es5, h4_program_src)
>>> print(h4_program)
var bold = function(s) {
return '<b>' + s + '</b>';
};
var italics = function(s) {
return '<i>' + s + '</i>';
};
>>> io.write(print_m, h4_program, h4_program_min, h4_program_map)
>>> pos = h4_program_map.seek(0)
>>> print(h4_program_map.read())
{"file": "html4.min.js", "mappings": ..., "version": 3}
>>> pos = h4_program_min.seek(0)
>>> print(h4_program_min.read())
var b=function(a){return'<b>'+a+'</b>';};var a=function(a){...};
//# sourceMappingURL=html4.min.js.map
For a simple concatenation of multiple sources into one file, along with an inline source map (i.e. where the sourceMappingURL is a data: URL of the base64 encoding of the JSON string), the following may be done:
>>> files = [open('/tmp/html4.js'), open('/tmp/legacy.js')]
>>> combined = open('/tmp/combined.js', 'w+')
>>> io.write(print_p, (io.read(es5, f) for f in files), combined, combined)
>>> pos = combined.seek(0)
>>> print(combined.read())
var bold = function(s) {
return '<b>' + s + '</b>';
};
var italics = function(s) {
return '<i>' + s + '</i>';
};
var marquee = function(s) {
return '<marquee>' + s + '</marquee>';
};
var blink = function(s) {
return '<blink>' + s + '</blink>';
};
//# sourceMappingURL=data:application/json;base64;...
In this example, the io.write function was provided with the pretty unparser, a generator expression that produces the two ASTs from the two source files, and identical target and sourcemap arguments, which forces the source map generator to generate the base64 encoding.
Do note that if multiple ASTs were supplied to a minifying printer with globals being obfuscated, the resulting script will have the earlier obfuscated global names mangled by later ones, as the unparsing is done separately by the io.write function.
Extract an AST to a dict
To assist with extracting values from an AST into a dict, the calmjs.parse.unparsers.extractor module provides the ast_to_dict helper function. This function accepts any valid AST that was parsed as its argument:
>>> from calmjs.parse.unparsers.extractor import ast_to_dict
>>> configuration = es5('''
... var config = module.exports = {};
...
... var name = "Morgan"
... msg = "Hello, " + name + "! " + "Welcome to the host.";
...
... config.server = {
... host: '0.0.0.0',
... port: process.env.PORT || 8000,
... threads: 4 + 4,
... columns: ['id', 'name', 'description'],
... memory: 1 << 15,
... msg: msg
... };
...
... // default proxy stub
... config.proxy = {
... host: 'localhost',
... port: 8080,
... options: {
... "https": !1,
... "threshold": -100
... }
... };
... ''')
>>> baseconf = ast_to_dict(configuration)
Accessing the values is simply done as a mapping:
>>> print(baseconf['name'])
Morgan
Assignments are bound to the entire expression, i.e. they are not interpreted down to individual existing assignments.
>>> baseconf['config']
{}
>>> baseconf['config.server']['columns']
['id', 'name', 'description']
>>> baseconf['config.server']['msg']
'msg'
>>> baseconf['config.proxy']['options']['threshold']
-100
Note that the -100 value involves folding the unary expression with the - operator, and by default all other expressions of this type are simply written back out as is.
>>> baseconf['config.proxy']['options']['https']
'!1'
>>> baseconf['msg']
'"Hello, " + name + "! " + "Welcome to the host."'
>>> baseconf['config.server']['threads']
'4 + 4'
To assist with more generalized usage, ast_to_dict provides an additional fold_ops argument. When set to True, operator folding will be enabled on supported types; for example, an attempt will be made to fold constants into a single value as per how operators are handled in the ECMAScript specification. This is often useful for ensuring concatenated strings are merged, and for normalizing the short-hand definition of boolean values via !0 or !1, among other commonly seen expressions.
>>> foldedconf = ast_to_dict(configuration, fold_ops=True)
>>> foldedconf['config.server']['threads']
8
>>> foldedconf['config.server']['memory']
32768
>>> foldedconf['config.server']['port']
8000
>>> foldedconf['config.proxy']['options']['https']
False
>>> # variables will remain as is
>>> foldedconf['config.server']['msg']
'msg'
>>> # however, in the context of a concatenated string, it will form
>>> # a format string instead.
>>> foldedconf['msg']
'Hello, {name}! Welcome to the host.'
As noted, any valid AST may serve as the input argument, with any dangling expressions (i.e. those that are not assigned or bound to a name) simply appended to a list keyed under their outermost asttype.
>>> from calmjs.parse.asttypes import (
... Identifier, FuncExpr, UnaryExpr)
>>> dict_of_ast = ast_to_dict(es5(u"""
... var i;
... i;
... !'ok';
... function foo(bar) {
... baz = true;
... }
... (function(y) {
... x = 1;
... });
... """), fold_ops=True)
>>> dict_of_ast['i']
>>> dict_of_ast[Identifier]
['i']
>>> dict_of_ast[UnaryExpr] # not simply string or boolean
[False]
>>> dict_of_ast['foo'] # named function resolved
[['bar'], {'baz': True}]
>>> dict_of_ast[FuncExpr]
[[['y'], {'x': 1}]]
Advanced usage
Lower level unparsing API
Naturally, the printers demonstrated previously are constructed using the underlying Unparser class, which in turn bridges together the walk function and the Dispatcher class found in the walker module. The walk function walks through the AST node with an instance of the Dispatcher class, which provides a description of all node types for the particular type of AST node provided, along with the relevant handlers. These handlers can be set up using existing rule provider functions. For instance, a printer for obfuscating identifier names while maintaining indentation for the output of an ES5 AST can be constructed like so:
>>> from calmjs.parse.unparsers.es5 import Unparser
>>> from calmjs.parse.rules import indent
>>> from calmjs.parse.rules import obfuscate
>>> pretty_obfuscate = Unparser(rules=(
... # note that indent must come after, so that the whitespace
... # handling rules by indent will shadow over the minimum set
... # provided by obfuscate.
... obfuscate(obfuscate_globals=False),
... indent(indent_str=' '),
... ))
>>> math_module = es5('''
... (function(root) {
... var fibonacci = function(count) {
... if (count < 2)
... return count;
... else
... return fibonacci(count - 1) + fibonacci(count - 2);
... };
...
... var factorial = function(n) {
... if (n < 1)
... throw new Error('factorial where n < 1 not supported');
... else if (n == 1)
... return 1;
... else
... return n * factorial(n - 1);
... }
...
... root.fibonacci = fibonacci;
... root.factorial = factorial;
... })(window);
...
... var value = window.factorial(5) / window.fibonacci(5);
... console.log('the value is ' + value);
... ''')
>>> print(''.join(c.text for c in pretty_obfuscate(math_module)))
(function(b) {
var a = function(b) {
if (b < 2) return b;
else return a(b - 1) + a(b - 2);
};
var c = function(a) {
if (a < 1) throw new Error('factorial where n < 1 not supported');
else if (a == 1) return 1;
else return a * c(a - 1);
};
b.fibonacci = a;
b.factorial = c;
})(window);
var value = window.factorial(5) / window.fibonacci(5);
console.log('the value is ' + value);
Each of the rules (functions) has specific options that are set using specific keyword arguments; details are documented in their respective docstrings.
At an even lower level, the ruletypes submodule contains the primitives that form the underlying definitions used by each of the Dispatcher implementations currently available. For an example of how this might be extended beyond simply unparsing back to text, see the source of the extractor unparser module.
Tree walking
AST (Abstract Syntax Tree) generic walker classes are defined under the appropriately named module calmjs.parse.walkers. Two default walker classes are supplied. One of them is the ReprWalker class, which was previously demonstrated. The other is the Walker class, which supplies a collection of generic tree walking methods for a tree of AST nodes. The following is an example of how one might extract all Object assignments from a given script file:
>>> from calmjs.parse import es5
>>> from calmjs.parse.asttypes import Object, VarDecl, FunctionCall
>>> from calmjs.parse.walkers import Walker
>>> walker = Walker()
>>> declarations = es5('''
... var i = 1;
... var s = {
... a: "test",
... o: {
... v: "value"
... }
... };
... foo({foo: "bar"});
... function bar() {
... var t = {
... foo: "bar",
... };
... return t;
... }
... foo.bar = bar;
... foo.bar();
... ''')
>>> # print out the object nodes that were part of some assignments
>>> for node in walker.filter(declarations, lambda node: (
... isinstance(node, VarDecl) and
... isinstance(node.initializer, Object))):
... print(node.initializer)
...
{
a: "test",
o: {
v: "value"
}
}
{
foo: "bar"
}
>>> # print out all function calls
>>> for node in walker.filter(declarations, lambda node: (
... isinstance(node, FunctionCall))):
... print(node.identifier)
...
foo
foo.bar
Further details and example usage can be found in the various docstrings within the module.
Limitations
Comments currently may be incomplete
Due to the implementation of the lexer/parser, along with how the AST node types have been implemented, there are restrictions on where comments may be exposed if enabled. Currently, such limitations exist for nodes that are created by production rules that consume multiple lexer tokens at once: only comments preceding the first token will be captured, with all remaining comments discarded.
For example, this limitation means that any comments before the else token will be omitted (as the comment will be provided by the if token), as the production rule for an If node consumes both these tokens and the node as implemented only provides a single slot for comments. Likewise, any comments before the : token in a ternary statement will also be discarded as that is the second token consumed by the production rule that produces a Conditional node.
Troubleshooting
Instantiation of parser classes fails with UnicodeEncodeError
For platforms or systems that do not have utf8 configured as the default encoding, the automatic table generation may fail when constructing a parser instance. An example:
>>> from calmjs.parse.parsers import es5
>>> parser = es5.Parser()
Traceback (most recent call last):
...
File "c:\python35\....\ply\lex.py", line 1043, in lex
lexobj.writetab(lextab, outputdir)
File "c:\python35\....\ply\lex.py", line 195, in writetab
tf.write('_lexstatere = %s\n' % repr(tabre))
File "c:\python35\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u02c1' ...
A workaround helper script is provided; it may be executed like so:
$ python -m calmjs.parse.parsers.optimize
Further details on this topic may be found in the manual optimization section of this document.
WARNING: There are unused tokens on import
This indicates that the installation method or the source of this package being imported isn't optimized. A quick workaround is to follow the instructions in the manual optimization section of this document to ensure these messages are no longer generated. If this warning happens every time the module is imported, the symbol tables are being regenerated on every import, and this extra computational overhead should be corrected through the generation of the optimization module.
The optimization modules are included with the wheel release and the source release on PyPI, but they are not part of the source repository, as generated code is never committed. Should a binary release made by a third party result in this warning upon import, that release should be corrected to include the optimization modules.
Moreover, for releases from 1.3.1 onwards, there are safeguards in place that prevent this warning from being generated, through a more heavy-handed enforcement of this optimization step at build time. Persistent (or careless) actors may still circumvent this during the build process, but official releases made through PyPI should include the required optimization for all supported ply versions (versions 3.6 to 3.11, inclusive).
Alternatively, this issue may also occur when using pyinstaller, if the package metadata for ply is not copied in versions prior to calmjs.parse-1.3.1, and it will always occur if the hidden imports are not declared for those optimization modules. The following hook may be used to ensure calmjs.parse functions correctly in the compiled binary:
from PyInstaller.utils.hooks import collect_data_files, copy_metadata
from calmjs.parse.utils import generate_tab_names
datas = []
datas.extend(collect_data_files("ply"))
datas.extend(copy_metadata("ply"))
datas.extend(collect_data_files("calmjs.parse"))
datas.extend(copy_metadata("calmjs.parse"))
hiddenimports = []
hiddenimports.extend(generate_tab_names('calmjs.parse.parsers.es5'))
# if running under Python 3 with ply-3.11, above is equivalent to
# hiddenimports = [
# "calmjs.parse.parsers.lextab_es5_py3_ply3_11",
# "calmjs.parse.parsers.yacctab_es5_py3_ply3_11",
# ]
Slow performance
As this program is essentially fully decomposed into very small functions, this results in significant performance penalties compared to other implementations, due to function calls being one of the most expensive operations in Python. It may be possible to further optimize the definitions within the description in the Dispatcher by combining all the resolved generator functions for each asttype Node type; however, this may require that the token and layout functions not have arguments with name collisions, with the new function taking in all of those arguments in one go.
ERROR message about import error when trying to install
As noted in the error message, the ply and setuptools packages must be installed before attempting to build (or install) the package in situations where the pre-generated modules are missing. This situation may arise when building directly from the source code repository, or where there is no pre-generated module matching the installed version of ply. Please ensure that ply is installed and available before installing from source if this error message is sighted.
Contribute
Issue Tracker: https://github.com/calmjs/calmjs.parse/issues
Source Code: https://github.com/calmjs/calmjs.parse
Legal
The calmjs.parse package is copyright (c) 2017 Auckland Bioengineering Institute, University of Auckland. The calmjs.parse package is licensed under the MIT license (specifically, the Expat License), which is also the same license that the package slimit was released under.
The lexer, parser and the other type definition portions were originally imported from the slimit package; slimit is copyright (c) Ruslan Spivak.
The Calmjs project is copyright (c) 2017 Auckland Bioengineering Institute, University of Auckland.
Changelog
1.3.2 - 2024-10-17
Ensure building from source tree under Python 3.13 works.
Dropped support for building under Python 2 in source tree. [ #44 ]
1.3.1 - 2023-10-28
Modified the existing setup.py hook from an install hook to a build hook to ensure the generated module files are present. Should any of those modules be missing and the required dependencies (i.e. ply and setuptools) not be present, the build will result in a non-zero exit status, and the documented error message should reflect which of the required dependencies are missing. [ #41 ]
1.3.0 - 2021-10-08
1.2.5 - 2020-07-03
Will now import Iterable from the Python 3.3+ location as the previous location is marked for removal in Python 3.9. The import will still have a fallback to the previous location in order to maintain support for Python 2.7. [ #31 ]
Provide a test case helper to ensure that the generic Program repr signature is provided to assist with test case portability. [ #33 ]
In the calmjs.parse.vlq module, implemented the decode_vlq helper for completeness/symmetry to the encode_vlq helper. [ #33 ]
1.2.4 - 2020-03-17
1.2.2 - 2020-01-18
1.2.1 - 2019-11-21
Fix the issue of failures with regex statements that occur due to the lexer being in a state where the disambiguation between REGEX or DIV token types is not immediately possible, as tokens such as RBRACE, PLUSPLUS or MINUSMINUS must be consumed by the parser in order to be disambiguated; due to the lookahead done by yacc, the DIV token would be prematurely produced, and the only way to achieve this correction is during the error handling stage. [ #25 #26 ]
Part of the previous fix also removed newline or comment tokens from being reported as part of parsing error messages.
1.2.0 - 2019-08-15
Partial support for parsing of comments. Currently not all comments will be captured during parsing, due to the desire to simplify access to them through the generic comments attribute provided by asttypes.Node instances. [ #24 ]
Enabled by passing with_comments=True to the parser.
The limitation lies in the fact that if a node has multiple token slots (e.g. if...else), the comments that lie immediately before the first token will be captured, while the comments that lie immediately before the subsequent ones will be omitted. The fix would involve providing a full set of syntax tree node types, and the parser rules would need to be implemented in a manner more amenable to generating them.
All comments that lie immediately before the node are accessible using the comments attribute.
These comments nodes will not be yielded via the children() method.
Various features and methods have been updated to account for comments. Notably, sourcemap generation will be able to deal with source fragments that contain newlines provided that both colno and lineno are provided.
Correctly fail on incorrect hexadecimal/unicode escape sequences while reporting the specific character location; also report on the starting position of an unterminated string literal. [ #23 ]
1.1.3 - 2018-11-08
Correct issues with certain non-optional spaces being omitted for the minify print cases, which caused malformed outputs. [ #22 ]
1.1.2 - 2018-08-20
Default repr on synthetic nodes or nodes without column or row number assigned should no longer error. [ #20 ]
The same line terminator regex introduced in 1.1.0 for line continuation in strings is now applied to the line terminator pattern in the lexer, such that the line numbering is correct for the Windows-specific <CR><LF> sequence. [ #21 ]
1.1.1 - 2018-08-11
Ensure that the accounting of layout rule chunks is done correctly in the case where layout handlers specified a tuple of layout rules for combined handling. [ #19 ]
The issue caused by this error manifested severely in cases where multiple layout rule tokens are produced in a manner that repeats a pattern that also has a layout handler rule for it; this does not typically happen for normal code with the standard printers (as layout chunks are many and they generally do not result in a repeated pattern that gets consumed). However, it manifested severely in the case of minified output with semicolons dropped, as that basically guarantees that any series of closing blocks fitting the pattern would simply be dropped.
1.1.0 - 2018-08-07
Correct the implementation of line continuation in strings. This also meant a change in the minify unparser so that it will continue to remove the line continuation sequences. [ #16 ]
Correct the implementation of ASI (automatic semicolon insertion) by introducing a dedicated token type, such that the production of empty statement can no longer happen and that distinguishes it from production of statements that should not have ASI applied, such that incorrectly successful parsing due to this issue will no longer result. [ #18 rspivak/slimit#29 rspivak/slimit#101 ]
1.0.1 - 2018-04-19
Ensure that the es5 Unparser pass on the prewalk_hooks argument in its constructor.
Minor packaging fixes; also include optimization modules for ply-3.11.
1.0.0 - 2017-09-26
Full support for sourcemaps; changes that make it possible follows:
High level read/write functionality provided by a new io module.
There is now a Deferrable rule type for marking certain Tokens that need extra handling; support for this changed the various APIs that deal with setting this up.
For support of the sourcemap generation, a number of new ruletypes have been added.
The sourcemap write function had its argument order modified to better support the sourcepath tracking feature of input Nodes. Its return value also now matches the ordering of the encode_sourcemap function.
The chunk types in ruletypes have been renamed, and also a new type called StreamFragment is introduced, so that multiple sources output to a single stream can be properly tracked by the source mapping processes.
rspivak/slimit#66 should be fully supported now.
Minify printer now has ability to shorten/obfuscate identifiers:
Provide a name obfuscation function for shortening identifiers, to further achieve minified output. Note that this does not yet fully achieve the level of minification slimit had; future versions may implement this functionality as various AST transformations.
Also provided ability to drop unneeded semicolons.
Other significant changes:
Various changes to internal class and function names for the 1.0.0 release. A non-exhaustive listing of changes to modules, relative to the root of this package name and compared to the previous major release, follows:
- asttypes
All slimit compatibility features removed.
Switch (the incorrect version) removed.
SwitchStatement -> Switch
SetPropAssign constructor: parameters -> parameter
UnaryOp -> UnaryExpr
Other general deprecated features also removed.
- factory
Factory -> SRFactory
- visitors
Removed (details follow).
- walkers
visitors.generic.ReprVisitor -> walkers.ReprWalker
- layouts
Module was split and reorganised; the simple base ones can be found in handlers.core, the indentation related features are now in handlers.indentation.
- unparsers.base
.default_layout_handlers -> handlers.core.default_rules
.minimum_layout_handlers -> handlers.core.minimum_rules
- unparsers.prettyprint
Renamed to unparsers.walker.
The implementation was actually standard tree walking; no correctly implemented visitor functions/classes were ever present.
- vlq
.create_sourcemap -> sourcemap.create_sourcemap
Broke up the visitors classes as they weren't really visitors as described. The new implementations (calmjs.parse-0.9.0) were really walkers, so they were moved to that name and left at that. Methods were also renamed to better reflect their implementation and purpose.
Many slimit compatibility modules, classes and incorrectly implemented functionalities removed.
The usage of the Python 3 str type (unicode in Python 2) is now enforced for the parser, to avoid various failure cases where mismatch types occur.
The base Node asttype has a sourcepath attribute which is to be used for tracking the original source of the node; if assigned, all its subnodes without sourcepath defined should be treated as from that source.
Also provide an even higher level function for usage with streams through the calmjs.parse.io module.
Semicolons and braces added as structures to be rendered.
Bug fixes:
Functions starting with a non-word character will now always have a whitespace rendered before it to avoid syntax error.
Correct an incorrect iterator usage in the walk function.
Ensure List separators don’t use the rowcol positions of a subsequent Elision node.
Lexer will only report real lexer tokens on errors (ASI generated tokens are now dropped as they don’t exist in the original source which results in confusing rowcol reporting).
rspivak/slimit#57: as it turns out, '\0' is not considered octal, but is a <NUL> character; the rule to parse it was not actually included in the lexer patches that were pulled in prior to this version.
rspivak/slimit#75: option for shadowing the names of named closures, now disabled by default (obfuscated named closures will not be shadowed by other obfuscated names in children).
Expressions can no longer contain an unnamed function.
0.10.1 - 2017-08-26
Corrected the line number reporting for the lexer, and corrected the propagation of that to the parser and the Node subclasses. This fixes the incorrect implementation added by moses-palmer/slimit@8f9a39c7769 (where the line numbers are tabulated incorrectly when comments are present), and also the yacc tracking added by moses-palmer/slimit@6aa92d68e0 (where the custom lexer class does not provide the position attributes required by ply).
Implemented bookkeeping of column numbers.
Made other various changes to AST but for compatibility reasons (to not force a major semver bump) they are only enabled with a flag to the ES5 parser.
Corrected a fault with how switch/case statements are handled in a way that may break compatibility; fixes are only enabled when flagged. rspivak/slimit#94
The repr form of Node now shows the line/col number info by default; the visit method of the ReprVisitor class has not been changed, only its invocation via the callable form has, as that is the call target for __repr__. This is a good time to mention that the named methods afford the most control for usage, as documented already.
Parsers now accept an asttypes module during its construction.
Provide support for source map generation classes.
Introduced a flexible visitor function/state class that accepts a definition of rules for the generation of chunk tuples that are compatible for the source map generation. A new way for pretty printing and minification can be achieved using this module.
0.9.0 - 2017-06-09
Initial release of the fork of slimit.parser and its parent modules as calmjs.parse.
This release brings in a number of bug fixes that were available via other forks of slimit, with modifications or even a complete revamp.
Issues addressed include:
rspivak/slimit#52, rspivak/slimit#59, rspivak/slimit#81, rspivak/slimit#90 (relating to conformance of ecma-262 7.6 identifier names)
rspivak/slimit#54 (fixed by tracking scope and executable current token in lexer)
rspivak/slimit#57, rspivak/slimit#70 (octal encoding (e.g. \0), from redapple/slimit@a93204577f)
rspivak/slimit#62 (formalized into a unittest that passed)
rspivak/slimit#73 (specifically the desire for a better repr; the minifier bits are not relevant to this package)
rspivak/slimit#79 (tab module handling was completely reimplemented)
rspivak/slimit#82 (formalized into a unittest that passed)
Include various changes gathered by rspivak/slimit#65, which may be the source of some of the fixes listed above.