Various parsers for ECMA standards.

calmjs.parse

A collection of parsers and helper libraries for understanding ECMAScript; a near feature complete fork of slimit. A CLI front-end for this package is shipped separately as crimp.

Introduction

For any kind of build system that operates with JavaScript code in conjunction with a module system, the ability to understand what modules a given set of sources require or provide is paramount. As the Calmjs project provides a framework that produces and consumes these module definitions, a comprehensive understanding of the given JavaScript sources is essential. This goal was originally achieved using slimit, a JavaScript minifier library that also provided a comprehensive parser class built using Python Lex-Yacc (i.e. ply).

However, as of mid-2017, it was noted that slimit had remained in a minimal state of maintenance for more than four years (its most recent release, 0.8.1, was made on 2013-03-26), and a number of serious outstanding issues had been left unattended and unresolved for that entire time span. As the development of the Calmjs framework required those issues to be rectified as soon as possible, the decision was made to fork the parser portion of slimit, in order to cater to the interests of the Calmjs project at that moment in time.

The fork was initially cut from another fork of slimit (specifically lelit/slimit), as it introduced and aggregated a number of bug fixes from various sources. To ensure better quality control and assurance, a number of problematic changes introduced by that fork were removed. New tests were created to bring coverage to full, and issues reported on the slimit tracker were noted and formalized into test cases where applicable. Finally, grammar rules were updated to ensure better conformance with the ECMA-262 (ES5) specification.

The goal of calmjs.parse is to provide an API similar to the one slimit provided, but implemented in a much more extensible manner with more correctness checks in place. This does mean that some operations, such as the pretty printing of output, may take longer than they did under slimit.

A CLI front-end that makes use of this package is provided through crimp.

Installation

The following command may be executed to source the latest stable version of the calmjs.parse wheel from PyPI for installation into the current Python environment.

$ pip install calmjs.parse

As this package uses ply, it requires the generation of optimization modules for its lexer. The wheel distribution of calmjs.parse does not require this extra step, as it contains these pre-generated modules for ply up to version 3.11 (the latest version available at the time of the previous release). However, when installing from the source tarball, or when the installed version of ply lies outside the supported versions, the following caveats will apply.

If a more recent release of ply becomes available and the environment upgrades to that version, those pre-generated modules may become incompatible, which may result in decreased performance and/or errors. This can be corrected through a manual optimization step if a newer version of calmjs.parse is not available, or ply may be downgraded back to version 3.11 if possible.

Once the package is installed, the installation may be tested or be used directly.

Alternative installation methods (for developers, advanced users)

Development of calmjs.parse is still ongoing; for the latest features and bug fixes, the development version may be installed through git like so:

$ pip install git+https://github.com/calmjs/calmjs.parse.git#egg=calmjs.parse

Alternatively, the git repository can be cloned directly and python setup.py develop executed while inside the root of the source directory.

A manual optimization step may need to be performed for platforms and systems that do not have utf8 as their default encoding.

Manual optimization

As lex and yacc require the generation of symbol tables, a way to optimize performance is to cache the results. For ply, this is done using an auto-generated module. However, the generated file is marked with a version number, as the results may be specific to the installed version of ply. In calmjs.parse this is handled by giving the generated modules a name specific to the version of ply and the major Python version, as both together result in subtle differences in the outputs and expectations of the auto-generated modules.

Typically, this optimization process is automatic and a correct symbol table will be generated; however, there are cases where this will fail, so calmjs.parse provides a helper module and executable that can be optionally invoked to ensure that the correct encoding is used to generate that file. Another reason this may be necessary is to allow system administrators to do so on behalf of their end users, who may not have write privileges at that level.

To execute the optimizer from the shell, the provided helper script may be used like so:

$ python -m calmjs.parse.parsers.optimize

If warnings appear that warn that tokens are defined but not used, they may be safely ignored.

This step is generally optional for users who installed this package from PyPI via a Python wheel, provided the caveats outlined in the installation section are addressed.

Testing the installation

To ensure that the calmjs.parse installation is functioning correctly, the built-in testsuite can be executed by the following:

$ python -m unittest calmjs.parse.tests.make_suite

If there are failures, please file an issue on the issue tracker with the full traceback, and/or the method of installation. Please also include applicable information about the environment, such as the version of this software, Python version, operating system environments, the version of ply that was installed, plus other information related to the issue at hand.

Usage

As this is a parser library, no executable shell commands are provided. There is however a helper callable object provided at the top level for immediate access to the parsing feature. It may be used like so:

>>> from calmjs.parse import es5
>>> program_source = u'''
... // simple program
... var main = function(greet) {
...     var hello = "hello " + greet;
...     return hello;
... };
... console.log(main('world'));
... '''
>>> program = es5(program_source)
>>> # for a simple repr-like nested view of the ast
>>> program  # equivalent to repr(program)
<ES5Program @3:1 ?children=[
  <VarStatement @3:1 ?children=[
    <VarDecl @3:5 identifier=<Identifier ...>, initializer=<FuncExpr ...>>
  ]>,
  <ExprStatement @7:1 expr=<FunctionCall @7:1 args=<Arguments ...>,
    identifier=<DotAccessor ...>>>
]>
>>> # automatic reconstruction of ast into source, without having to
>>> # call something like `.to_ecma()`
>>> print(program)  # equivalent to str(program)
var main = function(greet) {
  var hello = "hello " + greet;
  return hello;
};
console.log(main('world'));

>>>

Please note the change in indentation, as the default printer has its own indentation scheme. If comments are needed, the parser can be called with with_comments=True:

>>> program_wc = es5(program_source, with_comments=True)
>>> print(program_wc)
// simple program
var main = function(greet) {
  var hello = "hello " + greet;
  return hello;
};
console.log(main('world'));

>>>

Also note that there are limitations with the capturing of comments as documented in the Limitations section.

The parser classes are organized under the calmjs.parse.parsers module, with each language under its own module. A corresponding lexer class with the same name is also provided under the calmjs.parse.lexers module. For the moment, only ES5 support is implemented.
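
For instance, the ES5 parser class may be instantiated and used directly; the top level es5 helper shown above is a convenience wrapper around it (a minimal sketch, assuming the parse method carried over from the slimit API):

>>> from calmjs.parse.parsers.es5 import Parser
>>> parser = Parser()
>>> # expected to produce the same kind of ES5Program node shown earlier
>>> tree = parser.parse(u'var x = 1;')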

Pretty/minified printing

There is also a set of pretty printing helpers for turning the AST back into a string. These are available as functions or class constructors, and are produced by composing various lower level classes available in the calmjs.parse.unparsers and related modules.

There is a default short-hand helper for turning the previously produced AST back into a string, which can be manually invoked with certain parameters, such as the characters to use for indentation (note that the __str__ call implicitly invoked through print, as shown previously, is implemented through this):

>>> from calmjs.parse.unparsers.es5 import pretty_print
>>> print(pretty_print(program, indent_str='    '))
var main = function(greet) {
    var hello = "hello " + greet;
    return hello;
};
console.log(main('world'));

>>>

There is also one for printing without any unneeded whitespace, which works as a source minifier:

>>> from calmjs.parse.unparsers.es5 import minify_print
>>> print(minify_print(program))
var main=function(greet){var hello="hello "+greet;return hello;};...
>>> print(minify_print(program, obfuscate=True, obfuscate_globals=True))
var a=function(b){var a="hello "+b;return a;};console.log(a('world'));

Note that in the second example, the obfuscate_globals option was only enabled to demonstrate source obfuscation on the global scope; this is generally not an option that should be enabled on production library code that is meant to be reused by other packages, as other sources referencing the original unobfuscated names will no longer be able to resolve them.

Alternatively, direct invocation on a raw string can be done through attributes of the es5 object that was imported initially, provided under the same names as the functions above. Relevant keyword arguments are diverted to the appropriate underlying functions, for example:

>>> # pretty print without comments being parsed
>>> print(es5.pretty_print(program_source))
var main = function(greet) {
  var hello = "hello " + greet;
  return hello;
};
console.log(main('world'));

>>> # pretty print with comments parsed
>>> print(es5.pretty_print(program_source, with_comments=True))
// simple program
var main = function(greet) {
  var hello = "hello " + greet;
  return hello;
};
console.log(main('world'));

>>> # minify print
>>> print(es5.minify_print(program_source, obfuscate=True))
var main=function(b){var a="hello "+b;return a;};console.log(main('world'));

Source map generation

For the generation of source maps, a lower level unparser instance can be constructed through one of the printer factory functions. Passing in an AST node will produce a generator which yields tuples containing the text fragment, plus other information which will aid in the generation of source maps. Helper functions from the calmjs.parse.sourcemap module can be used to write the regenerated source code to a stream, along with processing the results into a source map file. An example:

>>> import json
>>> from io import StringIO
>>> from calmjs.parse.unparsers.es5 import pretty_printer
>>> from calmjs.parse.sourcemap import encode_sourcemap, write
>>> stream_p = StringIO()
>>> print_p = pretty_printer()
>>> rawmap_p, _, names_p = write(print_p(program), stream_p)
>>> sourcemap_p = encode_sourcemap(
...     'demo.min.js', rawmap_p, ['custom_name.js'], names_p)
>>> print(json.dumps(sourcemap_p, indent=2, sort_keys=True))
{
  "file": "demo.min.js",
  "mappings": "AAEA;IACI;IACA;AACJ;AACA;",
  "names": [],
  "sources": [
    "custom_name.js"
  ],
  "version": 3
}
>>> print(stream_p.getvalue())
var main = function(greet) {
...

Likewise, this works similarly for the minify printer, which provides the ability to create a minified output with unneeded whitespace removed and identifiers obfuscated to the shortest possible values.

Note that in the previous example, the second return value of the write function was not used and a custom value was passed in. This is simply because the program was generated from a string, and thus its sourcepath attribute was not assigned a usable value for populating the "sources" list in the resulting source map. For the following example, assign a value to that attribute on the program directly.

>>> from calmjs.parse.unparsers.es5 import minify_printer
>>> program.sourcepath = 'demo.js'  # say this was opened there
>>> stream_m = StringIO()
>>> print_m = minify_printer(obfuscate=True, obfuscate_globals=True)
>>> sourcemap_m = encode_sourcemap(
...     'demo.min.js', *write(print_m(program), stream_m))
>>> print(json.dumps(sourcemap_m, indent=2, sort_keys=True))
{
  "file": "demo.min.js",
  "mappings": "AAEA,IAAIA,CAAK,CAAE,SAASC,CAAK,CAAE,CACvB,...,YAAYF,CAAI",
  "names": [
    "main",
    "greet",
    "hello"
  ],
  "sources": [
    "demo.js"
  ],
  "version": 3
}
>>> print(stream_m.getvalue())
var a=function(b){var a="hello "+b;return a;};console.log(a('world'));

A high level API for working with named streams (i.e. opened files, or stream objects like io.StringIO with a name attribute assigned) is provided by the read and write functions from the io module. The following example shows how to use these functions to read from a stream and write the relevant items out to the write-only streams:

>>> from calmjs.parse import io
>>> h4_program_src = open('/tmp/html4.js')
>>> h4_program_min = open('/tmp/html4.min.js', 'w+')
>>> h4_program_map = open('/tmp/html4.min.js.map', 'w+')
>>> h4_program = io.read(es5, h4_program_src)
>>> print(h4_program)
var bold = function(s) {
  return '<b>' + s + '</b>';
};
var italics = function(s) {
  return '<i>' + s + '</i>';
};
>>> io.write(print_m, h4_program, h4_program_min, h4_program_map)
>>> pos = h4_program_map.seek(0)
>>> print(h4_program_map.read())
{"file": "html4.min.js", "mappings": ..., "version": 3}
>>> pos = h4_program_min.seek(0)
>>> print(h4_program_min.read())
var b=function(a){return'<b>'+a+'</b>';};var a=function(a){...};
//# sourceMappingURL=html4.min.js.map

For a simple concatenation of multiple sources into one file, along with an inline source map (i.e. where the sourceMappingURL is a data: URL of the base64 encoding of the JSON string), the following may be done:

>>> files = [open('/tmp/html4.js'), open('/tmp/legacy.js')]
>>> combined = open('/tmp/combined.js', 'w+')
>>> io.write(print_p, (io.read(es5, f) for f in files), combined, combined)
>>> pos = combined.seek(0)
>>> print(combined.read())
var bold = function(s) {
    return '<b>' + s + '</b>';
};
var italics = function(s) {
    return '<i>' + s + '</i>';
};
var marquee = function(s) {
    return '<marquee>' + s + '</marquee>';
};
var blink = function(s) {
    return '<blink>' + s + '</blink>';
};
//# sourceMappingURL=data:application/json;base64;...

In this example, the io.write function was provided with the pretty unparser, a generator expression that produces the two ASTs from the two source files, and identical target and sourcemap arguments, which forces the source map generator to produce the base64 encoding.

Do note that if multiple ASTs were supplied to a minifying printer with globals being obfuscated, the resulting script will have the earlier obfuscated global names mangled by later ones, as the unparsing is done separately by the io.write function.
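
If a single obfuscated bundle is desired from multiple files, one workaround is to concatenate the raw sources and parse them as a single program, so that only one global scope is obfuscated (a sketch reusing the files from the earlier examples; this is not an API provided by the io module itself, and the per-source sourcepath tracking offered by io.read is traded away):

>>> from calmjs.parse import es5
>>> from calmjs.parse.unparsers.es5 import minify_print
>>> sources = [open('/tmp/html4.js'), open('/tmp/legacy.js')]
>>> # parse the concatenated sources as one program, then minify with a
>>> # single shared global scope for the obfuscation
>>> merged = es5(u'\n'.join(f.read() for f in sources))
>>> minified = minify_print(merged, obfuscate=True, obfuscate_globals=True)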

Advanced usage

Lower level unparsing API

Naturally, the printers demonstrated previously are constructed using the underlying Unparser class, which in turn bridges together the walk function and the Dispatcher class found in the walker module. The walk function walks through the AST node with an instance of the Dispatcher class, which provides a description of all node types for the particular type of AST node provided, along with the relevant handlers. These handlers can be set up using existing rule provider functions. For instance, a printer for obfuscating identifier names while maintaining indentation for the output of an ES5 AST can be constructed like so:

>>> from calmjs.parse.unparsers.es5 import Unparser
>>> from calmjs.parse.rules import indent
>>> from calmjs.parse.rules import obfuscate
>>> pretty_obfuscate = Unparser(rules=(
...     # note that indent must come after, so that the whitespace
...     # handling rules by indent will shadow over the minimum set
...     # provided by obfuscate.
...     obfuscate(obfuscate_globals=False),
...     indent(indent_str='    '),
... ))
>>> math_module = es5(u'''
... (function(root) {
...   var fibonacci = function(count) {
...     if (count < 2)
...       return count;
...     else
...       return fibonacci(count - 1) + fibonacci(count - 2);
...   };
...
...   var factorial = function(n) {
...     if (n < 1)
...       throw new Error('factorial where n < 1 not supported');
...     else if (n == 1)
...       return 1;
...     else
...       return n * factorial(n - 1);
...   }
...
...   root.fibonacci = fibonacci;
...   root.factorial = factorial;
... })(window);
...
... var value = window.factorial(5) / window.fibonacci(5);
... console.log('the value is ' + value);
... ''')
>>> print(''.join(c.text for c in pretty_obfuscate(math_module)))
(function(b) {
    var a = function(b) {
        if (b < 2) return b;
        else return a(b - 1) + a(b - 2);
    };
    var c = function(a) {
        if (a < 1) throw new Error('factorial where n < 1 not supported');
        else if (a == 1) return 1;
        else return a * c(a - 1);
    };
    b.fibonacci = a;
    b.factorial = c;
})(window);
var value = window.factorial(5) / window.fibonacci(5);
console.log('the value is ' + value);

Each of the rules (functions) has specific options that are set using specific keyword arguments; details are documented in their respective docstrings.

Tree walking

AST (Abstract Syntax Tree) generic walker classes are defined under the appropriately named module calmjs.parse.walkers. Two default walker classes are supplied. One of them is the ReprWalker class, which was previously demonstrated. The other is the Walker class, which supplies a collection of generic tree walking methods for a tree of AST nodes. The following is an example of how one might extract all Object assignments from a given script file:

>>> from calmjs.parse import es5
>>> from calmjs.parse.asttypes import Object, VarDecl, FunctionCall
>>> from calmjs.parse.walkers import Walker
>>> walker = Walker()
>>> declarations = es5(u'''
... var i = 1;
... var s = {
...     a: "test",
...     o: {
...         v: "value"
...     }
... };
... foo({foo: "bar"});
... function bar() {
...     var t = {
...         foo: "bar",
...     };
...     return t;
... }
... foo.bar = bar;
... foo.bar();
... ''')
>>> # print out the object nodes that were part of some assignments
>>> for node in walker.filter(declarations, lambda node: (
...         isinstance(node, VarDecl) and
...         isinstance(node.initializer, Object))):
...     print(node.initializer)
...
{
  a: "test",
  o: {
    v: "value"
  }
}
{
  foo: "bar"
}
>>> # print out all function calls
>>> for node in walker.filter(declarations, lambda node: (
...         isinstance(node, FunctionCall))):
...     print(node.identifier)
...
foo
foo.bar

Further details and example usage can be consulted from the various docstrings found within the module.

Limitations

Comments currently may be incomplete

Due to the implementation of the lexer/parser, along with how the AST node types have been implemented, there are restrictions on where comments may be exposed if enabled. Currently, such limitations exist for nodes that are created by production rules that consume multiple lexer tokens at once: only comments preceding the first token will be captured, with all remaining comments discarded.

For example, this limitation means that any comments before the else token will be omitted (the comment slot is filled by the comments before the if token), as the production rule for an If node consumes both of these tokens and the node as implemented only provides a single slot for comments. Likewise, any comments before the : token in a ternary statement will also be discarded, as that is the second token consumed by the production rule that produces a Conditional node.
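
As a rough illustration of the first case (a minimal sketch using the es5 helper from the Usage section; only the structure of the input matters here), a comment placed before the else token does not survive parsing:

>>> from calmjs.parse import es5
>>> tree = es5(u'''
... // this comment before the if token is captured
... if (x) {
...     x = 1;
... }
... // this comment before the else token is discarded
... else {
...     x = 2;
... }
... ''', with_comments=True)
>>> rendered = str(tree)
>>> '// this comment before the else token is discarded' in rendered
False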

Troubleshooting

Instantiation of parser classes fails with UnicodeEncodeError

For platforms or systems that do not have utf8 configured as the default encoding, the automatic table generation may fail when constructing a parser instance. An example:

>>> from calmjs.parse.parsers import es5
>>> parser = es5.Parser()
Traceback (most recent call last):
  ...
  File "c:\python35\....\ply\lex.py", line 1043, in lex
    lexobj.writetab(lextab, outputdir)
  File "c:\python35\....\ply\lex.py", line 195, in writetab
    tf.write('_lexstatere   = %s\n' % repr(tabre))
  File "c:\python35\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u02c1' ...

A workaround helper script is provided; it may be executed like so:

$ python -m calmjs.parse.parsers.optimize

Further details on this topic may be found in the manual optimization section of this document.

Slow performance

As this program is basically fully decomposed into very small functions, this results in massive performance penalties compared to other implementations, due to function calls being one of the most expensive operations in Python. It may be possible to further optimize the definitions within the description in the Dispatcher by combining all the resolved generator functions for each asttypes Node type; however, this may require that the token and layout functions not have arguments with name collisions, and the new function would take in all of those arguments in one go.
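
To gauge the actual impact for a given workload, the standard library timeit module may be used for a quick measurement (a trivial sketch; the sample source and repeat count are arbitrary):

>>> import timeit
>>> from calmjs.parse import es5
>>> src = u'var x = function(y) { return y + 1; };'
>>> # time ten parse runs of the sample source; the result is in seconds
>>> duration = timeit.timeit(lambda: es5(src), number=10)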

Contribute

Changelog

1.2.3 - 2020-03-17

  • Correct usage of __doc__ to support level 2 optimized mode. [ #29 #30 ]

1.2.2 - 2020-01-18

  • Correctly include LICENSE file in sdist. [ #27 #28 ]

  • Include the correct, general form of test data for some previously added test cases, to better accommodate already planned future features.

1.2.1 - 2019-11-21

  • Fix the issue of failures with regex statements that occur due to the lexer being in a state where the disambiguation between the REGEX and DIV token types is not immediately possible, as tokens such as RBRACE, PLUSPLUS or MINUSMINUS must be consumed by the parser in order to be disambiguated; due to the lookahead done by yacc, the DIV token would be prematurely produced, and the only way to correct this is during the error handling stage. [ #25 #26 ]

  • Part of the previous fix also removed newline or comment tokens from being reported as part of parsing error messages.

1.2.0 - 2019-08-15

  • Partial support for parsing of comments. Currently not all comments will be captured during parsing, due to the desire to simplify access to them through the asttypes.Node instances via the generic comments attribute they provide. [ #24 ]

    • Enabled by passing with_comments=True to the parser.

    • The limitation lies in the fact that if a node maps to multiple tokens (e.g. if...else), the comments that lie immediately before the first token will be captured, while the comments that lie immediately before the subsequent ones will be omitted. The fix would involve providing a full set of syntax tree node types, and the parser rules would need to be implemented in a more amenable manner such that the generation of such could be done.

    • All comments that lie immediately before the node are accessible using the comments attribute.

    • These comment nodes will not be yielded via the children() method.

    • Various features and methods have been updated to account for comments. Notably, sourcemap generation will be able to deal with source fragments that contain newlines provided that both colno and lineno are provided.

  • Correctly fail on incorrect hexadecimal/unicode escape sequences while reporting the specific character location; also report on the starting position of an unterminated string literal. [ #23 ]

1.1.3 - 2018-11-08

  • Correct issues with certain non-optional spaces being omitted for the minify print cases, which caused malformed outputs. [ #22 ]

1.1.2 - 2018-08-20

  • Default repr on synthetic nodes or nodes without column or row number assigned should no longer error. [ #20 ]

  • The same line terminator regex introduced in 1.1.0 for line continuation in strings is now applied to the line terminator pattern in the lexer, such that line numbering is corrected for the Windows specific <CR><LF> sequence. [ #21 ]

1.1.1 - 2018-08-11

  • Ensure that the accounting of layout rule chunks is done correctly in the case where layout handlers specified a tuple of layout rules for combined handling. [ #19 ]

    • The issue caused by this error manifests severely in the case where multiple layout rule tokens are produced in a manner that repeats a pattern that also has a layout handler rule for combined handling, which does not typically happen for normal code with the standard printers (as layout chunks are many and they generally do not result in a repeated pattern that gets consumed). However, it manifests severely in the case of minified output with semicolons dropped, as that basically guarantees that any series of closing blocks that fit the pattern will simply be dropped.

1.1.0 - 2018-08-07

  • Correct the implementation of line continuation in strings. This also meant a change in the minify unparser so that it will continue to remove the line continuation sequences. [ #16 ]

  • Correct the implementation of ASI (automatic semicolon insertion) by introducing a dedicated token type, such that the production of an empty statement can no longer happen and it is distinguished from the production of statements that should not have ASI applied, so that incorrectly successful parsing due to this issue will no longer result. [ #18 rspivak/slimit#29 rspivak/slimit#101 ]

1.0.1 - 2018-04-19

  • Ensure that the es5 Unparser passes on the prewalk_hooks argument in its constructor.

  • Minor packaging fixes; also include optimization modules for ply-3.11.

1.0.0 - 2017-09-26

Full support for sourcemaps; the changes that make it possible follow:

  • High level read/write functionality provided by a new io module.

  • There is now a Deferrable rule type for marking certain Tokens that need extra handling. Supporting this has changed the various APIs that deal with setting this up.

  • For support of the sourcemap generation, a number of new ruletypes have been added.

  • The sourcemap write function had its argument order modified to better support the sourcepath tracking feature of input Nodes. Its return value also now matches the ordering of the encode_sourcemap function.

  • The chunk types in ruletypes have been renamed, and also a new type called StreamFragment is introduced, so that multiple sources output to a single stream can be properly tracked by the source mapping processes.

  • rspivak/slimit#66 should be fully supported now.

Minify printer now has ability to shorten/obfuscate identifiers:

  • Provide a name obfuscation function for shortening identifiers, to further achieve minified output. Note that this does not yet fully achieve the level of minification slimit had; future versions may implement this functionality as various AST transformations.

  • Also provided ability to drop unneeded semicolons.

Other significant changes:

  • Various changes to internal class and function names for the 1.0.0 release. A non-exhaustive listing of changes to modules, relative to the root of this package name, as compared to the previous major release follows:

    asttypes
    • All slimit compatibility features removed.

    • Switch (the incorrect version) removed.

    • SwitchStatement -> Switch

    • SetPropAssign constructor: parameters -> parameter

    • UnaryOp -> UnaryExpr

    • Other general deprecated features also removed.

    factory
    • Factory -> SRFactory

    visitors
    • Removed (details follow).

    walkers
    • visitors.generic.ReprVisitor -> walkers.ReprWalker

    layouts
    • Module was split and reorganised; the simple base ones can be found in handlers.core, the indentation related features are now in handlers.indentation.

    unparsers.base
    • .default_layout_handlers -> handlers.core.default_rules

    • .minimum_layout_handlers -> handlers.core.minimum_rules

    unparsers.prettyprint
    • Renamed to unparsers.walker.

    • The implementation was actually standard tree walking; no correctly implemented visitor functions/classes were ever present.

    vlq
    • .create_sourcemap -> sourcemap.create_sourcemap

  • Broke up the visitors classes, as they weren't really visitors as described. The new implementations (as of calmjs.parse-0.9.0) were really walkers, so they were moved under that name. Methods were also renamed to better reflect their implementation and purpose.

  • Many slimit compatibility modules, classes and incorrectly implemented functionalities removed.

  • The usage of the Python 3 str type (unicode in Python 2) is now enforced for the parser, to avoid various failure cases where mismatched types occur.

  • The base Node asttype has a sourcepath attribute which is to be used for tracking the original source of the node; if assigned, all its subnodes without sourcepath defined should be treated as from that source.

  • Also provide an even higher level function for usage with streams through the calmjs.parse.io module.

  • Semicolons and braces added as structures to be rendered.

Bug fixes:

  • Functions starting with a non-word character will now always have a whitespace rendered before them to avoid syntax errors.

  • Correct an incorrect iterator usage in the walk function.

  • Ensure List separators don’t use the rowcol positions of a subsequent Elision node.

  • Lexer will only report real lexer tokens on errors (ASI generated tokens are now dropped as they don’t exist in the original source which results in confusing rowcol reporting).

  • rspivak/slimit#57: as it turns out, '\0' is not considered to be octal, but is a <NUL> character; the rule to parse it was not actually included in the lexer patches that were pulled in prior to this version.

  • rspivak/slimit#75: option for the shadowing of names of named closures, which is now disabled by default (obfuscated named closures will not be shadowed by other obfuscated names in children).

  • Expressions can no longer contain an unnamed function.

0.10.1 - 2017-08-26

  • Corrected the line number reporting for the lexer, and corrected the propagation of that to the parser and the Node subclasses. This fixes the incorrect implementation added by moses-palmer/slimit@8f9a39c7769 (where the line numbers are tabulated incorrectly when comments are present), and also the yacc tracking added by moses-palmer/slimit@6aa92d68e0 (where the custom lexer class does not provide the position attributes required by ply).

  • Implemented bookkeeping of column numbers.

  • Made various other changes to the AST, but for compatibility reasons (to not force a major semver bump) they are only enabled with a flag to the ES5 parser.

  • Corrected a fault with how switch/case statements are handled in a way that may break compatibility; fixes are only enabled when flagged. rspivak/slimit#94

  • The repr form of Node now shows the line/col number info by default; the visit method of the ReprVisitor class has not been changed, only its invocation via the callable form, as that is the call target for __repr__. This is a good time to mention that the named methods afford the most control for usage, as documented already.

  • Parsers now accept an asttypes module during its construction.

  • Provide support for source map generation classes.

  • Introduced a flexible visitor function/state class that accepts a definition of rules for the generation of chunk tuples that are compatible with the source map generation. A new way for pretty printing and minification can be achieved using this module.

0.9.0 - 2017-06-09
