Another syntactic complexity analyzer of written English language samples

NeoSCA

NeoSCA is a fork of Xiaofei Lu's L2 Syntactic Complexity Analyzer (L2SCA), with added support for Windows and an improved command-line interface for easier use. NeoSCA is written by Tan, Long (谭龙). It accepts written English texts and computes the following measures:

the frequency of 9 structures in the text:
  1. words (W)
  2. sentences (S)
  3. verb phrases (VP)
  4. clauses (C)
  5. T-units (T)
  6. dependent clauses (DC)
  7. complex T-units (CT)
  8. coordinate phrases (CP)
  9. complex nominals (CN), and
14 syntactic complexity indices of the text:
  1. mean length of sentence (MLS)
  2. mean length of T-unit (MLT)
  3. mean length of clause (MLC)
  4. clauses per sentence (C/S)
  5. verb phrases per T-unit (VP/T)
  6. clauses per T-unit (C/T)
  7. dependent clauses per clause (DC/C)
  8. dependent clauses per T-unit (DC/T)
  9. T-units per sentence (T/S)
  10. complex T-unit ratio (CT/T)
  11. coordinate phrases per T-unit (CP/T)
  12. coordinate phrases per clause (CP/C)
  13. complex nominals per T-unit (CN/T)
  14. complex nominals per clause (CN/C)

Highlights

  • Cross-platform compatibility: Windows, macOS, and Linux
  • Flexible command-line options to serve various needs
  • Supports reading txt/docx/odt files
  • Custom syntactic structure search/calculation

Install

Install NeoSCA

To install NeoSCA, you need to have Python 3.7 or later installed on your system. You can check if you already have Python installed by running the following command in your terminal:

python --version

If Python is not installed, you can download and install it from the Python website. Once you have Python installed, you can install neosca using pip:

pip install neosca

If you are in China and have trouble with slow download speeds or network issues, you can install neosca from the Tsinghua University PyPI mirror:

pip install neosca -i https://pypi.tuna.tsinghua.edu.cn/simple

Install dependencies

NeoSCA depends on Java, Stanford Parser, and Stanford Tregex. NeoSCA provides an option to install all of them:

nsca --check-depends

Called with --check-depends, NeoSCA downloads and unzips the archives of these three dependencies to %AppData% (for Windows users, usually C:\Users\<username>\AppData\Roaming) or ~/.local/share (for macOS and Linux users), and sets the environment variables JAVA_HOME, STANFORD_PARSER_HOME, and STANFORD_TREGEX_HOME. If you have previously installed any of the three, you need to set the corresponding environment variable manually.
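
To verify the result, you can print the three variables from Python; this is a minimal sketch, assuming only the variable names given above:

import os

# "<not set>" indicates a variable that still needs to be configured manually.
for var in ("JAVA_HOME", "STANFORD_PARSER_HOME", "STANFORD_TREGEX_HOME"):
    print(f"{var} = {os.environ.get(var, '<not set>')}")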

Usage

NeoSCA is a command-line tool. You can see the help message by running nsca --help in your terminal.

Basic usage

Single input

To analyze a single file, use the command nsca followed by the file path.

nsca ./samples/sample1.txt
nsca ./samples/sample1.docx

Tables, figures, images, and other unrelated elements should be removed manually before docx/odt files are analyzed; headers and footers are ignored automatically.

After running the above command, a result.csv file will be generated in the current directory. You can specify a different output filename using -o/--output-file.

nsca ./samples/sample1.txt -o sample1.csv
# frequency output: ./sample1.csv
When analyzing a file whose name includes spaces, enclose the file path in single or double quotes. Assume you have a sample 1.txt to analyze:

nsca "./samples/sample 1.txt"

This ensures that the entire filename, including the spaces, is interpreted as a single argument. Without the quotes, the command would interpret ./samples/sample and 1.txt as two separate arguments and the analysis would fail.

Multiple input

Specify the input directory after nsca.

nsca samples/              # analyze every txt/docx file under the "samples/" directory
nsca samples/ --ftype txt  # analyze only txt files under "samples/"
nsca samples/ --ftype docx # analyze only docx files under "samples/"

Or simply list each file:

cd ./samples/
nsca sample1.txt sample2.txt

You can also use wildcards to select multiple files at once.

cd ./samples/
nsca sample*.txt                                           # every file whose name starts with "sample" and ends with ".txt"
nsca sample[1-9].txt sample10.txt                          # sample1.txt -- sample10.txt
nsca sample10[1-9].txt sample1[1-9][0-9].txt sample200.txt # sample101.txt -- sample200.txt

Advanced usage

Expand wildcards

Use --expand-wildcards to print all files that match your wildcard pattern. This can help you ensure that your pattern matches all desired files and excludes any unwanted ones. Note that files that do not exist on the computer will not be included in the output, even if they match the specified pattern.

nsca sample10[1-9].txt sample1[1-9][0-9].txt sample200.txt --expand-wildcards

Treat newlines as sentence breaks

By default, the Stanford Parser does not treat newlines as sentence breaks during sentence segmentation. To make it do so, use:

nsca sample1.txt --newline-break always

The --newline-break option has three legal values: never (the default), always, and two.

  • never means to ignore newlines for the purpose of sentence splitting. It is appropriate for continuous text with hard line breaks when just the non-whitespace characters should be used to determine sentence breaks.
  • always means to treat a newline as a sentence break, but there may still be more than one sentence per line.
  • two means to take two or more consecutive newlines as a sentence break. It is for text with hard line breaks and a blank line between paragraphs.

Configuration file

You can use a configuration file to define custom syntactic structures to search or calculate.

The default configuration filename is nsca.json; NeoSCA will look for nsca.json in the current working directory. Alternatively, you can provide your own configuration file with nsca --config <your_config_file>. The configuration file should be in JSON format and have a .json extension.

{
    "structures": [
        {
            "name": "VP1",
            "description": "regular verb phrases",
            "tregex_pattern": "VP > S|SINV|SQ"
        },
        {
            "name": "VP2",
            "description": "verb phrases in inverted yes/no questions or in wh-questions",
            "tregex_pattern": "MD|VBZ|VBP|VBD > (SQ !< VP)"
        },
        {
            "name": "VP",
            "description": "verb phrases",
            "value_source": "VP1 + VP2"
        }
    ]
}

The snippet above is part of NeoSCA's built-in structure definitions. Each definition is a set of key-value pairs, where both keys and values are enclosed in quotation marks.

There are two approaches to defining a structure: using tregex_pattern or value_source. tregex_pattern is the formal definition in Tregex syntax. Structures defined through tregex_pattern are searched and counted by running Stanford Tregex against the input text. For instructions on writing Tregex patterns, see the official Tregex documentation.

value_source specifies an arithmetic operation over the values of other structures, used to calculate the value of the structure being defined. A value_source can include names of other structures, integers, decimals, +, -, *, /, ( and ). value_source strings are tokenized using Python's standard library tokenize, which is designed for Python source code, so the name of a structure referred to in a value_source must adhere to the naming convention of Python variables (composed of letters, numbers, and underscores; it cannot start with a number; "letters" are the characters defined as "Letter" in the Unicode character database, such as English letters and Chinese characters). Otherwise the name will not be correctly recognized.
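
The following sketch shows how such an expression is split into tokens; it assumes only the tokenize module mentioned above, applied to the built-in example "VP1 + VP2" from the configuration file:

import io
import tokenize

# Tokenize a value_source expression exactly as Python source code would be.
expression = "VP1 + VP2"
for tok in tokenize.generate_tokens(io.StringIO(expression).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
# Prints NAME 'VP1', OP '+', NAME 'VP2' (plus trailing NEWLINE/ENDMARKER),
# which is why structure names must be valid Python identifiers.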

The value_source definition can be nested: a structure defined through value_source may in turn depend on other structures defined through value_source, forming a tree-like relationship. The terminal structures, however, must be defined by tregex_pattern to avoid circular definitions.
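
For example, in the hypothetical configuration below, VP_PER_S depends on VP, which the built-in definitions above themselves derive through value_source (VP1 + VP2); only the terminal structures VP1 and VP2 carry a tregex_pattern. (VP_PER_S is a made-up name, and this sketch assumes custom definitions may reference built-in structures such as VP and S.)

{
    "structures": [
        {
            "name": "VP_PER_S",
            "description": "verb phrases per sentence (hypothetical)",
            "value_source": "VP / S"
        }
    ]
}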

A structure can be defined using either tregex_pattern or value_source, but not both simultaneously. The name attribute is what the --select option matches against. The description attribute is optional; you can omit it for convenience.

Select a subset of measures

NeoSCA by default outputs values of all of the available measures. You can use --select to analyze only the measures you are interested in. To see a full list of available measures, use nsca --list.

nsca --select VP T DC/C -- sample1.txt

To prevent the program from taking input filenames as selected measures and raising an error, use -- to separate them from the measures. All arguments after -- are treated as input filenames, so make sure to place every argument other than input filenames to the left of --.

Combine subfiles

Use -c/--combine-subfiles to add up the frequencies of the 9 syntactic structures across subfiles and compute the values of the 14 syntactic complexity indices for the imaginary parent file. You can use this option multiple times to combine different lists of subfiles respectively. Use -- to separate ordinary input filenames from subfile names. A sketch of the arithmetic follows the examples below.

nsca -c sample1-sub1.txt sample1-sub2.txt
nsca -c sample1-sub*.txt
nsca -c sample1-sub*.txt -c sample2-sub*.txt
nsca -c sample1-sub*.txt -c sample2-sub*.txt -- sample[3-9].txt
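
A minimal sketch of what combining amounts to, assuming the indices are recomputed from the summed structure counts (MLS, mean length of sentence, is W divided by S; the numbers are made up):

# Structure counts of two hypothetical subfiles.
sub1 = {"W": 120, "S": 6}
sub2 = {"W": 80, "S": 4}

# Frequencies are added up for the imaginary parent file...
combined = {key: sub1[key] + sub2[key] for key in sub1}

# ...and the indices are computed from the sums.
mls = combined["W"] / combined["S"]  # MLS = W / S = 200 / 10 = 20.0
print(f"Combined MLS: {mls}")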

Skip long sentences

Use --max-length to analyze only sentences whose length does not exceed a given limit, for example 100:

nsca sample1.txt --max-length 100

When --max-length is not specified, the program tries to analyze sentences of any length, but may run out of memory in doing so.

Reserve intermediate results

NeoSCA by default saves only the frequency output. To reserve the parsed trees, use -p or --reserve-parsed. To reserve the matched subtrees, use -m or --reserve-matched.

nsca samples/sample1.txt -p
# frequency output: ./result.csv
# parsed trees:     ./samples/sample1.parsed
nsca samples/sample1.txt -m
# frequency output: ./result.csv
# matched subtrees: ./result_matches/
nsca samples/sample1.txt -p -m
# frequency output: ./result.csv
# parsed trees:     ./samples/sample1.parsed
# matched subtrees: ./result_matches/

Misc

Pass text through the command line

If you want to analyze text that is passed directly through the command line, you can use --text followed by the text.

nsca --text 'The quick brown fox jumps over the lazy dog.'
# frequency output: ./result.csv

JSON output

You can generate a JSON file by:

nsca ./samples/sample1.txt --output-format json
# frequency output: ./result.json
nsca ./samples/sample1.txt -o sample1.json
# frequency output: ./sample1.json

Just parse text and exit

If you only want to save the parsed trees and exit, use --no-query. This can be useful if you want to use the parsed trees for other purposes. When --no-query is specified, --reserve-parsed is set automatically.

nsca samples/sample1.txt --no-query
# parsed trees: samples/sample1.parsed
nsca --text 'This is a test.' --no-query
# parsed trees: ./cmdline_text.parsed

Parse trees as input

By default, the program expects raw text as input, which is parsed before querying. If your input files are already parsed, use --no-parse to tell the program to skip the parsing step and proceed directly to querying. When this flag is set, --no-query and --reserve-parsed are automatically disabled.

nsca samples/sample1.parsed --no-parse

List built-in measures

nsca --list
W: words
S: sentences
VP: verb phrases
C: clauses
T: T-units
DC: dependent clauses
CT: complex T-units
CP: coordinate phrases
CN: complex nominals
MLS: mean length of sentence
MLT: mean length of T-unit
MLC: mean length of clause
C/S: clauses per sentence
VP/T: verb phrases per T-unit
C/T: clauses per T-unit
DC/C: dependent clauses per clause
DC/T: dependent clauses per T-unit
T/S: T-units per sentence
CT/T: complex T-unit ratio
CP/T: coordinate phrases per T-unit
CP/C: coordinate phrases per clause
CN/T: complex nominals per T-unit
CN/C: complex nominals per clause

Tregex interface

NeoSCA ships with a Tregex command-line interface, nsca-tregex, which behaves similarly to tregex.sh from the Tregex package, with additional support for Windows.

Lexical complexity analysis

NeoSCA provides an nsca-lca command for lexical complexity analysis, mirroring the functionality of LCA (Lexical Complexity Analyzer). The available measures are listed below:

Measures of Lexical Density and Sophistication

  • Lexical Density
  • Lexical Sophistication-I
  • Lexical Sophistication-II
  • Verb Sophistication-I
  • Corrected Verb Sophistication-I
  • Verb Sophistication-II

Measures of Lexical Variation

  • Number of Different Words
  • Number of Different Words (first 50 words)
  • Number of Different Words (expected random 50)
  • Number of Different Words (expected sequence 50)
  • Type-Token Ratio
  • Mean Segmental Type-Token Ratio (50)
  • Corrected Type-Token Ratio
  • Root Type-Token Ratio
  • Bilogarithmic Type-Token Ratio
  • Uber Index
  • Lexical Word Variation
  • Verb Variation-I
  • Squared Verb Variation-I
  • Corrected Verb Variation-I
  • Verb Variation-II
  • Noun Variation
  • Adjective Variation
  • Adverb Variation
  • Modifier Variation

nsca-lca sample.txt # single input file
nsca-lca samples/   # multiple input files
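
As a quick orientation, here is a minimal sketch of three of the type-token measures named above, using their standard textbook formulas; the formulas are an assumption for illustration, so consult LCA for the exact definitions NeoSCA implements:

import math

# Count types (T) and tokens (N) in a toy sample; "the" repeats, so T = 8, N = 9.
tokens = "the quick brown fox jumps over the lazy dog".split()
N = len(tokens)
T = len(set(tokens))

ttr = T / N                  # Type-Token Ratio
rttr = T / math.sqrt(N)      # Root Type-Token Ratio
cttr = T / math.sqrt(2 * N)  # Corrected Type-Token Ratio
print(f"TTR={ttr:.3f} RTTR={rttr:.3f} CTTR={cttr:.3f}")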

Citing

If you use NeoSCA in your research, please cite as follows.

BibTeX
@misc{tan2022neosca,
title        = {NeoSCA: A Fork of L2 Syntactic Complexity Analyzer, version 0.0.55},
author       = {Long Tan},
howpublished = {\url{https://github.com/tanloong/neosca}},
year         = {2022}
}
APA (7th edition)
Tan, L. (2022). NeoSCA (version 0.0.55) [Computer software]. GitHub. https://github.com/tanloong/neosca
MLA (9th edition)
Tan, Long. NeoSCA. version 0.0.55, GitHub, 2022, https://github.com/tanloong/neosca.

You also need to cite Xiaofei Lu's article describing L2SCA.

BibTeX
@article{xiaofei2010automatic,
title     = {Automatic analysis of syntactic complexity in second language writing},
author    = {Xiaofei Lu},
journal   = {International Journal of Corpus Linguistics},
volume    = {15},
number    = {4},
pages     = {474--496},
year      = {2010},
publisher = {John Benjamins Publishing Company},
doi       = {10.1075/ijcl.15.4.02lu},
}
APA (7th edition)
Lu, X. (2010). Automatic analysis of syntactic complexity in second language writing. International Journal of Corpus Linguistics, 15(4), 474-496.
MLA (9th edition)
Lu, Xiaofei. "Automatic Analysis of Syntactic Complexity in Second Language Writing." International Journal of Corpus Linguistics, vol. 15, no. 4, John Benjamins Publishing Company, 2010, pp. 474-96.

If you use the lexical complexity analyzing feature, please also cite Xiaofei Lu's article about LCA.

BibTeX
@article{xiaofei2012relationship,
author  = {Xiaofei Lu},
title   = {The Relationship of Lexical Richness to the Quality of ESL Learners' Oral Narratives},
journal = {The Modern Language Journal},
volume  = {96},
number  = {2},
pages   = {190--208},
doi     = {10.1111/j.1540-4781.2011.01232_1.x},
year    = {2012}
}
APA (7th edition)
Lu, X. (2012). The relationship of lexical richness to the quality of ESL learners' oral narratives. The Modern Language Journal, 96(2), 190-208.
MLA (9th edition)
Lu, Xiaofei. "The Relationship of Lexical Richness to the Quality of ESL Learners' Oral Narratives." The Modern Language Journal, vol. 96, no. 2, Wiley-Blackwell, 2012, pp. 190-208.

License

Distributed under the terms of the GNU General Public License version 2 or later.

Contact

You can send bug reports, feature requests, or any questions via the project's GitHub page: https://github.com/tanloong/neosca
