Wenyan programming language in Python
Wenyan.py is a Python implementation of the Wenyan programming language. It is packaged as a small, zero-runtime-dependency interpreter/compiler module and ships with two command-line entry points:

- `wenyan`: run Wenyan programs through the Python implementation.
- `wywy`: run them through the self-hosted `wenyan.wy` path.
Installation
Install the released CLI tools into uv's user tool directory:
```
uv tool install wenyan
```
For a local checkout, install the current tree the same way:
```
uv tool install .
```
You can also install into an active Python environment:
```
python -m pip install wenyan
```
Quick Start
Create a small Wenyan program:
```
吾有一數。曰三。書之。
```
Save it as hello.wy, then run:
```
wenyan hello.wy
```
Expected output:
```
3
```
Run the self-hosted path:
```
wywy hello.wy
```
From a checkout, you can run without installing:
```
uv run wenyan.py examples/helloworld.wy
uv run wenyan.py --help
```
Benchmarks
Current Snapshot
- Current compiler release result: `benchmark/results/wyperformance.release.v2.json`
- Current runtime release result: `benchmark/results/runtime_matrix.release.json`
- Generated compiler compare report: `benchmark/results/wyperformance.release.compare.md`
- Generated runtime compare report: `benchmark/results/runtime_matrix.release.compare.md`
Compiler release summary: see the generated chart `benchmark/results/compiler_benchmark_summary.svg`.
Runtime release summary: see the generated chart `benchmark/results/runtime_matrix_summary.svg`.
Current release highlights:
| area | result |
|---|---|
| compiler full pipeline | compile_total median 0.432147s over 17 workloads |
| compiler pure Python compile stage | compile_code median 0.037830s |
| compiler peak memory | compile_total peak 23847.2 KiB |
| fastest runtime in release profile | cli[bun] at 0.066852s per workload |
| fastest wenyan.py runtime | wenyan.py[py314] at 0.094772s per workload |
| fastest self-host path | wywy[node] at 0.133446s per workload |
Compiler compare notes:
- `release.v2` fixes `compile_code` so that it now measures only Python `compile()`.
- Historical comparisons should therefore use `lexer_only`, `parse_total`, `preprocess_total`, `compile_ast`, and `compile_total`.
- See `benchmark/results/wyperformance.release.compare.md` for the stable-case comparison.
- The report is generated by `scripts/wyperformance.py compare`, not hand-edited.
Runtime compare notes:
- `benchmark/results/runtime_matrix.release.compare.md` is generated by `scripts/benchmark_runtime_matrix.py compare`.
- Its current baseline is the historical `examples_runtime_benchmark.json` corpus, so ratios are directional only.
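The directional-ratio idea behind these compare reports can be sketched in a few lines. The flat `{benchmark: seconds}` result shape and the geometric-mean summary below are illustrative assumptions, not the actual schema used by `scripts/benchmark_runtime_matrix.py`:

```python
import math

def directional_ratios(baseline: dict[str, float], contender: dict[str, float]) -> dict[str, float]:
    """Per-benchmark contender/baseline time ratios (> 1.0 means the contender is slower)."""
    common = baseline.keys() & contender.keys()
    return {name: contender[name] / baseline[name] for name in sorted(common)}

def geomean(ratios: dict[str, float]) -> float:
    """Geometric mean of the ratios: the usual summary for multiplicative speedup data."""
    return math.exp(sum(math.log(r) for r in ratios.values()) / len(ratios))

# Toy data (assumed shape): benchmark name -> median seconds.
base = {"startup": 0.10, "fib": 0.40, "sort": 0.20}
new = {"startup": 0.05, "fib": 0.40, "sort": 0.40}
r = directional_ratios(base, new)
# startup halved (0.5), fib unchanged (1.0), sort doubled (2.0)
```

Because the baseline and contender here use different workload sets and probe rounds, such ratios indicate direction, not a precise speedup.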
Run the pyperf/pyperformance-style Wenyan benchmark suite:
```
uv run python scripts/wyperformance.py run
```
Run the quick CI profile or inspect available workloads:
```
uv run python scripts/wyperformance.py run --profile ci
uv run python scripts/wyperformance.py list_workloads
```
List benchmarks and groups:
```
uv run python scripts/wyperformance.py list
uv run python scripts/wyperformance.py list_groups
```
Run only selected benchmarks/workloads (supports include/exclude):
```
uv run python scripts/wyperformance.py run --benchmarks compiler,-compile_code --workloads synthetic
```
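The `-`-prefixed entries in a selector like `compiler,-compile_code` read naturally as exclusions subtracted from the included set. A minimal sketch of that parsing convention follows; the exact selector semantics of `scripts/wyperformance.py` are an assumption here:

```python
def parse_selection(spec: str) -> tuple[set[str], set[str]]:
    """Split a comma-separated selector into (include, exclude) sets.

    Entries prefixed with '-' are exclusions; everything else is an inclusion.
    Empty entries and surrounding whitespace are ignored.
    """
    include, exclude = set(), set()
    for entry in filter(None, (part.strip() for part in spec.split(","))):
        if entry.startswith("-"):
            exclude.add(entry[1:])
        else:
            include.add(entry)
    return include, exclude

inc, exc = parse_selection("compiler,-compile_code")
# inc selects the compiler group; exc drops compile_code from it
```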
Compare two result JSON files (normal or table style):
```
uv run python scripts/wyperformance.py compare base.json changed.json
uv run python scripts/wyperformance.py compare base.json changed.json -O table
```
Generate a Markdown compare report and exclude incompatible benchmark definitions:
```
uv run python scripts/wyperformance.py compare \
  benchmark/results/wyperformance.release.json \
  benchmark/results/wyperformance.release.v2.json \
  --exclude compile_code \
  --note "compile_code benchmark definition changed in release.v2 and now measures only Python compile()." \
  --note "Use lexer_only, parse_total, preprocess_total, compile_ast, and compile_total for historical comparison." \
  --markdown benchmark/results/wyperformance.release.compare.md
```
Default output:
```
benchmark/results/wyperformance.json
```
Run the runtime matrix benchmark (shared workload manifest):
```
uv run python scripts/benchmark_runtime_matrix.py
```
Run the quick runtime profile:
```
uv run python scripts/benchmark_runtime_matrix.py --profile ci
```
Compare two runtime matrix results:
```
uv run python scripts/benchmark_runtime_matrix.py compare base.json contender.json
```
Generate a Markdown runtime compare report:
```
uv run python scripts/benchmark_runtime_matrix.py compare \
  benchmark/results/examples_runtime_benchmark.json \
  benchmark/results/runtime_matrix.release.json \
  --output-md benchmark/results/runtime_matrix.release.compare.md \
  --note "Baseline uses the historical examples_runtime_benchmark.json corpus (42 examples), while contender uses the new release workload manifest (17 workloads)." \
  --note "Per-workload ratios are directional only because the workload set and startup probe rounds differ between the two runs."
```
Generate README-ready SVG charts through Wenyan + matplotlib:
```
uv run --with matplotlib wenyan.py benchmark/charts/compiler_summary.wy
uv run --with matplotlib wenyan.py benchmark/charts/runtime_matrix_summary.wy
```
The chart programs read JSON by default. You can override input/output paths with environment variables:
```
WENYAN_BENCHMARK_INPUT=benchmark/results/wyperformance.json \
WENYAN_BENCHMARK_OUTPUT=benchmark/results/compiler_benchmark_summary.svg \
uv run --with matplotlib wenyan.py benchmark/charts/compiler_summary.wy
```
For compiler charts, you can also provide a baseline JSON to render a ratio panel:
```
WENYAN_BENCHMARK_INPUT=benchmark/results/wyperformance.json \
WENYAN_BENCHMARK_BASELINE=/path/to/older-wyperformance.json \
WENYAN_BENCHMARK_OUTPUT=benchmark/results/compiler_benchmark_summary.svg \
uv run --with matplotlib wenyan.py benchmark/charts/compiler_summary.wy
```
For runtime charts, you can also provide a baseline JSON to render a ratio panel:
```
WENYAN_BENCHMARK_INPUT=benchmark/results/examples_runtime_benchmark.json \
WENYAN_BENCHMARK_BASELINE=/path/to/older-runtime-matrix.json \
WENYAN_BENCHMARK_OUTPUT=benchmark/results/runtime_matrix_summary.svg \
uv run --with matplotlib wenyan.py benchmark/charts/runtime_matrix_summary.wy
```
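The override pattern above is a plain environment-variable-with-default lookup. A sketch of how a chart program might resolve its paths, assuming the defaults shown in the commands above (the helper name `resolve_chart_paths` is illustrative, not part of the repository):

```python
import os
from pathlib import Path

def resolve_chart_paths() -> tuple[Path, Path]:
    """Resolve chart input/output paths, preferring environment overrides over defaults."""
    input_path = Path(os.environ.get(
        "WENYAN_BENCHMARK_INPUT",
        "benchmark/results/wyperformance.json"))
    output_path = Path(os.environ.get(
        "WENYAN_BENCHMARK_OUTPUT",
        "benchmark/results/compiler_benchmark_summary.svg"))
    return input_path, output_path

# Demonstrate: override the input, let the output fall back to its default.
os.environ["WENYAN_BENCHMARK_INPUT"] = "custom.json"
os.environ.pop("WENYAN_BENCHMARK_OUTPUT", None)
inp, out = resolve_chart_paths()
```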
Include free-threading builds and use 100 startup probe runs:
```
uv run python scripts/benchmark_runtime_matrix.py --include-free-threading --startup-rounds 100
```
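Startup probe rounds exist to average out interpreter launch jitter: time many trivial runs and keep the median. A generic sketch of that technique against the current Python interpreter (not the matrix script's actual probe code):

```python
import statistics
import subprocess
import sys
import time

def probe_startup(rounds: int = 5) -> float:
    """Median wall-clock time to launch the interpreter and exit immediately."""
    samples = []
    for _ in range(rounds):
        start = time.perf_counter()
        # A program that does nothing, so only startup cost is measured.
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

median_startup = probe_startup()
```

More rounds (such as `--startup-rounds 100`) tighten the median at the cost of a longer benchmark run.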
Outputs:
- `benchmark/results/examples_runtime_benchmark.json`
- `benchmark/results/examples_runtime_benchmark.csv`
- `benchmark/results/examples_runtime_benchmark.md`
- `benchmark/results/wyperformance.md`
- `benchmark/results/wyperformance.release.compare.md`
- `benchmark/results/runtime_matrix.release.compare.md`
- `benchmark/results/compiler_benchmark_summary.svg`
- `benchmark/results/runtime_matrix_summary.svg`
`examples_runtime_benchmark.md` is formatted for direct embedding into the README as a table/chart-style summary.
`wyperformance.json` now also includes per-case `peak_memory_bytes` metadata when `tracemalloc` is available; the suite also writes a Markdown summary, and the compiler SVG can render time, peak memory, and an optional baseline ratio.
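The `peak_memory_bytes` figure is the kind of number Python's standard `tracemalloc` module reports. A self-contained sketch of capturing a peak around a compile step, using the built-in `compile()` as a stand-in workload (the harness's real instrumentation hook is not shown here):

```python
import tracemalloc

def measure_peak(fn, *args):
    """Run fn(*args) and return (result, peak bytes allocated while it ran)."""
    tracemalloc.start()
    try:
        result = fn(*args)
        # get_traced_memory() returns (current, peak) byte counts.
        _current, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return result, peak

code_obj, peak_bytes = measure_peak(compile, "x = 1 + 2", "<bench>", "exec")
# peak_bytes is only meaningful when tracemalloc tracing was active
```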
File details
Details for the file wenyan-0.1.0.tar.gz:
- Download URL: wenyan-0.1.0.tar.gz
- Size: 55.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.11.6 (`uv publish`, macOS)

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `ede14a7dea51226d85c60f01949036341bc8bdfd9eb4f02b433bd78e42a341ff` |
| MD5 | `1118ce0c0823786d445ee55212854897` |
| BLAKE2b-256 | `17f46d2d064c060c35951d2a0a5f82d857e06ef80d9736807d1dd8681c429a1e` |
File details
Details for the file wenyan-0.1.0-py3-none-any.whl:
- Download URL: wenyan-0.1.0-py3-none-any.whl
- Size: 56.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.11.6 (`uv publish`, macOS)

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `67a5d33af89b1c5715f79c2cf1afe27ddcc6d538060b692a8fbbeb57ff66a07f` |
| MD5 | `d9d83b5b7fe6cf097f023334152fecae` |
| BLAKE2b-256 | `cd25762bb3b55390a853448d893174ce3caad0c1661023aea556df67516f074a` |