Cross-framework localization audit and translation QA toolkit
L10n Audit Toolkit
L10n Audit Toolkit is a Python-based localization QA toolkit for auditing translation files, validating runtime-sensitive strings, and producing safe localization review workflows for multilingual applications.
📚 Documentation: 👉 https://wael-daaboul.github.io/L10n-Audit-Toolkit/
Install from PyPI:
pipx install l10n-audit-toolkit
Overview
L10n Audit Toolkit helps engineering and localization teams catch issues before translations ship to production. It combines code usage scanning, locale-file validation, placeholder validation, terminology audit, glossary enforcement, and translation QA reporting in a single repository-oriented workflow.
The project is designed for teams that need repeatable localization audits for i18n and l10n pipelines without rewriting their application structure. It supports JSON locale files and Laravel PHP translation files, generates machine-readable and spreadsheet reports, and keeps risky changes in a review queue instead of auto-applying them.
Problem It Solves
Modern multilingual applications often fail in production because translation QA is fragmented across manual review, ad hoc scripts, and framework-specific checks. Common issues include:
- missing or unused translation keys
- undetected placeholder mismatches
- glossary drift and terminology inconsistency
- ICU message mistakes
- unsafe formatting cleanup
- review workflows that are hard to trace or apply safely
L10n Audit Toolkit addresses those problems with a structured localization audit pipeline and explicit safe-fix boundaries.
Key Features
- Localization audit workflow for repository-based translation QA
- Static translation usage scanning across supported frameworks
- Placeholder validation for common runtime interpolation styles
- Terminology audit and glossary enforcement
- English and Arabic locale quality checks
- ICU message validation
- Safe localization fixes with a review-required path for risky changes
- Review queue generation in XLSX for human approval
- Final locale export in the original supported format
- JSON, CSV, XLSX, and Markdown outputs for CI or manual review
Supported Frameworks and Formats
Built-in project profiles currently cover:
- Flutter with GetX JSON localization
- Laravel JSON localization
- Laravel PHP localization
- React with i18next JSON
- Vue with vue-i18n JSON
Current locale format support:
- JSON locale files
- Laravel PHP translation files that use statically parseable return arrays such as `return [...]` and `return array(...)` (see the sketch below)
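To make "statically parseable" concrete, here is a minimal Python sketch of the idea (not the toolkit's actual loader): it extracts key/value pairs from a flat Laravel `return [...]` array of plain string literals. Nested arrays, escaped quotes, or expressions would need a real parser.

```python
import re

# A simplified Laravel translation file: a flat return array of string literals.
php_source = """<?php
return [
    'welcome' => 'Welcome, :name!',
    'items_count' => 'You have :count items',
];
"""

# Naive extraction of 'key' => 'value' pairs; only works for flat,
# single-quoted string literals without escaped quotes or expressions.
pairs = dict(re.findall(r"'([^']+)'\s*=>\s*'([^']*)'", php_source))
print(pairs)  # {'welcome': 'Welcome, :name!', 'items_count': 'You have :count items'}
```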
What The Toolkit Detects
The toolkit can report issues such as:
- missing translations
- unused keys
- placeholder mismatches (see the sketch after this list)
- renamed or reordered placeholders
- terminology violations
- glossary enforcement failures
- ICU syntax and branch mismatches
- English locale wording and grammar issues
- Arabic locale spacing, punctuation, and context-sensitive review findings
- Arabic semantic review suggestions for sentence-level meaning loss
- risky review items that require explicit human approval
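As an illustration of the placeholder checks, the following self-contained Python sketch compares `{name}`-style placeholders between an English and an Arabic locale and flags renamed or dropped placeholders. The keys, strings, and the check itself are made up for this example; they are not the toolkit's internals.

```python
# Illustrative only: two locale dicts (as they might be loaded from en.json / ar.json).
import re

en = {
    "cart.count": "You have {count} items",
    "greeting":   "Hello, {name}!",
}
ar = {
    "cart.count": "لديك {total} عناصر",   # placeholder renamed: {count} -> {total}
    "greeting":   "مرحباً!",               # placeholder {name} dropped entirely
}

pattern = re.compile(r"\{(\w+)\}")  # matches {name}-style interpolation placeholders
for key, source in en.items():
    src_ph = set(pattern.findall(source))
    tgt_ph = set(pattern.findall(ar.get(key, "")))
    if src_ph != tgt_ph:
        print(f"{key}: source placeholders {src_ph} != target placeholders {tgt_ph}")
```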
🚀 Quick Start
The L10n Audit Toolkit now comes with a powerful CLI. To get started in your localization project:
- Initialize the workspace: `l10n-audit init`
- Verify the setup: `l10n-audit doctor`
- Run a fast audit: `l10n-audit run --stage fast`
Primary outputs are written under Results/.
💻 CLI Commands
Here are the main commands you will use daily:
- `l10n-audit --help` - Shows help, usage instructions, and available arguments.
- `l10n-audit --version` - Displays the currently installed version of the toolkit.
- `l10n-audit init` - Discovers your project and creates the `.l10n-audit/` workspace.
- `l10n-audit run --stage <STAGE>` - Runs specific or all audit modules (e.g., `fast`, `full`, `autofix`).
- `l10n-audit update` - Fetches the latest global rules and dictionaries into your local workspace.
🤖 AI-Powered Review
You can enhance your audits with AI (e.g., OpenAI, OpenRouter) to check context, tone, and grammar:
l10n-audit run --stage ai-review \
--ai-enabled \
--ai-api-base "https://openrouter.ai/api/v1" \
--ai-model "openai/gpt-4o-mini"
Note: For deep technical details and developer scripts, check the docs/ folder.
If you are using the repository checkout directly rather than an installed launcher, you can still run:
./bin/run_all_audits.sh --stage fast
Installation
Use the bootstrap script for the fastest setup:
./bootstrap.sh
Manual setup:
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python -m pip install -r requirements-optional.txt
python -m pip install -r requirements-dev.txt
Detailed environment setup is documented in INSTALL.md and docs/quickstart.md.
The repository ships with a neutral example glossary at docs/terminology/glossary.json. Replace it or point glossary_file to your own JSON glossary.
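The expected glossary format is described in docs/terminology_guide.md. Conceptually, glossary enforcement reduces to checks along these lines (a standalone sketch with invented terms and structure, not the schema of docs/terminology/glossary.json):

```python
# Conceptual glossary-enforcement check; terms and structure are illustrative,
# not the schema expected by the toolkit's glossary file.
glossary = {"invoice": "فاتورة"}  # approved English term -> approved Arabic rendering

en = {"billing.title": "Your invoice is ready"}
ar = {"billing.title": "طلبك جاهز"}  # uses a different word, violating the glossary

for key, source in en.items():
    for term, approved in glossary.items():
        if term in source.lower() and approved not in ar.get(key, ""):
            print(f"{key}: expected approved term '{approved}' for '{term}'")
```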
Running Audits
Run the full localization audit pipeline:
l10n-audit run --stage full
Useful stage-specific commands:
l10n-audit run --stage ai-review --ai-enabled
l10n-audit run --stage ai-review --ai-enabled --ai-model gpt-4o-mini --ai-api-base https://api.openai.com/v1
l10n-audit doctor
l10n-audit update --check
To refresh local workspace templates from GitHub or a direct archive URL:
l10n-audit init --from-github --channel stable --repo https://github.com/your-org/l10n-audit-toolkit
l10n-audit update --from-github --channel main --repo https://github.com/your-org/l10n-audit-toolkit
You can also pass a direct .zip archive URL or file://...zip path during testing.
You can also run the basic localization usage audit directly:
./bin/l10n_audit.sh
Safe Fixes and Review Workflow
The toolkit separates deterministic changes from human-reviewed changes.
- Run audits and generate reports.
- Review `Results/final/final_audit_report.md`.
- Open `Results/review/review_queue.xlsx`.
- Fill `approved_new` for reviewed rows and set `status` to `approved` (a scripted alternative is sketched after this list).
- Apply approved fixes with `python -m fixes.apply_review_fixes`.
- Use the final locale output from `Results/final_locale/`.
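If you prefer to approve rows from a script rather than editing the spreadsheet by hand, a minimal sketch is shown below. It assumes openpyxl is available and that the queue's first worksheet has header cells named `status` and `approved_new`; the real column layout may differ, so treat this as a starting point only.

```python
# Scripted approval of review-queue rows before applying reviewed fixes.
from openpyxl import load_workbook

path = "Results/review/review_queue.xlsx"
wb = load_workbook(path)
ws = wb.active

# Map header names in row 1 to column indices (assumed layout).
headers = {cell.value: cell.column for cell in ws[1] if cell.value}
status_col = headers["status"]
approved_col = headers["approved_new"]

# Example: approve row 2 with a reviewed translation (values are placeholders).
ws.cell(row=2, column=approved_col, value="النص المعتمد بعد المراجعة")
ws.cell(row=2, column=status_col, value="approved")

wb.save(path)
```

After saving, apply the approved rows with `python -m fixes.apply_review_fixes` as in the step above.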
Safe auto-fix planning is available with:
./bin/run_all_audits.sh --stage autofix
The review and fix workflow is documented in HOW_TO_USE.md and docs/review_workflow.md.
Example CLI Usage
./bin/run_all_audits.sh --stage full
python -m audits.placeholder_audit
python -m audits.terminology_audit
python -m fixes.apply_safe_fixes
python -m fixes.apply_review_fixes
python -m pytest
Example Outputs
Common outputs include:
- `Results/per_tool/`: raw per-audit findings
- `Results/normalized/`: normalized machine-readable findings
- `Results/review/review_queue.xlsx`: review queue for human approval
- `Results/fixes/fix_plan.json`: safe fix plan
- `Results/fixes/safe_fixes_applied_report.json`: auto-fix summary
- `Results/final/final_audit_report.md`: aggregated dashboard
- `Results/final_locale/ar.final.json`: final reviewed locale
See docs/output_reports.md for report details.
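If you gate CI on these outputs, a small post-processing script can turn them into a pass/fail signal. The sketch below assumes each file under Results/normalized/ is a JSON list of finding objects; that structure is an assumption here, so check docs/output_reports.md for the actual format.

```python
# Hypothetical CI gate: fail the job if any normalized findings exist.
# The per-file structure is assumed, not taken from the toolkit's docs.
import json
import sys
from pathlib import Path

findings = []
for report in Path("Results/normalized").glob("*.json"):
    data = json.loads(report.read_text(encoding="utf-8"))
    if isinstance(data, list):
        findings.extend(data)

print(f"Collected {len(findings)} normalized findings")
sys.exit(1 if findings else 0)
```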
Repository Structure
- `audits/`: audit modules for localization, placeholder, terminology, ICU, and locale QA checks
- `core/`: shared runtime, loaders, exporters, scanners, and validation helpers
- `fixes/`: safe-fix and reviewed-fix application logic
- `reports/`: report aggregation and final dashboard generation
- `schemas/`: JSON schemas for config and generated artifacts
- `config/`: toolkit configuration and project profiles
- `bin/`: shell entry points for common workflows
- `examples/`: framework-oriented sample layouts and usage notes
- `docs/`: reference documentation for workflows and outputs
- `tests/`: regression coverage for audits, exports, reports, and fix safety
Detailed directory roles are documented in docs/overview.md.
Documentation
- INSTALL.md: environment and dependency setup
- HOW_TO_USE.md: workflow-oriented usage guide
- docs/quickstart.md: shortest path to first successful run
- docs/audit_modules.md: audit module reference
- docs/review_workflow.md: fix plan and review queue behavior
- docs/ai_usage.md: AI-assisted translation review and CLI options
- docs/output_reports.md: generated outputs and report formats
- docs/configuration.md: detailed configuration schema and profiles
- docs/ci_cd_integration.md: GitHub Actions and GitLab CI setups
- docs/terminology_guide.md: formatting your custom glossary.json
- examples/README.md: supported example layouts
Contributing
Contributions that improve localization audit quality, translation validation, framework coverage, or documentation are welcome. See CONTRIBUTING.md before opening a pull request.
Security
Please report vulnerabilities privately. See SECURITY.md.
License
This repository is released under the MIT License. See LICENSE.