gpt_translate: Translating Markdown files with GPT-4
A tool to translate Markdown files using GPT-4.
This is a tool to translate Markdown files without breaking the structure of the document. It is powered by OpenAI models and has multiple parsing and formatting options. The provided default example is the one we use to translate our documentation website docs.wandb.ai to Japanese and Korean.
Installation
We have a stable version on PyPI, so you can install it with pip:
$ pip install gpt-translate
or, to get the latest version from the repo:
$ git clone https://github.com/tcapelle/gpt_translate
$ cd gpt_translate
$ pip install .
Export your OpenAI API key:
export OPENAI_API_KEY=sk-proj-bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
Usage
The library provides a set of commands that you can run from the CLI. All commands start with gpt_translate.:
- gpt_translate.file: Translate a single file
- gpt_translate.folder: Translate a folder recursively
- gpt_translate.files: Translate a list of files; accepts a .txt list of files as input
- gpt_translate.eval: Evaluate the quality of the translation
Litellm Integration
This project now uses litellm as the default interface for interacting with language models.
Instead of calling the OpenAI API directly, all LLM interactions are performed using litellm.acompletion.
Key features include:
- Asynchronous LLM Calls: Efficient asynchronous completions via litellm.acompletion.
- Pydantic Response Validation: Responses are automatically validated with Pydantic models using model_validate_json, ensuring that outputs conform to expected schemas.
- Enhanced Recursive Handling: The longer_create function recursively handles token-limit scenarios by chaining completions.
These improvements simplify the translation pipeline while ensuring robust response validation and improved handling of long outputs.
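The Pydantic validation step can be sketched as follows (the TranslationResult model and the JSON payload are hypothetical illustrations, not gpt_translate's actual schema):

```python
from pydantic import BaseModel

# Hypothetical response schema; the library's real models may differ.
class TranslationResult(BaseModel):
    translated_text: str
    language: str

# In the library, the JSON string would come back from litellm.acompletion;
# here we validate a hand-written payload to show the mechanism.
raw = '{"translated_text": "こんにちは", "language": "ja"}'
result = TranslationResult.model_validate_json(raw)
print(result.translated_text)  # こんにちは
```

If the JSON does not match the schema, model_validate_json raises a ValidationError, which is what makes the pipeline's outputs predictable.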
We use gpt-4o by default. You can change this in configs/config.yaml. The default values are:
# Logs:
debug: false # Debug mode
weave_project: "gpt-translate" # Weave project
silence_openai: true # Silence OpenAI logger
# Translation:
language: "ja" # Language to translate to
config_folder: "./configs" # Config folder, where the prompts and dictionaries are
replace: true # Replace existing file
remove_comments: true # Remove comments
do_translate_header_description: true # Translate the header description
max_concurrent_calls: 7 # Max number of concurrent calls to OpenAI
# Files:
input_file: "docs/intro.md" # File to translate
out_file: "intro_ja.md" # File to save the translated file to
input_folder: null # Folder to translate
out_folder: null # Folder to save the translated files to
limit: null # Limit number of files to translate
# Model:
model: "gpt-4o"
temperature: 1.0
max_tokens: 16000
You can override the arguments at runtime or by creating another config.yaml file. You can also use the --config_path flag to specify a different config file.
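A minimal sketch of loading such a YAML config and overriding values at runtime (using PyYAML; the merge logic here is illustrative, not gpt_translate's actual code):

```python
import yaml

# Inline stand-in for configs/config.yaml
default_yaml = """
language: ja
model: gpt-4o
temperature: 1.0
max_concurrent_calls: 7
"""

config = yaml.safe_load(default_yaml)

# Runtime overrides (e.g. parsed from CLI flags) take precedence.
config.update({"language": "es", "temperature": 0.2})

print(config["language"], config["model"])  # es gpt-4o
```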
- The --config_folder argument is where the prompts and dictionaries are located; the actual config.yaml could live somewhere else. Maybe I need a better naming here =P.
- You can add new languages by providing the language translation dictionaries in configs/language_dicts.
Examples
- To translate a single file:
$ gpt_translate.file \
--input_file README.md \
--out_file README_es_.md \
--language es \
--config_folder ./configs
- Translate a list of files from list.txt:
$ gpt_translate.files \
--input_file list.txt \
--input_folder docs \
--out_folder docs_ja \
--language ja \
--config_folder ./configs
Note that we need to pass both an input and an output folder. The input folder is used to compute each file's relative path, so the same folder structure can be recreated in the output folder. This is typically what you want for documentation websites organized in folders like ./docs.
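The folder-mirroring idea can be sketched with pathlib (illustrative only; the library's actual implementation may differ):

```python
from pathlib import Path

def mirror_path(input_file: str, input_folder: str, out_folder: str) -> Path:
    """Map an input file to the same relative location under the
    output folder, e.g. docs/guide/intro.md -> docs_ja/guide/intro.md."""
    rel = Path(input_file).relative_to(input_folder)
    return Path(out_folder) / rel

print(mirror_path("docs/guide/intro.md", "docs", "docs_ja"))
# docs_ja/guide/intro.md
```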
- Translate a full folder recursively:
$ gpt_translate.folder \
--input_folder docs \
--out_folder docs_ja \
--language ja \
--config_folder ./configs
If you don't know what to do, you can always pass --help to any of the commands:
$ gpt_translate.* --help
Weave Tracing
The library does a lot! Keeping track of every piece of the interaction is necessary. We added W&B Weave support to trace every call to the model and the underlying processing steps.
You can pass a project name to the CLI to trace the calls:
$ gpt_translate.folder \
--input_folder docs \
--out_folder docs_ja \
--language ja \
--weave_project gpt-translate \
--config_folder ./configs
Evaluation
Once the translation is done, you can evaluate the quality of the translation by running:
$ gpt_translate.eval \
--eval_dataset "Translation-ja:latest"
You can iterate on the translation prompts and dictionaries to improve the quality of the translation.
The evaluation config shares many similarities with the translation config and is stored in configs/eval_config.yaml. The configs/evaluation_prompt.txt file contains the prompt used by the LLM judge to evaluate translation quality. Feel free to play with it to find better ways to evaluate the quality of the translation according to your needs.
Whenever you run gpt_translate.files or gpt_translate.folder, a new Weave Dataset is automatically created with a name in the format Translation-{language}:{timestamp}.
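The dataset naming can be illustrated with a simple f-string (the exact timestamp format used by the library is an assumption here):

```python
from datetime import datetime

language = "ja"
# Assumed timestamp format; the library may use a different one.
timestamp = datetime(2024, 1, 15, 9, 30).strftime("%Y%m%d_%H%M%S")
dataset_name = f"Translation-{language}:{timestamp}"
print(dataset_name)  # Translation-ja:20240115_093000
```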
GitHub Action
We supply an action.yml file to use this library in a GitHub Action. It is not thoroughly tested, but it should work.
- You will need to set up your Weights & Biases API key as a secret in your GitHub repository as WANDB_API_KEY.
An example workflow is shown in https://github.com/tcapelle/dummy_docs and the corresponding workflow file.
Troubleshooting
If you have any issues, you can always pass the --debug flag to get more information about what is happening:
$ gpt_translate.folder ... --debug
This will produce very verbose output (calls to models, inputs and outputs, etc.).