This project helps you create documentation for your projects.

Project description

Executive Navigation Tree

To set up the documentation generation workflow on Windows, download the installer script from raw.githubusercontent.com/Drag-GameStudio/ADG/main/install.ps1 and pipe it to iex in PowerShell. On Linux, fetch raw.githubusercontent.com/Drag-GameStudio/ADG/main/install.sh and pipe it to bash. After installation, add a secret named GROCK_API_KEY to your repository’s GitHub Actions secrets, using the API key obtained from the Grock documentation site (grockdocs.com), so the workflow can authenticate.

PowerShell Setup Script (install.ps1)

Responsibility: Generates GitHub workflow files and a minimal autodocconfig.yml for the current repository.
Interactions: Uses PowerShell here‑strings to write .github/workflows/autodoc.yml and autodocconfig.yml; reads the folder name via Get-Item ..
Technical Details: Creates target directory (New-Item -Force), writes static YAML content with embedded secret reference, and prints a success message.
Data Flow: Filesystem paths → created/overwritten YAML files.

Bash Setup Script (install.sh)

Responsibility: Mirrors install.ps1 for Unix‑like shells, creating the same workflow and config files.
Interactions: Uses mkdir -p for directory creation, cat <<EOF redirection to write YAML, and $(basename "$PWD") to insert the project name.
Technical Details: Escapes the ${{…}} placeholder to avoid shell interpolation, then echoes a confirmation.
Data Flow: Filesystem operations → generated .github/workflows/autodoc.yml and autodocconfig.yml. The configuration file is written in YAML and may contain the following top‑level keys:

  • project_name – a string that defines the name of the project.
  • language – a string indicating the documentation language (default “en”).
  • ignore_files – an optional list of glob patterns for files that should be excluded from processing.
  • project_settings – a map with optional settings:
    • save_logs – boolean, when true the generation logs are persisted.
    • log_level – integer specifying the verbosity of logging.
  • project_additional_info – a map where any custom key‑value pairs can be added to enrich the project description (e.g., a “global idea” entry).
  • custom_descriptions – a list of strings; each string is passed to a custom module and can contain arbitrary explanatory text, commands, or references.

When writing the file, ensure proper indentation and use plain YAML syntax. Include only the keys you need; omitted keys will fall back to defaults defined in the generator.
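As an illustration, a minimal autodocconfig.yml covering the keys above might look like the following when loaded with PyYAML; all values here are examples, not defaults from the generator.

```python
import yaml  # PyYAML

# Illustrative config using the keys described above; values are examples only.
CONFIG_TEXT = """
project_name: my-project
language: en
ignore_files:
  - "*.lock"
  - "dist/*"
project_settings:
  save_logs: true
  log_level: 2
project_additional_info:
  global_idea: "CLI tool that generates docs automatically"
custom_descriptions:
  - "Explain the caching layer in detail"
"""

config = yaml.safe_load(CONFIG_TEXT)
```

Omitted keys simply stay absent from the parsed dict and fall back to the generator’s defaults.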

Purpose of ConfigReader

read_config translates a raw YAML string into a fully‑populated Config instance and a list of CustomModule objects. It centralises all project‑wide settings, language choice, ignore patterns and custom description handling for the Auto‑Doc Generator.

Key Function read_config

def read_config(file_data: str) -> tuple[Config, list[CustomModule]]:
  • Parameters – file_data: a YAML‑formatted string (typically the contents of autodocconfig.yml).
  • Returns – a tuple:
    1. Config – holds project metadata, language, ignore patterns, and ProjectConfigSettings.
    2. list[CustomModule] – one module per custom description.

CLI bootstrap and configuration loading

The if __name__ == "__main__": block acts as a tiny command‑line driver:

  1. Reads autodocconfig.yml into a string.
  2. Calls read_config (from auto_runner.config_reader) to obtain a Config instance and a list of custom module objects.
  3. Invokes gen_doc(".", config, custom_modules) and stores the result in output_doc.

No external I/O occurs inside gen_doc; all file interactions are confined to the Manager’s internal cache and the final read_file_by_file_key call.

Configuration constants and prompts

The module defines a set of multi‑line string constants (BASE_SYSTEM_TEXT, BASE_PART_COMPLITE_TEXT, BASE_INTRODACTION_CREATE_TEXT, BASE_INTRO_CREATE, BASE_SETTINGS_PROMPT). Each constant supplies a reusable prompt fragment for the AutoDoc pipeline (system instruction, documentation style, navigation‑tree generation, project‑overview template, and persistent‑memory instruction). These literals are imported by the runner to build the full prompt passed to the LLM.

Environment variable loading and API key validation

import os
from dotenv import load_dotenv

load_dotenv()
API_KEY = os.getenv("API_KEY")
if API_KEY is None:
    raise Exception("API_KEY is not set in environment variables.")

The code pulls the API key from a .env file at runtime. Absence of the key aborts execution, guaranteeing that downstream GPTModel instances always receive valid credentials.

ProjectSettings – Prompt Builder

Responsibility: Holds project‑level metadata and produces a composite system prompt.
Interactions: Accessed by all compression functions via the prompt property.
Technical Details: Starts with BASE_SETTINGS_PROMPT, appends project name and any key/value pairs added via add_info.
Data Flow: ProjectSettings → str prompt used in LLM calls.
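A minimal stand‑in sketch of this prompt builder, assuming the method names from the text (add_info, prompt) and a placeholder BASE_SETTINGS_PROMPT (the real constant’s wording is not shown here):

```python
# Assumed placeholder; the real BASE_SETTINGS_PROMPT differs.
BASE_SETTINGS_PROMPT = "You document the following project."

class ProjectSettings:
    def __init__(self, project_name):
        self.project_name = project_name
        self._info = {}

    def add_info(self, key, value):
        # Arbitrary key/value pairs appended to the composite prompt.
        self._info[key] = value

    @property
    def prompt(self):
        lines = [BASE_SETTINGS_PROMPT, f"Project name: {self.project_name}"]
        lines += [f"{k}: {v}" for k, v in self._info.items()]
        return "\n".join(lines)

ps = ProjectSettings("demo")
ps.add_info("global idea", "docs generator")
```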

Project Metadata Declaration

The pyproject.toml fragment declares the autodocgenerator package’s identity: name, version, description, authors, license, README, and supported Python range. This information is consumed by packaging tools (Poetry, pip, build back‑ends) to generate distribution metadata (PKG‑INFO, wheel tags) and to surface project details on PyPI.

Dependency Specification

Under [project] the dependencies array enumerates exact version pins for every runtime library (e.g., openai==2.14.0, pydantic==2.12.5). The list drives poetry install and pip install . to resolve a reproducible environment. No optional or development groups are defined here; they would be placed in separate sections ([tool.poetry.dev-dependencies]) if needed.

Build System Configuration

The [build-system] table tells the Python build frontend to use poetry-core (requires = ["poetry-core>=2.0.0"]) with the entry point poetry.core.masonry.api. During python -m build or pip install ., this config triggers Poetry’s PEP‑517 builder, which reads the above metadata and assembles the source distribution and wheel. No custom build steps or hooks are declared, so the process is deterministic and isolated from external scripts.

Entry point for documentation generation (gen_doc)

The gen_doc function is the orchestrator that ties together configuration, language models, and the Manager to produce a complete documentation file. It receives a filesystem root (project_path), a validated Config object, and a list of instantiated custom module objects.

Data flow

  • Inputs: project_path (str), config (Config), custom_modules (list[CustomModule])
  • Outputs: Raw markdown string returned by manager.read_file_by_file_key("output_doc")

Side effects: Initializes two GPT model instances, creates a Manager, triggers a series of generation steps, and clears the internal cache.

Integration with Factory Modules

The function imports CustomModule from autodocgenerator.factory.modules.general_modules. Each entry in the custom_descriptions YAML array is wrapped in a CustomModule, allowing the downstream factory pipeline to treat user‑supplied snippets uniformly with built‑in modules.

Integration points and assumptions

  • Config object must conform to the schema defined in autodocgenerator.auto_runner.config_reader; malformed YAML raises yaml.YAMLError.
  • Custom modules are expected to inherit from CustomModule and be instantiable without arguments.
  • The global API_KEY is imported from autodocgenerator.engine.config.config; absence of a valid key will cause runtime authentication errors.
  • The function is pure from the caller’s perspective – it returns the assembled markdown and leaves the filesystem untouched after execution.

Integration with the documentation pipeline

  1. After order_doc produces the final markdown, custom_intro is imported by the post‑processor stage.
  2. get_all_html_links extracts navigation anchors → fed to get_links_intro.
  3. get_introdaction receives the whole document for a high‑level intro.
  4. generete_custom_discription may be invoked with user‑specified topics to prepend targeted sections.
  5. The returned strings are concatenated and written back to output_doc.md.

All functions are pure apart from logging; they rely solely on the provided Model instance, making them trivially mockable for unit testing.

Assumptions and Side Effects

  • The YAML must be syntactically valid; malformed input raises yaml.YAMLError.
  • Missing optional keys default to empty collections or sensible defaults (language → "en").
  • No external I/O occurs; the function purely transforms in‑memory data, leaving the filesystem untouched.

This fragment is the entry point for configuration loading, feeding the rest of the ADG pipeline with a consistent, typed configuration object.

Purpose of custom_intro post‑processor

The module supplies a lightweight post‑processing pipeline that enriches the automatically generated documentation with anchor‑based navigation and optional introductory sections. It operates on the final markdown produced by the core generation flow and prepares ready‑to‑display HTML‑compatible fragments.

BaseModule – abstract generation contract

BaseModule defines the required interface for any documentation fragment generator. It inherits from ABC and mandates a generate(info: dict, model: Model) method, ensuring uniformity across plug‑in modules. Sub‑classes implement their own logic while receiving the raw info payload and a concrete Model instance.

Manager – orchestrating preprocessing, documentation generation, and post‑processing

Responsibility – Coordinates the end‑to‑end documentation pipeline: builds a code‑mix snapshot, splits it into manageable chunks, runs factory modules (e.g., IntroLinks, IntroText), orders the final markdown, and handles cache/log housekeeping.

Interactions

  • Pre‑processors: CodeMix (repo scanning), gen_doc_parts / async_gen_doc_parts (chunk‑wise generation).
  • Post‑processors: split_text_by_anchors, get_order (re‑ordering).
  • Factories: any DocFactory subclass supplying a list of modules that implement generate_doc.
  • Models: synchronous Model or asynchronous AsyncModel supplied at construction.
  • UI: BaseProgress updates progress bars; BaseLogger writes to the cache‑log file.

DocFactory – orchestrator of documentation modules

DocFactory aggregates a sequence of BaseModule objects. Its generate_doc method creates a progress sub‑task, iterates through each module, concatenates their outputs, logs success and module content (verbosity level 2), updates progress, and finally returns the assembled documentation string. Errors propagate from individual modules; the factory itself does not alter content.

factory_generate_doc – applying modular enrichments

  1. Loads current output_doc and the original code_mix.
  2. Builds info dict (language, full_data, code_mix).
  3. Calls doc_factory.generate_doc(info, sync_model, progress_bar).
  4. Prepends the factory result to the existing doc (new_data = f"{result}\n\n{curr_doc}") and writes back.

The method is model‑agnostic; any DocFactory with a modules attribute (e.g., IntroLinks, IntroText) can contribute additional sections.

Model instantiation and manager setup

sync_model = GPTModel(API_KEY, use_random=False)
async_model = AsyncGPTModel(API_KEY)

manager = Manager(
    project_path,
    config=config,
    sync_model=sync_model,
    async_model=async_model,
    progress_bar=ConsoleGtiHubProgress(),
)
  • GPTModel / AsyncGPTModel – provide synchronous and asynchronous Groq API access, respectively.
  • ConsoleGtiHubProgress – concrete progress‑bar implementation displayed in the terminal.
  • Manager – core engine that holds state, coordinates factories, and writes the final document.

Processing Steps

  1. Parse YAML – yaml.safe_load yields a Python dict.
  2. Instantiate Config – default Config() created.
  3. Populate core fields – language, project_name, project_additional_info.
  4. Load project settings – ProjectConfigSettings().load_settings(...) then attached via config.set_pcs.
  5. Register ignore patterns – each pattern from ignore_files added with config.add_ignore_file.
  6. Add supplemental info – key/value pairs from project_additional_info stored via config.add_project_additional_info.
  7. Create custom modules – each string in custom_descriptions wrapped in CustomModule.

Generation pipeline steps

  • manager.generate_code_file() – Scans the project, extracts source files, and stores a normalized code representation.
  • manager.generete_doc_parts(max_symbols=5000) – Produces raw documentation fragments (function signatures, docstrings, etc.) limited to max_symbols characters per chunk.
  • manager.factory_generate_doc(DocFactory(*custom_modules)) – Runs a DocFactory built from user‑supplied custom_modules to inject bespoke sections (e.g., custom tutorials).
  • manager.order_doc() – Reorders fragments into a logical sequence (intro → modules → API reference).
  • manager.factory_generate_doc(DocFactory(IntroLinks())) – Adds a generated introductory links section using the built‑in IntroLinks module.
  • manager.clear_cache() – Purges temporary files and in‑memory caches to keep the workspace clean.

order_doc – anchor‑based re‑ordering

  • Splits output_doc into sections via split_text_by_anchors.
  • Sends the list to get_order(sync_model, sections) which uses the LLM to compute the optimal sequence.
  • Persists the reordered markdown.

Anchor Extraction & Chunk Splitting (extract_links_from_start, split_text_by_anchors)

Responsibility – Isolate markdown sections that begin with an HTML anchor (<a name="…"></a>) and build a mapping {anchor → section text}.
Interactions – Consumes raw markdown supplied by the post‑processor, emits a dict[str,str] used later by get_order. No external services; only re and the internal logger for debugging.
Technical Details

  • extract_links_from_start scans each pre‑split chunk with ^<a name=["']?(.*?)["']?></a>; anchors shorter than six characters are discarded and a leading “#” is prefixed.
  • split_text_by_anchors uses a positive‑lookahead split ((?=<a name=["']?[^"'>\s]{6,200}["']?></a>)) to produce clean chunks, strips whitespace, validates a one‑to‑one count between anchors and chunks, and finally assembles the result dictionary.
    Data Flow – Input: full markdown string. Output: { "#anchorName": "section markdown …" } or None on mismatch. Side‑effects: optional InfoLog messages (not shown here).
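The splitting logic above can be sketched self‑containedly (a simplification of the described behaviour, not the library’s exact code):

```python
import re

# Simplified sketch of the anchor-based splitter described above.
ANCHOR_SPLIT = re.compile(r'(?=<a name=["\']?[^"\'>\s]{6,200}["\']?></a>)')
ANCHOR_NAME = re.compile(r'<a name=["\']?(.*?)["\']?></a>')

def split_text_by_anchors(markdown):
    # Zero-width lookahead split keeps each anchor with its section.
    chunks = [c.strip() for c in ANCHOR_SPLIT.split(markdown) if c.strip()]
    result = {}
    for chunk in chunks:
        match = ANCHOR_NAME.match(chunk)
        if match is None or len(match.group(1)) < 6:
            return None  # anchor/chunk mismatch, per the contract above
        result["#" + match.group(1)] = chunk
    return result

doc = '<a name="section-one"></a>\nIntro text\n<a name="section-two"></a>\nMore text'
sections = split_text_by_anchors(doc)
```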

Semantic Ordering of Documentation Chunks (get_order)

Responsibility – Ask the LLM (model) to reorder the extracted sections so related topics are grouped logically.
Interactions – Receives the anchor‑to‑chunk map from the splitter, builds a single‑turn user prompt, calls model.get_answer_without_history, parses the comma‑separated title list, and concatenates the corresponding markdown blocks. Logging via BaseLogger records the input map, the raw LLM reply, and each block addition.
Technical Details

  • Prompt explicitly requests only a CSV list, preserving the leading “#” in titles.
  • Result string split → new_result list, then ordered markdown assembled in order_output.
    Data Flow – Input: Model instance, dict[str,str]. Output: a single ordered markdown string. No file I/O; side‑effects limited to logger entries.
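Assuming the model returns a plain CSV of titles as described, the reassembly step can be sketched with a stub model standing in for the real LLM client:

```python
# Simplified sketch: the model replies with a CSV of "#"-prefixed titles,
# and the sections are concatenated in that order.
def get_order(model, sections):
    prompt = "Reorder these section titles: " + ", ".join(sections)
    reply = model.get_answer_without_history(prompt)  # expected: CSV of titles
    ordered_titles = [t.strip() for t in reply.split(",")]
    return "\n\n".join(sections[t] for t in ordered_titles if t in sections)

class FakeModel:
    # Stand-in for the real Model; always returns a fixed ordering.
    def get_answer_without_history(self, prompt):
        return "#setup, #intro"

sections = {"#intro": "Intro text", "#setup": "Setup text"}
ordered = get_order(FakeModel(), sections)
```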

Data Splitting Logic (split_data)

Responsibility: Breaks a large source‑code string into chunks whose length does not exceed max_symbols.
Interactions: Relies on BaseLogger for progress messages; no external state.
Technical Details:

  • Splits on line breaks, then iteratively halves any segment > 1.5 × max_symbols.
  • Packs the refined segments into split_objects, starting a new chunk when the current one would exceed 1.25 × max_symbols.
    Data Flow: str → list of str (chunks).
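The two‑phase strategy (halve oversized segments, then pack into chunks) can be sketched as follows; this is a simplification using the thresholds quoted above, not the library’s exact implementation:

```python
# Phase 1: recursively halve any segment longer than 1.5 * max_symbols.
def _halve(segment, max_symbols):
    if len(segment) <= 1.5 * max_symbols:
        return [segment]
    mid = len(segment) // 2
    return _halve(segment[:mid], max_symbols) + _halve(segment[mid:], max_symbols)

def split_data(data, max_symbols):
    refined = [part for seg in data.split("\n") for part in _halve(seg, max_symbols)]
    # Phase 2: pack segments, starting a new chunk at 1.25 * max_symbols.
    chunks, current = [], ""
    for seg in refined:
        if current and len(current) + len(seg) > 1.25 * max_symbols:
            chunks.append(current)
            current = seg
        else:
            current = f"{current}\n{seg}" if current else seg
    if current:
        chunks.append(current)
    return chunks

chunks = split_data("a" * 50 + "\n" + "b" * 50, max_symbols=40)
```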

History – accumulating system‑ and conversation‑level messages

  • Inputs: optional system_prompt (defaults to BASE_SYSTEM_TEXT).
  • State: self.history – ordered list of {role, content} dicts.
  • Side‑effects: add_to_history appends new entries, used by Model/AsyncModel to build the chat payload.
  • Assumptions: callers respect role strings ("system", "user", "assistant").
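A minimal stand‑in for this container, with BASE_SYSTEM_TEXT as an assumed placeholder value:

```python
# Assumed placeholder; the real BASE_SYSTEM_TEXT differs.
BASE_SYSTEM_TEXT = "You are a documentation assistant."

class History:
    def __init__(self, system_prompt=BASE_SYSTEM_TEXT):
        # The system prompt is always the first entry.
        self.history = [{"role": "system", "content": system_prompt}]

    def add_to_history(self, role, content):
        # Callers are expected to pass "system", "user", or "assistant".
        self.history.append({"role": role, "content": content})

h = History()
h.add_to_history("user", "Describe module X")
```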

ParentModel – randomized model list preparation

During initialization it copies MODELS_NAME, optionally shuffles it (use_random), and stores the sequence in self.regen_models_name. self.current_model_index tracks the active model. This structure enables fail‑over cycling when a model call fails.

GPTModel – synchronous Groq client integration

  • Constructs a Groq client with the supplied api_key.
  • generate_answer selects the current model, attempts client.chat.completions.create(messages, model), and on exception logs a warning, advances current_model_index, and retries until a model succeeds or the list is exhausted (raising ModelExhaustedException).
  • Returns the content of the first choice and logs the result.

AsyncGPTModel – async counterpart using AsyncGroq

Mirrors GPTModel logic but with await on client.chat.completions.create. Logging is identical, and the method signature is async. It enables non‑blocking generation in event‑driven workflows.

Data Flow Summary
Prompt (either full history or raw prompt arg) → History/caller → selected model name → Groq API call → chat_completion object → extracted content → logger → returned string. All errors funnel through the retry loop or raise ModelExhaustedException.

ModelExhaustedException – signaling depletion of model pool

ModelExhaustedException derives from Exception and is raised when regen_models_name becomes empty. It bubbles up to the caller, forcing upstream logic (e.g., factories or UI) to abort or retry with a different configuration.
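The fail‑over cycle and the exhaustion exception can be sketched together; MODELS_NAME and the injected call are placeholders, not the real Groq client API:

```python
import random

class ModelExhaustedException(Exception):
    """Raised when every model in the pool has failed."""

# Placeholder pool; the real MODELS_NAME lists actual model identifiers.
MODELS_NAME = ["model-a", "model-b", "model-c"]

class ParentModel:
    def __init__(self, use_random=False):
        self.regen_models_name = MODELS_NAME.copy()
        if use_random:
            random.shuffle(self.regen_models_name)
        self.current_model_index = 0

    def generate_answer(self, call):
        # Try each model in turn; advance the index on failure.
        while self.current_model_index < len(self.regen_models_name):
            name = self.regen_models_name[self.current_model_index]
            try:
                return call(name)
            except Exception:
                self.current_model_index += 1  # fail over to the next model
        raise ModelExhaustedException("all models in the pool failed")

def flaky_call(name):
    if name == "model-a":
        raise RuntimeError("simulated API failure")
    return f"ok:{name}"

answer = ParentModel().generate_answer(flaky_call)
```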

Compress – Single‑Pass Summarization

Responsibility: Sends a raw text chunk to the LLM with a system prompt built from ProjectSettings and a configurable compression baseline.
Interactions: Calls model.get_answer_without_history; reads project_settings.prompt and get_BASE_COMPRESS_TEXT.
Technical Details: Constructs a three‑message list (system, system, user) and returns the LLM’s answer verbatim.
Data Flow: data: str → LLM request → str answer.

Compress & Compare (synchronous)

Responsibility: Groups input strings into compress_power‑sized batches, compresses each element, and concatenates results per batch.
Interactions: Uses compress; updates a BaseProgress sub‑task.
Technical Details: Pre‑allocates a result list sized ceil(len(data)/compress_power), iterates with index division, appends compressed text plus newline.
Data Flow: list[str] → list of combined batch strings.
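The batch grouping can be sketched with a stubbed compress step (the real one calls the LLM):

```python
import math

# Stub standing in for the single-pass LLM summarisation.
def compress(text):
    return text[:3]

def compress_and_compare(data, compress_power):
    # Pre-allocate one slot per batch, then append per-element results.
    batches = [""] * math.ceil(len(data) / compress_power)
    for i, item in enumerate(data):
        batches[i // compress_power] += compress(item) + "\n"
    return batches

out = compress_and_compare(["alpha", "bravo", "charlie"], compress_power=2)
```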

Compress to One – Iterative Reduction

Responsibility: Repeatedly compresses the list until a single aggregated summary remains.
Interactions: Switches between sync/async paths based on use_async; each iteration calls the appropriate batch function.
Technical Details: Dynamically lowers compress_power when remaining items < compress_power+1; counts iterations for diagnostics.
Data Flow: list[str] → final str summary.

Async Compress – Concurrency‑Safe Summarization

Responsibility: Same as compress but respects an asyncio.Semaphore to limit parallel LLM calls and updates progress.
Interactions: Awaits model.get_answer_without_history; shares the same prompt structure.
Technical Details: Wrapped in async with semaphore; returns the answer after progress_bar.update_task().
Data Flow: str → async LLM request → str.

Async Compress & Compare

Responsibility: Parallel version of batch compression.
Interactions: Spawns one async_compress task per element, gathers results, then re‑chunks them into batches of compress_power.
Technical Details: Uses a fixed 4‑slot semaphore, creates a progress sub‑task, joins with asyncio.gather.
Data Flow: list[str] → list of batch strings.
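The semaphore‑bounded fan‑out described above can be sketched as follows; the “LLM call” is stubbed with a trivial truncation so the flow is runnable offline:

```python
import asyncio

async def async_compress(text, semaphore):
    async with semaphore:            # bounds concurrent "LLM" calls
        await asyncio.sleep(0)       # stand-in for the network round-trip
        return text[: len(text) // 2]

async def async_compress_all(items, max_parallel=4):
    # One task per element, joined with asyncio.gather.
    semaphore = asyncio.Semaphore(max_parallel)
    return await asyncio.gather(*(async_compress(t, semaphore) for t in items))

results = asyncio.run(async_compress_all(["a" * 10, "b" * 8]))
```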

get_BASE_COMPRESS_TEXT – dynamic prompt builder

def get_BASE_COMPRESS_TEXT(start, power):
    return f"""
You will receive a large code snippet (up to ~{start} characters).
...

This helper creates a size‑aware instruction block for summarising large code fragments. Parameters:

  • start – approximate maximum character count of the incoming snippet.
  • power – divisor controlling the length of the summary (~start/power).

The function interpolates these values into a template that directs the model to extract architecture, produce a concise summary, and emit a strict usage example. It returns the formatted string for later concatenation with other prompt pieces.

CustomModule – custom description generator

Initialised with a discription string, CustomModule.generate splits the mixed code (info["code_mix"]) to ≤ 7000 symbols via split_data, then calls generete_custom_discription (post‑processor) with the split data, the provided model, the stored description, and the target language. The returned text becomes the module’s contribution.

generete_custom_discription(splited_data: str, model: Model, custom_description: str, language: str = "en") → str

Responsibility – Iterates over pre‑split documentation fragments, asking the LLM to produce a concise, anchor‑prefixed description for a user‑defined topic (custom_description).
Logic Flow

  1. For each sp_data in splited_data construct a multi‑system‑message prompt:
    • language directive,
    • role description (“Technical Analyst”),
    • strict rule block enforcing zero‑hallucination and mandatory single <a name="…"></a> tag,
    • the fragment context,
    • the task description.
  2. Call model.get_answer_without_history.
  3. If the result does not contain the sentinel !noinfo / “No information found” (or it appears after position 30), break the loop and return the answer; otherwise continue with the next fragment.

Side‑effects – None; all I/O is through the LLM and logging performed implicitly by the model or caller.
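The sentinel check from step 3 can be sketched in isolation; SENTINELS and the ask callable are illustrative stand‑ins:

```python
# Fragments are tried in order until the model's answer is informative.
SENTINELS = ("!noinfo", "No information found")

def is_informative(answer):
    for sentinel in SENTINELS:
        pos = answer.find(sentinel)
        if 0 <= pos < 30:  # sentinel near the start => nothing was found
            return False
    return True

def first_informative(fragments, ask):
    for fragment in fragments:
        answer = ask(fragment)
        if is_informative(answer):
            return answer
    return None

answers = {"a": "!noinfo", "b": "The cache layer stores chunks on disk."}
result = first_informative(["a", "b"], lambda fragment: answers[fragment])
```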

Generate Descriptions for Code

Responsibility: Queries the LLM for a structured developer‑facing description of each source file.
Interactions: Sends a fixed instructional system prompt plus the code snippet; logs progress.
Technical Details: Iterates over data, builds a two‑message prompt, collects answers in order.
Data Flow: list[str] (code) → list of LLM‑generated markdown descriptions.

generate_code_file – building the repository snapshot

  1. Logs start.
  2. Instantiates CodeMix(project_directory, config.ignore_files).
  3. Calls cm.build_repo_content → writes the mixed source to code_mix.txt.
  4. Logs completion and advances the progress bar.

Repository Content Aggregation (CodeMix class)

Responsibility – Produce a linear textual representation of a repository’s directory tree and file contents, while respecting an ignore list.
Interactions – Called by the pre‑processor stage; writes to a user‑specified output file. Relies on BaseLogger for ignored‑path notices; does not invoke the LLM.
Technical Details

  • should_ignore evaluates a Path against ignore_patterns using fnmatch on the full relative path, basename, and each path component.
  • build_repo_content iterates twice over root_dir.rglob("*"): first to emit the hierarchical tree (indentation based on depth), second to embed each non‑ignored file inside <file path="…"> tags. Errors while reading files are captured and written inline.
    Data Flow – Input: root directory path, ignore pattern list, optional output filename. Output: side‑effect – a text file (repomix-output.txt by default) containing the structured dump. Logging side‑effects report ignored entries and read errors.
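The ignore check described above can be sketched as a standalone function (a simplification of should_ignore, not the class’s exact code):

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

# A pattern matches if it hits the full relative path, the basename,
# or any single path component.
def should_ignore(rel_path, ignore_patterns):
    path = PurePosixPath(rel_path)
    candidates = [str(path), path.name, *path.parts]
    return any(
        fnmatch(candidate, pattern)
        for pattern in ignore_patterns
        for candidate in candidates
    )

ignored = should_ignore("src/__pycache__/mod.pyc", ["__pycache__", "*.lock"])
```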

Synchronous Part Documentation Generation (write_docs_by_parts)

Responsibility: Sends a single chunk to the LLM and returns the raw markdown response.
Interactions: Uses BASE_PART_COMPLITE_TEXT, optional prev_info, and a Model instance; logs via BaseLogger.
Technical Details: Builds a 2‑ or 3‑message prompt (system → language/id, system → base prompt, optional system → prior info, user → code). Calls model.get_answer_without_history. Strips surrounding triple back‑ticks if present.
Data Flow: (part_id, part, Model, prev_info?) → str (LLM answer).

generete_doc_parts – synchronous chunked documentation

  • Reads the full code‑mix.
  • Calls gen_doc_parts(full_code_mix, max_symbols, sync_model, config.language, progress_bar).
  • Writes the resulting markdown to output_doc.md and updates progress.
  • Provides a clear input‑output contract: input – raw code text; output – partially generated documentation limited by max_symbols.

Asynchronous Part Documentation Generation (async_write_docs_by_parts)

Responsibility: Same as the sync variant but runs under an asyncio.Semaphore to limit concurrent LLM calls.
Interactions: Accepts AsyncModel, optional prev_info, optional update_progress callback, and a shared semaphore.
Technical Details: async with semaphore guards the request; prompt composition mirrors the sync version; result trimming identical; invokes update_progress after the LLM call.
Data Flow: (part, AsyncModel, semaphore, …) → await → str.

Batch Documentation Generation (Synchronous) (gen_doc_parts)

Responsibility: Orchestrates full‑code documentation by splitting the input, iterating over chunks, and concatenating the LLM outputs.
Interactions: Calls split_data, write_docs_by_parts, and updates a BaseProgress sub‑task.
Technical Details: After each part, retains the last 3000 characters as context for the next call (prev_info). Progress bar is incremented per chunk.
Data Flow: (full_code_mix, max_symbols, Model, language) → str (complete documentation).
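The rolling‑context loop can be sketched as follows; ask stands in for write_docs_by_parts plus the model:

```python
# Each call receives the tail of the previous answer as prev_info.
CONTEXT_TAIL = 3000

def gen_doc_parts(chunks, ask):
    parts, prev_info = [], None
    for part_id, chunk in enumerate(chunks):
        answer = ask(part_id, chunk, prev_info)
        parts.append(answer)
        prev_info = answer[-CONTEXT_TAIL:]  # keep the last 3000 characters
    return "\n\n".join(parts)

seen_context = []

def ask(part_id, chunk, prev_info):
    # Record the context each call receives, then return a stub answer.
    seen_context.append(prev_info)
    return f"[{part_id}] docs for {chunk}"

doc = gen_doc_parts(["part one", "part two"], ask)
```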

Batch Documentation Generation (Asynchronous) (async_gen_doc_parts)

Responsibility: Parallel version of gen_doc_parts using asyncio.gather.
Interactions: Shares the same splitter, creates a semaphore (max 4 parallel calls), and updates BaseProgress via a lambda.
Technical Details: Builds a list of async_write_docs_by_parts tasks, gathers results, and concatenates them with double newlines.
Data Flow: (full_code_mix, global_info, max_symbols, AsyncModel, language) → await → str (full documentation).

IntroLinks – HTML link extraction and intro generation

IntroLinks.generate extracts all HTML links from info["full_data"] using get_all_html_links, then produces a links‑focused introduction via get_links_intro, passing the link list, model, and language. The resulting markdown/HTML snippet is returned.

get_all_html_links(data: str) → list[str]

Responsibility – Scans the supplied markdown for <a name="…"></a> anchors and returns a list of fragment identifiers prefixed with #.
Interactions – Uses BaseLogger to emit progress messages; no external services.
Logic – Compiles a regex r'<a name=["\']?(.*?)["\']?></a>', iterates over re.finditer, keeps anchors longer than five characters, logs count and content, returns the collected list.
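Following the description above, a runnable sketch of the link extractor:

```python
import re

# Collect anchor names longer than five characters, prefixed with "#".
def get_all_html_links(data):
    pattern = re.compile(r'<a name=["\']?(.*?)["\']?></a>')
    return ["#" + m.group(1) for m in pattern.finditer(data) if len(m.group(1)) > 5]

links = get_all_html_links('<a name="overview"></a> text <a name="api"></a>')
```

Note that the short anchor "api" is filtered out by the length check.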

IntroText – global introduction assembly

IntroText.generate retrieves a high‑level description from info["global_data"] and creates a narrative introduction with get_introdaction, again using the supplied model and language. The final intro text is emitted for later concatenation.

get_introdaction(global_data: str, model: Model, language: str = "en") → str

Responsibility – Generates a generic project overview based on the complete documentation text (global_data).
Interactions – Prompt comprises BASE_INTRO_CREATE plus the full markdown as user content; result is obtained from the same LLM endpoint as above. No logging inside the function (caller may wrap).

get_links_intro(links: list[str], model: Model, language: str = "en") → str

Responsibility – Calls the supplied LLM (model) to synthesize a short introductory paragraph that references the provided link list.
Interactions – Builds a system‑message prompt containing BASE_INTRODACTION_CREATE_TEXT, adds the link list as user content, forwards the prompt to model.get_answer_without_history. Logs before/after invocation.
Output – Raw LLM response string intended for insertion at the top of the documentation.

Cache folder and file‑path helpers

  • CACHE_FOLDER_NAME = ".auto_doc_cache" and FILE_NAMES map logical keys to filenames (code_mix.txt, global_info.md, etc.).
  • __init__ creates the cache directory if missing, configures a file logger (FileLoggerTemplate) and stores injected config, project_directory, models, and progress bar.
  • get_file_path(key) builds an absolute path inside the cache; read_file_by_file_key(key) returns its UTF‑8 contents.

clear_cache – optional log removal

If config.pcs.save_logs is False, deletes the report.txt file, leaving other cached artifacts untouched.
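A stand‑in for the path helpers described above; the key‑to‑filename map uses the names quoted in the text, and the rest is illustrative:

```python
from pathlib import Path

CACHE_FOLDER_NAME = ".auto_doc_cache"
# Logical keys mapped to filenames, per the text above.
FILE_NAMES = {"code_mix": "code_mix.txt", "global_info": "global_info.md"}

def get_file_path(project_directory, key):
    # Build a path inside the cache folder for the given logical key.
    return Path(project_directory) / CACHE_FOLDER_NAME / FILE_NAMES[key]

path = get_file_path("/repo", "code_mix")
```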

Data flow summary – Input files → CodeMix → chunk generation → factory enrichment → ordering → final output_doc.md. All steps log progress, respect the user‑provided language setting, and optionally clean up temporary logs.

Logging Infrastructure (BaseLog, BaseLoggerTemplate, FileLoggerTemplate, BaseLogger)

Responsibility: Provides typed log objects (Error/Warning/Info) and a singleton logger that forwards messages to a configurable template (console or file).
Interactions: BaseLogger.set_logger() injects a BaseLoggerTemplate; all calls route through global_log respecting the global log_level.
Technical Details: BaseLog.format() yields the raw message; subclasses prepend a timestamp and severity. BaseLogger.__new__ guarantees a single instance.
Data Flow: BaseLog → str (formatted line) → print or file append.

Rich‑Console Progress (LibProgress)

Responsibility: Wraps rich’s Progress to expose a generic sub‑task API used by the documentation pipeline.
Interactions: Created with a shared Progress object; create_new_subtask registers a child task, update_task advances either the sub‑task or the main task, remove_subtask discards the current child.
Technical Details: Maintains _base_task and _cur_sub_task IDs; advances are atomic calls to Progress.update.
Data Flow: Calls → Progress.update → visual progress bar.

Console‑Based Progress (ConsoleGtiHubProgress & ConsoleTask)

Responsibility: Emits simple stdout progress for environments without rich.
Interactions: ConsoleGtiHubProgress.create_new_subtask spawns a ConsoleTask; update_task increments either the active sub‑task or a generic “General Progress” task.
Technical Details: ConsoleTask.progress() computes percentage and prints a line; removal clears the reference.
Data Flow: Update call → printed percentage line.


Download files

Download the file for your platform.

Source Distribution

autodocgenerator-0.8.9.7.tar.gz (40.9 kB)


Built Distribution


autodocgenerator-0.8.9.7-py3-none-any.whl (36.2 kB)


File details

Details for the file autodocgenerator-0.8.9.7.tar.gz.

File metadata

  • Download URL: autodocgenerator-0.8.9.7.tar.gz
  • Upload date:
  • Size: 40.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.3.1 CPython/3.12.12 Linux/6.11.0-1018-azure

File hashes

Hashes for autodocgenerator-0.8.9.7.tar.gz:

  • SHA256: 88f643e4763c78a8ac66321c73d07de8ce51d562dbdae1726195c65eb2e11884
  • MD5: 1cdecd6eaf14f9d3b773eaa7d0d2b2f4
  • BLAKE2b-256: a14e8c3a5f7b8647adbde33c00ecd70a82cbaad3b4669fe40ce48f4661813955


File details

Details for the file autodocgenerator-0.8.9.7-py3-none-any.whl.

File metadata

  • Download URL: autodocgenerator-0.8.9.7-py3-none-any.whl
  • Upload date:
  • Size: 36.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.3.1 CPython/3.12.12 Linux/6.11.0-1018-azure

File hashes

Hashes for autodocgenerator-0.8.9.7-py3-none-any.whl:

  • SHA256: 2b0c05ee4e06267d755a2dbfcf73f8f0ce7a5f8cc2296ffe256a0da8c04fc16d
  • MD5: 53a9677e872327272f336b87eac3d5af
  • BLAKE2b-256: 4fe90a4115d57765ae4e0016f3abe2dde9edfa83801b9efec8ce75c8d44f6f90

