
Generate your code documentation with doc-comments.ai


Focus on writing your code and let LLMs write the documentation for you.
It takes just a few keystrokes in your terminal, using either the OpenAI API or a 100% local LLM without any data leaks.

Built with LangChain, llama.cpp and Treesitter.


✨ Features

  • 📝 Create documentation comment blocks for all methods in a file (see the example below)
    • e.g. Javadoc, JSDoc, docstrings, Rustdoc, etc.
  • ✍️ Create inline documentation comments in method bodies
  • 🌳 Treesitter integration
  • 💻 Local LLM support
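
For example, for an undocumented Python function the generated output could look roughly like this (illustrative only; the exact comments depend on the selected model and language):

def parse_version(tag: str) -> tuple[int, int, int]:
    """
    Parse a semantic version string into its numeric components.

    :param tag: Version string in the form "MAJOR.MINOR.PATCH", e.g. "1.4.2".
    :return: Tuple of (major, minor, patch) as integers.
    :raises ValueError: If the string does not consist of three dot-separated integer parts.
    """
    major, minor, patch = tag.split(".")  # split into the three version parts (--inline style comment)
    return int(major), int(minor), int(patch)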

[!NOTE]
Documentation will only be added to files without unstaged changes, so nothing is overwritten.
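
Internally this amounts to a simple git check; a minimal sketch of such a guard (illustrative, not necessarily the tool's actual implementation):

import subprocess

def has_unstaged_changes(file_path: str) -> bool:
    """Return True if the file has unstaged modifications in git."""
    # `git diff --name-only -- <path>` lists files whose working tree
    # differs from the index; any output means unstaged changes.
    result = subprocess.run(
        ["git", "diff", "--name-only", "--", file_path],
        capture_output=True, text=True, check=True,
    )
    return bool(result.stdout.strip())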

🚀 Usage

Create documentation for every method in a file using the GPT-3.5 Turbo model:

aicomments <RELATIVE_FILE_PATH>

Additionally create inline documentation comments in the method bodies:

aicomments <RELATIVE_FILE_PATH> --inline

Use the GPT-4 model (default is GPT-3.5):

aicomments <RELATIVE_FILE_PATH> --gpt4

Guided mode: confirm documentation generation for each method individually:

aicomments <RELATIVE_FILE_PATH> --guided

Use a local LLM on your machine:

aicomments <RELATIVE_FILE_PATH> --local_model <MODEL_PATH>

[!NOTE]
For how to download models from Hugging Face for local usage, see Local LLM usage.

[!IMPORTANT]
The results when using a local LLM depend heavily on the selected model. To get results comparable to GPT-3.5/4, you need to select very large models, which require powerful hardware.

⚙️ Supported Languages

  • Python
  • TypeScript
  • JavaScript
  • Java
  • Rust
  • Kotlin
  • Go
  • C++
  • C
  • Scala

📋 Requirements

  • Python >= 3.9

🔧 Installation

1. OpenAI API usage

Create your personal OpenAI API key and add it to your environment as $OPENAI_API_KEY:

export OPENAI_API_KEY=<YOUR_API_KEY>
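
To verify the key is actually visible to the CLI, a quick check from Python (a hypothetical verification snippet, not part of the tool):

import os

# The CLI reads the key from the environment at startup.
key = os.environ.get("OPENAI_API_KEY")
if not key:
    raise SystemExit("OPENAI_API_KEY is not set in this shell session")
print(f"Key found, ending in ...{key[-4:]}")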

Install with pipx:

pipx install doc-comments-ai

Using pipx for the installation is recommended, though pip works as well.

2. Local LLM usage

When using a local LLM, no API key is required. On first usage of --local_model you will be asked for confirmation to install llama-cpp-python with its dependencies. The installation process takes care of a hardware-accelerated build tailored to your hardware and OS. For further details see: installation-with-hardware-acceleration

The most convenient way to download a model from Hugging Face for local usage is the huggingface-cli:

huggingface-cli download TheBloke/CodeLlama-13B-Python-GGUF codellama-13b-python.Q5_K_M.gguf

This will download the codellama-13b-python.Q5_K_M model to ~/.cache/huggingface/. After the download has finished, the absolute path of the .gguf file is printed to the console and can be used as the value for --local_model.
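
Alternatively, the download can be scripted with the huggingface_hub package (a sketch assuming huggingface_hub is installed; hf_hub_download returns the absolute local path):

from huggingface_hub import hf_hub_download

# Downloads into ~/.cache/huggingface/ and returns the absolute file path.
model_path = hf_hub_download(
    repo_id="TheBloke/CodeLlama-13B-Python-GGUF",
    filename="codellama-13b-python.Q5_K_M.gguf",
)
print(model_path)  # pass this as the value for --local_model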

[!IMPORTANT]
Since llama.cpp is used, the model must be in the .gguf format.
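
To sanity-check a downloaded model before pointing the tool at it, you can load it directly with llama-cpp-python (a minimal sketch; the tool itself integrates the model via LangChain, so this only approximates what happens internally):

from llama_cpp import Llama

# Llama() validates the file on load; anything that is not valid GGUF fails here.
llm = Llama(model_path="/absolute/path/to/codellama-13b-python.Q5_K_M.gguf")
output = llm("Write a docstring for: def add(a, b): return a + b", max_tokens=64)
print(output["choices"][0]["text"])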
