Automated README file generator, powered by AI.
README-AI, Your AI-Powered Documentation Assistant
Designed for simplicity, customization, and developer productivity.
🔗 Quick Links
[!IMPORTANT] ✨ See the Official Documentation for more details.
⚡️ Introduction
Objective
README-AI is a developer tool for automatically generating README markdown files using a robust repository processor engine and generative AI. Simply provide a repository URL or local path to your codebase, and a well-structured and detailed README file will be generated for you.
Motivation
This project aims to streamline the documentation process for developers, ensuring projects are properly documented and easy to understand. Whether you're working on an open-source project, enterprise software, or a personal project, README-AI is here to help you create high-quality documentation quickly and efficiently.
👾 Demo
Running from the command line:
Running directly in your browser:
☄️ Features
- Automated Documentation: Synchronizes data from third-party sources and generates documentation automatically.
- Customizable Output: Dozens of options for styling/formatting, badges, header designs, and more.
- Language Agnostic: Works across a wide range of programming languages and project types.
- Multi-LLM Support: Compatible with OpenAI, Ollama, Anthropic, and Google Gemini.
- Offline Mode: Generate a boilerplate README without calling an external API.
- Markdown Best Practices: Leverage best practices in Markdown formatting for clean, professional-looking docs.
A few combinations of README styles and configurations:

- --image custom --badge-color FF4B4B --badge-style flat-square --header-style classic
- --image cloud --header-style compact --toc-style fold
- --align left --badge-style flat-square --image cloud
- --align left --badge-style flat --image gradient
- --badge-style flat --image custom
- --badge-style skills-light --image grey
- --badge-style flat-square
- --badge-style flat --image black
- --image custom --badge-color 00ffe9 --badge-style flat-square --header-style classic
- --image llm --badge-style plastic --header-style classic
- --image custom --badge-color BA0098 --badge-style flat-square --header-style modern --toc-style fold
See the Configuration section for a complete list of CLI options.
📍 Overview
- Overview: High-level introduction to the project, focused on the value proposition and use cases rather than technical aspects.
✨ Features
- Features Table: Generated markdown table highlighting the key technical features and components of the codebase. This table is generated using a structured prompt template.
📃 Codebase Documentation
- Directory Tree: The project's directory structure is generated using pure Python and embedded in the README. See readmeai.generators.tree for more details.
- File Summaries: Summarizes key modules of the project, which are also used as context for downstream prompts.
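To illustrate the idea behind the directory-tree feature, here is a minimal sketch of rendering a tree in pure Python. This is an illustrative approximation, not the actual readmeai.generators.tree module; the function name and depth handling are assumptions.

```python
from pathlib import Path

def build_tree(root: Path, prefix: str = "", depth: int = 2) -> str:
    """Render a directory tree in the style of the Unix `tree` command."""
    if depth == 0:
        return ""
    lines = []
    entries = sorted(root.iterdir(), key=lambda p: p.name)
    for i, entry in enumerate(entries):
        last = i == len(entries) - 1
        connector = "└── " if last else "├── "
        lines.append(f"{prefix}{connector}{entry.name}")
        if entry.is_dir():
            # Recurse with an indented prefix, stopping at the depth limit.
            subtree = build_tree(entry, prefix + ("    " if last else "│   "), depth - 1)
            if subtree:
                lines.append(subtree)
    return "\n".join(lines)
```

A depth limit like the `--tree-depth` CLI option keeps large repositories from producing an unwieldy README section.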
🚀 Quickstart Instructions
- Getting Started Guides: Prerequisites and system requirements are extracted from the codebase during preprocessing. The parsers currently handle most of this logic.
- Installation Guide
🔰 Contributing Guidelines
- Contributing Guide: Dropdown section that outlines the general process for contributing to your project, with links to your contributing guidelines, issues page, and more resources. A graph of contributors is also included.
- Additional Sections
🛸 Getting Started
System Requirements:
- Python 3.9+
- Package Manager/Container: pip, pipx, or docker
- LLM API Service: OpenAI, Ollama, Anthropic, Google Gemini, or Offline Mode
Repository URL or Path:
Make sure to have a repository URL or local directory path ready for the CLI.
LLM API Service:
- OpenAI: Recommended, requires an account setup and API key.
- Ollama: Free and open-source, potentially slower and more resource-intensive.
- Anthropic: Requires an Anthropic account and API key.
- Google Gemini: Requires a Google Cloud account and API key.
- Offline Mode: Generates a boilerplate README without making API calls.
🔩 Installation
Install readme-ai using your preferred package manager, container, or directly from the source.
Using pip
❯ pip install readmeai
Using pipx
❯ pipx install readmeai
[!TIP]
Use pipx to install and run Python command-line applications without causing dependency conflicts with other packages!
Using docker
Pull the latest Docker image from the Docker Hub repository.
❯ docker pull zeroxeli/readme-ai:latest
From source
Build readme-ai
Using bash
❯ bash setup/setup.sh
Using poetry
- Clone the repository:
❯ git clone https://github.com/eli64s/readme-ai
- Navigate to the readme-ai directory:
❯ cd readme-ai
- Install dependencies using poetry:
❯ poetry install
- Enter the poetry shell environment:
❯ poetry shell
Installing Optional Dependencies
To use the Anthropic and Google Gemini clients, install the optional dependencies.
Anthropic:
❯ pip install readmeai[anthropic]
Google Gemini:
❯ pip install readmeai[gemini]
⚙️ Usage
Environment Variables
OpenAI
Generate an OpenAI API key and set it as the OPENAI_API_KEY environment variable.
# Using Linux or macOS
❯ export OPENAI_API_KEY=<your_api_key>
# Using Windows
❯ set OPENAI_API_KEY=<your_api_key>
Ollama
Pull your model of choice from the Ollama repository:
❯ ollama pull mistral:latest
Start the Ollama server:
❯ export OLLAMA_HOST=127.0.0.1 && ollama serve
See all available models from Ollama here.
Anthropic
Generate an Anthropic API key and set the following environment variable:
❯ export ANTHROPIC_API_KEY=<your_api_key>
Google Gemini
Generate a Google API key and set the following environment variable:
❯ export GOOGLE_API_KEY=<your_api_key>
Running the CLI
Using pip
With OpenAI:
❯ readmeai --api openai --repository https://github.com/eli64s/readme-ai
[!IMPORTANT] By default, the gpt-3.5-turbo model is used. Higher costs may be incurred when using more advanced models.
With Ollama:
❯ readmeai --api ollama --model llama3 --repository https://github.com/eli64s/readme-ai
With Anthropic:
❯ readmeai --api anthropic -m claude-3-5-sonnet-20240620 -r https://github.com/eli64s/readme-ai
With Gemini:
❯ readmeai --api gemini -m gemini-1.5-flash -r https://github.com/eli64s/readme-ai
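Since offline is the default --api value, no API key is needed for a boilerplate README. A minimal invocation might look like the following (the local path is a placeholder):

```shell
# Generate a boilerplate README from a local directory, without any LLM calls
❯ readmeai --api offline --repository ./my-project
```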
Adding more customization options:
❯ readmeai --repository https://github.com/eli64s/readme-ai \
--output readmeai.md \
--api openai \
--model gpt-4 \
--badge-color A931EC \
--badge-style flat-square \
--header-style compact \
--toc-style fold \
--temperature 0.9 \
--tree-depth 2 \
--image LLM \
--emojis
Using docker
Running the Docker container with the OpenAI API:
❯ docker run -it \
-e OPENAI_API_KEY=$OPENAI_API_KEY \
-v "$(pwd)":/app zeroxeli/readme-ai:latest \
-r https://github.com/eli64s/readme-ai
Using streamlit
Try readme-ai directly in your browser, no installation required. See the readme-ai-streamlit repository for more details.
From source
Using readme-ai
Using bash
❯ conda activate readmeai
❯ python3 -m readmeai.cli.main -r https://github.com/eli64s/readme-ai
Using poetry
❯ poetry shell
❯ poetry run python3 -m readmeai.cli.main -r https://github.com/eli64s/readme-ai
🧪 Testing
The pytest framework and nox automation tool are used for testing the application.
Using pytest
❯ make test
Using nox
❯ make test-nox
[!TIP] Use nox to test the application against multiple Python environments and dependencies!
🔡 Configuration
Customize your README generation using these CLI options:
| Option | Description | Default |
|---|---|---|
| --align | Text alignment in the header | center |
| --api | LLM API service provider | offline |
| --badge-color | Badge color name or hex code | 0080ff |
| --badge-style | Badge icon style type | flat |
| --base-url | Base URL for the repository | v1/chat/completions |
| --context-window | Maximum context window of the LLM API | 3900 |
| --emojis | Adds emojis to the README header sections | False |
| --header-style | Header template style | classic |
| --image | Project logo image | blue |
| --model | Specific LLM model to use | gpt-3.5-turbo |
| --output | Output filename | readme-ai.md |
| --rate-limit | Maximum API requests per minute | 10 |
| --repository | Repository URL or local directory path | None |
| --temperature | Creativity level for content generation | 0.1 |
| --toc-style | Table of contents template style | bullet |
| --top-p | Probability threshold for top-p (nucleus) sampling | 0.9 |
| --tree-depth | Maximum depth of the directory tree structure | 2 |
[!TIP] For a full list of options, run readmeai --help in your terminal.
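To make the --top-p option above more concrete, here is a hedged sketch of nucleus (top-p) sampling. This is generic LLM sampling logic used for illustration only, not readme-ai's internal code:

```python
import random

def top_p_sample(probs: dict[str, float], top_p: float = 0.9) -> str:
    """Sample a token from the smallest set whose cumulative probability reaches top_p."""
    # Sort tokens by probability, highest first.
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    cumulative, nucleus = 0.0, []
    for token, p in items:
        nucleus.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break  # low-probability tail is excluded from sampling
    # Renormalize over the nucleus and draw a token.
    total = sum(p for _, p in nucleus)
    r = random.random() * total
    for token, p in nucleus:
        r -= p
        if r <= 0:
            return token
    return nucleus[-1][0]
```

A lower --top-p restricts generation to the most likely tokens, while --temperature (applied before sampling in a real model) controls how sharply the distribution is peaked.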
🎨 Customization
To see the full list of customization options, check out the Configuration section in the official documentation. This section provides a detailed overview of all available CLI options and how to use them, including badge styles, header templates, and more.
🤖 Examples
Language/Framework | Output File | Input Repository | Description |
---|---|---|---|
Python | readme-python.md | readme-ai | Core readme-ai project |
TypeScript & React | readme-typescript.md | ChatGPT App | React Native ChatGPT app |
PostgreSQL & DuckDB | readme-postgres.md | Buenavista | Postgres proxy server |
Kotlin & Android | readme-kotlin.md | file.io Client | Android file sharing app |
Streamlit | readme-streamlit.md | readme-ai-streamlit | Streamlit UI for readme-ai app |
Rust & C | readme-rust-c.md | CallMon | System call monitoring tool |
Docker & Go | readme-go.md | docker-gs-ping | Dockerized Go app |
Java | readme-java.md | Minimal-Todo | Minimalist todo Java app |
FastAPI & Redis | readme-fastapi-redis.md | async-ml-inference | Async ML inference service |
Jupyter Notebook | readme-mlops.md | mlops-course | MLOps course repository |
Apache Flink | readme-local.md | Local Directory | Example using a local directory |
See additional README files generated by readme-ai here.
🏎💨 Project Roadmap
- Release readmeai 1.0.0 with enhanced documentation management features.
- Develop a VS Code extension to generate README files directly in the editor.
- Develop GitHub Actions to automate documentation updates.
- Add badge packs to provide additional badge styles and options: code coverage, CI/CD status, project version, and more.
🔰 Contributing
Contributions are welcome and encouraged! If interested, please begin by reviewing the resources below:
- 💡 Contributing Guide: Learn about our contribution process, coding standards, and how to submit your ideas.
- 💬 Start a Discussion: Have questions or suggestions? Join our community discussions to share your thoughts and engage with others.
- 🐛 Report an Issue: Found a bug or have a feature request? Let us know by opening an issue so we can address it promptly.
📒 Changelog
🎗 License
🙌 Acknowledgments