autodeploydocker
autodeploydocker is a tiny Python package that makes it easy to automate the deployment of Docker and Docker‑Compose applications to remote servers.
It drives a language model (LLM) to generate the exact deployment steps, ensuring zero‑downtime deployments without the need for manual configuration changes.
Installation
pip install autodeploydocker
Quick start
from autodeploydocker import autodeploydocker
# Minimal usage – the package will create a ChatLLM7 instance for you.
response = autodeploydocker(
    user_input="Deploy the latest version of my web‑app using Docker Compose on server X."
)
print(response)  # -> list of strings extracted from the LLM response
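The returned value is a plain list of strings. As a purely illustrative sketch, assuming the steps happen to be shell commands (always review anything an LLM generated before executing it), you could act on them like this; `run_steps` is a hypothetical helper, not part of the package:

```python
import shlex
import subprocess

def run_steps(steps, dry_run=True):
    """Act on deployment steps such as autodeploydocker returns.

    Defaults to a dry run that only reports what would be executed,
    since LLM-generated commands should be reviewed first.
    """
    executed = []
    for step in steps:
        if dry_run:
            print(f"[dry-run] {step}")
        else:
            subprocess.run(shlex.split(step), check=True)
        executed.append(step)
    return executed

# Illustrative steps, not actual package output:
steps = ["docker compose pull", "docker compose up -d"]
run_steps(steps)  # dry run: prints the commands without executing them
```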
Function signature
from typing import List, Optional

from langchain_core.language_models import BaseChatModel

def autodeploydocker(
    user_input: str,
    api_key: Optional[str] = None,
    llm: Optional[BaseChatModel] = None,
) -> List[str]:
    ...
| Parameter | Type | Description |
|---|---|---|
| user_input | str | The natural‑language description of the deployment you want to perform. |
| api_key | Optional[str] | API key for the default ChatLLM7 backend. If omitted, the function reads LLM7_API_KEY from the environment. |
| llm | Optional[BaseChatModel] | A LangChain‑compatible LLM instance. If provided, it overrides the default ChatLLM7. |
How it works
The function builds a system prompt (system_prompt) and a human prompt (human_prompt) and sends them to the selected LLM.
The LLM’s output is then validated against a regular‑expression pattern defined in prompts.pattern.
If the output matches, the extracted data (a List[str]) is returned; otherwise a RuntimeError is raised.
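The validate-and-extract step can be pictured with a minimal sketch. The regex below is invented for illustration; the real pattern lives in the package's prompts module and may differ:

```python
import re

# Hypothetical pattern: the actual one is defined in prompts.pattern.
pattern = re.compile(r"STEP:\s*(.+)")

def extract_steps(llm_output: str) -> list[str]:
    """Validate LLM output against the pattern and extract the steps."""
    steps = pattern.findall(llm_output)
    if not steps:
        # Mirrors the package's behaviour of raising on a non-matching response.
        raise RuntimeError("LLM output did not match the expected pattern")
    return steps

output = "STEP: docker compose pull\nSTEP: docker compose up -d"
extract_steps(output)  # -> ['docker compose pull', 'docker compose up -d']
```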
Using a custom LLM
You can supply any LangChain LLM that implements the BaseChatModel interface.
OpenAI
from langchain_openai import ChatOpenAI
from autodeploydocker import autodeploydocker
llm = ChatOpenAI(model="gpt-4o") # configure as you need
response = autodeploydocker(
    user_input="Deploy the staging environment with Docker Compose.",
    llm=llm,
)
Anthropic
from langchain_anthropic import ChatAnthropic
from autodeploydocker import autodeploydocker
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
response = autodeploydocker(
    user_input="Roll out a new version of the API service.",
    llm=llm,
)
Google Generative AI
from langchain_google_genai import ChatGoogleGenerativeAI
from autodeploydocker import autodeploydocker
llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")
response = autodeploydocker(
    user_input="Update the production stack using Docker Compose.",
    llm=llm,
)
Default LLM – ChatLLM7
If you do not pass an llm instance, autodeploydocker falls back to ChatLLM7 from the langchain_llm7 package:
pip install langchain_llm7
ChatLLM7 works out‑of‑the‑box with a free tier that is sufficient for most use cases.
To use a personal key, set the environment variable LLM7_API_KEY or pass the key directly:
response = autodeploydocker(
    user_input="Deploy …",
    api_key="my-llm7-key",
)
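Alternatively, supply the key through the environment, which the function reads when no api_key is passed (the variable name LLM7_API_KEY comes from the description above; the key value here is a placeholder):

```shell
export LLM7_API_KEY="my-llm7-key"
python deploy_script.py  # hypothetical script calling autodeploydocker
```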
You can obtain a free API key by registering at https://token.llm7.io/.
Contributing & Support
If you encounter any issues, have a feature request, or want to contribute, please open an issue on GitHub:
https://github....
Author
Eugene Evstafev – chigwell
✉️ Email: hi@euegne.plus
License
This project is licensed under the MIT License.
Project details
File details
Details for the file autodeploydocker-2025.12.21144536.tar.gz.
File metadata
- Download URL: autodeploydocker-2025.12.21144536.tar.gz
- Upload date:
- Size: 6.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | bf2e17275253a331183b1df7dd737adff081ff80d8f5429043e63ac2ba9070f2 |
| MD5 | 3fb82eb877c79a66e22a23923059014a |
| BLAKE2b-256 | 53f33022126c1bdd8e1c07817c6dd90341ed9e90a04913de2c894ab076a2750b |
File details
Details for the file autodeploydocker-2025.12.21144536-py3-none-any.whl.
File metadata
- Download URL: autodeploydocker-2025.12.21144536-py3-none-any.whl
- Upload date:
- Size: 6.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3d0ccccf2b47a1f74915fd1d9ce9ff12e1b5bccb858c154e9a7b3ddfc541b64c |
| MD5 | 2a95c34f4f8a147b6dd9041f4fe573ef |
| BLAKE2b-256 | 31a1a03f658d668ba58554fb560918ad2357fa310d82645a911fc95e1257ec26 |