
Project description


devops-container-image-code-generator

Utilizes source code repository files, such as dependency manifests, to generate container image code like Dockerfile and entrypoint shell script using LangChain GenAI.

Approach

  • Developers write source code, unit test code, dependency manifests (such as pom.xml, package.json, or requirements.txt), and static assets on their machines and check them in to the source code repository.
  • devops-container-image-code-generator uses the devops-code-generator package to check out the source code repository and identify the language, dependency manifest, and dependency management tool from the checked-in dependency manifest.
  • It then uses a LangChain GenAI middleware chain to identify the middleware from the dependency manifest.
  • Finally, a routing function routes to the LangChain GenAI subchain corresponding to the identified middleware, which generates container image code such as the Dockerfile and entrypoint shell script for the source code repository.
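
The identify-and-route steps above can be sketched in plain Python. This is an illustrative stand-in only, not the package's actual implementation: in the real chain the identification and generation steps are LangChain GenAI calls, and every name below is hypothetical.

```python
# Hypothetical sketch of the middleware-routing idea: identify the
# middleware from the dependency manifest text, then dispatch to the
# matching generator ("subchain"). All names are illustrative.

def identify_middleware(manifest_text: str) -> str:
    # Toy heuristic standing in for the GenAI middleware chain.
    if "spring-boot" in manifest_text:
        return "spring_boot"
    return "unknown"

def generate_spring_boot_image_code(repo: str) -> dict:
    # Stand-in for the spring_boot subchain's generated artifacts.
    return {
        "Dockerfile": f"FROM eclipse-temurin:17-jre\nCOPY {repo}.jar /app.jar\n",
        "entrypoint.sh": "#!/bin/sh\nexec java -jar /app.jar\n",
    }

# Routing table: one subchain per supported middleware.
SUBCHAINS = {"spring_boot": generate_spring_boot_image_code}

def route(manifest_text: str, repo: str) -> dict:
    middleware = identify_middleware(manifest_text)
    subchain = SUBCHAINS.get(middleware)
    if subchain is None:
        raise ValueError(f"No subchain for middleware: {middleware}")
    return subchain(repo)

result = route("<artifactId>spring-boot-starter-web</artifactId>", "demo-app")
print(sorted(result))
```

The routing table makes it clear why adding support for a new middleware mostly means registering one more subchain.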

This approach can also be extended to generate other DevOps code, such as pipeline code, infrastructure code, database code, deployment code, and container deployment code.

Constraints

Currently, it only works within the following constraints:

  • language : java
  • dependency management tool : apache_maven
  • middleware : spring_boot_version_2.3.0_and_above

Future Work

  • Add templates for other languages, dependency management tools, and middleware.
  • Use other files in the source code repository, such as README.md, to update the generated container image code.
  • Use low-level design documents and images to update the generated container image code.

Environment Setup

It uses the OpenAI gpt-4o model. Set the OPENAI_API_KEY environment variable to access the OpenAI models. The system Git installation must have access to the input Git source code repository.
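
For example, in a POSIX shell (the key value below is a placeholder; substitute your real key):

```shell
# Export the key for the current shell session.
export OPENAI_API_KEY="<your-openai-api-key>"

# Confirm the variable is set before starting the server.
test -n "$OPENAI_API_KEY" && echo "OPENAI_API_KEY is set"
```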

Usage

To use this package, you should first have the LangChain CLI installed:

pip install -U langchain-cli

Then spin up a LangServe instance directly by running:

langchain serve

This will start the FastAPI app with a server running locally at http://127.0.0.1:8000.

We can see the OpenAPI specification at http://127.0.0.1:8000/docs and access the playground at http://127.0.0.1:8000/playground.

We can access the API from code with:

from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://127.0.0.1:8000")
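
Under the hood, LangServe also exposes the chain over plain HTTP: a POST to /invoke with a JSON body of the form {"input": ...} is equivalent to RemoteRunnable.invoke. The input field name below ("git_url") is an assumption for illustration, not this package's documented input schema.

```python
import json

# Build the JSON body that a POST to http://127.0.0.1:8000/invoke would
# carry. The "git_url" input field is a hypothetical example, not the
# package's documented schema.
payload = json.dumps({"input": {"git_url": "https://github.com/example/repo.git"}})

# e.g. send it with any HTTP client while the LangServe instance is running:
# urllib.request.urlopen("http://127.0.0.1:8000/invoke", data=payload.encode())
```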

OpenTelemetry

Workaround for OpenTelemetry auto-instrumentation to work with uvicorn:

https://github.com/open-telemetry/opentelemetry-python-contrib/issues/385#issuecomment-808794045

Add the following import as the first line after the docstring of the subprocess_started function in the file site-packages/uvicorn/_subprocess.py:

from opentelemetry.instrumentation.auto_instrumentation import sitecustomize

For example:

def subprocess_started(
    config: Config,
    target: Callable[..., None],
    sockets: List[socket],
    stdin_fileno: Optional[int],
) -> None:
    """
    Called when the child process starts.

    * config - The Uvicorn configuration instance.
    * target - A callable that accepts a list of sockets. In practice this will
               be the `Server.run()` method.
    * sockets - A list of sockets to pass to the server. Sockets are bound once
                by the parent process, and then passed to the child processes.
    * stdin_fileno - The file number of sys.stdin, so that it can be reattached
                     to the child process.
    """
    from opentelemetry.instrumentation.auto_instrumentation import sitecustomize
    # Re-open stdin.
    if stdin_fileno is not None:
        sys.stdin = os.fdopen(stdin_fileno)

    # Logging needs to be setup again for each child.
    config.configure_logging()

    # Now we can call into `Server.run(sockets=sockets)`
    target(sockets=sockets)

Run the following command to start the server with OpenTelemetry auto-instrumentation:

opentelemetry-instrument langchain serve

