
Collaboratively build an entire codebase for any project with the help of an AI

Project description

GPT Synthesizer

Collaboratively implement an entire software project with the help of an AI.

GPT-Synthesizer walks you through the problem statement and explores the design space with you through a carefully moderated interview process. If you have no idea where to start and how to describe your software project, GPT Synthesizer can be your best friend.

What makes GPT Synthesizer unique?

The design philosophy of GPT Synthesizer is rooted in the core, and rather contrarian, belief that a single prompt is not enough to build a complete codebase for complex software. This is mainly because, even with powerful LLMs, many crucial details of the design specification cannot be effectively captured in a single prompt. Attempting to cram every detail into a single prompt, if not impossible, would degrade the efficiency of the LLM engine. Powered by LangChain, GPT Synthesizer instead captures the design specification step by step, through an AI-directed dialogue that explores the design space with the user.

GPT Synthesizer interprets the initial prompt as a high-level description of a programming task. Then, through a process we call “prompt synthesis”, it compiles the initial prompt into multiple program components that the user might need for the implementation. This step essentially turns 'unknown unknowns' into 'known unknowns', which can be very helpful for novice programmers who want to understand the overall flow of their desired implementation. Next, GPT Synthesizer and the user collaboratively work out the design details that will be used in the implementation of each component.
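
As a rough illustration of what prompt synthesis might look like under the hood, the snippet below sketches a LangChain chain that maps the initial task prompt to a list of candidate components. The prompt wording, function names, and chain structure are illustrative assumptions, not GPT Synthesizer's actual implementation.

```python
# Hypothetical sketch of a "prompt synthesis" step built on LangChain
# (illustrative only; not GPT Synthesizer's actual code).
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0)

synthesis_prompt = ChatPromptTemplate.from_template(
    "You are helping design a software project.\n"
    "Task: {task}\n"
    "Language: {language}\n"
    "List the program components needed to implement this task, "
    "one per line, in the form 'component name: short description'."
)

synthesis_chain = LLMChain(llm=llm, prompt=synthesis_prompt)

# Turn the high-level task into a list of candidate components
# ("unknown unknowns" into "known unknowns").
components = synthesis_chain.run(
    task="I want to implement an edge detection method from live camera feed.",
    language="python",
)
print(components)
```

Each suggested component would then seed its own round of follow-up questions, so the design details are gathered component by component rather than in one monolithic prompt.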

Different users might prefer different levels of interactivity depending on their skill set, level of expertise, and the complexity of the task at hand. GPT Synthesizer distinguishes itself from other LLM-based code generation tools by finding the right balance between user participation and AI autonomy.

Installation

  • pip install gpt-synthesizer

  • For development:

    • git clone https://github.com/RoboCoachTechnologies/GPT-Synthesizer.git
    • cd gpt-synthesizer
    • pip install -e .

Usage

GPT Synthesizer is easy to use. It provides you with an intuitive AI assistant in your command-line interface. See our demo for an example of using GPT Synthesizer.

GPT Synthesizer uses OpenAI's gpt-3.5-turbo-16k as the default LLM.

  • Set up your OpenAI API key: export OPENAI_API_KEY=[your api key]

Run:

  • Start GPT Synthesizer by typing gpt-synthesizer in the terminal.
  • Briefly describe your programming task and the implementation language:
    • Programming task: *I want to implement an edge detection method from live camera feed.*
    • Programming language: *python*
  • GPT Synthesizer will analyze your task and suggest a set of components needed for the implementation.
    • You can add more components by listing them in quotation marks: Components to be added: *Add 'component 1: what component 1 does', 'component 2: what component 2 does', and 'component 3: what component 3 does' to the list of components.*
    • You can remove any redundant component in a similar manner: Components to be removed: *Remove 'component 1' and 'component 2' from the list of components.*
  • After you are done modifying the component list, GPT Synthesizer will start asking questions to gather all the details needed to implement each component.
  • Once GPT Synthesizer has learned your specific requirements for each component, it will write the code for you!
  • You can find the implementation in the workspace directory.

Make your own GPT Synthesizer!

GPT Synthesizer’s code is easy to read and understand, so anyone can customize it for a specific application. The codebase is tightly integrated with LangChain, which makes it straightforward to plug in tools such as internet search and vector databases.

GPT Synthesizer's hierarchical strategy for building the codebase makes OpenAI’s GPT-3.5 a viable option for the backend LLM. We believe GPT-3.5 provides a good trade-off between cost and contextual understanding, while GPT-4 might be too expensive for many use cases. Nevertheless, switching to another LLM is easy thanks to the LangChain integration.
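
As a minimal sketch of what such a swap might look like, the snippet below constructs the backend chat model with LangChain; where exactly GPT Synthesizer instantiates its LLM is not shown here, so the variable names and settings are assumptions for illustration.

```python
# Minimal sketch of switching the backend LLM through LangChain
# (variable names and settings are illustrative assumptions).
from langchain.chat_models import ChatOpenAI

# Default described above: GPT-3.5 with a 16k context window.
llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0)

# Switching to GPT-4, or any other chat model LangChain supports,
# is essentially a one-line change:
llm = ChatOpenAI(model="gpt-4", temperature=0)
```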

Roadmap

GPT Synthesizer will be actively maintained as an open-source project. We welcome everyone to join our community and contribute to building systems for human-in-the-loop code generation!

Here is a (non-exhaustive) list of our future plans for GPT Synthesizer:

  • An additional code-generation step that ensures a main/entry point is created.
  • Creating setup files based on the programming language, e.g. CMakeLists.txt for C++ and setup.py + requirements.txt for Python.
  • Adding benchmarks and testing scripts.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gpt-synthesizer-0.0.3.tar.gz (14.0 kB, Source)

Built Distribution

gpt_synthesizer-0.0.3-py3-none-any.whl (14.1 kB, Python 3)

File details

Details for the file gpt-synthesizer-0.0.3.tar.gz.

File metadata

  • Download URL: gpt-synthesizer-0.0.3.tar.gz
  • Upload date:
  • Size: 14.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.5

File hashes

Hashes for gpt-synthesizer-0.0.3.tar.gz:

  • SHA256: 89c2fb52964be93aa3efc823dd5f3031120adf37b85ed711a5125a426fe7f518
  • MD5: 9327272d1bf8ebbcfd3c0cf8c745c901
  • BLAKE2b-256: 0e0fe6933f31507fb9009366d48cbb67a24409560fe3d74d3be2ebc4a31ea280


File details

Details for the file gpt_synthesizer-0.0.3-py3-none-any.whl.

File hashes

Hashes for gpt_synthesizer-0.0.3-py3-none-any.whl:

  • SHA256: 97059b8f576c46fad7ddddde1754c206ec3d3f9df750817ceb6db971c4a24654
  • MD5: 925445da982765c5091a850334bdf72b
  • BLAKE2b-256: 9aab35fc59c43f8034258c8933a7a84b8d48d7f98c772d85f9354433a444689e

