
TinyTextGrad

Automatic prompt optimization by iteratively refining prompts using feedback


TinyTextGrad is an educational package inspired by the paper TextGrad: Automatic "Differentiation" via Text. It is designed to provide a hands-on understanding of how concepts from PyTorch and deep learning can be applied to generative AI, with a particular focus on automatic prompt optimization (APO).

This lightweight implementation applies backpropagation-style methods tailored to text-based feedback, enabling effective prompt optimization for both generative and discriminative tasks. TinyTextGrad aims to demystify the inner workings of language model optimization and prompt engineering.

Key Features

  1. Automatic Prompt Optimization (APO): Enhance reasoning capabilities by optimizing the prompts provided to large language models (LLMs), improving accuracy in complex analytical tasks and decision-making processes.

  2. PyTorch-like Syntax: Utilize familiar abstractions and syntax similar to PyTorch, making it easier for those with prior deep learning experience to adapt quickly.

  3. User-Friendly Design: Accessible to end-users who may not have extensive background knowledge in machine learning or optimization algorithms.

  4. Educational Focus: Built for learning and experimentation, allowing users to gain first-hand experience with concepts like reverse-mode automatic differentiation, backpropagation, and gradient descent in the context of text generation (see the sketch after this list).
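
To make the loop concrete, here is a minimal, self-contained sketch of textual "gradient descent" written directly against LiteLLM (which TinyTextGrad builds on). It illustrates the concept only; it is not TinyTextGrad's internal implementation, and the model names and prompts are illustrative:

from litellm import completion

def ask(model, prompt):
    """One chat completion call; returns the model's text reply."""
    response = completion(model=model, messages=[{"role": "user", "content": prompt}])
    return response.choices[0].message.content

prompt = "Classify the emotion of a sentence in one word."
for step in range(3):
    # "Forward pass": run the current prompt on an example input.
    output = ask("gpt-3.5-turbo", f"{prompt}\n\nSentence: I won the lottery!")
    # "Backward pass": a stronger model critiques the result (the textual gradient).
    feedback = ask(
        "gpt-4",
        f"Prompt: {prompt}\nOutput: {output}\n"
        "Briefly critique how the prompt shaped this output.",
    )
    # "Optimizer step": rewrite the prompt using the feedback.
    prompt = ask(
        "gpt-4",
        "Improve this prompt using the critique. Return only the new prompt.\n"
        f"Prompt: {prompt}\nCritique: {feedback}",
    )
print(prompt)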

Installation

You can install TinyTextGrad using pip:

pip install tinytextgrad

Usage

Here's a simple example of how to use TinyTextGrad for automatic prompt optimization:

from textwrap import dedent

from tinytextgrad import optimize_prompt

initial_prompt = dedent("""
Analyze the given sentence and determine its primary emotion.
Respond with a single word: happy, sad, angry, or neutral.
""").strip()

inputs = [
    "I can't believe I won the lottery!",
    "The rain ruined our picnic plans.",
    "This traffic is making me late for work.",
    "The sky is cloudy today.",
    "She surprised me with tickets to my favorite band!",
    "I dropped my phone and cracked the screen.",
    "The customer service was incredibly rude.",
    "I'm going to the grocery store later.",
    "My best friend is moving away next month.",
    "The movie was exactly what I expected it to be.",
]

result = optimize_prompt(
    initial_prompt,
    "gpt-3.5-turbo",  # generation model: runs the prompt on each input
    "gpt-4",          # evaluation model: critiques and rewrites the prompt
    inputs,
    num_iterations=3,
)

print("\n\nFinal optimized EMOTION_ANALYSIS_PROMPT:")
print(result.variable.value)

EMOTION_ANALYSIS_PROMPT = result

This example demonstrates how to use TinyTextGrad to optimize a prompt for emotion analysis. The optimize_prompt function takes an initial prompt, the models to use for generation and evaluation, and a list of input sentences, then runs the optimization for the specified number of iterations.

The resulting optimized prompt can then be used for more accurate emotion analysis tasks.
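
For instance, the optimized prompt can serve as the system message in downstream calls. The sketch below assumes result.variable.value holds the final prompt text, as in the example above, and uses LiteLLM (a dependency of the package) for the call:

from litellm import completion

# Classify a new sentence with the optimized prompt as the system message.
# Assumes result.variable.value holds the final prompt text, as printed above.
response = completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": result.variable.value},
        {"role": "user", "content": "My flight got cancelled at the last minute."},
    ],
)
print(response.choices[0].message.content)  # e.g. "angry"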

Requirements

  • Python 3.7+
  • Dependencies listed in pyproject.toml

You'll need to set up an OpenAI API key to use the default models.

export OPENAI_API_KEY=your_api_key_here

See the LiteLLM docs for more information on setting up API keys.
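
If you prefer to configure the key from Python (for example, in a notebook), setting the environment variable before the first model call is equivalent:

import os

# Equivalent to the shell export above; must run before any model call.
os.environ["OPENAI_API_KEY"] = "your_api_key_here"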

TODO

We're constantly working to improve TinyTextGrad. Here are some features and enhancements we're planning to implement:

  • Split training data into training and validation sets

    • Hold out part of the training data as a validation set, allowing for more robust prompt optimization and evaluation (see the sketch after this list).
  • Automatic training set generation

    • Develop a system to automatically generate diverse and relevant training sets for prompt optimization.
  • Momentum and gradient context

    • Implement momentum-based optimization techniques to potentially improve convergence speed and stability.
    • Introduce gradient context to better handle long-term dependencies in prompt optimization.
  • Computation graph

    • Maintain a computation graph for multi-step operations, allowing for more complex optimizations.
  • New loss functions

    • Develop and integrate additional loss functions tailored for specific tasks.
    • Allow users to easily define and use custom loss functions for their unique use cases.
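
As a preview of the planned train/validation split, a plain-Python sketch (independent of whatever API TinyTextGrad eventually adopts) might look like this:

import random

# Hypothetical sketch: hold out 20% of the example inputs for validation,
# optimize the prompt on the rest, then measure on the held-out sentences.
random.seed(42)
shuffled = random.sample(inputs, k=len(inputs))
split = int(0.8 * len(shuffled))
train_inputs, val_inputs = shuffled[:split], shuffled[split:]

result = optimize_prompt(initial_prompt, "gpt-3.5-turbo", "gpt-4",
                         train_inputs, num_iterations=3)
# ... evaluate the optimized prompt on val_inputs ...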

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contact

Donald Thompson - @dt_public - witt3rd@witt3rd.com

Project Link: https://github.com/witt3rd/tinytextgrad
