PromptWeaver streamlines prompt development and management in Generative AI workflows
Project description
PromptWeaver is a Python library that streamlines prompt development and management in Generative AI workflows. It decouples prompts from Python scripts, enhancing portability, maintainability, and scalability for developers working with multiple LLMs or complex prompting workflows.
This project is currently under active development and may undergo significant changes. We welcome your feedback and contributions!
Features
- Separates prompts from code for better modularity.
- Supports multimodal inputs (text, images, audio, and video).
- Integrates with large language models such as Gemini.
- YAML-based prompt configuration with sample and default values.
- Extensible to support other LLM APIs.
Installation
PromptWeaver can be installed with pip:

```bash
pip install promptweaver
```
Note: PromptWeaver requires Python 3.8 or higher.
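As a quick sanity check after installing, you can confirm the package imports cleanly. This is a minimal sketch; it only assumes the package is importable under the name `promptweaver`:

```python
# Minimal sanity check: the import should succeed without errors.
import promptweaver

print("promptweaver imported successfully")
```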
Usage
Start by creating a `.yml.j2` PromptWeaver template. You can find template examples in our promptweaver gallery.
```yaml
name: Hello World
description: A quickstart prompt showcasing how to answer a simple user message using gemini.

model:
  model_name: gemini-1.5-flash-001
  generation_config:
    temperature: 0.3
    max_output_tokens: 250
  system_instruction: You are an AI model trained to answer questions. Be kind and objective in your answers.
# ---
variables:
  user_message:
    sample: Hi!
# ---
user:
  - text: {{ user_message }}
```
Now you can call one of the supported LLM clients using the promptweaver template.
```python
from promptweaver.core.prompt_template import PromptConfig
from promptweaver.clients.gemini.gemini_client import GeminiClient

# Initialize the Gemini client
gemini_client = GeminiClient(project="project_id", location="project_location")

# Load the prompt configuration
example_prompt = PromptConfig.from_file_with_sample_values("samples/example.yml.j2")

# Generate content
generate_content = gemini_client.generate_content(example_prompt)
print(generate_content.text)
```
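`from_file_with_sample_values` renders the template with the `sample` values declared under `variables`. To supply your own values at runtime, load the template with `PromptConfig.from_file` and pass a dictionary of variables instead, the same pattern used in the function-calling example below. The file path and message here are illustrative:

```python
from promptweaver.core.prompt_template import PromptConfig

# Render the template with explicit values instead of the declared sample values.
prompt_variables = {
    "user_message": "Summarize the main benefits of separating prompts from code.",
}
example_prompt = PromptConfig.from_file("samples/example.yml.j2", prompt_variables)
```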
Function Calling
To add function calling support to your LLM requests, pass your tools to the `.generate_content()` method.

Note: Currently, tools can't be declared in a `.yml.j2` PromptWeaver template.
Example
```python
from promptweaver.core.prompt_template import PromptConfig
from promptweaver.clients.gemini.gemini_client import GeminiClient
from vertexai.generative_models import Tool, grounding

# Initialize the Gemini client
gemini_client = GeminiClient(project="your_project", location="your_location")

# Load the prompt configuration
prompt_variables = {
    "user_message": "What's the weather like tomorrow in San Francisco? Should I bring a raincoat?"
}
example_prompt = PromptConfig.from_file("samples/01-hello-world-text.yml.j2", prompt_variables)

# Define your tools, such as: GoogleSearchRetrieval (grounding) or FunctionDeclaration (custom tools)
tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

# The generate_content() method takes **kwargs, supporting any additional keyword arguments from vertexai
generate_content = gemini_client.generate_content(example_prompt, tools=[tool])
print(generate_content.text)
```
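The example above uses Google Search grounding. A custom tool follows the same pattern: build a `FunctionDeclaration` with the Vertex AI SDK and pass it through `tools`. The sketch below continues from the example above and is only an illustration; the weather function name, description, and parameter schema are hypothetical, not part of PromptWeaver:

```python
from vertexai.generative_models import FunctionDeclaration, Tool

# Describe a hypothetical function the model may ask to call.
get_weather = FunctionDeclaration(
    name="get_weather",
    description="Get the weather forecast for a given city and date.",
    parameters={
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. San Francisco"},
            "date": {"type": "string", "description": "Date in YYYY-MM-DD format"},
        },
        "required": ["city"],
    },
)

# Pass the custom tool exactly like the grounding tool above.
weather_tool = Tool(function_declarations=[get_weather])
generate_content = gemini_client.generate_content(example_prompt, tools=[weather_tool])

# If the model decides to call the function, the response contains a function-call
# part rather than plain text, so inspect the full response before reading .text.
print(generate_content)
```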
Contributing
We welcome contributions! Please read our contributing guide for details on how to get started. The project can be found on GitHub.
Project details
Download files
Download the file for your platform.

Source Distribution
- promptweaver-0.1.16.tar.gz (14.3 kB)

Built Distribution
- promptweaver-0.1.16-py3-none-any.whl (20.8 kB)
File details
Details for the file promptweaver-0.1.16.tar.gz.
File metadata
- Download URL: promptweaver-0.1.16.tar.gz
- Upload date:
- Size: 14.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: pdm/2.20.0.post1 CPython/3.10.12 Linux/6.5.0-1025-azure
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f484ce74ad71d5c5ab589c84d78f6ae7a8d0dab241f0a3230fa095f417e1b675 |
| MD5 | 1bd1163f6e31231a63ed39248f2719be |
| BLAKE2b-256 | 3748777b794659d5ca679ec036cb9e138c110ecce982d03bf63bc3e3db854a2b |
File details
Details for the file promptweaver-0.1.16-py3-none-any.whl.
File metadata
- Download URL: promptweaver-0.1.16-py3-none-any.whl
- Upload date:
- Size: 20.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: pdm/2.20.0.post1 CPython/3.10.12 Linux/6.5.0-1025-azure
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 738b10be472b47a5a9425a60bbd8a179d04e19a0203aab2b02d20f778eb97827 |
| MD5 | 070a19887dc50a8fca088cd106380962 |
| BLAKE2b-256 | 950ac273bf86a21cf1b10e8d3ee1be16070ccced100bcc06ac1a4b096e155cf0 |