Project description

Flow Prompt

Introduction

Flow Prompt is a dynamic, all-in-one library for managing and optimizing prompts for large language models (LLMs) in production and R&D, and for generating tests from an ideal answer. It facilitates budget-aware operations, dynamic data integration, visibility into latency and cost metrics, and efficient load distribution across multiple AI models.

Features

  • CI/CD testing: Generates tests based on the context and the ideal answer (usually written by a human).
  • Dynamic Prompt Development: Avoid exceeding the token budget when composing prompts from dynamic data.
  • Multi-Model Support: Seamlessly integrate with various LLMs like OpenAI, Anthropic, and more.
  • Real-Time Insights: Monitor interactions and request/response metrics in production.
  • Prompt Testing and Evolution: Quickly test and iterate on prompts using historical data.

Installation

Install Flow Prompt using pip:

pip install flow-prompt

Authentication

OpenAI Keys

import os
from flow_prompt import FlowPrompt

# set as an environment variable
os.environ['OPENAI_API_KEY'] = 'your_key_here'
# or pass the keys when creating the FlowPrompt object
FlowPrompt(openai_key="your_key", openai_org="your_org")

Azure Keys

Add Azure keys to accommodate multiple realms:

# set as an environment variable (JSON mapping realm name -> endpoint and key)
os.environ['AZURE_KEYS'] = '{"realm_name": {"url": "https://baseurl.azure.com/", "key": "secret"}}'
# or pass the keys when creating the FlowPrompt object
FlowPrompt(azure_keys={"realm_name": {"url": "https://baseurl.azure.com/", "key": "your_secret"}})
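
The realm name registered here is what an AzureAIModel references later when you build a behaviour (see the examples below). A minimal sketch of that linkage, assuming AttemptToCall and AzureAIModel are importable from the package root as in the behaviour examples:

from flow_prompt import FlowPrompt, AttemptToCall, AzureAIModel

# "realm_name" must match the key used in AZURE_KEYS / azure_keys above
fp = FlowPrompt(azure_keys={"realm_name": {"url": "https://baseurl.azure.com/", "key": "your_secret"}})

azure_attempt = AttemptToCall(
    ai_model=AzureAIModel(
        realm='realm_name',        # looked up in the azure_keys mapping
        deployment_id='gpt-4o',    # your Azure deployment; parameter name as in the model-agnostic example below
        max_tokens=128_000,
    ),
    weight=100,
)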

Model Agnostic:

Mix models easily and distribute the load across them. The system will automatically distribute your load based on the weights. We support:

  • Claude
  • Gemini
  • OpenAI (Azure OpenAI models)
from flow_prompt import behaviour, AttemptToCall, OpenAIModel, AzureAIModel, ClaudeAIModel, GeminiAIModel

def_behaviour = behaviour.AIModelsBehaviour(attempts=[
    AttemptToCall(
        ai_model=OpenAIModel(
            model='gpt-4o',
            max_tokens=128_000,
        ),
        weight=100,
    ),
    AttemptToCall(
        ai_model=AzureAIModel(
            realm='useast',
            deployment_id='gpt-4o',
            max_tokens=128_000,
        ),
        weight=100,
    ),
    AttemptToCall(
        ai_model=ClaudeAIModel(
            model='claude-3-5-sonnet-20240620',
            max_tokens=200_000,
        ),
        weight=100,
    ),
    AttemptToCall(
        ai_model=GeminiAIModel(
            model='gemini-1.5-pro',
            max_tokens=1_000_000,
        ),
        weight=100,
    ),
])

# fp is a configured FlowPrompt instance; agent is a PipePrompt (see Usage Examples below)
response_llm = fp.call(agent.id, context, def_behaviour)
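
Assuming the weights are applied proportionally, the four attempts above (weight=100 each) would each receive roughly a quarter of the traffic; raising one attempt's weight to 300 would route about half of the calls (300 out of 600) to that model.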

FlowPrompt Keys

Obtain an API token from Flow Prompt and add it:

# as an environment variable
os.environ['FLOW_PROMPT_API_TOKEN'] = 'your_token_here'
# or via code
FlowPrompt(api_token='your_api_token')

Add Behaviours:

  • use a built-in behaviour such as OPENAI_GPT4_0125_PREVIEW_BEHAVIOUR (shown below)
  • or define your own Behaviour: you can set the maximum number of attempts across different AI models; if one attempt fails with a retryable error, the next is called, chosen according to the weights.
from flow_prompt import OPENAI_GPT4_0125_PREVIEW_BEHAVIOUR
flow_behaviour = OPENAI_GPT4_0125_PREVIEW_BEHAVIOUR

or:

from flow_prompt import behaviour, AttemptToCall, AzureAIModel, OpenAIModel, C_128K

flow_behaviour = behaviour.AIModelsBehaviour(
    attempts=[
        AttemptToCall(
            ai_model=AzureAIModel(
                realm='us-east-1',
                deployment_name="gpt-4-1106-preview",
                max_tokens=C_128K,
                support_functions=True,
            ),
            weight=100,
        ),
        AttemptToCall(
            ai_model=OpenAIModel(
                model="gpt-4-1106-preview",
                max_tokens=C_128K,
                support_functions=True,
            ),
            weight=100,
        ),
    ]
)

Usage Examples:

from flow_prompt import FlowPrompt, PipePrompt

# Initialize and configure FlowPrompt
flow = FlowPrompt(openai_key='your_api_key', openai_org='your_org')

# Create a prompt
prompt = PipePrompt('greet_user')
prompt.add("You're {name}. Say Hello and ask what's their name.", role="system")

# Call AI model with FlowPrompt
context = {"name": "John Doe"}
# test_data is an optional parameter used for generating tests
response = flow.call(prompt.id, context, flow_behaviour, test_data={
    'ideal_answer': "Hello, I'm John Doe. What's your name?",
    'behavior_name': "gemini",
})
print(response.content)
  • To review your generated tests and their scores, go to https://cloud.flow-prompt.com/tests. There you can update the prompt and rerun tests against a published or saved version. If you update and publish a version online, the library automatically uses the new version of the prompt. This lets you update a prompt without redeploying code, which is a costly operation when all you need to change is the prompt.

  • To review logs, go to https://cloud.flow-prompt.com/logs, where you can see metrics such as latency, cost, and token usage.

Best Security Practices

For production environments, it is recommended to store secrets securely and not directly in your codebase. Consider using a secret management service or encrypted environment variables.
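
A minimal sketch, assuming your secret manager or CI pipeline injects the values as environment variables, and that the API token and provider key can be passed in a single constructor call (the parameter names follow the authentication examples above):

import os
from flow_prompt import FlowPrompt

# nothing sensitive is hard-coded; values are injected at runtime
flow = FlowPrompt(
    api_token=os.environ['FLOW_PROMPT_API_TOKEN'],
    openai_key=os.environ['OPENAI_API_KEY'],
)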

Contributing

We welcome contributions! Please see our Contribution Guidelines for more information on how to get involved.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Contact

For support or contributions, please contact us via GitHub Issues.

Project details


Download files

Download the file for your platform.

Source Distribution

flow_prompt-0.1.31.tar.gz (26.9 kB)

Uploaded Source

Built Distribution

flow_prompt-0.1.31-py3-none-any.whl (38.8 kB)

Uploaded Python 3

File details

Details for the file flow_prompt-0.1.31.tar.gz.

File metadata

  • Download URL: flow_prompt-0.1.31.tar.gz
  • Size: 26.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.5

File hashes

Hashes for flow_prompt-0.1.31.tar.gz

  • SHA256: 318a7cba025b2791e92d12d1d6008d51407e78062db8f4b009ff59c55643dbab
  • MD5: fce07ee2606637105fcc012d986be4c7
  • BLAKE2b-256: dd6cc90c691bb5ef5c0cbed1198179894ea30fba664088fca05b35af68aa77ab


File details

Details for the file flow_prompt-0.1.31-py3-none-any.whl.

File metadata

  • Download URL: flow_prompt-0.1.31-py3-none-any.whl
  • Size: 38.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.11.5

File hashes

Hashes for flow_prompt-0.1.31-py3-none-any.whl

  • SHA256: 5c7976032a779bcd6489cd9b89623a04ad18657acfb4357b98b04a0b316b13b7
  • MD5: 4dc313f807ce40a56185fed2da1ceb62
  • BLAKE2b-256: 50d817bb1070558eec122a86ffa38c27fb12bb8d8f897603f2b59eb9d63a72fb

