
Prompt flow Python SDK - build high-quality LLM apps

Project description

Prompt flow

We welcome you to join us in making Prompt flow better by participating in discussions, opening issues, and submitting PRs.

Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

With prompt flow, you will be able to:

  • Create and iteratively develop flows
    • Create executable flows that link LLMs, prompts, Python code and other tools together.
    • Debug and iterate your flows, especially their interaction with LLMs, with ease.
  • Evaluate flow quality and performance
    • Evaluate your flow's quality and performance with larger datasets.
    • Integrate the testing and evaluation into your CI/CD system to ensure the quality of your flow (see the batch-run sketch after this list).
  • Streamlined development cycle for production
    • Deploy your flow to the serving platform you choose or integrate into your app's code base easily.
    • (Optional but highly recommended) Collaborate with your team by leveraging the cloud version of Prompt flow in Azure AI.
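
To see what batch evaluation looks like in practice, here is a minimal sketch that runs a flow against a JSONL dataset from the CLI and inspects the per-line results, using the my_chatbot flow created in the Quick Start below. The data file, column mapping, and run name are placeholders, and exact flag spellings may vary across versions:

# run the flow over a dataset (file name and column mapping are illustrative)
pf run create --flow ./my_chatbot --data ./data.jsonl --column-mapping question='${data.question}' --name my_first_run

# inspect per-line inputs and outputs of the finished run
pf run show-details --name my_first_run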

Installation

Ensure you have a Python environment; Python 3.9 is recommended.

pip install promptflow promptflow-tools
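
If you prefer an isolated environment, a typical setup looks like the sketch below; the pf --version check at the end is an assumption about the CLI and may differ by version:

# optional: create and activate a fresh virtual environment first
python -m venv .venv
source .venv/bin/activate
pip install promptflow promptflow-tools

# confirm the pf CLI entry point is available (--version flag assumed)
pf --version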

Quick Start ⚡

Create a chatbot with prompt flow

Run the following command to initialize a prompt flow from a chat template. It creates a folder named my_chatbot and generates the required files within it:

pf flow init --flow ./my_chatbot --type chat

Set up a connection for your API key

For an OpenAI key, establish a connection by running the command below, using the openai.yaml file in the my_chatbot folder, which stores your OpenAI key:

# Override keys with --set to avoid yaml file changes
pf connection create --file ./my_chatbot/openai.yaml --set api_key=<your_api_key> --name open_ai_connection

For an Azure OpenAI key, establish the connection by running the command below, using the azure_openai.yaml file:

pf connection create --file ./my_chatbot/azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
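
To confirm the connection was stored, you can list or inspect it; show and list are among the pf connection subcommands listed in the release history, though the exact flag spelling below is an assumption:

# list all local connections, then show the one just created (--name flag assumed)
pf connection list
pf connection show --name open_ai_connection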

Chat with your flow

In the my_chatbot folder, there's a flow.dag.yaml file that outlines the flow, including inputs/outputs, nodes, connections, the LLM model, etc.

Note that in the chat node, we're using a connection named open_ai_connection (specified in the connection field) and the gpt-35-turbo model (specified in the deployment_name field). The deployment_name field specifies the OpenAI model, or the Azure OpenAI deployment resource.
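
If you later want the chat node to use a different model or deployment without editing flow.dag.yaml, batch runs accept node-level connection overrides using the --connection <node>.<field>=<value> syntax shown in the 0.1.0b7 release notes below. A sketch only; the node name chat, the override key, and the data/column mapping are assumptions based on the chat template:

pf run create --flow ./my_chatbot --data ./data.jsonl --connection chat.deployment_name=gpt-4 --column-mapping question='${data.question}'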

Interact with your chatbot by running: (press Ctrl + C to end the session)

pf flow test --flow ./my_chatbot --interactive
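
Besides the interactive session, you can also run a single turn non-interactively, which is handy for quick checks and scripting; the --inputs flag and the question input name from the chat template are assumptions here:

pf flow test --flow ./my_chatbot --inputs question="What is Prompt flow?"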

Continue to delve deeper into Prompt flow.

Release History

0.1.0b8 (2023.10.26)

Features Added

  • [Executor] Add average execution time and estimated execution time to batch run logs
  • [SDK/CLI] Support pfazure run archive/restore/update.
  • [SDK/CLI] Support custom strong type connection.
  • [SDK/CLI] Add telemetry support; nothing is collected by default, use pf config set cli.telemetry_enabled=true to opt in.
  • [SDK/CLI] Expose from promptflow import load_run to load a run object from a local YAML file.
  • [Executor] Support ToolProvider for script tools.

Bugs Fixed

  • pf config set:
    • Fix the bug where workspace connection.provider=azureml doesn't work as expected.
  • [SDK/CLI] Fix the bug where submitting a batch run via the SDK/CLI did not display the log correctly.
  • [SDK/CLI] Fix encoding issues when the input is non-English with pf flow test.
  • [Executor] Fix the bug where files containing "Private Use" Unicode characters could not be read.
  • [SDK/CLI] Fix the bug where string-type data was converted to integer/float.
  • [SDK/CLI] Remove the max rows limitation when loading data.
  • [SDK/CLI] Fix the bug where --set did not take effect when creating a run from a file.

Improvements

  • [SDK/CLI] Experience improvements in pf run visualize page:
    • Add column status.
    • Support opening flow file by clicking run id.

0.1.0b7.post1 (2023.09.28)

Bug Fixed

  • Fix extra dependency bug when importing promptflow without azure-ai-ml installed.

0.1.0b7 (2023.09.27)

Features Added

  • pf flow validate: support validating a flow
  • pf config set: support set user-level promptflow config.
    • Support workspace connection provider, usage: pf config set connection.provider=azureml:/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace_name>
  • Support overriding the OpenAI connection's model when submitting a flow. For example: pf run create --flow ./ --data ./data.jsonl --connection llm.model=xxx --column-mapping url='${data.url}'

Bugs Fixed

  • [Flow build] Fix the flow build file name and environment variable name when the connection name contains a space.
  • Preserve the .promptflow folder when dumping a run snapshot.
  • Read/write log files with an explicit encoding.
  • Avoid inconsistent error messages when the executor exits abnormally.
  • Align inputs & outputs row numbers so that a partially completed run won't break pfazure run show-details.
  • Fix the bug where the portal URL for run data failed to parse when it is in the form of an asset ID.
  • Fix the issue of the process hanging for a long time when running a batch run.

Improvements

  • [Executor][Internal] Improve error message with more details and actionable information.
  • [SDK/CLI] pf/pfazure run show-details:
    • Add --max-results option to control the number of results to display.
    • Add --all-results option to display all results.
  • Add validation to the azure PFClient constructor in case a wrong parameter is passed.

0.1.0b6 (2023.09.15)

Features Added

  • [promptflow][Feature] Store token metrics in run properties

Bugs Fixed

  • Refine error message body for flow_validator.py
  • Refine error message body for run_tracker.py
  • [Executor][Internal] Add some unit tests to improve code coverage of log/metric
  • [SDK/CLI] Update portal link to remove flight.
  • [Executor][Internal] Improve inputs mapping's error message.
  • [API] Resolve warnings/errors of sphinx build

0.1.0b5 (2023.09.08)

Features Added

  • pf run visualize: support lineage graph & display name in visualize page

Bugs Fixed

  • Add missing requirement psutil in setup.py

0.1.0b4 (2023.09.04)

Features added

  • Support pf flow build commands

0.1.0b3 (2023.08.30)

  • Minor bug fixes.

0.1.0b2 (2023.08.29)

  • First preview version with major CLI & SDK features.

Features added

  • pf flow: init/test/serve/export
  • pf run: create/update/stream/list/show/show-details/show-metrics/visualize/archive/restore/export
  • pf connection: create/update/show/list/delete
  • Azure AI support:
    • pfazure run: create/list/stream/show/show-details/show-metrics/visualize

0.1.0b1 (2023.07.20)

  • Stub version on PyPI.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

promptflow-0.1.0b8-py3-none-any.whl (1.3 MB)
