The Library to Build and Auto-optimize Any LLM Task Pipeline


Try Quickstart in Colab

All Documentation | Models | Retrievers | Agents | Trainer & Optimizers


⚡ The Library to Build and to Auto-optimize LLM Applications ⚡

AdalFlow helps developers build and optimize LLM task pipelines. Embracing a design pattern similar to PyTorch's, AdalFlow is light, modular, and robust, with a 100% readable codebase.

Why AdalFlow?

LLMs are like water; they can be shaped into anything, from GenAI applications such as chatbots, translation, summarization, code generation, and autonomous agents to classical NLP tasks like text classification and named entity recognition. They interact with the world beyond the model’s internal knowledge via retrievers, memory, and tools (function calls). Each use case is unique in its data, business logic, and user experience.

Because of this, no library can provide out-of-the-box solutions. Users must build towards their own use case. This requires the library to be modular, robust, and have a clean, readable codebase. The only code you should put into production is code you either 100% trust or are 100% clear about how to customize and iterate.

Further reading: How We Started, Introduction, Design Philosophy, and Class Hierarchy.

AdalFlow Task Pipeline

We will ask the model to respond with an explanation and an example of a concept. To achieve this, we will build a simple pipeline that returns the structured output QAOutput.

Well-designed Base Classes

This leverages our only two powerful base classes: Component, the building block of pipelines, and DataClass, which eases data interaction with LLMs.

from dataclasses import dataclass, field

from adalflow.core import Component, Generator, DataClass
from adalflow.components.model_client import GroqAPIClient
from adalflow.components.output_parsers import JsonOutputParser

@dataclass
class QAOutput(DataClass):
    explanation: str = field(
        metadata={"desc": "A brief explanation of the concept in one sentence."}
    )
    example: str = field(metadata={"desc": "An example of the concept in a sentence."})



qa_template = r"""<SYS>
You are a helpful assistant.
<OUTPUT_FORMAT>
{{output_format_str}}
</OUTPUT_FORMAT>
</SYS>
User: {{input_str}}
You:"""

class QA(Component):
    def __init__(self):
        super().__init__()

        parser = JsonOutputParser(data_class=QAOutput, return_data_class=True)
        self.generator = Generator(
            model_client=GroqAPIClient(),
            model_kwargs={"model": "llama3-8b-8192"},
            template=qa_template,
            prompt_kwargs={"output_format_str": parser.format_instructions()},
            output_processors=parser,
        )

    def call(self, query: str):
        return self.generator.call({"input_str": query})

    async def acall(self, query: str):
        return await self.generator.acall({"input_str": query})

Run the following code to visualize the pipeline and call the model.

qa = QA()
print(qa)

# call
output = qa("What is LLM?")
print(output)
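
Since QA also defines acall, the same pipeline can run asynchronously. A minimal sketch for a plain script (in a notebook, where an event loop is already running, you would await qa.acall(...) directly):

import asyncio

async def main():
    qa = QA()
    # acall awaits the generator's asynchronous model call
    output = await qa.acall("What is LLM?")
    print(output)

asyncio.run(main())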

Clear Pipeline Structure

Simply by calling print(qa), you can see the pipeline structure, which helps users quickly understand any LLM workflow.

QA(
  (generator): Generator(
    model_kwargs={'model': 'llama3-8b-8192'},
    (prompt): Prompt(
      template: <SYS>
      You are a helpful assistant.
      <OUTPUT_FORMAT>
      {{output_format_str}}
      </OUTPUT_FORMAT>
      </SYS>
      User: {{input_str}}
      You:, prompt_kwargs: {'output_format_str': 'Your output should be formatted as a standard JSON instance with the following schema:\n```\n{\n    "explanation": "A brief explanation of the concept in one sentence. (str) (required)",\n    "example": "An example of the concept in a sentence. (str) (required)"\n}\n```\n-Make sure to always enclose the JSON output in triple backticks (```). Please do not add anything other than valid JSON output!\n-Use double quotes for the keys and string values.\n-Follow the JSON formatting conventions.'}, prompt_variables: ['output_format_str', 'input_str']
    )
    (model_client): GroqAPIClient()
    (output_processors): JsonOutputParser(
      data_class=QAOutput, examples=None, exclude_fields=None, return_data_class=True
      (json_output_format_prompt): Prompt(
        template: Your output should be formatted as a standard JSON instance with the following schema:
        ```
        {{schema}}
        ```
        {% if example %}
        Examples:
        ```
        {{example}}
        ```
        {% endif %}
        -Make sure to always enclose the JSON output in triple backticks (```). Please do not add anything other than valid JSON output!
        -Use double quotes for the keys and string values.
        -Follow the JSON formatting conventions., prompt_variables: ['schema', 'example']
      )
      (output_processors): JsonParser()
    )
  )
)

The Output

We structure the output to track both the data and any errors, in case any part of the Generator component fails. Here is what we get from print(output):

GeneratorOutput(data=QAOutput(explanation='LLM stands for Large Language Model, which refers to a type of artificial intelligence designed to process and generate human-like language.', example='For instance, LLMs are used in chatbots and virtual assistants, such as Siri and Alexa, to understand and respond to natural language input.'), error=None, usage=None, raw_response='```\n{\n  "explanation": "LLM stands for Large Language Model, which refers to a type of artificial intelligence designed to process and generate human-like language.",\n  "example": "For instance, LLMs are used in chatbots and virtual assistants, such as Siri and Alexa, to understand and respond to natural language input."\n}', metadata=None)
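
Since the parsed QAOutput lives in the data field and failures surface in error, here is a hedged sketch of consuming the output defensively (field names taken from the GeneratorOutput printed above):

output = qa("What is LLM?")
if output.error is None:
    answer: QAOutput = output.data
    print(answer.explanation)
    print(answer.example)
else:
    # raw_response is still available for debugging failed generations or parses
    print(f"Generation failed: {output.error}")
    print(output.raw_response)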

Focus on the Prompt

The following code lets us see the prompt after it is formatted:

qa.generator.print_prompt(
    output_format_str=qa.generator.output_processors.format_instructions(),
    input_str="What is LLM?",
)

The output will be:

<SYS>
You are a helpful assistant.
<OUTPUT_FORMAT>
Your output should be formatted as a standard JSON instance with the following schema:
```
{
    "explanation": "A brief explanation of the concept in one sentence. (str) (required)",
    "example": "An example of the concept in a sentence. (str) (required)"
}
```
-Make sure to always enclose the JSON output in triple backticks (```). Please do not add anything other than valid JSON output!
-Use double quotes for the keys and string values.
-Follow the JSON formatting conventions.
</OUTPUT_FORMAT>
</SYS>
User: What is LLM?
You:
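
Under the hood, the template uses standard Jinja2 syntax ({{ }} placeholders and {% if %} blocks), so the same render can be reproduced with the jinja2 package directly, independent of AdalFlow (a sketch for illustration):

from jinja2 import Template

prompt = Template(qa_template).render(
    output_format_str="<the JSON format instructions>",
    input_str="What is LLM?",
)
print(prompt)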

Model-agnostic

You can switch to any model simply by using a different model_client (provider) and model_kwargs. Let's use OpenAI's gpt-3.5-turbo model.

from adalflow.components.model_client import OpenAIClient

# inside QA.__init__: only the client and model_kwargs change
self.generator = Generator(
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-3.5-turbo"},
    template=qa_template,
    prompt_kwargs={"output_format_str": parser.format_instructions()},
    output_processors=parser,
)
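
To avoid editing the class each time you switch providers, the client can also be injected. A minimal sketch that reuses QAOutput and qa_template from above (FlexibleQA and its constructor signature are our own illustration, not part of the AdalFlow API):

from adalflow.core import Component, Generator
from adalflow.components.model_client import GroqAPIClient, OpenAIClient
from adalflow.components.output_parsers import JsonOutputParser

class FlexibleQA(Component):
    """The same QA pipeline, with the provider passed in instead of hard-coded."""

    def __init__(self, model_client, model_kwargs: dict):
        super().__init__()
        parser = JsonOutputParser(data_class=QAOutput, return_data_class=True)
        self.generator = Generator(
            model_client=model_client,
            model_kwargs=model_kwargs,
            template=qa_template,
            prompt_kwargs={"output_format_str": parser.format_instructions()},
            output_processors=parser,
        )

    def call(self, query: str):
        return self.generator.call({"input_str": query})

qa_groq = FlexibleQA(GroqAPIClient(), {"model": "llama3-8b-8192"})
qa_openai = FlexibleQA(OpenAIClient(), {"model": "gpt-3.5-turbo"})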

Quick Install

Install AdalFlow with pip:

pip install adalflow

Please refer to the full installation guide for more details.
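
The examples above also require provider credentials. A minimal sketch using environment variables (assuming the clients read the conventional GROQ_API_KEY and OPENAI_API_KEY names; see the installation guide for the exact setup):

import os

# Assumption: GroqAPIClient and OpenAIClient pick up these standard
# environment variables; the values below are placeholders.
os.environ["GROQ_API_KEY"] = "your-groq-api-key"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"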

Documentation

AdalFlow's full documentation is available at adalflow.sylph.ai.

AdalFlow: A Tribute to Ada Lovelace

AdalFlow is named in honor of Ada Lovelace, the pioneering female mathematician who first recognized that machines could go beyond mere calculations. As a team led by a female founder, we aim to inspire more women to pursue careers in AI.

Contributors

[contributors graph]

Citation

@software{Yin2024AdalFlow,
  author = {Li Yin},
  title = {{AdalFlow: The Library for Large Language Model (LLM) Applications}},
  month = {7},
  year = {2024},
  doi = {10.5281/zenodo.12639531},
  url = {https://github.com/SylphAI-Inc/LightRAG}
}
