Towards automated general intelligence.
PyPI | Documentation | Discord
Documentation for v0.0.300+ is in progress
To contribute, fork the repository first, then open a pull request from your fork.
LionAGI
Powerful Intelligent Workflow Automation
LionAGI is an intelligent agentic workflow automation framework. It introduces advanced ML models into existing workflows and data infrastructure.
Currently, it can:
- interact with almost any model, including local ones*
- run interactions in parallel for most providers (OpenRouter, OpenAI, Ollama, LiteLLM, ...)
- produce structured Pydantic outputs with flexible usage**
- automate workflows via graph-based agents
- apply advanced prompting techniques, e.g. ReAct (reason-action)
- …
It aims to:
- provide a centralized, agent-managed framework for coordinating ML-powered tools. The patterns of coordination and the possible paths among nodes are what we also refer to as a workflow (the concept of workflow is still in design), so that people can apply intelligence to solve real-life problems.
- achieve this goal by dramatically lowering the barrier to entry for creating use-case- or domain-specific tools.
All notebooks should run as of 0.0.313.
* If a provider's models have not been configured, you can do so by configuring your own AI providers and endpoints.
** Structured input/output, the graph-based agent system, and the more advanced prompting techniques are undergoing fast iteration...
Why Automate Workflows?
Intelligent AI models such as Large Language Models (LLMs) have introduced new possibilities for human-computer interaction. LLMs are drawing worldwide attention for their "one model fits all" generality and incredible performance. One way to use an LLM is as a search engine; however, this usage is complicated by the fact that LLMs hallucinate.
What goes on inside an LLM is more akin to a black box, lacking interpretability: we don't know how it reaches a particular answer or conclusion, so we cannot fully trust or rely on the output of such a system.
Another approach is to treat LLMs as intelligent agents equipped with various tools and data sources. A workflow conducted by such an agent has clear steps, and we can specify, observe, evaluate, and optimize the logic behind each decision the agent makes. With this approach, although we still cannot pinpoint how an LLM produces a given output, the flow itself is explainable.
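The explainable, step-by-step flow described above can be sketched as a minimal ReAct (reason-action) loop. This is an illustrative, framework-agnostic sketch, not LionAGI's implementation: `stub_model` stands in for an LLM that would normally choose the next action, and the tool registry is hypothetical.

```python
# Minimal ReAct-style loop: the "model" proposes a thought and an action,
# the loop executes the action with a tool, and every step is recorded.

def stub_model(observation: str) -> dict:
    """Stand-in for an LLM: maps an observation to a reasoned action."""
    if "10 + 5" in observation:
        return {"thought": "I should add the numbers.", "action": "add", "args": (10, 5)}
    return {"thought": "Nothing left to do.", "action": "finish", "args": ()}

TOOLS = {"add": lambda x, y: x + y}  # hypothetical tool registry

def react(task: str, max_steps: int = 5):
    observation = task
    trace = []  # every decision is logged, so the flow stays explainable
    for _ in range(max_steps):
        step = stub_model(observation)
        trace.append(step)
        if step["action"] == "finish":
            break
        observation = f"result: {TOOLS[step['action']](*step['args'])}"
    return observation, trace

answer, trace = react("compute 10 + 5")
print(answer)  # result: 15
```

Because each step in `trace` pairs a thought with an action, the decision logic can be inspected and optimized even though the model itself remains a black box.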
LionAGI agents can manage and direct other agents, and can also use multiple different tools in parallel.
Install LionAGI with pip:

```shell
pip install lionagi
```

Download the `.env_template` file, input your appropriate `API_KEY`, save the file, rename it to `.env`, and put it in your project's root directory. By default we use `OPENAI_API_KEY`.
Quick Start
The following example shows how to use LionAGI's `Session` object to interact with the gpt-4-turbo model:
```python
# define system message, context, and user instruction
system = "You are a helpful assistant designed to perform calculations."
instruction = {"Addition": "Add the two numbers together i.e. x+y"}
context = {"x": 10, "y": 5}
model = "gpt-4-turbo-preview"
```

In an interactive environment (a `.ipynb` notebook, for example):

```python
from lionagi import Session

calculator = Session(system)
result = await calculator.chat(instruction, context=context, model=model)

print(f"Calculation Result: {result}")
```

Or otherwise, in a regular script:

```python
import asyncio
from dotenv import load_dotenv

load_dotenv()

from lionagi import Session

async def main():
    calculator = Session(system)
    result = await calculator.chat(instruction, context=context, model=model)
    print(f"Calculation Result: {result}")

if __name__ == "__main__":
    asyncio.run(main())
```
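Since everything runs through `asyncio`, several model interactions can also be issued concurrently. The sketch below shows the general pattern with `asyncio.gather`; `ask` is a stub coroutine standing in for a real `Session.chat` call (which would require an API key), so the latency and responses here are simulated.

```python
import asyncio

async def ask(question: str) -> str:
    """Stand-in for an awaited model call; simulates network latency."""
    await asyncio.sleep(0.01)
    return f"answer to: {question}"

async def main():
    questions = ["what is 2+2?", "what is 3*3?", "what is 10-5?"]
    # gather schedules all requests at once instead of awaiting them one by one
    return await asyncio.gather(*(ask(q) for q in questions))

answers = asyncio.run(main())
print(answers)
```

With a real provider, issuing the requests concurrently like this is what makes parallel interactions fast, subject to the rate limits discussed below.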
Visit our notebooks for examples.
LionAGI is designed to be asynchronous only; please check the official Python documentation on how `async` works: here
Notice:
- Calling the API at maximum throughput over a large set of data with advanced models (e.g. gpt-4) can get EXPENSIVE IN JUST SECONDS.
- Please know what you are doing, and check your usage on OpenAI regularly.
- Default rate limits are set to 1,000 requests and 100,000 tokens per minute; please check the OpenAI usage limit documentation. You can modify the token rate parameters to fit different use cases.
- If you would like to build from source, please download the latest release.
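A per-minute token budget like the default above can be approximated client-side with a token bucket. The sketch below is illustrative only, not LionAGI's internal rate limiter; the class name and numbers are assumptions chosen to mirror a 100,000 tokens-per-minute budget.

```python
import time

class TokenBucket:
    """Minimal token bucket: `capacity` tokens refill evenly over `period` seconds."""
    def __init__(self, capacity: float, period: float = 60.0):
        self.capacity = capacity
        self.tokens = capacity
        self.rate = capacity / period  # tokens replenished per second
        self.last = time.monotonic()

    def try_consume(self, amount: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False

# Mirror a 100,000 tokens-per-minute budget
bucket = TokenBucket(capacity=100_000, period=60.0)
print(bucket.try_consume(50_000))  # True: budget available
print(bucket.try_consume(80_000))  # False: only ~50,000 tokens remain
```

A caller would retry or delay requests when `try_consume` returns `False`, keeping throughput under the provider's limit.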
Community
We encourage contributions to LionAGI and invite you to enrich its features and capabilities. Engage with us and other community members: Join Our Discord.
Citation
When referencing LionAGI in your projects or research, please cite:
```
@software{Li_LionAGI_2023,
  author = {Haiyang Li},
  month = {12},
  year = {2023},
  title = {LionAGI: Towards Automated General Intelligence},
  url = {https://github.com/lion-agi/lionagi},
}
```
Requirements
Python 3.10 or higher.