Towards automated general intelligence.
Project description
PyPI | Documentation | Discord
Documentation for v0.0.300+ is in progress
To contribute, fork the repository first, then open a pull request from your fork.
LionAGI
Powerful Intelligent Workflow Automation
LionAGI is an intelligent agentic workflow automation framework that introduces advanced ML models into existing workflows and data infrastructure.
Currently, it can:
- interact with almost any model, including local ones*
- run interactions in parallel for most providers (OpenRouter, OpenAI, Ollama, litellm...); see the concurrency sketch after this list
- produce structured pydantic outputs with flexible usage**
- automate workflows via graph-based agents
- use advanced prompting techniques, e.g. ReAct (reason + act)
- …
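Because every LionAGI call is an awaitable, concurrent fan-out comes directly from standard asyncio. Below is a minimal sketch of running several chats in parallel; it assumes only the Session/chat API shown in the Quick Start, and the ask helper is illustrative, not part of LionAGI:

```python
# Minimal concurrency sketch: assumes the Session/chat API from the
# Quick Start below; `ask` is an illustrative helper, not a LionAGI function.
import asyncio
from lionagi import Session

async def ask(question):
    session = Session("You are a concise assistant.")
    return await session.chat(question)

async def main():
    # asyncio.gather awaits all three chats concurrently
    answers = await asyncio.gather(
        ask("What is 2 + 2?"),
        ask("What is 3 * 3?"),
        ask("What is 10 / 5?"),
    )
    print(answers)

if __name__ == "__main__":
    asyncio.run(main())
```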
It aims to:
- provide a centralized, agent-managed framework for coordinating ML-powered tools, so that people can apply intelligence to real-life problems. The ways coordination can occur, and the possible paths among nodes, are what we refer to as a workflow (the concept of workflow is still in design).
- achieve this goal by dramatically lowering the barrier to entry for creating use-case or domain-specific tools.
All notebooks should run as of 0.0.313.
* If a model or provider is not configured out of the box, you can configure your own AI providers and endpoints.
** Structured input/output, the graph-based agent system, and more advanced prompting techniques are undergoing fast iteration; a hedged illustration of structured output follows below.
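While that native structured-output support evolves, one hedged workaround is to validate a chat response by hand with pydantic (v2 assumed here); the prompt wording and the assumption that chat returns the model's raw text are illustrative, not part of LionAGI's documented API:

```python
# Hedged sketch: manually validate a chat response with pydantic v2.
# Assumes `chat` returns the model's raw text; LionAGI's built-in
# structured-output support may make this manual step unnecessary.
from pydantic import BaseModel
from lionagi import Session

class CalcResult(BaseModel):
    x: int
    y: int
    total: int

async def structured_add():
    session = Session('Reply with JSON only, e.g. {"x": 1, "y": 2, "total": 3}.')
    raw = await session.chat("Add the numbers.", context={"x": 10, "y": 5})
    return CalcResult.model_validate_json(raw)  # raises ValidationError on bad JSON
```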
Why Automate Workflows?
Intelligent AI models such as Large Language Models (LLMs) have introduced new possibilities for human-computer interaction. LLMs are drawing worldwide attention for their "one model fits all" versatility and incredible performance. One way to use an LLM is as a search engine, but that usage is complicated by the fact that LLMs hallucinate.
What happens inside an LLM is akin to a black box: it lacks interpretability, so we don't know how it reaches a given answer or conclusion, and we cannot fully trust or rely on its output.
Another approach is to treat LLMs as intelligent agents equipped with various tools and data sources. A workflow conducted by such an agent has clear steps, and we can specify, observe, evaluate, and optimize the logic behind each decision the agent makes. With this approach, even though we still cannot pinpoint why an LLM outputs what it does, the flow itself is explainable.
A LionAGI agent can manage and direct other agents, and can also use multiple different tools in parallel.
Install LionAGI with pip:

```
pip install lionagi
```

Download the .env_template file, fill in your appropriate API_KEY, save the file, rename it to .env, and put it in your project's root directory. By default we use OPENAI_API_KEY.
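For example, after renaming, the relevant line of your .env might look like this (the key value is a placeholder):

```
OPENAI_API_KEY=sk-your-key-here
```

python-dotenv will pick this file up when load_dotenv() is called, as in the script example in the Quick Start below.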
Quick Start
The following example shows how to use LionAGI's Session object to interact with the gpt-4-turbo-preview model:
```python
# define the system message, context, and user instruction
system = "You are a helpful assistant designed to perform calculations."
instruction = {"Addition": "Add the two numbers together i.e. x+y"}
context = {"x": 10, "y": 5}
model = "gpt-4-turbo-preview"
```

In an interactive environment (a .ipynb notebook, for example) you can await directly:

```python
from lionagi import Session

calculator = Session(system)
result = await calculator.chat(instruction, context=context, model=model)
print(f"Calculation Result: {result}")
```

Otherwise, in a plain script, wrap the calls in a coroutine:

```python
import asyncio
from dotenv import load_dotenv

load_dotenv()  # load API keys from .env

from lionagi import Session

# reuses system, instruction, context, and model defined above
async def main():
    calculator = Session(system)
    result = await calculator.chat(instruction, context=context, model=model)
    print(f"Calculation Result: {result}")

if __name__ == "__main__":
    asyncio.run(main())
```
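This asynchronous-only design is what lets a session fan calls out in parallel (see the concurrency sketch above); a synchronous API would serialize every model call.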
Visit our notebooks for examples.
LionAGI is designed to be asynchronous only; please check the official Python documentation on how async works: here
Notice:
- Calling the API at maximum throughput over a large set of data with advanced models, e.g. gpt-4, can get EXPENSIVE IN JUST SECONDS.
- Please know what you are doing, and check your usage on OpenAI regularly.
- Default rate limits are set to 1,000 requests and 100,000 tokens per minute; please check the OpenAI usage limit documentation. You can modify the token rate parameters to fit different use cases.
- If you would like to build from source, please download the latest release.
Community
We encourage contributions to LionAGI and invite you to enrich its features and capabilities. Engage with us and other community members: Join Our Discord
Citation
When referencing LionAGI in your projects or research, please cite:
```
@software{Li_LionAGI_2023,
  author = {Haiyang Li},
  month = {12},
  year = {2023},
  title = {LionAGI: Towards Automated General Intelligence},
  url = {https://github.com/lion-agi/lionagi},
}
```
Requirements
Python 3.10 or higher.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file lionagi-0.1.2.tar.gz.
File metadata
- Download URL: lionagi-0.1.2.tar.gz
- Upload date:
- Size: 207.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7f38e10bbace810450030ad8f69460e1f3f9d00e7040b42790b0c1178c1ace8a
MD5 | 2e77a4ae882d86095655876ae46bb8fd
BLAKE2b-256 | 502991c8e6569bc3bc13891a06316833232f115713e6d9e6f1962ff7793fb9c8
File details
Details for the file lionagi-0.1.2-py3-none-any.whl.
File metadata
- Download URL: lionagi-0.1.2-py3-none-any.whl
- Upload date:
- Size: 277.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | e2b62db27714c3abc0f77a3a30cb86772aab123a0447f25dea81a44e386804c5
MD5 | 8bff7a640ca3e7eff8265c8b74e79f3f
BLAKE2b-256 | 0b91fca4f816fdb646242cbbd2a3b09bf11ddb136f1d6ef0d099481cf8e1281c