Agenta is an open-source LLMOps platform; this package is its Python SDK.
Collaborate on prompts, evaluate, and deploy LLM applications with confidence
The open-source LLM developer platform for prompt engineering, evaluation, human feedback, and deployment of complex LLM apps.

Quick Start • Features • Documentation • Enterprise • Roadmap • Join Our Slack • Contributing
⭐️ Why Agenta?
Agenta is an end-to-end LLM developer platform. It provides the tools for prompt engineering and management, ⚖️ evaluation, human annotation, and 🚀 deployment, all without imposing any restrictions on your choice of framework, library, or model.
Agenta allows developers and product teams to collaborate in building production-grade LLM-powered applications in less time.
With Agenta, you can:
- 🧪 Experiment and compare prompts on any LLM workflow (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
- ✍️ Collect and annotate golden test sets for evaluation
- 📈 Evaluate your application with pre-existing or custom evaluators
- 🔍 Annotate and A/B test your applications with human feedback
- 🤝 Collaborate with product teams for prompt engineering and evaluation
- 🚀 Deploy your application in one click from the UI, through the CLI, or through GitHub workflows.
Works with any LLM app workflow
Agenta enables prompt engineering and evaluation on any LLM app architecture:
- Chain of prompts
- RAG
- Agents
- ...
It works with any framework, such as LangChain or LlamaIndex, and any LLM provider (OpenAI, Cohere, Mistral...).
Jump here to see how to use your own custom application with agenta
Quick Start
Get started for free
Explore the Docs
Create your first application in one minute
Create an application using Langchain
Self-host agenta
Check the Cookbook
Features
| Playground | Evaluation |
|---|---|
| Compare and version prompts for any LLM app, from a single prompt to agents. | Define test sets, then evaluate your different variants manually or programmatically. |

| Human annotation | Deployment |
|---|---|
| Use the human annotator to A/B test and score your LLM apps. | When you are ready, deploy your LLM applications as APIs in one click. |
Enterprise Support
Contact us here for enterprise support and early access to agenta self-managed enterprise with Kubernetes support.
Disabling Anonymized Tracking
By default, Agenta automatically reports anonymized basic usage statistics. This helps us understand how Agenta is used and track its overall usage and growth. This data does not include any sensitive information.
To disable anonymized telemetry, follow these steps:
- For web: Set `TELEMETRY_TRACKING_ENABLED` to `false` in your `agenta-web/.env` file.
- For CLI: Set `telemetry_tracking_enabled` to `false` in your `~/.agenta/config.toml` file.
After making this change, restart Agenta Compose.
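For reference, these are the two opt-out settings from the steps above as they would appear in each file (file contents abbreviated to the relevant line):

```
# agenta-web/.env  (web)
TELEMETRY_TRACKING_ENABLED=false
```

```toml
# ~/.agenta/config.toml  (CLI)
telemetry_tracking_enabled = false
```

Note that the web setting is an environment variable (uppercase), while the CLI setting is a lowercase TOML key.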
Contributing
We warmly welcome contributions to Agenta. Feel free to submit issues, fork the repository, and send pull requests.
We are usually hanging out in our Slack. Feel free to join and ask us anything.
Check out our Contributing Guide for more information.
Contributors ✨
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind are welcome!
Attribution: Testing icons created by Freepik - Flaticon
Hashes for agenta-0.20.0a7-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 856d093b8faa17b5f8bd57ccef1ec1aa0f234e3b6aea7e11add8d5ba9b19de2e |
| MD5 | 3371a400fb5f4bfacf9e3fce44157ca6 |
| BLAKE2b-256 | 856d684722d700eaf297465f689cc32a82a4440e14cf6c59a3b68a9fadb15e56 |